Auto Scaling

Last updated: 2021-03-11 16:55:31

    Feature Overview

    When the demand for resources in your cluster grows or shrinks, the auto scaling feature automatically adds or removes task nodes, helping you save costs while meeting the cluster's computing power requirements. Auto scaling supports two kinds of scaling policies: load-based scaling and time-based scaling.

    Prerequisites

    1. The auto scaling feature scales task nodes only.
    2. The auto scaling feature is only available for Hadoop clusters.
    3. You can use either the load-based scaling policy or time-based scaling policy, but not both at the same time.
    4. Auto scaling instances are billed on a pay-as-you-go basis.

    Directions

    Log in to the EMR console, click a cluster ID in the cluster list to go to the cluster information page, and click Auto Scaling > Policy Management > Policy Restriction > Edit to configure a scaling policy.

    Setting policy restriction

    Policy restriction is used to specify the number of auto scaling task nodes allowed in a cluster when an auto scaling rule is triggered.

    • Minimum instances: the minimum number of auto scaling task nodes allowed in the cluster when a scale-in policy is triggered.
    • Maximum instances: the maximum number of auto scaling task nodes allowed in the cluster when a scale-out policy is triggered. The total number of instances added under a single or multiple rules cannot exceed the maximum instances.
    • Release all: removes, with one click, all the nodes added by auto scaling activities; non-auto scaling nodes are not affected.
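
    For illustration, the sketch below shows how a policy restriction bounds the number of auto scaling task nodes. The function and its parameters are hypothetical helpers written for this guide, not part of the EMR API; they simply clamp the node count resulting from a scaling activity to the configured minimum and maximum instances.

        def clamp_to_policy_restriction(current_auto_nodes, requested_change,
                                        min_instances, max_instances):
            """Return the number of auto scaling task nodes after applying the
            policy restriction (hypothetical helper, for illustration only)."""
            # positive requested_change = scale out, negative = scale in
            target = current_auto_nodes + requested_change
            # The cluster never holds more auto scaling nodes than Maximum instances
            # and never fewer than Minimum instances.
            return max(min_instances, min(max_instances, target))

        # Example: 2 auto scaling nodes, a rule asking for 5 more, and a
        # restriction of 0-4 instances leave 4 auto scaling nodes after scaling.
        print(clamp_to_policy_restriction(2, 5, 0, 4))  # -> 4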

    Setting scaling specifications

    Scaling specifications refer to the node specifications used for auto scaling. Each cluster can be configured with up to three scaling specifications. When a scale-out rule is triggered, nodes are added according to specification priority. If the higher-priority specification cannot supply enough nodes, the specification with the next priority is used. To keep the cluster load changing linearly, you are advised to keep the CPU and memory of the scaling specifications consistent.

    • The scaling specifications support up to three kinds of instances, and the scaling priority order is 1 > 2 > 3.
    • You can add, delete, change, and query the instances in the scaling specifications, and adjust the priority of scaling specifications as needed.
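
    As a rough sketch of the priority behavior described above, the following code spreads a requested scale-out quantity across up to three scaling specifications in priority order, falling back to the next specification when a higher-priority one cannot supply enough nodes. The data structure and the per-specification availability figures are assumptions made for illustration only.

        def allocate_by_priority(requested, specs):
            """Fill the requested node count from specifications in priority
            order (1 > 2 > 3); 'available' is a hypothetical inventory figure."""
            plan = []
            for spec in sorted(specs, key=lambda s: s["priority"]):
                if requested <= 0:
                    break
                take = min(requested, spec["available"])
                if take > 0:
                    plan.append((spec["instance_type"], take))
                    requested -= take
            return plan

        specs = [
            {"instance_type": "TYPE-A", "priority": 1, "available": 2},
            {"instance_type": "TYPE-B", "priority": 2, "available": 10},
            {"instance_type": "TYPE-C", "priority": 3, "available": 10},
        ]
        # Requesting 5 nodes: 2 come from the priority-1 specification and the
        # remaining 3 from the priority-2 specification.
        print(allocate_by_priority(5, specs))  # [('TYPE-A', 2), ('TYPE-B', 3)]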

    Setting scaling rules

    Scaling rules are business policies used to configure the triggering conditions of scaling activities and the number of nodes to be added or removed. Auto scaling supports two kinds of policies: load-based scaling and time-based scaling. You can use either of the two, but not both at the same time. When you switch policies, the original scaling rules are retained in an invalid state and will not be triggered; nodes that have already been added are also retained unless a scale-in rule is triggered. Each policy can be configured with up to 10 scaling rules. When two rules are triggered at the same time, the rule with the higher priority is executed first.
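
    The sketch below illustrates the priority behavior: when several rules are triggered in the same evaluation, only the enabled rule with the highest priority is executed. The rule structure and field names are illustrative and do not reflect the EMR data model.

        def pick_rule_to_execute(triggered_rules):
            """Among rules triggered at the same time, execute only the enabled
            rule with the highest priority (smallest priority number)."""
            candidates = [r for r in triggered_rules if r["status"] == "enabled"]
            return min(candidates, key=lambda r: r["priority"], default=None)

        rules = [
            {"name": "scale-out-on-pending-memory", "priority": 2, "status": "enabled"},
            {"name": "scale-out-on-pending-vcores", "priority": 1, "status": "enabled"},
        ]
        print(pick_rule_to_execute(rules)["name"])  # scale-out-on-pending-vcores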

    To set a scaling rule, select Load-based scaling or Time-based scaling as the Auto Scaling Type in the Scaling Rule Management section and click Add Rule.

    Load-based scaling

    When you cannot accurately estimate the peaks and troughs of the cluster's computing volume, you can configure a load-based scaling policy to ensure that important jobs are completed on time. The load is evaluated against preset YARN metric statistics rules, and task nodes are automatically adjusted when the preset conditions are met.

    To add a load-based scaling rule, select Load-based scaling as the Auto Scaling Type, click Add Rule, and configure the following fields:

    • Rule Name: the name of the scaling rule. The scaling rule names in the same cluster must be unique (including scale-out and scale-in rules).
    • Valid Time: the load-based scaling rule can be triggered only within the valid time. Unlimited is selected by default and you can customize the valid time.
    • Load Metric: the YARN load metric. You can set the trigger condition and threshold based on the load metric selected here. The supported metrics are:
      • AvailableVCores#root: the number of available virtual cores (root queue)
      • PendingVCores#root: the number of pending virtual cores (root queue)
      • AvailableMB#root: the amount of available memory in MB (root queue)
      • PendingMB#root: the amount of pending memory in MB (root queue)
    • Statistical Rule: the rule is triggered once when the selected cluster load metric, aggregated by the selected dimension (average value) within a statistical period, reaches the threshold.
    • Statistical Period: the statistical duration of the metric. Currently, three statistical periods are supported: 300 seconds, 600 seconds, and 900 seconds.
    • Repeat Count: the number of times that the threshold of the aggregated load metric is reached. When the repeat count is reached, the cluster will be automatically scaled.
    • Scale-out Quantity: the number of task nodes to be added each time when the rule is triggered.
    • Scale-in Quantity: the number of task nodes to be removed each time when the rule is triggered.
    • Cooldown Period: the interval (600 to 43,200 seconds) before carrying out the next auto scaling activity after the rule is successfully executed.
    • Rule Status: used to mark whether the rule is enabled. The default status of a rule is "enabled". When you don't want a rule to be executed but still wish to retain it, you can set the rule status to "disabled".
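
    To make the interaction between the statistical period, repeat count, and cooldown period concrete, here is a minimal sketch of how a load-based scale-out rule might be evaluated. The metric samples, the helper name, and the assumption that the repeat count refers to consecutive statistical periods are all illustrative; this is not the EMR implementation.

        import time

        def should_scale_out(samples, threshold, repeat_count,
                             last_scale_time, cooldown_seconds):
            """Decide whether a load-based scale-out rule fires.

            samples          -- aggregated metric values (e.g. average PendingMB#root
                                per statistical period), newest last; hypothetical input
            threshold        -- metric threshold configured in the rule
            repeat_count     -- number of periods in which the threshold must be reached
            last_scale_time  -- epoch seconds of the last successful scaling activity
            cooldown_seconds -- no new activity starts during the cooldown period
            """
            if time.time() - last_scale_time < cooldown_seconds:
                return False  # still cooling down after the previous scaling activity
            if len(samples) < repeat_count:
                return False
            # Fire only if the threshold was reached in each of the last
            # repeat_count statistical periods (assumed to be consecutive).
            return all(value >= threshold for value in samples[-repeat_count:])

        # Example: PendingMB#root averaged over three 300-second periods, a
        # threshold of 4096 MB, a repeat count of 3, and the last scaling 2 hours ago.
        print(should_scale_out([5120, 6144, 4608], 4096, 3,
                               time.time() - 7200, 600))  # -> True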

    Time-based scaling

    When the cluster's computing volume has obvious peaks and troughs within a certain period, you can configure a time-based scaling policy to ensure that important jobs are completed on time. Time-based scaling policies allow you to automatically add or remove task nodes daily, weekly, or monthly at a fixed time or within a fixed time period.

    To add a time-based scaling rule, select Time-based scaling as the Auto Scaling Type, click Add Rule, and configure the following fields:

    • Rule Name: the name of the scaling rule. The scaling rule names in the same cluster must be unique (including scale-out and scale-in rules).
    • Once: triggers a scaling activity once at a specific time, accurate to the minute.
    • Recurring: triggers a scaling activity daily, weekly, or monthly at a specific time or time period.
    • Retry Time after Expiration: auto scaling may fail to be executed at the specified time for various reasons. If you set a retry time after expiration, the system will retry the scaling every 30 seconds within that window until the conditions are met and the scaling is executed.
    • Valid To: the time-based scaling rule is valid until the end of the date specified here.
    • Scale-out Quantity: the number of task nodes to be added each time when the rule is triggered.
    • Scale-in Quantity: the number of task nodes to be removed each time when the rule is triggered.
    • Cooldown Period: the interval (600 to 43,200 seconds) before carrying out the next auto scaling activity after the rule is successfully executed.
    • Rule Status: used to mark whether the rule is enabled. The default status of a rule is "enabled". When you don't want a rule to be executed but still wish to retain it, you can set the rule status to "disabled".
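
    The following sketch illustrates the retry-after-expiration behavior of a time-based rule: if the scaling activity cannot start exactly at the scheduled time, it can still be executed as long as the retry window has not elapsed. The scheduling values and the helper function are hypothetical and only clarify the timing semantics.

        from datetime import datetime, timedelta

        def within_retry_window(scheduled, now, retry_seconds):
            """A time-based rule may still be executed after its scheduled time,
            as long as `now` falls inside the retry window (hypothetical helper)."""
            return scheduled <= now <= scheduled + timedelta(seconds=retry_seconds)

        scheduled = datetime(2021, 3, 11, 18, 0)   # e.g. scale out daily at 18:00
        retry_seconds = 15 * 60                    # retry for up to 15 minutes
        print(within_retry_window(scheduled, datetime(2021, 3, 11, 18, 7), retry_seconds))   # True
        print(within_retry_window(scheduled, datetime(2021, 3, 11, 18, 20), retry_seconds))  # False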

    Viewing auto scaling records

    The records of auto scaling activities can be viewed in Scaling History.

    • You can filter the scaling records by time and search by policy name.
    • You can sort the records by execution time, policy name, scaling type, and execution status, and click Details in the Operation column to view detailed information.
    • There are four execution statuses of auto scaling:
      • Executing: the auto scaling is being executed.
      • Successful: all nodes are added to or removed from the cluster.
      • Partially successful: some nodes are successfully added to or removed from the cluster, while others fail to be added or removed due to disk quota limits or insufficient CVM inventory.
      • Failed: no node is added to or removed from the cluster.
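
    If you export the scaling records for your own analysis, the sketch below shows the kind of filtering and sorting the Scaling History page supports: filtering by time range and sorting by execution time. The record fields are hypothetical and do not reflect the exact console or API schema.

        from datetime import datetime

        # Hypothetical scaling history entries
        records = [
            {"policy": "peak-hours-scale-out", "type": "scale-out",
             "status": "Successful", "executed_at": datetime(2021, 3, 10, 18, 0)},
            {"policy": "night-scale-in", "type": "scale-in",
             "status": "Partially successful", "executed_at": datetime(2021, 3, 11, 2, 0)},
        ]

        # Filter by time range, then sort by execution time (newest first).
        start, end = datetime(2021, 3, 10), datetime(2021, 3, 12)
        filtered = [r for r in records if start <= r["executed_at"] <= end]
        filtered.sort(key=lambda r: r["executed_at"], reverse=True)
        for r in filtered:
            print(r["executed_at"], r["policy"], r["type"], r["status"])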
