A Sparkling cluster is the core component of the Sparkling Data Warehouse Suite. The size of the cluster determines the upper limit of the storage capacity and computing power that Sparkling can provide. You can customize the cluster based on your business needs.
This document describes how to quickly create a Sparkling cluster in the Sparkling console.
After we receive your application, we will review your service requirements. Our Sparkling team will contact you to confirm your preliminary needs and discuss your usage scenarios and other business matters. Once the review is completed, we will approve your beta qualification.
| Parameter | Description | Notes |
| --- | --- | --- |
| Storage Cluster Name | Name of the cluster | - |
| Region | Region where the cluster runs | The current version supports only the Guangzhou, Shanghai, and Beijing regions. The region defaults to the one selected on the cluster management page; to change it, select a new region in the upper-left corner of that page. |
| Availability Zone | Availability zone within the selected region | You can check under **Master Node** whether models are available in the availability zone. |
| Network | VPC that connects to Sparkling | You can create a VPC in the console for query and planning. Once selected, the VPC and subnet cannot be changed, and compute clusters added later can only be deployed in the same VPC and subnet. |
| Subnet Group | Subnet that connects to Sparkling | You can create a subnet in the console for query and planning. |
| Running Version | Version of Sparkling's internal components | The current version supports only 0.1.0 (Spark 2.3.2, Hadoop 2.7.3, Hive 2.1.0). |
| Master Node | D1 | The current version supports Big Data D1 models in four configurations. Select the memory and number of CPU cores for the master node as needed. |
| Core Compute Node | Responsible for the cluster's storage tasks | The current version supports Big Data D1 models in four configurations. Select the memory and number of CPU cores for the node as needed. |
| Elastic Compute Node | Responsible for the cluster's computation tasks | The current version supports MEM Optimized models in various configurations. Select the memory and number of CPU cores for the node as needed. |
| Min Node Quantity | Minimum number of nodes required | - |
| Max Node Quantity | Maximum number of nodes required | - |
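To illustrate how the constraints above fit together, here is a minimal sketch that checks a cluster configuration before submission. The field names (`region`, `running_version`, `vpc_id`, and so on) are hypothetical and do not correspond to the actual Sparkling API; the sketch only encodes the rules stated in the table, such as the supported regions and the min/max node relationship.

```python
# Hypothetical validation sketch for a Sparkling cluster configuration.
# Field names are illustrative, NOT the real Sparkling API.

SUPPORTED_REGIONS = {"Guangzhou", "Shanghai", "Beijing"}
SUPPORTED_VERSIONS = {"0.1.0"}  # Spark 2.3.2, Hadoop 2.7.3, Hive 2.1.0


def validate_cluster_config(config: dict) -> list:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    if config.get("region") not in SUPPORTED_REGIONS:
        errors.append("unsupported region: %s" % config.get("region"))
    if config.get("running_version") not in SUPPORTED_VERSIONS:
        errors.append("unsupported running version: %s" % config.get("running_version"))
    # The VPC and subnet cannot be changed after creation, so both are required.
    if not config.get("vpc_id") or not config.get("subnet_id"):
        errors.append("a VPC and subnet must be specified (they cannot be changed later)")
    if config.get("min_node_quantity", 0) > config.get("max_node_quantity", 0):
        errors.append("min_node_quantity must not exceed max_node_quantity")
    return errors


config = {
    "cluster_name": "my-sparkling-cluster",
    "region": "Guangzhou",
    "running_version": "0.1.0",
    "vpc_id": "vpc-example",
    "subnet_id": "subnet-example",
    "min_node_quantity": 2,
    "max_node_quantity": 10,
}
print(validate_cluster_config(config))  # → []  (no errors)
```

Front-loading these checks mirrors what the console form enforces and surfaces all problems at once rather than failing on the first one.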