Instance Types

Last updated: 2018-09-11 19:07:19

When a Tencent Cloud CVM is created, the instance type specified by the user determines the host hardware configuration of the instance. Each instance type provides different computing, memory, and storage capabilities. Users can choose an appropriate instance type based on the scale of the application to be deployed. These instance families comprise varying combinations of CPU, memory, storage, heterogeneous hardware, and network bandwidth, giving you the flexibility to choose the appropriate mix of resources for your applications.


If your demand is stable, we recommend the prepaid billing method, as your savings increase with the length of usage. To handle spikes in demand, you can choose the postpaid billing method, which allows you to activate and terminate computing instances at any time and pay only for the resources actually consumed. CVM usage is billed in one-second increments to maximize your savings.
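
For reference, here is a minimal sketch (an assumption, not an official example) of how the billing mode and instance type might be specified when launching a CVM through the RunInstances API with the Tencent Cloud SDK for Python; the credentials, image ID, zone, and instance type shown are placeholders, and a prepaid purchase additionally requires a prepaid period.

from tencentcloud.common import credential
from tencentcloud.cvm.v20170312 import cvm_client, models

# Hypothetical credentials and region, used for illustration only.
cred = credential.Credential("YOUR_SECRET_ID", "YOUR_SECRET_KEY")
client = cvm_client.CvmClient(cred, "ap-guangzhou")

req = models.RunInstancesRequest()
req.InstanceChargeType = "POSTPAID_BY_HOUR"  # postpaid; "PREPAID" also needs req.InstanceChargePrepaid
req.InstanceType = "S3.MEDIUM4"              # 2 vCPUs, 4 GB memory (see the S3 table below)
req.ImageId = "img-xxxxxxxx"                 # placeholder image ID
placement = models.Placement()
placement.Zone = "ap-guangzhou-3"            # placeholder availability zone
req.Placement = placement

resp = client.RunInstances(req)
print(resp.to_json_string())                 # returns the IDs of the created instances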

Instance Type

Tencent Cloud instance families are divided into the following types:

Standard Instance Family

This family provides a balance of computing, memory, and network resources to accommodate most applications.

Standard S3, Standard Network Optimized S2ne, Standard Network Enhanced SN2, Standard S2, Standard S1

Memory Optimized Instance Family

This family features large memory and is suitable for applications that require extensive memory operations, searches, and computations, such as high-performance databases and distributed memory caching.

Memory Optimized M3, Memory Optimized M2, Memory Optimized M1

High I/O Instance Family

This family features high random IOPS, high throughput, and low latency, and is suitable for I/O-intensive applications that require high disk read/write performance, such as high-performance databases.

High I/O I2, High I/O I1

Big Data Instance Family

This instance family is equipped with massive storage resources, features high throughput, and is suitable for throughput-intensive applications such as Hadoop distributed computing, massive log processing, distributed file systems, and large data warehouses.

Big Data D1

Compute Instance Family

This family comes with a base clock rate of 3.2 GHz to provide the highest single-core computing performance. It is suitable for compute-intensive applications such as batch processing, high performance computing, and large game servers.

Computing CN3, Computing C3, Computing Network Enhanced CN2, Computing C2

Heterogeneous Computing Instance Family

This family is equipped with heterogeneous hardware such as GPU and FPGA to deliver real-time, fast parallel computing and floating-point computing capabilities. It is suitable for high-performance applications such as deep learning, scientific computing, video encoding/decoding, and graphics workstations.

FPGA FX2, GPU Computing GN8, GPU Computing GN2, GPU Rendering GA2

Batch-based Instance Family

With the lowest per core-hour cost, this family is suitable for compute-intensive applications that frequently use super-large computing nodes for short periods, such as rendering, gene analysis, and crystal pharmacy.

Batch Computing BC1, Batch General Purpose BS1

Restrictions on Instances

  • The total number of instances that can be started in one zone is limited. For more information, please see Restrictions on CVM Instance Purchase.

  • Restrictions on system and data disks mounted on an instance: To ensure premium disk I/O performance, Tencent Cloud limits the size and type of data disks that can be purchased with an instance. For more information, please see the disk options of the respective instance family. You can also purchase cloud disks separately if you have higher disk requirements.

  • The private network bandwidth listed for an instance specification is the maximum private network bandwidth of the corresponding instance. If the CVM's private network traffic exceeds this limit, packet loss may occur on the private network.

  • The availability of instance specifications varies from region to region, and some configurations may be sold out. Please see the purchase page for accurate information; the sketch after this list shows one way to query availability through the API.
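
The sketch below (an assumption based on the Tencent Cloud SDK for Python and its DescribeInstanceTypeConfigs API, with a placeholder zone name) shows one way to list the instance types offered in a given availability zone; the purchase page remains the authoritative source for real-time sellable status.

from tencentcloud.common import credential
from tencentcloud.cvm.v20170312 import cvm_client, models

cred = credential.Credential("YOUR_SECRET_ID", "YOUR_SECRET_KEY")
client = cvm_client.CvmClient(cred, "ap-guangzhou")

# Filter the instance type catalog by availability zone (placeholder zone name).
zone_filter = models.Filter()
zone_filter.Name = "zone"
zone_filter.Values = ["ap-guangzhou-3"]

req = models.DescribeInstanceTypeConfigsRequest()
req.Filters = [zone_filter]

resp = client.DescribeInstanceTypeConfigs(req)
for cfg in resp.InstanceTypeConfigSet:
    print(cfg.InstanceType, cfg.CPU, "vCPU,", cfg.Memory, "GB")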

Here is a full list of instance families for different applications.

Standard Instance Family

This family provides a balance of computing, memory, and network resources to support most applications.

Standard S3

Standard S3 instances are the latest generation of standard instances. This family provides a balance of computing, memory, and network resources, and it is a premium choice for many applications.

Standard S3 instances are equipped with new Intel® Xeon® Skylake processors that deliver 30% better computing performance and the latest DDR4 memory that performs 60% better than Standard S2 instances. Standard S3 instances support up to 10 Gbps of private network bandwidth.

Features

  • 2.5 GHz Intel Xeon® Skylake 6133 processors with stable computing performance

  • The latest generation of 6-channel DDR4 memories with a memory bandwidth of 2,666 MT/s

  • Largest instance size: S3.20XLARGE320, offering 80 vCPUs and 320 GB of memory

  • 1:2 or 1:4 processor to memory ratio.

  • The network performance of an instance is determined by its specification. The higher the specification is, the greater the network forwarding performance and the higher the maximum private network bandwidth limit will be.

  • Support all kinds of cloud disks

Application Scenarios

Standard S3 instances are applicable to the following scenarios:

  • Enterprise applications of different types and sizes

  • Small and medium-sized database systems, caches, and search clusters

  • Computing clusters, memory-intensive data processing

Requirements

  • Both prepaid and postpaid billing methods are available for S3 instances.

  • S3 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for S3 instances. Please see the instance specifications below. Make sure that the S3 instance size you choose meets the minimum CPU and memory requirements of your operating system and applications. In many cases, operating systems with a graphical user interface (such as Windows), which consume extensive memory and CPU resources, may need larger instances. As your workload's memory and CPU needs grow, you can upgrade to a higher configuration or choose another instance type.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
S3.SMALL1 | 1 | 1 | 1.5
S3.SMALL2 | 1 | 2 | 1.5
S3.SMALL4 | 1 | 4 | 1.5
S3.MEDIUM4 | 2 | 4 | 1.5
S3.MEDIUM8 | 2 | 8 | 1.5
S3.LARGE8 | 4 | 8 | 1.5
S3.LARGE16 | 4 | 16 | 1.5
S3.2XLARGE16 | 8 | 16 | 1.5
S3.2XLARGE32 | 8 | 32 | 1.5
S3.3XLARGE24 | 12 | 24 | 1.5
S3.3XLARGE48 | 12 | 48 | 1.5
S3.4XLARGE32 | 16 | 32 | 2.0
S3.4XLARGE64 | 16 | 64 | 2.0
S3.6XLARGE48 | 24 | 48 | 3.0
S3.6XLARGE96 | 24 | 96 | 3.0
S3.8XLARGE64 | 32 | 64 | 4.0
S3.8XLARGE128 | 32 | 128 | 4.0
S3.12XLARGE96 | 48 | 96 | 6.0
S3.12XLARGE192 | 48 | 192 | 6.0
S3.16XLARGE128 | 64 | 128 | 8.0
S3.16XLARGE256 | 64 | 256 | 8.0
S3.20XLARGE320 | 80 | 320 | 10.0

Standard Network Optimized S2ne

Standard Network Optimized S2ne instances are the best choice for applications that require sending and receiving massive numbers of network packets. They support sending and receiving up to millions of network packets per second. Standard Network Optimized S2ne instances are recommended for scenarios with high network PPS requirements, such as large game servers, video services, and live streaming.

  • This instance type is only available to users on the whitelist for now. Contact your pre-sales manager for the permission to purchase these instances.

Features

  • Intel Xeon E5-2680 Broadwell (v4) processors with a base clock rate of 2.4 GHz and DDR4 memories, offering stable computing performance

  • Up to 48 cores and 192 GB of memory are available.

  • 1:2 or 1:4 processor to memory ratio.

  • The network performance of an instance is determined by its specification. The higher the specification is, the greater the network forwarding performance and the higher the maximum private network bandwidth limit will be.

  • Support all kinds of cloud disks

Application Scenarios

  • Scenarios that require sending and receiving massive network packets, such as game services, video services, and financial analysis

  • Enterprise applications of different types and sizes

Requirements

  • Both prepaid and postpaid billing methods are available for S2ne instances.

  • S2ne instances can only be started in a VPC.

  • Configuration purchase is available for S2ne instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps) | Network packet forwarding (pps) | Number of queues
S2ne.LARGE8 | 4 | 8 | 1.5 | 500,000 | 4
S2ne.2LARGE16 | 8 | 16 | 2.0 | 800,000 | 8
S2ne.3LARGE24 | 12 | 24 | 2.5 | 1 million | 8
S2ne.4LARGE32 | 16 | 32 | 3.5 | 1.6 million | 8
S2ne.6LARGE48 | 24 | 48 | 5.0 | 2 million | 8
S2ne.8LARGE64 | 32 | 64 | 7.0 | 2.5 million | 8
S2ne.8LARGE64 | 48 | 192 | 10.0 | 4.5 million | 8
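
To verify the queue count from inside a Linux instance, the following minimal sketch counts the RX/TX queues that the primary network interface exposes through sysfs; the interface name eth0 is an assumption about the image in use.

import os

iface = "eth0"  # assumption: the primary ENI appears as eth0 on this image
queue_dir = "/sys/class/net/{}/queues".format(iface)
entries = os.listdir(queue_dir)
rx = sum(1 for name in entries if name.startswith("rx-"))
tx = sum(1 for name in entries if name.startswith("tx-"))
print("{}: {} RX queues, {} TX queues".format(iface, rx, tx))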

Standard Network Enhanced SN2

Standard Network Enhanced SN2 instances come with a 25 GE network, featuring greater bandwidth, lower latency, and stable computing performance, and can send and receive up to 700,000 packets per second. They are suitable for scenarios that require sending and receiving massive numbers of network packets.

Features

  • 2.4 GHz Intel Xeon E5-2680 Broadwell (v4) processors, DDR4 memories

  • Up to 56 cores and 224 GB of memory are available to meet the need for ultra-large CPU/memory configurations.

  • Support up to 25 Gbps of private network bandwidth to meet extremely high private network transmission requirements.

  • Support forwarding up to 700,000 network packets per second to allow for a greater number of concurrent users.

  • Support storage options of local disks, HDD cloud disks and SSD cloud disks.

Application Scenarios

  • Scenarios that require sending and receiving massive network packets, such as game services, video services, and financial analysis

  • Enterprise applications of different types and sizes

  • Small and medium-sized database systems, caches, and search clusters

  • Computing clusters, memory-intensive data processing

Requirements

  • The prepaid billing method is available for SN2 instances.

  • SN2 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for SN2 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
SN2.LARGE8 | 4 | 8 | 2.0
SN2.7XLARGE112 | 28 | 112 | 13.0
SN2.14XLARGE224 | 56 | 224 | 25.0

Standard S2

Standard S2 instances are a relatively new generation of instances. This family provides a balance of computing, memory, and network resources, and it is a good choice for many applications.

Standard S2 instances are equipped with Intel® Xeon® Broadwell processors that bring 40% better integer and floating-point computing performance and DDR4 memories that perform 30% better.

Features

  • Intel Xeon E5-2680 Broadwell (v4) processors with a base clock rate of 2.4 GHz and DDR4 memories

  • CPU performance is 20% higher than Standard S1

  • Up to 56 cores and 224 GB of memory are available.

  • 1:2 or 1:4 processor to memory ratio.

  • Balance of computing, memory, and network resources

Application Scenarios

This family is used for small and mid-size databases, data processing tasks that require additional memory and cache fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing and other enterprise applications.

Requirements

  • S2 instances support both prepaid and postpaid billing methods, and can also be used as production instances of standard host HS20 in CDHs.

  • S2 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for S2 instances. Please see the instance specifications below. Make sure that the S2 instance size you choose meets the minimum CPU and memory requirements of your operating system and applications. In many cases, operating systems with a graphical user interface (such as Windows), which consume extensive memory and CPU resources, may need larger instances. As your workload's memory and CPU needs grow, you can upgrade to a higher configuration or choose another instance type.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
S2.SMALL1 | 1 | 1 | 1.5
S2.SMALL2 | 1 | 2 | 1.5
S2.SMALL4 | 1 | 4 | 1.5
S2.MEDIUM2 | 2 | 2 | 1.5
S2.MEDIUM4 | 2 | 4 | 1.5
S2.MEDIUM8 | 2 | 8 | 1.5
S2.LARGE4 | 4 | 4 | 1.5
S2.LARGE8 | 4 | 8 | 1.5
S2.LARGE16 | 4 | 16 | 1.5
S2.2XLARGE16 | 8 | 16 | 1.5
S2.2XLARGE32 | 8 | 32 | 1.5
S2.3XLARGE24 | 12 | 24 | 2.5
S2.3XLARGE48 | 12 | 48 | 2.5
S2.4XLARGE32 | 16 | 32 | 3.0
S2.4XLARGE64 | 16 | 64 | 3.0
S2.6XLARGE48 | 24 | 48 | 4.5
S2.6XLARGE96 | 24 | 96 | 4.5
S2.8XLARGE64 | 32 | 64 | 6.0
S2.8XLARGE128 | 32 | 128 | 6.0
S2.14XLARGE224 | 56 | 224 | 10.0

Standard S1

Standard S1 (Series 1) instances are virtual machines with CPU options ranging from low to high core counts. They feature moderate prices and flexible configuration options to meet different user needs, and offer local disks, HDD cloud disks, and SSD cloud disks as data disk options (the available options may vary with hardware specifications).

Features

Standard S1 instances have the following features:

  • CPU options ranging from low to high core counts, offering you flexible CVM configurations

  • Intel Xeon CPUs and DDR3 memories

  • Support storage options of local disks, HDD cloud disks and SSD cloud disks.

  • Balance of computing, memory, and network resources

Application Scenarios

Standard S1 instances are applicable to large-, medium-, and small-sized applications and databases.

Requirements

  • S1 instances support both prepaid and postpaid billing methods, and can also be used as production instances of standard host in CDHs.

  • S1 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for S1 instances. Please see the instance specifications below. Make sure that the S1 instance size you choose meets the minimum CPU and memory requirements of your operating system and applications. In many cases, operating systems with a graphical user interface (such as Windows), which consume extensive memory and CPU resources, may need larger instances. As your workload's memory and CPU needs grow, you can upgrade to a higher configuration or choose another instance type.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
S1.SMALL1 | 1 | 1 | 1.5
S1.SMALL2 | 1 | 2 | 1.5
S1.SMALL4 | 1 | 4 | 1.5
S1.MEDIUM2 | 2 | 2 | 1.5
S1.MEDIUM4 | 2 | 4 | 1.5
S1.MEDIUM8 | 2 | 8 | 1.5
S1.MEDIUM12 | 2 | 12 | 1.5
S1.LARGE4 | 4 | 4 | 1.5
S1.LARGE8 | 4 | 8 | 1.5
S1.LARGE16 | 4 | 16 | 1.5
S1.2XLARGE8 | 8 | 8 | 2.0
S1.2XLARGE16 | 8 | 16 | 2.0
S1.2XLARGE32 | 8 | 32 | 2.0
S1.3XLARGE24 | 12 | 24 | 2.5
S1.3XLARGE48 | 12 | 48 | 2.5
S1.4XLARGE16 | 16 | 16 | 3.5
S1.4XLARGE32 | 16 | 32 | 3.5
S1.4XLARGE64 | 16 | 64 | 3.5
S1.6XLARGE48 | 24 | 48 | 5.0
S1.8XLARGE64 | 32 | 64 | 7.0
S1.12XLARGE96 | 48 | 96 | 10.0

Memory Optimized Instance Family

Memory optimized instances feature large memory and are suitable for applications that require extensive memory operations, searches, and computations, such as high-performance databases and distributed memory caching.

Memory Optimized M3

As the latest generation of memory optimized instances, Memory Optimized M3 instances are designed to deliver high performance for handling the workloads of large data sets in memory, and are the best choice for applications demanding high-memory computing.

Memory Optimized M3 instances come with new Intel® Xeon® Skylake processors that deliver 30% better performance than Standard S2 instances and the latest DDR4 memory that performs 60% better. They support up to 10 Gbps of private network bandwidth.

  • This instance type is only available to users on the whitelist for now. Contact your pre-sales manager for the permission to purchase these instances.

Application Scenarios

This type of instance is suitable for the following scenarios:

  • Applications that require extensive memory operations, searches, and computations, such as high-performance databases and distributed memory caching.

  • Users who build their own Hadoop clusters or Redis deployments, genomic computing, and so on.

Features

  • 2.5 GHz Intel Xeon® Skylake 6133 processors with stable computing performance

  • The latest generation of 6-channel DDR4 memories with a memory bandwidth of 2,666 MT/s

  • Largest instance size: M3.16XLARGE512, offering 64 vCPUs and 512 GB of memory

  • 1:8 or 1:12 processor to memory ratio

  • Lower price than other memory optimized instances with the same amount of memory

  • The network performance of an instance is determined by its specification. The higher the specification is, the greater the network forwarding performance and the higher the maximum private network bandwidth limit will be.

Requirements

  • Both prepaid and postpaid billing methods are available for M3 instances.

  • M3 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for M3 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
M3.SMALL8 | 1 | 8 | 1.5
M3.MEDIUM16 | 2 | 16 | 1.5
M3.LARGE32 | 4 | 32 | 1.5
M3.2XLARGE64 | 8 | 64 | 1.5
M3.3XLARGE96 | 12 | 96 | 1.5
M3.3XLARGE144 | 12 | 144 | 1.5
M3.4XLARGE128 | 16 | 128 | 2.0
M3.4XLARGE192 | 16 | 192 | 2.0
M3.8XLARGE256 | 32 | 256 | 4.0
M3.8XLARGE384 | 32 | 384 | 4.0
M3.16XLARGE512 | 64 | 512 | 8.0

Memory Optimized M2

Memory Optimized M2 instances are designed to deliver high performance for handling the workloads of large data sets in memory. They feature large memory, which makes them the perfect choice for applications demanding high memory computing.

Application Scenarios

This type of instance is suitable for the following scenarios:

  • Applications that require extensive memory operations, searches, and computations, such as high-performance databases and distributed memory caching.

  • Users who build their own Hadoop clusters or Redis deployments, genomic computing, and so on.

Features

  • 2.4 GHz Intel Xeon® E5-2680v4 processors, DDR4 memories

  • Up to 448 GB of memory available; the largest size, M2.14XLARGE448, offers 56 vCPUs and 448 GB of memory

  • 1:8 processor to memory ratio

  • Lower price than other memory optimized instances with the same amount of memory

Requirements

  • M2 instances support both prepaid and postpaid billing methods, and can also be used as production instances of memory optimized host HM20 in CDHs.

  • M2 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for M2 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
M2.SMALL8 | 1 | 8 | 1.5
M2.MEDIUM16 | 2 | 16 | 1.5
M2.LARGE32 | 4 | 32 | 1.5
M2.2XLARGE64 | 8 | 64 | 1.5
M2.3XLARGE96 | 12 | 96 | 2.5
M2.4XLARGE128 | 16 | 128 | 3.0
M2.6XLARGE192 | 24 | 192 | 4.5
M2.8XLARGE256 | 32 | 256 | 6.0
M2.12XLARGE384 | 48 | 384 | 9.0
M2.14XLARGE448 | 56 | 448 | 10.0

Memory Optimized M1

Memory Optimized M1 instances with about 1:8 CPU to memory ratio are applicable to applications that require extensive memory operations, searches, and computations, such as high-performance databases and distributed memory caching.

Features

  • 2.3 GHz Intel Xeon® E5-2670 v3 processors and DDR3 memories, providing larger-sized instances with stronger computing capacities.

  • A memory-intensive configuration with a balanced CPU-to-memory ratio, satisfying the needs of large-scale business deployments.

Application Scenarios

This type of instance is suitable for the following scenarios:

  • Applications that require extensive memory operations, searches, and computations, such as high-performance databases and distributed memory caching.

  • Users who build their own Hadoop clusters or Redis deployments, genomic computing, and so on.

Requirements

  • Both prepaid and postpaid billing methods are available for M1 instances.

  • M1 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for M1 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
M1.SMALL8 | 1 | 8 | 1.5
M1.MEDIUM16 | 2 | 16 | 1.5
M1.LARGE32 | 4 | 32 | 1.5
M1.2XLARGE64 | 8 | 64 | 2.0
M1.3XLARGE96 | 12 | 96 | 2.5
M1.4XLARGE128 | 16 | 128 | 3.5
M1.6XLARGE192 | 24 | 192 | 5.0
M1.8XLARGE256 | 32 | 256 | 7.0
M1.12XLARGE368 | 48 | 368 | 10.0

High I/O Instance Family

This family features high random IOPS, high throughput, and low latency, and is suitable for I/O-intensive applications that require fast disk read/write performance and low latency, such as high-performance databases.

High I/O I2

High I/O I2 instances are optimized to provide tens of thousands of low-latency random I/O operations per second (IOPS) to applications, making them an ideal choice for high-IOPS scenarios.

High I/O I2 instances come with Intel® Broadwell processors that bring 40% better integer and floating-point computing performance and DDR4 memories that perform 30% better.

Application Scenarios

  • High-performance databases, NoSQL databases (e.g. MongoDB), and clustered databases

  • I/O-intensive applications that require low latency, such as online transaction processing (OLTP) systems and Elasticsearch.

Features

  • 2.4 GHz Intel Xeon E5-2680 Broadwell (v4) processors, DDR4 memories

  • CPU performance is 20% higher than that of the Series 1 High I/O I1

  • SSDs are used for instance storage, and all system disks are local SSDs

    • High random IOPS, with up to 75,000 random read IOPS (blocksize = 4k, iodepth = 32) and up to 10,000 random write IOPS (blocksize = 4k, iodepth = 32) in typical scenarios.

    • High throughput, with up to 250 MB/s random read throughput (blocksize = 4k, iodepth = 32) in typical scenarios.

    • Low latency, with sub-millisecond access latency (blocksize = 4k, iodepth = 1) in typical scenarios. The fio sketch below shows how such figures are typically measured.
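
The figures above use fio-style parameters (blocksize and iodepth). The following is a minimal sketch, assuming the fio benchmarking tool is installed and using a placeholder test-file path on the local SSD; it runs the 4 KB random-read test with an I/O depth of 32.

import subprocess

# Placeholder test file on the local SSD data disk; adjust the path and size for your setup.
cmd = [
    "fio",
    "--name=randread",
    "--filename=/data/fio.testfile",
    "--rw=randread",         # random read; use --rw=randwrite for the write test
    "--bs=4k",               # 4 KB block size, as quoted above
    "--iodepth=32",          # I/O depth of 32, as quoted above
    "--ioengine=libaio",
    "--direct=1",            # bypass the page cache
    "--size=4G",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
]
subprocess.run(cmd, check=True)  # fio prints IOPS, bandwidth, and latency statistics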

Requirements

  • High IO I2 instances support both prepaid and postpaid billing methods, and can also be used as production instances of high IO host HI20 in CDHs.

  • I2 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for I2 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
I2.MEDIUM4 | 2 | 4 | 1.5
I2.MEDIUM8 | 2 | 8 | 1.5
I2.LARGE8 | 4 | 8 | 1.5
I2.LARGE16 | 4 | 16 | 1.5
I2.2XLARGE16 | 8 | 16 | 1.5
I2.2XLARGE32 | 8 | 32 | 1.5
I2.3XLARGE24 | 12 | 24 | 2.5
I2.3XLARGE48 | 12 | 48 | 2.5
I2.4XLARGE32 | 16 | 32 | 3.0
I2.4XLARGE64 | 16 | 64 | 3.0
I2.6XLARGE48 | 24 | 48 | 4.5
I2.6XLARGE96 | 24 | 96 | 4.5
I2.8XLARGE64 | 32 | 64 | 6.0
I2.8XLARGE128 | 32 | 128 | 6.0
I2.14XLARGE224 | 56 | 224 | 10.0

High I/O I1

High I/O I1 is a virtual machine mounted with a high-performance SSD local disk, which can meet the high requirements for fast disk read/write and low latency.

Features

  • High random IOPS, with up to 40,000 random read IOPS (blocksize = 4k, iodepth = 32) in typical scenarios.

  • High throughput, with up to 250 MB/s random read throughput (blocksize = 4k, iodepth = 32) in typical scenarios.

  • Low latency, with the access latency in sub-milliseconds.

Application Scenarios

  • High-performance databases, NoSQL databases (e.g. MongoDB), and clustered databases

  • I/O intensive applications that require low latency, such as online transaction processing (OLTP) systems, and Elasticsearch.

Requirements

  • Both prepaid and postpaid billing modes are available for High I/O I1 instances.

  • I1 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for I1 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
I1.MEDIUM4 | 2 | 4 | 1.5
I1.MEDIUM8 | 2 | 8 | 1.5
I1.MEDIUM16 | 2 | 16 | 1.5
I1.LARGE8 | 4 | 8 | 1.5
I1.LARGE16 | 4 | 16 | 1.5
I1.2XLARGE16 | 8 | 16 | 2.0
I1.2XLARGE32 | 8 | 32 | 2.0
I1.3XLARGE24 | 12 | 24 | 2.5
I1.3XLARGE48 | 12 | 48 | 2.5
I1.4XLARGE32 | 16 | 32 | 3.5
I1.4XLARGE64 | 16 | 64 | 3.5
I1.6XLARGE48 | 24 | 48 | 5.0
I1.6XLARGE96 | 24 | 96 | 5.0
I1.8XLARGE64 | 32 | 64 | 7.0
I1.8XLARGE128 | 32 | 128 | 7.0
I1.12XLARGE192 | 48 | 192 | 10.0

Big Data Instance Family

The big data instance family is equipped with massive storage resources, features high throughput, and is suitable for throughput-intensive applications such as Hadoop distributed computing, massive log processing, distributed file systems, and large data warehouses.

Big Data D1

The Big Data D1 instance is equipped with massive storage resources, can carry up to 48 TB SATA HDD local storage, and is suitable for throughput-intensive services, such as Hadoop distributed computing and parallel data processing.

  • This instance type is only available to users on the whitelist for now. Contact your pre-sales manager for the permission to purchase these instances.

Application Scenarios

  • Distributed computing services such as Hadoop MapReduce/HDFS/Hive/HBase.

  • Business scenarios such as Elasticsearch, log processing, and large data warehouse.

  • Customers in the Internet, finance, and other industries who require big data computing and storage analysis, as well as business scenarios of massive data storage and computing.

Features

  • 2.4 GHz Intel Xeon E5-2680v4 processors, DDR4 memories

  • Equipped with up to 48 TB local HDD storage

  • For a single disk, sequential read throughput is 190+ MB/s and sequential write throughput is 190+ MB/s (block size of 128 KB, I/O depth of 32)

  • For the whole machine, throughput can reach up to 2.3 GB/s (block size of 128 KB, I/O depth of 32).

  • Read/write latency is as low as 2-5 ms.

  • The unit price of local storage is as low as 1/10 of that of S2. The total cost is similar to that of Hadoop clusters built in IDC.

  • The 1:4 processor to memory ratio is tailored for the big data scenarios.

Requirements

  • Big Data D1 instances use local disks as data disks, so there is a risk of data loss (for example, if the host crashes). If your application cannot ensure data reliability at the architecture level, we highly recommend choosing an instance type that can use cloud disks as data disks.

    • If a local disk is damaged, you need to shut down the CVM instance before we can replace the disk.

    • If the CVM instance has crashed, we will inform you and make repairs.

  • Both prepaid and postpaid billing modes are available for Big Data D1 instances.

  • D1 instances can be launched in basic networks and VPCs.

  • D1 instances do not support configuration adjustment.

  • Configuration purchase is available for D1 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
D1.2XLARGE32 | 8 | 32 | 1.5
D1.4XLARGE64 | 16 | 64 | 3.0
D1.6XLARGE96 | 24 | 96 | 4.5
D1.8XLARGE128 | 32 | 128 | 6.0
D1.14XLARGE224 | 56 | 224 | 10.0

Compute Instance Family

This family comes with a base clock rate of 3.2 GHz to provide the highest single-core computing performance. It is suitable for compute-intensive applications such as batch processing, high performance computing, and large game servers.

Computing CN3 Instance

As the latest generation of computing instances, Computing CN3 instances come with a 25 GE network that is 2.5 times faster than that of normal computing instances, featuring greater bandwidth and lower latency. They provide the highest processor base clock rate and the best cost performance among CVMs, making them an ideal choice for compute-bound applications that require high computing performance and highly concurrent reads and writes.

Computing CN3 instances are equipped with the new Skylake Xeon® processors, and support up to 10 Gbps private network bandwidth.

  • This instance type is only available to users on the whitelist for now. Contact your pre-sales manager for the permission to purchase these instances.

Application Scenarios

These instances are an ideal choice for:

  • Batch processing workloads and high performance computing (HPC)

  • High-traffic Web frontend server

  • Other compute-intensive services such as the massively multiplayer online (MMO) game server

Features

  • 3.2 GHz Intel Xeon® Skylake 6146 processors with up to 3.6 GHz of turbo frequency

  • New Intel Advanced Vector Extensions (AVX-512) instruction set

  • Support up to 25 Gbps of private network bandwidth to meet extremely high private network transmission requirements.

  • The latest generation of 6-channel DDR4 memories with a memory bandwidth of 2,666 MT/s

  • 1:2 or 1:4 processor to memory ratio.

  • The network performance of an instance is determined by its specification. The higher the specification is, the greater the network forwarding performance and the higher the maximum private network bandwidth limit will be.

  • Support all kinds of cloud disks

Requirements

  • Both prepaid and postpaid billing methods are available for CN3 instances.

  • CN3 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for CN3 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
CN3.LARGE8 | 4 | 8 | 3.0
CN3.LARGE16 | 4 | 16 | 3.0
CN3.2XLARGE16 | 8 | 16 | 5.0
CN3.2XLARGE32 | 8 | 32 | 5.0
CN3.4XLARGE32 | 16 | 32 | 9.0
CN3.4XLARGE64 | 16 | 64 | 9.0
CN3.8XLARGE64 | 32 | 64 | 17.0
CN3.8XLARGE128 | 32 | 128 | 17.0

Computing C3 Instance

As the latest generation of computing instances, Computing C3 instances provide the highest processor base clock rate and the best cost performance among CVMs, making them an ideal choice for compute-bound applications that require high computing performance and highly concurrent reads and writes.

Computing C3 instances are equipped with the new Skylake Xeon® processors that perform 30% better and the latest DDR4 memories that perform 60% better than Computing C2 instances. Computing C3 instances support up to 10 Gbps private network bandwidth.

Application Scenarios

These instances are an ideal choice for:

  • Batch processing workloads and high performance computing (HPC)

  • High-traffic Web frontend server

  • Other compute-intensive services such as the massively multiplayer online (MMO) game server

Features

  • 3.2 GHz Intel Xeon® Skylake 6146 processors with up to 3.6 GHz of turbo frequency

  • New Intel Advanced Vector Extensions (AVX-512) instruction set

  • The latest generation of 6-channel DDR4 memories with a memory bandwidth of 2,666 MT/s

  • 1:2 or 1:4 processor to memory ratio.

  • The network performance of an instance is determined by its specification. The higher the specification is, the greater the network forwarding performance and the higher the maximum private network bandwidth limit will be.

  • Support all kinds of cloud disks

Requirements

  • Both prepaid and postpaid billing methods are available for C3 instances.

  • C3 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for C3 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
C3.LARGE8 | 4 | 8 | 2.5
C3.LARGE16 | 4 | 16 | 2.5
C3.LARGE32 | 4 | 32 | 2.5
C3.2XLARGE16 | 8 | 16 | 3.0
C3.2XLARGE32 | 8 | 32 | 3.0
C3.4XLARGE32 | 16 | 32 | 4.5
C3.4XLARGE64 | 16 | 64 | 4.5
C3.8XLARGE64 | 32 | 64 | 8.0
C3.8XLARGE128 | 32 | 128 | 8.0

Computing Network Enhanced CN2

Computing Network Enhanced CN2 instances come with a 25 GE network, providing network performance that is 2.5 times that of normal computing instances, featuring greater bandwidth, lower latency, and a super-high clock rate. They are suitable for scenarios that require high computing resources and sending and receiving massive numbers of network packets.

Features

  • 3.2 GHz Intel Xeon® E5-2667v4 processors with up to 3.6 GHz of turbo frequency; DDR4 memories

  • 1:2 or 1:4 processor to memory ratio.

  • Support up to 25 Gbps of private network bandwidth to meet extremely high private network transmission requirements.

  • Support forwarding up to 700,000 network packets per second to allow for a greater number of concurrent users.

  • Support storage options of SSD local disks, HDD cloud disks and SSD cloud disks.

Application Scenarios

  • Scenarios that require sending and receiving massive network packets, such as game services, video services, and financial analysis

  • Batch processing workloads

  • High-traffic Web server, massively multiplayer online (MMO) game server

  • High performance computing (HPC)

Requirements

  • The prepaid billing method is available for CN2 instances.

  • CN2 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for CN2 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
CN2.4XLARGE32 | 16 | 32 | 13.0
CN2.4XLARGE64 | 16 | 64 | 13.0
CN2.8XLARGE96 | 32 | 96 | 25.0

Computing C2 Instance

Computing C2 instances provide the highest processor performance and the best cost performance among CVMs, making them an ideal choice for compute-bound applications that require high computing performance and highly concurrent reads and writes.

Application Scenarios

These instances are an ideal choice for:

  • Batch processing workloads

  • High-traffic Web server, massively multiplayer online (MMO) game server

  • High-performance computing (HPC) and other compute-intensive applications.

Features

  • 3.2 GHz Intel Xeon® E5-2667v4 processors with up to 3.6 GHz of turbo frequency; DDR4 memories

  • 1:2 or 1:4 processor to memory ratio.

  • The network performance of an instance is determined by its specification. The higher the specification is, the greater the network forwarding performance and the higher the maximum private network bandwidth limit will be.

  • Support all kinds of cloud disks

Requirements

  • C2 instances support both prepaid and postpaid billing methods, and can also be used as production instances of computing host HC20 in CDHs.

  • C2 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for C2 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
C2.LARGE8 | 4 | 8 | 2.5
C2.LARGE16 | 4 | 16 | 2.5
C2.LARGE32 | 4 | 32 | 2.5
C2.2XLARGE16 | 8 | 16 | 3.5
C2.2XLARGE32 | 8 | 32 | 3.5
C2.4XLARGE32 | 16 | 32 | 6.0
C2.4XLARGE60 | 16 | 60 | 6.0
C2.4XLARGE64 | 16 | 64 | 6.0
C2.8XLARGE64 | 32 | 64 | 10.0
C2.8XLARGE96 | 32 | 96 | 10.0
C2.8XLARGE120 | 32 | 120 | 10.0

Heterogeneous Computing Instance Family

This family is equipped with heterogeneous hardware such as GPU and FPGA to deliver real-time, fast parallel computing and floating-point computing capabilities. It is suitable for high-performance applications such as deep learning, scientific computing, video encoding/decoding, and graphics workstations.

GPU Computing GN8

GPU Computing GN8 instances use high-performance NVIDIA Tesla P40 GPUs and are applicable to generic GPU computing applications with the CUDA and OpenCL programming models. The peak computing capacity of a single CVM exceeds 96 TFLOPS for single-precision floating point and 376 TOPS for INT8. With top-notch TFLOPS and INT8 performance, GPU Computing GN8 is a highly cost-effective AI training and inference solution, meeting the needs of one-stop deep learning training and real-time inference.
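
After launching a GN8 instance, one quick way to confirm that the Tesla P40 cards are visible is to query nvidia-smi; the minimal sketch below assumes the NVIDIA driver and nvidia-smi are already installed on the image.

import subprocess

# Query GPU name and total memory; requires the NVIDIA driver to be installed.
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    check=True, capture_output=True, text=True,
).stdout
print(out.strip())  # expect one "Tesla P40, ..." line per GPU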

Application Scenarios

These instances are an ideal choice for:

  • Deep learning such as image classification and recognition, speech recognition, and natural language processing

  • Scientific computing including computational fluid dynamics, computational finance, genomics research, environmental analysis, high-performance computing, and other server-side GPU computing workloads.

Features

  • NVIDIA Tesla P40 GPU compute cards

  • 2.4 GHz Intel Xeon E5-2680v4 processors, DDR4 memories

  • The peak computing capacity of a single CVM: over 96 TFLOPS (single-precision floating point compute); over 376 TOPS (INT8)

Requirements

  • Both prepaid and postpaid billing methods are available for GN8 instances.

  • GN8 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for GN8 instances. Please see the instance specifications below.

  • The configuration of GN8 instances cannot be changed.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
GN8.LARGE56 | 6 | 56 | 1.5
GN8.3XLARGE112 | 14 | 112 | 2.5
GN8.7XLARGE224 | 28 | 224 | 5.0
GN8.14XLARGE448 | 56 | 448 | 10.0

GPU Computing GN2

GPU Computing GN2 instances use high-performance NVIDIA Tesla M40 GPUs and are applicable to generic GPU computing applications with the CUDA and OpenCL programming models. They offer powerful single- and double-precision floating-point performance with 6,144 acceleration cores, and the single-precision floating-point compute capacity reaches 14 TFLOPS.

Application Scenarios

These instances are an ideal choice for:

  • Deep learning such as image classification and recognition, speech recognition, and natural language processing

  • Scientific computing including computational fluid dynamics, computational finance, genomics research, environmental analysis, high-performance computing, and other server-side GPU computing workloads.

Features

  • NVIDIA Tesla M40 GPU compute cards, with 24 GB of GDDR5 video memory per card

  • 2.4 GHz Intel Xeon E5-2680v4 processors

  • Peak computing capacity for a single machine: over 14 TFLOPS (single-precision floating point); over 0.4 TFLOPS (double-precision floating point).

Requirements

  • Both system disks and data disks use local SSDs, and cloud disks can also be attached.

  • Both prepaid and postpaid billing methods are available for GN2 instances.

  • GN2 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for GN2 instances. Please see the instance specifications below.

  • The configuration of GN2 instances cannot be changed.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
GN2.7XLARGE56 | 28 | 56 | 5.0
GN2.14XLARGE112 | 56 | 112 | 10.0

GPU Rendering GA2

GA2 instances are optimized for graphics-intensive applications and are suitable for generic GPU rendering applications. These instances are equipped with the latest AMD S7150 GPUs. A single GPU has 2,048 processor cores and provides a single-precision floating-point compute capacity of 3.77 TFLOPS. With powerful computing capacity and elastic expansion based on your business needs, GA2 instances are optimal for high-performance rendering and computing applications.

  • This instance type is only available to users on the whitelist for now. Contact your pre-sales manager for the permission to purchase these instances.

Application Scenarios

GA2 instances are especially suitable for GPU computing workloads that require high-performance rendering and excellent graphics processing capabilities.

  • Graphic rendering scenarios such as 3D modeling, rendering, multimedia encoding/decoding, and nonlinear editing

  • Business scenarios that require a small amount of virtual GPU resources for optimal graphics performance, such as cloud gaming

Features

  • AMD FirePro™ S7150 GPU, with single-precision floating-point compute capacity of up to 3.77 TFlops for a single GPU

  • Intel Xeon E5-2680v4 2.5 GHz processors; high-speed DDR4 memories

Requirements

  • Both prepaid and postpaid billing methods are available for GA2 instances.

  • GA2 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for GA2 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
GA2.2XLARGE16 | 8 | 16 | 1.5

FPGA FX2

FPGA FX2 is an FPGA-based computing service equipped with a Xilinx KU115 accelerator. It is designed to accelerate compute-intensive algorithms, and achieve high throughput, low latency, and hardware programming. We recommend that you use it for high-performance computing services, such as genomics research, financial analysis, image compression and real-time video processing.

  • This instance type is only available to users on the whitelist for now. Contact your pre-sales manager for the permission to purchase these instances.

Application Scenarios

Ideal for scenarios that require large amounts of parallel computing and high throughput

  • Deep learning and inference scenarios such as natural language processing and image classification

  • Scenarios that require a large amount of analysis and computing such as genomics research and financial analysis

  • Large-scale image processing scenarios such as image compression and real-time video processing

Features

  • Use Xilinx Kintex UltraScale KU115 FPGA

  • Intel Xeon E5-2680v4 2.5 GHz processors; high-speed DDR4 memories

Requirements

  • The prepaid billing method is available for FX2 instances.

  • FX2 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for FX2 instances. Please see the instance specifications below.

  • The configuration of FX2 instances cannot be changed.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
FX2.3XLARGE60 | 14 | 60 | 2.5
FX2.7XLARGE120 | 28 | 120 | 5.0
FX2.14XLARGE240 | 56 | 240 | 10.0

Batch-based Instance Family

With the lowest per core-hour cost, this family is suitable for compute-intensive applications that frequently use super-large computing nodes for short periods, such as rendering, gene analysis, and crystal pharmacy.

  • This instance type is only available to users on the whitelist for now. Contact your pre-sales manager for the permission to purchase these instances.

Batch-based Computing BC1

The Batch-based Computing BC1 instance offers ultra-high cost performance. It uses core-hour billing that is accurate to the second, with prices as low as 0.1 CNY per core-hour. It is flexible and ready to use, and can be terminated when you no longer need it. It supports a variety of specifications to meet the needs of compute-intensive users who frequently use super-large computing nodes for short periods, such as rendering, gene analysis, and crystal pharmacy.
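
As a rough illustration of core-hour billing, the sketch below estimates the cost of a short job using the quoted floor price of 0.1 CNY per core-hour; actual prices depend on region and specification.

# Illustrative only: 0.1 CNY/core-hour is the quoted floor price, not a guaranteed rate.
cores = 24                 # e.g. one BC1.6XLARGE96 node
seconds = 3 * 3600         # a three-hour rendering job
price_per_core_hour = 0.1  # CNY
cost = cores * (seconds / 3600.0) * price_per_core_hour
print("Estimated cost: {:.2f} CNY".format(cost))  # 7.20 CNY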

Application Scenarios

  • Video/film rendering

  • Genomics, crystal pharmacy, etc.

  • HPC computing-intensive business such as weather forecasting and astronomy

Features

  • Cost-effective with the lowest price per hour among all instances with the same specification

  • 1:4 processor to memory ratio

Requirements

  • Only the postpaid billing method is available for batch-based computing BC1 instances.

  • BC1 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for BC1 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
BC1.LARGE16 | 4 | 16 | 2.0
BC1.3XLARGE48 | 12 | 48 | 5.0
BC1.6XLARGE96 | 24 | 96 | 10.0

Batch-based Computing BS1

The Batch-based Computing BS1 instance offers ultra-high cost performance. It uses core-hour billing that is accurate to the second, with prices as low as 0.09 CNY per core-hour. It is flexible and ready to use, and can be terminated when you no longer need it. It supports a variety of specifications to meet the needs of compute-intensive users who frequently use super-large computing nodes for short periods, such as rendering, gene analysis, and crystal pharmacy.

Features

  • Cost-effective with the lowest price per hour among all instances with the same specification

  • 1:2 processor to memory ratio

Application Scenarios

  • Video/film rendering

  • Genomics, crystal pharmacy, etc.

  • HPC computing-intensive business such as weather forecasting and astronomy

Requirements

  • Only the postpaid billing method is available for batch-based computing BS1 instances.

  • BS1 instances can be launched in basic networks and VPCs.

  • Configuration purchase is available for BS1 instances. Please see the instance specifications below.

Model | vCPU (core) | Memory (GB) | Private network bandwidth (Gbps)
BS1.LARGE8 | 4 | 8 | 2.0
BS1.3XLARGE24 | 12 | 24 | 5.0
BS1.6XLARGE48 | 24 | 48 | 10.0