Using a GPU Node

Last updated: 2019-08-12 20:03:13


Operation Scenario

If your business involves scenarios such as deep learning or high-performance computing, you can enable GPU support in TKE to quickly run GPU containers. To activate the GPU feature, apply by submitting a ticket.
There are two ways to add a GPU node to a cluster: creating a new GPU CVM instance or adding an existing GPU CVM instance, both of which are described under Steps below.

Usage Restrictions

  • GPU support must be activated separately by submitting a ticket.
  • When adding a node, select a GPU model and a GPU-related image.
  • TKE supports GPU scheduling only for clusters running Kubernetes 1.8 or later.
  • GPUs are not shared among containers. A container can request one or more whole GPUs, but it cannot request a fraction of a GPU. See the example after this list.
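
For reference, a workload consumes GPUs by requesting them in the container's resource limits. The sketch below is a minimal, illustrative example: it assumes the standard Kubernetes extended resource name nvidia.com/gpu and a generic CUDA image, neither of which is specific to TKE, so adjust both to match your cluster.

    # Minimal sketch: a Pod that requests one whole GPU.
    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-test
    spec:
      restartPolicy: Never
      containers:
        - name: cuda-container
          image: nvidia/cuda:10.0-base      # illustrative image
          command: ["nvidia-smi"]           # lists the GPUs visible to the container
          resources:
            limits:
              nvidia.com/gpu: 1             # whole GPUs only; fractional requests are rejected

Because GPUs cannot be shared, a value such as 0.5 for nvidia.com/gpu is invalid; requests must be whole numbers.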

Steps

Creating a GPU CVM Instance

During the creation process, please note the following two points:
  • On the "Select a model" page, set "Model" in "Node model" to a GPU model. See the figure below:
  • On the "CVM configuration" page, TKE will automatically perform the initial processes such as GPU driver installation according to the selected model, and you do not need to care about the basic image.

Adding an Existing GPU CVM Instance

During the addition process, please note the following two points:
  • On the "Select a node" page, select the existing GPU node. See the figure below:
  • On the "CVM configuration" page, TKE will automatically perform the initial processes such as GPU driver installation according to the selected model, and you don't need to take care of the basic image.