This document describes how to streamline TMP metric collection to avoid unnecessary expenses.
Before configuring monitoring collection items, note the following:
TMP offers more than 100 free basic monitoring metrics, as listed in Free Metrics in Pay-as-You-Go Mode.
TMP is currently billed by the number of monitoring data points. We recommend optimizing your collection configuration to collect only the metrics you need and filter out the rest. This reduces both costs and the overall volume of reported data. For more information on the billing mode and Tencent Cloud resource usage, see here.
The following describes how to add filters for ServiceMonitors, PodMonitors, and RawJobs to streamline custom metrics.
1. Log in to the TKE console and select TMP on the left sidebar.
2. On the instance list page, select the target instance to enter its details page.
3. On the Cluster Monitoring page, click Data Collection Configuration on the right of the cluster to enter the collection configuration list page.
4. Click on the right of the instance to view the metric details.
ServiceMonitors and PodMonitors use the same filtering fields, so this document uses a ServiceMonitor as an example.
Sample for ServiceMonitor:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/version: 1.9.7
  name: kube-state-metrics
  namespace: kube-system
spec:
  endpoints:
  - bearerTokenSecret:
      key: ""
    interval: 15s # Collection interval. You can increase it to reduce data storage costs; for example, setting it to `300s` for less important metrics reduces the amount of monitoring data collected by 20 times.
    port: http-metrics
    scrapeTimeout: 15s # Collection timeout. TMP requires that this value not exceed the collection interval, i.e., `scrapeTimeout` <= `interval`.
  jobLabel: app.kubernetes.io/name
  namespaceSelector: {}
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
```
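The 20x figure in the `interval` comment follows directly from the ratio of the two scrape intervals. A quick sketch of the arithmetic (assuming one data point per series per scrape):

```python
# Data points produced per day by a single time series at two scrape intervals.
seconds_per_day = 24 * 60 * 60

points_at_15s = seconds_per_day // 15    # scraped every 15s
points_at_300s = seconds_per_day // 300  # scraped every 300s

reduction = points_at_15s / points_at_300s  # 300 / 15 = 20x fewer data points
```

Since TMP bills by the number of monitoring data points, lengthening the interval for low-value metrics scales the cost of those series down by the same factor.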
For example, to collect only the `kube_node_info` and `kube_node_role` metrics, add the `metricRelabelings` field to the endpoints list of the ServiceMonitor. Note that the field is `metricRelabelings`, not `relabelings`.

Sample with `metricRelabelings`:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/version: 1.9.7
  name: kube-state-metrics
  namespace: kube-system
spec:
  endpoints:
  - bearerTokenSecret:
      key: ""
    interval: 15s # Collection interval. You can increase it to reduce data storage costs; for example, setting it to `300s` for less important metrics reduces the amount of monitoring data collected by 20 times.
    port: http-metrics
    scrapeTimeout: 15s
    # The following four lines are added:
    metricRelabelings: # Each collected item is processed as follows.
    - sourceLabels: ["__name__"] # The label to check. `__name__` is the metric name; any other label on the item can also be used.
      regex: kube_node_info|kube_node_role # The regex the label value must match. Here, `__name__` must match `kube_node_info` or `kube_node_role`.
      action: keep # Keep the item if it matches the above conditions; otherwise, drop it.
  jobLabel: app.kubernetes.io/name
  namespaceSelector: {}
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
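The `keep` action can be illustrated outside Prometheus. The sketch below is a simplified model of the relabeling step (Prometheus joins the `sourceLabels` values with `;` and requires the regex to match the full joined value), applied to a few hypothetical samples:

```python
import re

def keep(labels, source_labels, regex):
    """Simplified model of a metricRelabelings rule with action: keep.

    Joins the values of source_labels with ';' (Prometheus's default
    separator) and keeps the item only if the regex matches the whole value.
    """
    value = ";".join(labels.get(name, "") for name in source_labels)
    return re.fullmatch(regex, value) is not None

# Hypothetical scraped samples; only their label sets matter here.
samples = [
    {"__name__": "kube_node_info", "node": "node-1"},
    {"__name__": "kube_node_role", "node": "node-1"},
    {"__name__": "kube_pod_info", "pod": "pod-1"},  # filtered out
]

kept = [s for s in samples
        if keep(s, ["__name__"], "kube_node_info|kube_node_role")]
```

Only the two node metrics survive the filter; `kube_pod_info` is dropped before it is stored, so it never counts toward billed data points.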
Click OK.
After a cluster is associated, TMP manages all the ServiceMonitors and PodMonitors in the cluster by default. To skip monitoring for an entire namespace, label the namespace with `tps-skip-monitor: "true"` as instructed in Labels and Selectors.
TMP collects monitoring data by creating CRD resources of the ServiceMonitor and PodMonitor types in your cluster. To skip collection for specific ServiceMonitor or PodMonitor resources, label those CRD resources with `tps-skip-monitor: "true"` as instructed in Labels and Selectors.
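The label can be applied declaratively in a manifest. A minimal sketch for the namespace case, assuming a namespace named `demo` (the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    tps-skip-monitor: "true" # TMP skips all ServiceMonitors/PodMonitors in this namespace.
```

Placing the same label under `metadata.labels` of an individual ServiceMonitor or PodMonitor skips only that resource instead of the whole namespace.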