Ship to Tencent Cloud CLS
Ship to specified HTTP server
If you have not yet built your own data analysis system, Tencent Cloud provides Log Service (CLS) to help you complete the collection, shipping, and search analysis of real-time logs in a one-stop manner, reducing your development and maintenance costs. You can follow the steps below to ship real-time logs to Tencent Cloud CLS service:
Prerequisites
You have already activated Cloud Log Service (CLS) and granted Tencent Cloud EdgeOne permission to create logsets.
Note:
It is suggested that you enable the service with the root account. If you use a sub-account or collaborator account, you need to grant it the relevant permissions first.
Directions
Create a shipping task
1. In Step ③, select the destination as Tencent Cloud Log Service (CLS) and click Next.
2. Fill in the relevant parameter information. The parameters are described as follows:
Region: Select the target region for shipping.
Target set name: Select the logset in the target region.
Note:
If this is empty or you need to create a new logset, click Create to create a logset in the selected region.
Log topic name: Enter 1-200 characters; allowed characters are a-z, A-Z, 0-9, _, and -.
Log retention time: Enter a positive integer between 1 and 366 (days).
Related references
Log search
Log search supports various types of search analysis methods and chart analysis forms. For detailed explanations, please refer to Log Search. EdgeOne performs log search based on shipping tasks. On the Real-time logs page, select the shipping task you want to search, and click Search to enter the log search page.
You can later manage logsets and other modules through Tencent Cloud Log Service (CLS), such as modifying the logset name.
Logset
A logset (Logset) is a project management unit of Tencent Cloud Log Service (CLS), used to distinguish logs of different projects, and a logset corresponds to a collection. Tencent Cloud EdgeOne logset has the following basic attribute information:
Region: The region where the logset resides.
Logset name: The name of the logset.
Log retention time: The retention period of data in the current logset.
Creation time: Logset creation time.
Log topic
A log topic (Topic) is a basic management unit of Tencent Cloud Log Service (CLS). A logset can contain multiple log topics. A log topic corresponds to a type of application or service, and it is recommended to collect the same type of logs from different machines into the same log topic. For example, a business project has three types of logs: operation logs, application logs, and access logs, and each type can create a corresponding log topic.
The log service system manages different log data of users based on log topics, and each log topic can be configured with different data sources, different index rules, and delivery rules. Therefore, the log topic is the basic unit for configuring and managing log data in the log service. After creating a log topic, you need to configure the relevant rules to effectively collect logs and use search analysis and delivery functions as expected.
From a functional perspective, log topics mainly provide:
Collect logs to log topics.
Store and manage logs in units of log topics.
Search and analyze logs in units of log topics.
Deliver logs from log topics to other platforms.
Download and consume logs from log topics.
Note
The above information is excerpted from the Cloud Log Service (CLS) product documentation. Please refer to the explanations on the Log Service (CLS) side. Each real-time log shipping task shipped to Tencent Cloud Log Service (CLS) will ship the logs of the selected subdomains to a corresponding log topic.
If you currently have a self-built data source and need to ship real-time logs to a compatible Amazon Simple Storage Service bucket, you can refer to the following steps to continue the operation:
Note:
Currently, only site acceleration logs and L4 proxy logs can be shipped to S3-compatible storage buckets.
Directions
1. In Step ③, select the destination as S3 compatible and click Next.
2. Fill in the corresponding destination parameters:
Endpoint URL: A URL that does not contain the bucket name or path, for example: https://storage.googleapis.com or https://s3.ap-northeast-2.amazonaws.com.
Bucket Region: The region where the bucket is located, for example: ap-northeast-2.
Bucket: The bucket name and the corresponding log storage path, for example: your_bucket_name/EO-logs/.
File Compression: Whether to compress log files with gzip. If checked, the shipped log files are gzip-compressed and the file name is changed to filename.log.gz.
SecretId: Access Key ID used to access the bucket.
SecretKey: Secret key used to access the bucket.
Note:
File name description: Logs are stored in the specified bucket path with file names in the format UploadTime_Random.log, archived into folders by date (UTC+00:00), for example: logs/20230331/20230331T185917Z_2aadf5ce.log.
UploadTime: Log file upload time, using ISO-8601 format, UTC+00:00 timezone.
Random: Random characters. In cases where there are large log volumes, there may be multiple log files with the same upload time, and this random character string is used to identify different files.
3. Click Push. After the real-time log shipping task is issued, EdgeOne ships a test file to the target bucket path to verify connectivity. For example, a file named 1699874755_edgeone_push_test.txt containing the fixed string test will be shipped.
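For reference, the file name format described above can be parsed programmatically, for example to group downloaded log objects by date or upload time. A minimal sketch, assuming the UploadTime_Random.log naming shown above (the helper and regular expression are illustrative, not part of EdgeOne):

```python
import re
from datetime import datetime, timezone

# Matches keys such as "logs/20230331/20230331T185917Z_2aadf5ce.log" (or .log.gz).
KEY_PATTERN = re.compile(
    r"(?P<date>\d{8})/(?P<ts>\d{8}T\d{6}Z)_(?P<rand>[0-9a-f]+)\.log(\.gz)?$"
)

def parse_log_key(key):
    """Split an EO log object key into date folder, upload time, and random suffix."""
    m = KEY_PATTERN.search(key)
    if not m:
        return None
    upload_time = datetime.strptime(m.group("ts"), "%Y%m%dT%H%M%SZ").replace(
        tzinfo=timezone.utc
    )
    return {
        "date_folder": m.group("date"),
        "upload_time": upload_time,
        "random": m.group("rand"),
    }

info = parse_log_key("logs/20230331/20230331T185917Z_2aadf5ce.log")
```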
If you currently have a self-built data source, EdgeOne can call the backend interface address you provided by an HTTP POST request, transmitting the logs to your designated server within the HTTP body.
Note:
1. HTTP is plaintext transmission, so it is suggested that you use an encrypted HTTPS address for the API.
2. To further enhance verification of the request source, we provide a request authentication scheme. You can fill in the relevant authentication information in the shipping destination configuration; the authentication algorithm is described in Request Authentication Algorithm.
3. The shipped logs are formatted as an array of JSON objects, where each JSON object is one log.
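Points 2 and 3 of the note above can be illustrated end to end. A minimal sketch of what a gzip-compressed push body looks like and how a receiver restores the JSON array of log objects (the field names and values are illustrative):

```python
import gzip
import json

# Sender side: the body is a JSON array with one object per log.
logs = [
    {"ClientIP": "203.0.113.10", "RequestMethod": "GET"},
    {"ClientIP": "203.0.113.11", "RequestMethod": "POST"},
]
body = gzip.compress(json.dumps(logs).encode("utf-8"))

# Receiver side: when the request carries content-encoding = gzip,
# decompress the body before parsing the JSON array.
received = json.loads(gzip.decompress(body).decode("utf-8"))
```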
Operation Guide
Create a shipping task
1. In Step ③, select the destination as HTTP service (POST) and click Next.
2. Fill in the relevant destination and parameter information, with the following parameter descriptions:
API address: Enter your data source API address, e.g., https://www.example.com/log
File compression: To reduce the size of log files and save traffic costs, you can enable file compression by checking "Compress log files with gzip". EdgeOne will use gzip format to compress logs before transmission and will add an HTTP request header content-encoding = gzip to indicate the compression format.
Origin authentication: When encryption authentication is selected, the shipped logs carry authentication information for the origin server to verify, ensuring the security of the data source identity.
Custom HTTP request headers: Add the HTTP headers that EdgeOne should carry when initiating a request. For example, to identify the log source vendor as EdgeOne, you can add a header log-source = EdgeOne.
3. Click Ship to issue the real-time log shipping task.
4. During the configuration phase of the real-time log shipping task, EdgeOne sends a piece of empty data (all fields set to "-") to the API address to verify interface connectivity. The data format is as follows:
[{
"BotClassAccountTakeOver": "-",
"BotClassAttacker": "-",
"BotClassMaliciousBot": "-",
"BotClassProxy": "-",
"BotClassScanner": "-",
"ClientDeviceType": "-",
"ClientIP": "-",
"ClientISP": "-",
"ClientRegion": "-",
"ClientState": "-",
"EdgeCacheStatus": "-",
"EdgeEndTime": "-",
"EdgeInternalTime": "-",
"EdgeResponseBodyBytes": "-",
"EdgeResponseBytes": "-",
"EdgeResponseStatusCode": "-",
"EdgeResponseTime": "-",
"EdgeServerID": "-",
"EdgeServerIP": "-",
"EdgeSeverRegion": "-",
"LogTime": "-",
"OriginDNSResponseDuration": "-",
"OriginIP": "-",
"OriginRequestHeaderSendDuration": "-",
"OriginResponseHeaderDuration": "-",
"OriginResponseStatusCode": "-",
"OriginSSLProtocol": "-",
"OriginTCPHandshakeDuration": "-",
"OriginTLSHandshakeDuration": "-",
"ParentRequestID": "-",
"RemotePort": "-",
"RequestBytes": "-",
"RequestHost": "-",
"RequestID": "-",
"RequestMethod": "-",
"RequestProtocol": "-",
"RequestRange": "-",
"RequestReferer": "-",
"RequestSSLProtocol": "-",
"RequestTime": "-",
"RequestUA": "-",
"RequestUrl": "-",
"RequestUrlQueryString": "-"
}]
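A receiver may want to distinguish this connectivity check from real log pushes. A minimal heuristic sketch, assuming (as shown above) that every field of the verification record is set to "-" (the function name is illustrative):

```python
def is_connectivity_check(payload):
    """Return True when every field of every record is the placeholder "-"."""
    return bool(payload) and all(
        value == "-" for record in payload for value in record.values()
    )

probe = [{"ClientIP": "-", "RequestMethod": "-"}]
real = [{"ClientIP": "203.0.113.10", "RequestMethod": "GET"}]
```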
Related References
Request Authentication Algorithm
If you selected encryption authentication in the origin authentication of the shipping destination, you can enter a custom SecretId and SecretKey. EdgeOne will append the signature parameters auth_key and access_key to the request URL. The signature algorithm is as follows:
1. Request URL composition
As shown below, the request URL carries auth_key and access_key after the "?".
http://DomainName[:port]/[uri]?auth_key=timestamp-rand-md5hash&access_key=SecretID
Parameter description:
timestamp: The current time of the request, as a 10-digit Unix timestamp in seconds.
rand: A random number.
access_key: Identifies the API requester, i.e., your custom-configured SecretId.
SecretKey: Your custom-configured SecretKey, with a fixed length of 32 characters.
uri: The resource identifier, for example: /access_log/post.
md5hash: md5hash = md5sum(string_to_sign), where string_to_sign = "uri-timestamp-rand-SecretKey". The verification string computed with the MD5 algorithm: a mixture of digits 0-9 and lowercase letters a-z, with a fixed length of 32 characters.
2. Calculation example
Assuming the filled in parameters are:
API address: https://www.example.com/access_log/post
SecretID = YourID
SecretKey = YourKey
uri = /access_log/post
timestamp = 1571587200
rand = 0
string_to_sign = "/access_log/post-1571587200-0-YourKey"
Based on this string, calculate
md5hash=md5sum("/access_log/post-1571587200-0-YourKey")=1f7ffa7bff8f06bbfbe2ace0f14b7e16
The final push request URL is:
https://www.example.com/access_log/post?auth_key=1571587200-0-1f7ffa7bff8f06bbfbe2ace0f14b7e16&access_key=YourID
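The calculation above can be reproduced in a few lines of Python, using the example parameters (uri, timestamp, rand, SecretKey) from this section:

```python
import hashlib

uri = "/access_log/post"
timestamp = 1571587200
rand = 0
secret_key = "YourKey"

# string_to_sign = "uri-timestamp-rand-SecretKey"
string_to_sign = "%s-%d-%d-%s" % (uri, timestamp, rand, secret_key)
md5hash = hashlib.md5(string_to_sign.encode("utf-8")).hexdigest()

# Assemble the final push request URL.
url = "https://www.example.com%s?auth_key=%d-%d-%s&access_key=%s" % (
    uri, timestamp, rand, md5hash, "YourID")
```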
After your service receives the push request, it extracts the value of auth_key and splits it to obtain timestamp, rand, and md5hash. You can first check whether the timestamp has expired (an expiration time of 300s is suggested), then assemble the string to sign according to the rules above using your SecretKey, and compare the MD5 result with the md5hash value in auth_key. If they match, the request is authenticated.
3. Server-side authentication request resolution code example
import hashlib
import time

from flask import Flask, request

app = Flask(__name__)

def get_rsp(msg, result={}, code=0):
    return {
        "respCode": code,
        "respMsg": msg,
        "result": result
    }

def get_secret_key(access_key):
    # Look up the SecretKey configured for this SecretId (fixed value here for demonstration).
    return "secret_key"

@app.route("/access_log/post", methods=['POST'])
def access_log():
    if request.method == 'POST':
        if request.content_type.startswith('application/json'):
            current_time_ts, rand_num, md5hash = request.args.get("auth_key").split("-")
            # Reject requests whose timestamp is older than 300 seconds.
            if time.time() - int(current_time_ts) > 300:
                return get_rsp(msg="The request is out of time", code=-1)
            access_key = request.args.get("access_key")
            secret_key = get_secret_key(access_key)
            # Recompute the signature from uri-timestamp-rand-SecretKey and compare.
            raw_str = "%s-%s-%s-%s" % (request.path, current_time_ts, rand_num, secret_key)
            auth_md5hash = hashlib.md5(raw_str.encode("utf-8")).hexdigest()
            if auth_md5hash == md5hash:
                # Authentication passed; decompress the body here if it is gzip-encoded.
                if request.headers.get('content-encoding') == 'gzip':
                    pass
                return get_rsp("ok")
            return get_rsp(msg="Authentication failed", code=-1)
        return get_rsp(msg="Please use content_type by application/json", code=-1)
    return get_rsp(msg="The request method not find, method == %s" % request.method, code=-1)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8888, debug=True)
package main

import (
	"context"
	"crypto/md5"
	"fmt"
	"log"
	"net/http"
	"os"
	"os/signal"
	"strings"
	"syscall"
)

func main() {
	mux := http.NewServeMux()
	mux.Handle("/access_log/post", &logHandler{})
	server := &http.Server{
		Addr:    ":5000",
		Handler: mux,
	}
	// Shut the server down gracefully on SIGINT/SIGTERM.
	done := make(chan os.Signal, 1)
	signal.Notify(done, os.Interrupt, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		<-done
		if err := server.Shutdown(context.Background()); err != nil {
			log.Fatal("Shutdown server:", err)
		}
	}()
	err := server.ListenAndServe()
	if err != nil {
		if err == http.ErrServerClosed {
			log.Print("Server closed under request")
		} else {
			log.Fatal("Server closed unexpected")
		}
	}
}

type logHandler struct{}

func (*logHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if r.Method == "POST" {
		query := r.URL.Query()
		authKey := query.Get("auth_key")
		accessKey := query.Get("access_key")
		authKeys := strings.Split(authKey, "-")
		if len(authKeys) == 3 {
			currentTimeTs := authKeys[0]
			randNum := authKeys[1]
			md5Hash := authKeys[2]
			secretKey := getSecretKey(accessKey)
			// Recompute the signature from uri-timestamp-rand-SecretKey and compare.
			authStr := fmt.Sprintf("%s-%s-%s-%s", "/access_log/post", currentTimeTs, randNum, secretKey)
			data := []byte(authStr)
			has := md5.Sum(data)
			authMd5 := fmt.Sprintf("%x", has)
			if authMd5 == md5Hash {
				if r.Header.Get("Content-Encoding") == "gzip" {
					// Authentication passed; decompress the gzip body here.
				}
			}
		} else {
			// Malformed auth_key; reject the request.
		}
	}
}

func getSecretKey(accessKey string) string {
	// Look up the SecretKey configured for this SecretId (fixed value here for demonstration).
	if accessKey != "" {
		return "secret_key"
	}
	return ""
}
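To exercise either server locally, a small client sketch can build a signed URL the same way the shipping service does. The function name, host, and port are illustrative, not part of EdgeOne:

```python
import hashlib
import time

def build_push_url(base_url, uri, secret_id, secret_key, rand=0, now=None):
    # Assemble auth_key = timestamp-rand-md5hash exactly as described
    # in the Request Authentication Algorithm section.
    ts = int(now if now is not None else time.time())
    string_to_sign = "%s-%d-%d-%s" % (uri, ts, rand, secret_key)
    md5hash = hashlib.md5(string_to_sign.encode("utf-8")).hexdigest()
    return "%s%s?auth_key=%d-%d-%s&access_key=%s" % (
        base_url, uri, ts, rand, md5hash, secret_id)

url = build_push_url("http://127.0.0.1:8888", "/access_log/post",
                     "YourID", "YourKey", now=1571587200)
```

A POST request sent to this URL (with a JSON-array body) should pass the signature check in the server examples above, provided the server's configured SecretKey matches.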