Powerful performance, limitless scale
Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data. With InfluxDB, the #1 time series platform built to scale with Telegraf.
See ways to get started
Input and output integration overview
The HTTP plugin collects metrics from specified HTTP endpoints, handling a variety of data formats and authentication methods.
Telegraf's SQL plugin stores metrics in a SQL database. When configured for Microsoft SQL Server, it supports the server-specific DSN format and schema requirements, enabling seamless integration with SQL Server.
Integration details
HTTP
The HTTP plugin collects metrics from one or more HTTP(S) endpoints, which should expose metrics formatted in one of the supported input data formats. It also supports secrets from secret-stores for the various authentication options and honors the globally supported configuration settings.
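For the secret-store support mentioned above, credentials such as the bearer token can be referenced from a configured secret store instead of appearing in the file as plain text. The following is a minimal sketch, not part of the sample configuration below: the store id mystore, the secret key http_token, and the use of the OS keyring secret-store plugin are all assumptions that depend on your Telegraf version and environment.
[[secretstores.os]]
## Hypothetical OS keyring secret store; "mystore" is an arbitrary id.
id = "mystore"
[[inputs.http]]
urls = ["http://localhost/metrics"]
## Reference the bearer token from the secret store instead of hard-coding it.
token = "@{mystore:http_token}"
data_format = "influx"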
Microsoft SQL Server
Telegraf's Microsoft SQL Server SQL output plugin is designed to capture and store metric data by dynamically creating tables and columns that match the structure of the incoming data. This integration uses the go-mssqldb driver, which follows the SQL Server connection protocol through a DSN that includes the server, port, and database details. Although the driver is considered experimental due to limited unit tests, it provides solid support for dynamic schema generation and data insertion, enabling detailed, time-stamped records of system performance. Despite its experimental status, this flexibility makes it a valuable tool for environments that require reliable and granular metric logging.
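Because every incoming measurement becomes its own table, it is often useful to limit which metrics reach this output. A minimal sketch, assuming you only want to persist measurements named cpu and mem; namepass is standard Telegraf metric filtering, and the DSN is the same placeholder used in the configuration below.
[[outputs.sql]]
## Only persist selected measurements; each one becomes its own SQL Server table.
namepass = ["cpu", "mem"]
driver = "mssql"
data_source_name = "sqlserver://username:password@localhost:1433?database=telegraf"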
Configuration
HTTP
[[inputs.http]]
## One or more URLs from which to read formatted metrics.
urls = [
"http://localhost/metrics",
"http+unix:///run/user/420/podman/podman.sock:/d/v4.0.0/libpod/pods/json"
]
## HTTP method
# method = "GET"
## Optional HTTP headers
# headers = {"X-Special-Header" = "Special-Value"}
## HTTP entity-body to send with POST/PUT requests.
# body = ""
## HTTP Content-Encoding for write request body, can be set to "gzip" to
## compress body or "identity" to apply no encoding.
# content_encoding = "identity"
## Optional Bearer token settings to use for the API calls.
## Use either the token itself or the token file if you need a token.
# token = "eyJhbGc...Qssw5c"
# token_file = "/path/to/file"
## Optional HTTP Basic Auth Credentials
# username = "username"
# password = "pa$$word"
## OAuth2 Client Credentials. The options 'client_id', 'client_secret', and 'token_url' are required to use OAuth2.
# client_id = "clientid"
# client_secret = "secret"
# token_url = "https://indentityprovider/oauth2/v1/token"
# scopes = ["urn:opc:idm:__myscopes__"]
## HTTP Proxy support
# use_system_proxy = false
# http_proxy_url = ""
## Optional TLS Config
## Set to true/false to enforce TLS being enabled/disabled. If not set,
## enable TLS only if any of the other options are specified.
# tls_enable =
## Trusted root certificates for server
# tls_ca = "/path/to/cafile"
## Used for TLS client certificate authentication
# tls_cert = "/path/to/certfile"
## Used for TLS client certificate authentication
# tls_key = "/path/to/keyfile"
## Password for the key file if it is encrypted
# tls_key_pwd = ""
## Send the specified TLS server name via SNI
# tls_server_name = "kubernetes.example.com"
## Minimal TLS version to accept by the client
# tls_min_version = "TLS12"
## List of ciphers to accept, by default all secure ciphers will be accepted
## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
## Use "all", "secure" and "insecure" to add all support ciphers, secure
## suites or insecure suites respectively.
# tls_cipher_suites = ["secure"]
## Renegotiation method, "never", "once" or "freely"
# tls_renegotiation_method = "never"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Optional Cookie authentication
# cookie_auth_url = "https://localhost/authMe"
# cookie_auth_method = "POST"
# cookie_auth_username = "username"
# cookie_auth_password = "pa$$word"
# cookie_auth_headers = { Content-Type = "application/json", X-MY-HEADER = "hello" }
# cookie_auth_body = '{"username": "user", "password": "pa$$word", "authenticate": "me"}'
## cookie_auth_renewal not set or set to "0" will auth once and never renew the cookie
# cookie_auth_renewal = "5m"
## Amount of time allowed to complete the HTTP request
# timeout = "5s"
## List of success status codes
# success_status_codes = [200]
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"
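As a concrete example of changing the data format, the sketch below parses a JSON response instead of line protocol. The endpoint URL and the tag_keys values are hypothetical; data_format = "json" and tag_keys are options of the classic JSON parser described in DATA_FORMATS_INPUT.md.
[[inputs.http]]
urls = ["http://localhost:8080/stats"]
## Parse the response body as JSON; numeric values become metric fields.
data_format = "json"
## Treat these JSON keys as tags rather than fields (hypothetical key names).
tag_keys = ["host", "region"]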
Microsoft SQL Server
[[outputs.sql]]
## Database driver
## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
## sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
driver = "mssql"
## Data source name
## For Microsoft SQL Server, the DSN typically includes the server, port, username, password, and database name.
## Example DSN: "sqlserver://username:password@localhost:1433?database=telegraf"
data_source_name = "sqlserver://username:password@localhost:1433?database=telegraf"
## Timestamp column name
timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE} ({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## Note: SQL Server does not support LIMIT; a T-SQL-compatible existence check such as TOP is used here instead.
table_exists_template = "SELECT TOP 1 1 FROM {TABLE}"
## Initialization SQL (optional)
init_sql = ""
## Maximum amount of time a connection may be idle. "0s" means connections are never closed due to idle time.
connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections are never closed due to age.
connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
## Metric type to SQL type conversion
## You can customize the mapping if needed.
#[outputs.sql.convert]
# integer = "INT"
# real = "DOUBLE"
# text = "TEXT"
# timestamp = "TIMESTAMP"
# defaultvalue = "TEXT"
# unsigned = "UNSIGNED"
# bool = "BOOL"
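The conversion defaults above use generic type names; on SQL Server you will likely want T-SQL column types instead. The mapping below is a hedged suggestion, not the plugin's default: in particular, TIMESTAMP in T-SQL is a rowversion type, so DATETIME2 is used for time values.
[outputs.sql.convert]
integer = "BIGINT"
real = "FLOAT"
text = "NVARCHAR(MAX)"
timestamp = "DATETIME2"
defaultvalue = "NVARCHAR(MAX)"
bool = "BIT"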
Input and output integration examples
HTTP
- Collecting metrics from localhost: The plugin can fetch metrics from an HTTP endpoint such as http://localhost/metrics, enabling straightforward local monitoring.
- Using Unix domain sockets: You can collect metrics from a service listening on a Unix domain socket by using the http+unix scheme, for example http+unix:///path/to/service.sock:/api/endpoint (see the sketch after this list).
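A minimal sketch of the Unix-socket case, reusing the podman URL that appears in the sample configuration above; the socket path and API version are environment-specific, and the json data format is an assumption about the response body.
[[inputs.http]]
## Query the podman API over its Unix domain socket.
urls = ["http+unix:///run/user/420/podman/podman.sock:/d/v4.0.0/libpod/pods/json"]
data_format = "json"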
Microsoft SQL Server
- Enterprise application monitoring: Use the plugin to capture detailed performance metrics from enterprise applications running on SQL Server. This setup lets IT teams analyze system performance, track transaction times, and identify bottlenecks in complex multi-tier environments.
- Dynamic infrastructure auditing: Deploy the plugin to build a dynamic audit log of infrastructure changes and performance metrics in SQL Server. This use case suits organizations that need real-time monitoring and historical analysis of system performance for compliance and optimization.
- Automated performance benchmarking: Use the plugin to continuously record and analyze performance metrics for SQL Server databases. This enables automated benchmarking that compares historical data against current performance, helping to quickly identify anomalies or regressions in service (see the sketch after this list).
- Integrated DevOps dashboards: Integrate the plugin with DevOps monitoring tools to feed real-time metrics from SQL Server into centralized dashboards. This provides a holistic view of application health, letting teams correlate SQL Server performance with application-level events for faster troubleshooting and proactive maintenance.
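For the benchmarking and dashboard use cases, one possible pipeline (a sketch, not a definitive setup) pairs Telegraf's separate inputs.sqlserver plugin with this output so that performance counters are written back into a dedicated metrics database. The connection strings, credentials, and the metrics database name are all hypothetical.
[[inputs.sqlserver]]
## Hypothetical ADO-style connection string for the monitored instance.
servers = ["Server=localhost;Port=1433;User Id=telegraf;Password=changeme;app name=telegraf;"]
[[outputs.sql]]
driver = "mssql"
## Store the collected counters in a separate metrics database.
data_source_name = "sqlserver://telegraf:changeme@localhost:1433?database=metrics"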
Feedback
Thank you for being part of our community! If you have any general feedback or find any errors on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB Community Slack.