Input and Output Integration Overview
This plugin lets you collect metrics from Kafka topics in real time, enhancing the data monitoring and collection capabilities of your Telegraf setup.
The Telegraf SQL plugin lets you store Telegraf metrics directly in a MySQL database, making it easier to analyze and visualize the collected metrics.
Integration Details
Kafka
The Kafka Telegraf plugin is designed to read from Kafka topics and create metrics using supported input data formats. As a service input plugin, it continuously listens for incoming metrics and events, unlike standard input plugins that run at fixed intervals. This plugin works with features across a range of Kafka versions and can consume messages from specified topics, apply configuration such as SASL security credentials, and manage message processing through options for message offsets and consumer groups. Its flexibility in handling a variety of message formats and use cases makes it a valuable asset for applications that rely on Kafka for data ingestion.
MySQL
Telegraf's SQL output plugin is designed to write metric data seamlessly into a SQL database by dynamically creating tables and columns based on the incoming metrics. When configured for MySQL, the plugin uses go-sql-driver/mysql, which requires the ANSI_QUOTES SQL mode to be enabled so that quoted identifiers are handled correctly. With this dynamic schema creation, each metric is stored in its own table whose structure is derived from its fields and tags, providing a detailed, timestamped record of system performance. The plugin's flexibility lets it handle high-throughput environments, making it ideal for scenarios that require robust, fine-grained metric logging and historical data analysis.
Configuration
Kafka
[[inputs.kafka_consumer]]
## Kafka brokers.
brokers = ["localhost:9092"]
## Set the minimal supported Kafka version. Use a dotted version string with
## four components for 0.x releases and three components for releases from
## 1.0.0 onward. This setting enables the use of new Kafka features and APIs.
## Must be 0.10.2.0 (used as default) or greater.
## Please, check the list of supported versions at
## https://pkg.go.dev/github.com/Shopify/sarama#SupportedVersions
## ex: kafka_version = "2.6.0"
## ex: kafka_version = "0.10.2.0"
# kafka_version = "0.10.2.0"
## Topics to consume.
topics = ["telegraf"]
## Topic regular expressions to consume. Matches will be added to topics.
## Example: topic_regexps = [ "*test", "metric[0-9A-z]*" ]
# topic_regexps = [ ]
## When set this tag will be added to all metrics with the topic as the value.
# topic_tag = ""
## The list of Kafka message headers that should be passed as metric tags.
## This works only for Kafka version 0.11+; on lower versions the message
## headers are not available.
# msg_headers_as_tags = []
## The name of the Kafka message header whose value should override the metric
## name. If the same header is specified both here and in the
## msg_headers_as_tags option, it is excluded from the msg_headers_as_tags list.
# msg_header_as_metric_name = ""
## Set metric(s) timestamp using the given source.
## Available options are:
## metric -- do not modify the metric timestamp
## inner -- use the inner message timestamp (Kafka v0.10+)
## outer -- use the outer (compressed) block timestamp (Kafka v0.10+)
# timestamp_source = "metric"
## Optional Client id
# client_id = "Telegraf"
## Optional TLS Config
# enable_tls = false
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Period between keep alive probes.
## Defaults to the OS configuration if not specified or zero.
# keep_alive_period = "15s"
## SASL authentication credentials. These settings should typically be used
## with TLS encryption enabled
# sasl_username = "kafka"
# sasl_password = "secret"
## Optional SASL:
## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
## (defaults to PLAIN)
# sasl_mechanism = ""
## used if sasl_mechanism is GSSAPI
# sasl_gssapi_service_name = ""
## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
# sasl_gssapi_auth_type = "KRB5_USER_AUTH"
# sasl_gssapi_kerberos_config_path = "/"
# sasl_gssapi_realm = "realm"
# sasl_gssapi_key_tab_path = ""
# sasl_gssapi_disable_pafxfast = false
## used if sasl_mechanism is OAUTHBEARER
# sasl_access_token = ""
## SASL protocol version. When connecting to Azure EventHub set to 0.
# sasl_version = 1
## Disable Kafka metadata full fetch
# metadata_full = false
## Name of the consumer group.
# consumer_group = "telegraf_metrics_consumers"
## Compression codec represents the various compression codecs recognized by
## Kafka in messages.
## 0 : None
## 1 : Gzip
## 2 : Snappy
## 3 : LZ4
## 4 : ZSTD
# compression_codec = 0
## Initial offset position; one of "oldest" or "newest".
# offset = "oldest"
## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
# balance_strategy = "range"
## Maximum number of retries for metadata operations including
## connecting. Sets Sarama library's Metadata.Retry.Max config value. If 0 or
## unset, use the Sarama default of 3.
# metadata_retry_max = 0
## Type of retry backoff. Valid options: "constant", "exponential"
# metadata_retry_type = "constant"
## Amount of time to wait before retrying. When metadata_retry_type is
## "constant", each retry is delayed this amount. When "exponential", the
## first retry is delayed this amount, and subsequent delays are doubled. If 0
## or unset, use the Sarama default of 250 ms
# metadata_retry_backoff = 0
## Maximum amount of time to wait before retrying when metadata_retry_type is
## "exponential". Ignored for other retry types. If 0, there is no backoff
## limit.
# metadata_retry_max_duration = 0
## When set to true, this turns each bootstrap broker address into a set of
## IPs, then does a reverse lookup on each one to get its canonical hostname.
## This list of hostnames then replaces the original address list.
# resolve_canonical_bootstrap_servers_only = false
## Strategy for making connection to kafka brokers. Valid options: "startup",
## "defer". If set to "defer" the plugin is allowed to start before making a
## connection. This is useful if the broker may be down when telegraf is
## started, but if there are any typos in the broker setting, they will cause
## connection failures without warning at startup
# connection_strategy = "startup"
## Maximum length of a message to consume, in bytes (default 0/unlimited);
## larger messages are dropped
max_message_len = 1000000
## Max undelivered messages
## This plugin uses tracking metrics, which ensure messages are read to
## outputs before acknowledging them to the original broker to ensure data
## is not lost. This option sets the maximum messages to read from the
## broker that have not been written by an output.
##
## This value needs to be picked with awareness of the agent's
## metric_batch_size value as well. Setting max undelivered messages too high
## can result in a constant stream of data batches to the output, while
## setting it too low may mean the broker's messages are never flushed.
# max_undelivered_messages = 1000
## Maximum amount of time the consumer should take to process messages. If
## the debug log prints messages from sarama about 'abandoning subscription
## to [topic] because consuming was taking too long', increase this value to
## longer than the time taken by the output plugin(s).
##
## Note that the effective timeout could be between 'max_processing_time' and
## '2 * max_processing_time'.
# max_processing_time = "100ms"
## The default number of message bytes to fetch from the broker in each
## request (default 1MB). This should be larger than the majority of
## your messages, or else the consumer will spend a lot of time
## negotiating sizes and not actually consuming. Similar to the JVM's
## `fetch.message.max.bytes`.
# consumer_fetch_default = "1MB"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
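To exercise this consumer configuration end to end, you can publish one metric in InfluxDB line protocol to the telegraf topic and watch it flow through Telegraf. The following is a minimal sketch, assuming a broker at localhost:9092 and the kafka-python client; the measurement, tag, and field names are illustrative only.

# produce_metric.py -- minimal sketch, assuming the kafka-python package
# (pip install kafka-python) and a broker reachable at localhost:9092.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# One metric in InfluxDB line protocol, matching data_format = "influx" above.
# The measurement, tag, and field names here are illustrative.
producer.send("telegraf", b"cpu,host=web01 usage_idle=87.5")
producer.flush()  # block until the message has actually been sent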
MySQL
[[outputs.sql]]
## Database driver
## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
## sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
driver = "mysql"
## Data source name
## The format of the data source name is different for each database driver.
## See the plugin readme for details.
data_source_name = "username:password@tcp(host:port)/dbname"
## Timestamp column name
timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE}({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL
init_sql = "SET sql_mode='ANSI_QUOTES';"
## Maximum amount of time a connection may be idle. "0s" means connections are
## never closed due to idle time.
connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections
## are never closed due to age.
connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
## NOTE: Due to the way TOML is parsed, tables must be at the END of the
## plugin definition, otherwise additional config options are read as part of the
## table
## Metric type to SQL type conversion
## The values on the left are the data types Telegraf has and the values on
## the right are the data types Telegraf will use when sending to a database.
##
## The database values used must be data types the destination database
## understands. It is up to the user to ensure that the selected data type is
## available in the database they are using. Refer to your database
## documentation for what data types are available and supported.
#[outputs.sql.convert]
# integer = "INT"
# real = "DOUBLE"
# text = "TEXT"
# timestamp = "TIMESTAMP"
# defaultvalue = "TEXT"
# unsigned = "UNSIGNED"
# bool = "BOOL"
# ## This setting controls how unsigned integers are mapped. By default the
# ## plugin takes the integer type and appends the unsigned value to it
# ## (e.g. "INT UNSIGNED"). The other option is "literal", which uses exactly
# ## the value the user provides in the unsigned option. This is useful for a
# ## database like ClickHouse, where the unsigned type should be a value like
# ## "uint64".
# # conversion_style = "unsigned_suffix"
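Because init_sql enables ANSI_QUOTES, the plugin creates and addresses tables with double-quoted identifiers. The sketch below is one way to read such a table back, assuming the pymysql client, the DSN values from data_source_name above, and that Telegraf has already written a cpu metric with a host tag and a usage_idle field; those table and column names are assumptions, not guaranteed schema.

# read_metrics.py -- minimal sketch, assuming pymysql (pip install pymysql).
import pymysql

conn = pymysql.connect(host="host", user="username",
                       password="password", database="dbname")
with conn.cursor() as cur:
    # Match the plugin's session mode so double-quoted identifiers work.
    cur.execute("SET sql_mode='ANSI_QUOTES'")
    # "cpu", "host", and "usage_idle" are assumed example names.
    cur.execute('SELECT "timestamp", "host", "usage_idle" '
                'FROM "cpu" ORDER BY "timestamp" DESC LIMIT 5')
    for row in cur.fetchall():
        print(row)
conn.close()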
Input and Output Integration Examples
Kafka
- Real-Time Data Processing: Use the Kafka plugin to feed real-time data from Kafka topics into your monitoring system. This is especially useful for applications that need immediate feedback on performance metrics or user activity, letting businesses react more quickly to changing conditions in their environment.
- Dynamic Metric Collection: Use this plugin to dynamically adjust which metrics are captured based on events occurring in Kafka. For example, by integrating with other services, users can have the plugin reconfigure itself on the fly, ensuring that the metrics collected always match the needs of the business or application.
- Centralized Logging and Monitoring: Implement a centralized logging system that uses the Kafka Consumer Plugin to aggregate logs from multiple services into a unified monitoring dashboard. This setup helps identify issues across services and improves overall system observability and troubleshooting.
- Anomaly Detection System: Combine Kafka with machine learning algorithms for real-time anomaly detection. By continuously analyzing the streaming data, this setup can automatically identify unusual patterns, triggering alerts and mitigating potential issues more effectively. A consumer-side sketch follows this list.
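As a companion to the anomaly detection scenario above, here is a toy consumer-side sketch: it reads raw line-protocol messages from the same topic Telegraf consumes and flags values that deviate sharply from a running mean. The topic name, payload shape, field parsing, and z-score threshold are all illustrative assumptions, not a production detector.

# anomaly_sketch.py -- toy sketch, assuming kafka-python and single-field
# line-protocol payloads such as b"cpu,host=web01 usage_idle=87.5".
import statistics
from kafka import KafkaConsumer

consumer = KafkaConsumer("telegraf", bootstrap_servers="localhost:9092",
                         group_id="anomaly-sketch",
                         auto_offset_reset="latest")
window = []
for msg in consumer:
    # Naive parse: take the first field value from the line protocol;
    # multi-field messages are simply skipped by the float() failure below.
    try:
        value = float(msg.value.decode().split()[1].split("=", 1)[1])
    except (IndexError, ValueError, UnicodeDecodeError):
        continue
    window.append(value)
    if len(window) > 100:  # keep a sliding window of recent values
        window.pop(0)
    if len(window) >= 30:
        mean = statistics.fmean(window)
        stdev = statistics.stdev(window)
        # Flag values more than 3 standard deviations from the running mean.
        if stdev > 0 and abs(value - mean) > 3 * stdev:
            print(f"anomaly: {value:.2f} (mean {mean:.2f})")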
MySQL
- Real-Time Web Analytics Storage: Use the plugin to capture website performance metrics and store them in MySQL. This setup lets teams monitor user interactions, analyze traffic patterns, and dynamically adjust site features based on real-time data insights.
- IoT Device Monitoring: Use the plugin to collect metrics from a network of IoT sensors and log them to a MySQL database. This use case supports continuous monitoring of device health and performance, enabling predictive maintenance and immediate response to anomalies.
- Financial Transaction Logging: Record high-frequency financial transaction data with precise timestamps. This approach supports robust audit trails, real-time fraud detection, and comprehensive historical analysis for compliance and reporting.
- Application Performance Benchmarking: Integrate the plugin with application performance monitoring systems to log metrics to MySQL. This enables detailed benchmarking and trend analysis over time, helping organizations identify performance bottlenecks and optimize resource allocation. A trend-query sketch follows this list.
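For the benchmarking scenario above, trend analysis can be a plain SQL aggregation over the tables the plugin creates. The sketch below computes hourly averages, assuming pymysql and a hypothetical app_perf table with a response_time_ms field written by Telegraf; it uses backtick quoting so it works without changing the session's sql_mode.

# trend_sketch.py -- minimal sketch; `app_perf` and `response_time_ms` are
# hypothetical names, assuming pymysql and the DSN from the config above.
import pymysql

conn = pymysql.connect(host="host", user="username",
                       password="password", database="dbname")
with conn.cursor() as cur:
    # Hourly averages; backtick quoting avoids needing ANSI_QUOTES here.
    cur.execute(
        "SELECT DATE_FORMAT(`timestamp`, '%Y-%m-%d %H:00') AS hour, "
        "AVG(`response_time_ms`) AS avg_ms "
        "FROM `app_perf` GROUP BY hour ORDER BY hour"
    )
    for hour, avg_ms in cur.fetchall():
        print(hour, round(float(avg_ms), 2))
conn.close()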