Input and Output Integration Overview
This plugin lets you collect metrics from Kafka topics in real time, strengthening the data monitoring and collection capabilities of your Telegraf setup.
This plugin enables Telegraf to stream metrics directly to Grafana dashboards in real time, using Grafana Live for instant data visualization and operational insight.
Integration Details
Kafka
The Kafka Telegraf plugin reads data from Kafka topics and creates metrics using one of the supported input data formats. As a service input plugin, it listens continuously for incoming metrics and events instead of polling at a fixed interval like a standard input plugin. The plugin works across a range of Kafka versions and can consume messages from specified topics, apply security settings such as SASL credentials, and manage message processing through offsets and consumer-group options. This flexibility lets it handle a wide variety of message formats and use cases, making it a valuable asset for applications that rely on Kafka for data ingestion.
Grafana
Telegraf can send real-time data to Grafana using the WebSocket output plugin. Metrics collected by Telegraf are pushed to Grafana dashboards immediately, enabling real-time visualization and analysis. The plugin suits use cases that demand low-latency, live data visualization, such as operational monitoring, real-time analytics, and immediate incident response. It supports authentication headers, customizable data serialization formats (such as JSON), and secure communication over TLS, offering flexibility and easy integration in dynamic, interactive dashboard environments.
Configuration
Kafka
[[inputs.kafka_consumer]]
## Kafka brokers.
brokers = ["localhost:9092"]
## Set the minimal supported Kafka version. Use a dot-separated version
## string with four parts for 0.x releases (e.g. "0.10.2.0") and three
## parts for releases from 1.0.0 onward (e.g. "2.6.0"). This setting
## enables newer Kafka features and APIs. Must be 0.10.2.0 (the default)
## or greater. Please check the list of supported versions at
## https://pkg.go.dev/github.com/Shopify/sarama#SupportedVersions
## ex: kafka_version = "2.6.0"
## ex: kafka_version = "0.10.2.0"
# kafka_version = "0.10.2.0"
## Topics to consume.
topics = ["telegraf"]
## Topic regular expressions to consume. Matches will be added to topics.
## Example: topic_regexps = [ ".*test", "metric[0-9A-z]*" ]
# topic_regexps = [ ]
## When set, this tag will be added to all metrics with the topic as its value.
# topic_tag = ""
## The list of Kafka message headers that should be passed as metric tags.
## Works only for Kafka version 0.11+; on lower versions the message
## headers are not available.
# msg_headers_as_tags = []
## The name of the Kafka message header whose value should override the
## metric name. If the same header is specified both here and in
## msg_headers_as_tags, it is excluded from the msg_headers_as_tags list.
# msg_header_as_metric_name = ""
## Set metric(s) timestamp using the given source.
## Available options are:
## metric -- do not modify the metric timestamp
## inner -- use the inner message timestamp (Kafka v0.10+)
## outer -- use the outer (compressed) block timestamp (Kafka v0.10+)
# timestamp_source = "metric"
## Optional Client id
# client_id = "Telegraf"
## Optional TLS Config
# enable_tls = false
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Period between keep alive probes.
## Defaults to the OS configuration if not specified or zero.
# keep_alive_period = "15s"
## SASL authentication credentials. These settings should typically be used
## with TLS encryption enabled
# sasl_username = "kafka"
# sasl_password = "secret"
## Optional SASL:
## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
## (defaults to PLAIN)
# sasl_mechanism = ""
## used if sasl_mechanism is GSSAPI
# sasl_gssapi_service_name = ""
## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
# sasl_gssapi_auth_type = "KRB5_USER_AUTH"
# sasl_gssapi_kerberos_config_path = "/"
# sasl_gssapi_realm = "realm"
# sasl_gssapi_key_tab_path = ""
# sasl_gssapi_disable_pafxfast = false
## used if sasl_mechanism is OAUTHBEARER
# sasl_access_token = ""
## SASL protocol version. When connecting to Azure EventHub set to 0.
# sasl_version = 1
## Disable full Kafka metadata fetch
# metadata_full = false
## Name of the consumer group.
# consumer_group = "telegraf_metrics_consumers"
## Compression codec represents the various compression codecs recognized by
## Kafka in messages.
## 0 : None
## 1 : Gzip
## 2 : Snappy
## 3 : LZ4
## 4 : ZSTD
# compression_codec = 0
## Initial offset position; one of "oldest" or "newest".
# offset = "oldest"
## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
# balance_strategy = "range"
## Maximum number of retries for metadata operations, including
## connecting. Sets the Sarama library's Metadata.Retry.Max config value.
## If 0 or unset, use the Sarama default of 3.
# metadata_retry_max = 0
## Type of retry backoff. Valid options: "constant", "exponential"
# metadata_retry_type = "constant"
## Amount of time to wait before retrying. When metadata_retry_type is
## "constant", each retry is delayed this amount. When "exponential", the
## first retry is delayed this amount, and subsequent delays are doubled. If 0
## or unset, use the Sarama default of 250 ms.
# metadata_retry_backoff = 0
## Maximum amount of time to wait before retrying when metadata_retry_type is
## "exponential". Ignored for other retry types. If 0, there is no backoff
## limit.
# metadata_retry_max_duration = 0
## When set to true, this turns each bootstrap broker address into a set of
## IPs, then does a reverse lookup on each one to get its canonical hostname.
## This list of hostnames then replaces the original address list.
# resolve_canonical_bootstrap_servers_only = false
## Strategy for making the connection to Kafka brokers. Valid options:
## "startup", "defer". If set to "defer", the plugin is allowed to start
## before making a connection. This is useful if the broker may be down
## when Telegraf is started, but note that any typos in the broker setting
## will then cause connection failures without warning at startup.
# connection_strategy = "startup"
## Maximum length of a message to consume, in bytes (default 0/unlimited);
## larger messages are dropped
max_message_len = 1000000
## Max undelivered messages
## This plugin uses tracking metrics, which ensure messages are read to
## outputs before acknowledging them to the original broker to ensure data
## is not lost. This option sets the maximum messages to read from the
## broker that have not been written by an output.
##
## This value needs to be picked with awareness of the agent's
## metric_batch_size value as well. Setting max undelivered messages too
## high can result in a constant stream of data batches to the output,
## while setting it too low may prevent the broker's messages from ever
## being flushed.
# max_undelivered_messages = 1000
## Maximum amount of time the consumer should take to process messages. If
## the debug log prints messages from sarama about 'abandoning subscription
## to [topic] because consuming was taking too long', increase this value to
## longer than the time taken by the output plugin(s).
##
## Note that the effective timeout could be between 'max_processing_time' and
## '2 * max_processing_time'.
# max_processing_time = "100ms"
## The default number of message bytes to fetch from the broker in each
## request (default 1MB). This should be larger than the majority of
## your messages, or else the consumer will spend a lot of time
## negotiating sizes and not actually consuming. Similar to the JVM's
## `fetch.message.max.bytes`.
# consumer_fetch_default = "1MB"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
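
The data_format option selects the parser applied to each Kafka message. If your topics carry JSON payloads rather than influx line protocol, the parser section can be swapped out. The following is a minimal sketch, assuming a flat JSON payload that contains "host" and "time" keys; the payload layout is an assumption for illustration, not a plugin default.

[[inputs.kafka_consumer]]
brokers = ["localhost:9092"]
topics = ["telegraf"]
## Parse each message as a flat JSON object instead of line protocol.
data_format = "json"
## Promote the payload's "host" key to a metric tag (assumes the
## messages actually contain a "host" field).
tag_keys = ["host"]
## Read the metric timestamp from the payload's "time" key, given in
## Unix seconds.
json_time_key = "time"
json_time_format = "unix"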
Grafana
[[outputs.websocket]]
## Grafana Live WebSocket endpoint
url = "ws://localhost:3000/api/live/push/custom_id"
## Optional headers for authentication
# [outputs.websocket.headers]
# Authorization = "Bearer YOUR_GRAFANA_API_TOKEN"
## Data format to send metrics
data_format = "influx"
## Timeouts (make sure read_timeout is larger than server ping interval or set to zero).
# connect_timeout = "30s"
# write_timeout = "30s"
# read_timeout = "30s"
## Optionally turn on using text data frames (binary by default).
# use_text_frames = false
## TLS configuration
# tls_ca = "/path/to/ca.pem"
# tls_cert = "/path/to/cert.pem"
# tls_key = "/path/to/key.pem"
# insecure_skip_verify = false
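
Taken together, the two configurations above form a complete bridge from Kafka to Grafana Live. The following end-to-end sketch combines them, assuming a local broker, a local Grafana instance, and messages already encoded as influx line protocol (for example, cpu,host=server01 usage_idle=87.2); the channel id and token are placeholders.

[[inputs.kafka_consumer]]
brokers = ["localhost:9092"]
topics = ["telegraf"]
data_format = "influx"

[[outputs.websocket]]
## "custom_id" names the Grafana Live channel; any identifier works.
url = "ws://localhost:3000/api/live/push/custom_id"
data_format = "influx"
[outputs.websocket.headers]
Authorization = "Bearer YOUR_GRAFANA_API_TOKEN"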
Input and Output Integration Examples
Kafka
- Real-Time Data Processing: Use the Kafka plugin to feed real-time data from Kafka topics into a monitoring system. This is especially useful for applications that need immediate feedback on performance metrics or user activity, letting businesses react more quickly to changing conditions in their environment.
- Dynamic Metric Collection: Use the plugin to adjust which metrics are captured based on events occurring within Kafka. For example, by integrating with other services, users can have the plugin reconfigure itself on the fly, ensuring that the metrics collected always match the needs of the business or application.
- Centralized Logging and Monitoring: Implement a centralized logging system with the Kafka Consumer plugin to aggregate logs from multiple services into a unified monitoring dashboard (a configuration sketch follows this list). This setup helps identify issues across services and improves overall system observability and troubleshooting.
- Anomaly Detection Systems: Combine Kafka with machine learning algorithms for real-time anomaly detection. By continuously analyzing streaming data, this setup can automatically identify unusual patterns, triggering alerts and mitigating potential issues more effectively.
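
As a starting point for the centralized logging scenario above, this sketch consumes several per-service topics with one consumer group and tags each metric with its source topic. The topic names and group name are illustrative assumptions.

[[inputs.kafka_consumer]]
brokers = ["localhost:9092"]
## A single consumer instance can aggregate several per-service topics.
topics = ["auth-logs", "payments-logs", "frontend-logs"]
## Sharing a consumer group lets multiple Telegraf instances split the load.
consumer_group = "telegraf_log_aggregators"
## Record the originating topic on every metric so dashboards can filter
## by service.
topic_tag = "source_topic"
data_format = "influx"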
Grafana
- Real-Time Infrastructure Dashboards: Deploy Telegraf to stream server health metrics directly to Grafana dashboards, letting IT teams visualize infrastructure performance as it happens and detect and respond to critical system events immediately (see the sketch after this list).
- Interactive IoT Monitoring: Push IoT device metrics collected by Telegraf to Grafana in real time, creating dynamic, interactive dashboards for monitoring smart-city projects or manufacturing processes. This live visibility significantly improves responsiveness and operational efficiency.
- Instant Application Performance Analysis: Stream application metrics from production environments to Grafana dashboards in real time, enabling development teams to quickly detect and diagnose performance bottlenecks or anomalies during deployments, minimizing downtime and improving reliability.
- Live Event Analytics: During major live events, use Telegraf to capture audience or system metrics and stream them straight to Grafana dashboards. Event organizers can monitor changing conditions and trends as they unfold, significantly improving audience engagement and operational decision-making.
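
A minimal version of the infrastructure dashboard scenario above pairs Telegraf's standard system inputs with the WebSocket output; the Grafana URL, channel id, and token are placeholders.

[[inputs.cpu]]
## Report one aggregate CPU measurement rather than one per core.
percpu = false
totalcpu = true

[[inputs.mem]]

[[outputs.websocket]]
url = "ws://localhost:3000/api/live/push/infra"
data_format = "influx"
[outputs.websocket.headers]
Authorization = "Bearer YOUR_GRAFANA_API_TOKEN"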
Feedback
Thank you for being part of our community! We welcome and encourage your feedback, whether general comments or errors you find on these pages. Please submit your feedback in the InfluxDB Community Slack.