Kafka and Loki Integration

Powerful performance and easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info

For large-scale real-time queries, this is not the recommended configuration. For query and compression optimization, high-speed ingest, and high availability, you may want to consider the Kafka and InfluxDB integration.

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data becomes more valuable when you think of it as time series data. Built with InfluxDB, the #1 time series platform, designed to scale with Telegraf.

See Ways to Get Started

Input and Output Integration Overview

This plugin lets you collect metrics from Kafka topics in real time, enhancing the data monitoring and collection capabilities of your Telegraf setup.

The Loki plugin lets users send logs to Loki for aggregation and querying, taking advantage of Loki's efficient storage capabilities.

Integration Details

Kafka

The Kafka Telegraf plugin is designed to read from Kafka topics and create metrics using supported input data formats. As a service input plugin, it listens continuously for incoming metrics and events, unlike standard input plugins that run at fixed intervals. This plugin works across a range of Kafka versions and can consume messages from the specified topics while configuring security credentials such as SASL and managing message processing through message offset and consumer group options. Its flexibility in handling a variety of message formats and use cases makes it a valuable asset for applications that rely on Kafka for data ingestion.
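
For instance, with data_format = "influx" as in the reference configuration below, a message published to the telegraf topic in InfluxDB line protocol (the payload here is purely illustrative):

  cpu,host=server01 usage_idle=87.5 1700000000000000000

would be parsed into a metric named cpu with the tag host=server01, the field usage_idle=87.5, and the trailing nanosecond timestamp.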

Loki

This Loki plugin integrates with Grafana Loki, a powerful log aggregation system. By sending logs in a Loki-compatible format, the plugin enables efficient log storage and querying. Each log entry is structured as key-value pairs, where keys represent field names and values carry the corresponding log information. Sorting logs by timestamp ensures that log streams preserve chronological order when queried through Loki. The plugin's support for secrets makes it easier to manage authentication parameters securely, while options for HTTP headers, gzip encoding, and TLS configuration make log transmission adaptable and secure enough for a wide range of deployments.
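
For reference, Loki's push endpoint (/loki/api/v1/push, used in the configuration below) accepts a JSON body of roughly the following shape: a set of streams, each identified by its labels, carrying pairs of a nanosecond-timestamp string and a log line. The label and log line shown here are illustrative only:

  {
    "streams": [
      {
        "stream": { "host": "server01" },
        "values": [
          [ "1700000000000000000", "level=\"info\" msg=\"service started\"" ]
        ]
      }
    ]
  }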

Configuration

Kafka


[[inputs.kafka_consumer]]
  ## Kafka brokers.
  brokers = ["localhost:9092"]

  ## Set the minimal supported Kafka version. Should be a dot-separated
  ## version string: four components for 0.x versions and three components
  ## for versions from 1.0.0 onward. This setting enables the use of new
  ## Kafka features and APIs. Must be 0.10.2.0 (used as default) or greater.
  ## Please check the list of supported versions at
  ## https://pkg.go.dev/github.com/Shopify/sarama#SupportedVersions
  ##   ex: kafka_version = "2.6.0"
  ##   ex: kafka_version = "0.10.2.0"
  # kafka_version = "0.10.2.0"

  ## Topics to consume.
  topics = ["telegraf"]

  ## Topic regular expressions to consume. Matches will be added to topics.
  ## Example: topic_regexps = [ "*test", "metric[0-9A-z]*" ]
  # topic_regexps = [ ]

  ## When set, this tag will be added to all metrics with the topic as the value.
  # topic_tag = ""

  ## The list of Kafka message headers that should be passed as metric tags.
  ## Works only for Kafka version 0.11+; on lower versions the message
  ## headers are not available.
  # msg_headers_as_tags = []

  ## The name of the Kafka message header whose value should override the
  ## metric name. If the same header is specified both here and in the
  ## msg_headers_as_tags option, it will be excluded from msg_headers_as_tags.
  # msg_header_as_metric_name = ""

  ## Set metric(s) timestamp using the given source.
  ## Available options are:
  ##   metric -- do not modify the metric timestamp
  ##   inner  -- use the inner message timestamp (Kafka v0.10+)
  ##   outer  -- use the outer (compressed) block timestamp (Kafka v0.10+)
  # timestamp_source = "metric"

  ## Optional client ID
  # client_id = "Telegraf"

  ## Optional TLS Config
  # enable_tls = false
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Period between keep-alive probes.
  ## Defaults to the OS configuration if not specified or zero.
  # keep_alive_period = "15s"

  ## SASL authentication credentials. These settings should typically be used
  ## with TLS encryption enabled.
  # sasl_username = "kafka"
  # sasl_password = "secret"

  ## Optional SASL mechanism:
  ## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
  ## (defaults to PLAIN)
  # sasl_mechanism = ""

  ## Used if sasl_mechanism is GSSAPI.
  # sasl_gssapi_service_name = ""
  ## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
  # sasl_gssapi_auth_type = "KRB5_USER_AUTH"
  # sasl_gssapi_kerberos_config_path = "/"
  # sasl_gssapi_realm = "realm"
  # sasl_gssapi_key_tab_path = ""
  # sasl_gssapi_disable_pafxfast = false

  ## Used if sasl_mechanism is OAUTHBEARER.
  # sasl_access_token = ""

  ## SASL protocol version. When connecting to Azure EventHub set to 0.
  # sasl_version = 1

  ## Disable full Kafka metadata fetching.
  # metadata_full = false

  ## Name of the consumer group.
  # consumer_group = "telegraf_metrics_consumers"

  ## Compression codec represents the various compression codecs recognized
  ## by Kafka in messages.
  ##  0 : None
  ##  1 : Gzip
  ##  2 : Snappy
  ##  3 : LZ4
  ##  4 : ZSTD
  # compression_codec = 0

  ## Initial offset position; one of "oldest" or "newest".
  # offset = "oldest"

  ## Consumer group partition assignment strategy; one of "range",
  ## "roundrobin" or "sticky".
  # balance_strategy = "range"

  ## Maximum number of retries for metadata operations, including
  ## connecting. Sets the Sarama library's Metadata.Retry.Max config value.
  ## If 0 or unset, use the Sarama default of 3.
  # metadata_retry_max = 0

  ## Type of retry backoff. Valid options: "constant", "exponential"
  # metadata_retry_type = "constant"

  ## Amount of time to wait before retrying. When metadata_retry_type is
  ## "constant", each retry is delayed this amount. When "exponential", the
  ## first retry is delayed this amount, and subsequent delays are doubled.
  ## If 0 or unset, use the Sarama default of 250 ms.
  # metadata_retry_backoff = 0

  ## Maximum amount of time to wait before retrying when metadata_retry_type
  ## is "exponential". Ignored for other retry types. If 0, there is no
  ## backoff limit.
  # metadata_retry_max_duration = 0

  ## When set to true, this turns each bootstrap broker address into a set of
  ## IPs, then does a reverse lookup on each one to get its canonical
  ## hostname. This list of hostnames then replaces the original address list.
  # resolve_canonical_bootstrap_servers_only = false

  ## Strategy for making the connection to Kafka brokers. Valid options:
  ## "startup", "defer". If set to "defer", the plugin is allowed to start
  ## before making a connection. This is useful if the broker may be down
  ## when Telegraf is started, but any typos in the broker setting will then
  ## cause connection failures without warning at startup.
  # connection_strategy = "startup"

  ## Maximum length of a message to consume, in bytes (default 0/unlimited);
  ## larger messages are dropped.
  max_message_len = 1000000

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum number of messages to read
  ## from the broker that have not yet been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too
  ## high can result in a constant stream of data batches to the output,
  ## while setting it too low may never flush the broker's messages.
  # max_undelivered_messages = 1000

  ## Maximum amount of time the consumer should take to process messages. If
  ## the debug log prints messages from sarama about 'abandoning subscription
  ## to [topic] because consuming was taking too long', increase this value
  ## to longer than the time taken by the output plugin(s).
  ##
  ## Note that the effective timeout could be between 'max_processing_time'
  ## and '2 * max_processing_time'.
  # max_processing_time = "100ms"

  ## The default number of message bytes to fetch from the broker in each
  ## request (default 1MB). This should be larger than the majority of
  ## your messages, or else the consumer will spend a lot of time
  ## negotiating sizes and not actually consuming. Similar to the JVM's
  ## `fetch.message.max.bytes`.
  # consumer_fetch_default = "1MB"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options; read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

Loki

[[outputs.loki]]
  ## The domain of Loki
  domain = "https://loki.domain.tld"

  ## Endpoint for the write API
  # endpoint = "/loki/api/v1/push"

  ## Connection timeout, defaults to "5s" if not set.
  # timeout = "5s"

  ## Basic auth credentials
  # username = "loki"
  # password = "pass"

  ## Additional HTTP headers
  # http_headers = {"X-Scope-OrgID" = "1"}

  ## If the request must be gzip encoded
  # gzip_request = false

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

  ## Sanitize Tag Names
  ## If true, all characters in tag names that do not match the regex
  ## ^[a-zA-Z_:][a-zA-Z0-9_:]* will be replaced with underscores.
  # sanitize_label_names = false

  ## Metric Name Label
  ## Label to use for the metric name when sending metrics. If set to an
  ## empty string, this will not add the label. This is NOT suggested as there
  ## is no way to differentiate between multiple metrics.
  # metric_name_label = "__name"
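
Putting the two halves together, a minimal end-to-end telegraf.conf sketch might consume line-protocol messages from a Kafka topic and forward them to Loki. The broker and Loki addresses are placeholders, and all options are drawn from the reference configurations above:

[[inputs.kafka_consumer]]
  ## Placeholder broker address.
  brokers = ["localhost:9092"]
  topics = ["telegraf"]
  consumer_group = "telegraf_metrics_consumers"
  data_format = "influx"

[[outputs.loki]]
  ## Placeholder Loki address.
  domain = "https://loki.example.com"
  endpoint = "/loki/api/v1/push"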

Input and Output Integration Examples

Kafka

  1. Real-Time Data Processing: Use the Kafka plugin to feed real-time data from Kafka topics into a monitoring system. This is especially useful for applications that need immediate feedback on performance metrics or user activity, allowing businesses to react more quickly to changing conditions in their environment.

  2. Dynamic Metric Collection: Use this plugin to dynamically adjust which metrics are captured based on events occurring in Kafka. For example, by integrating with other services, users can have the plugin reconfigure itself on the fly, ensuring that the metrics collected are always relevant to the needs of the business or application.

  3. Centralized Logging and Monitoring: Implement a centralized logging system with the Kafka Consumer Plugin to aggregate logs from multiple services into a unified monitoring dashboard (a configuration sketch follows this list). This setup helps identify issues across different services and improves overall system observability and troubleshooting.

  4. Anomaly Detection System: Combine Kafka with machine learning algorithms for real-time anomaly detection. By continuously analyzing streaming data, this setup can automatically identify unusual patterns, trigger alerts, and mitigate potential issues more effectively.
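
A minimal sketch of the centralized-logging scenario from example 3, assuming a hypothetical logs-* topic naming convention (one topic per service) and using only options from the reference configuration above:

[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]
  ## Subscribe to every topic matching the hypothetical per-service
  ## convention, e.g. logs-auth, logs-billing.
  topics = []
  topic_regexps = [ "logs-.*" ]
  ## Tag each metric with its originating topic so services can be
  ## distinguished in the dashboard.
  topic_tag = "topic"
  consumer_group = "telegraf_log_aggregator"
  data_format = "influx"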

Loki

  1. Centralized Logging for Microservices: Use the Loki plugin to collect logs from the many microservices running in a Kubernetes cluster. By directing logs to a central Loki instance, developers can monitor, search, and analyze logs from all services in one place, making troubleshooting and performance monitoring easier. This setup streamlines operations and supports rapid response to issues in distributed applications.

  2. Real-Time Log Anomaly Detection: Combine Loki with monitoring tools to analyze log output in real time for unusual patterns that may indicate system errors or security threats. Implementing anomaly detection on log streams lets teams identify and respond to incidents proactively, improving system reliability and strengthening security posture.

  3. Enhanced Log Processing with Gzip Compression: Configure the Loki plugin to use gzip compression for log transmission. This approach reduces bandwidth usage and improves transmission speed, which is particularly beneficial in environments where network bandwidth is constrained, and especially valuable for high-volume logging applications where every byte counts and performance is critical.

  4. Multi-Tenancy Support with Custom Headers: Use the plugin's ability to add custom HTTP headers to separate the logs of different tenants in a multi-tenant application environment. By sending a distinct header for each tenant, operators can ensure proper log management and compliance with data isolation requirements, making this a versatile solution for SaaS applications (a configuration sketch combining this with example 3 follows this list).
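
A minimal sketch combining examples 3 and 4, assuming a Loki deployment that derives the tenant from the X-Scope-OrgID header (as in the reference configuration above); the domain and tenant ID are placeholders:

[[outputs.loki]]
  domain = "https://loki.example.com"
  ## Compress request bodies to save bandwidth on high-volume log streams.
  gzip_request = true
  ## Route this agent's logs to a single tenant; run one output per tenant
  ## to keep tenant data isolated.
  http_headers = {"X-Scope-OrgID" = "tenant-a"}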


