Kafka and Datadog Integration

Powerful performance and simple integration, powered by Telegraf, the open source data connector built by InfluxData.

Note: This is not the recommended configuration for real-time queries at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider the Kafka and InfluxDB integration.

5 billion+

Telegraf downloads

#1

Time series database
Source: DB-Engines

1 billion+

InfluxDB downloads

2,800+

Contributors

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data becomes more valuable when you think of it as time series data. InfluxDB is the #1 time series platform, built to scale with Telegraf.

See Ways to Get Started

Input and Output Integration Overview

This plugin allows you to collect metrics from Kafka topics in real time, enhancing the data monitoring and collection capabilities of your Telegraf setup.

The Datadog Telegraf plugin supports submitting metrics to the Datadog Metrics API, enabling efficient monitoring and data analysis through a reliable metric ingestion process.

Integration Details

Kafka

The Kafka Telegraf plugin reads from Kafka topics and creates metrics using any of the supported input data formats. As a service input plugin, it listens continuously for incoming metrics and events, unlike standard input plugins that run at fixed intervals. The plugin works with a range of Kafka versions and can consume messages from specified topics using configured security credentials such as SASL, along with message-handling options such as message offsets and consumer groups. This flexibility lets the plugin handle a wide variety of message formats and use cases, making it a valuable asset for applications that rely on Kafka for data ingestion.
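
As a quick orientation before the fully annotated configuration below, here is a minimal sketch of a consumer that joins a consumer group and authenticates with SASL over TLS; the broker address, topic, and credentials are placeholders to replace with your own:

[[inputs.kafka_consumer]]
  ## Placeholder broker, topic, and group; replace with your own.
  brokers = ["broker.example.com:9093"]
  topics = ["telegraf"]
  consumer_group = "telegraf_metrics_consumers"

  ## TLS plus SASL/SCRAM authentication (example credentials).
  enable_tls = true
  sasl_username = "kafka"
  sasl_password = "secret"
  sasl_mechanism = "SCRAM-SHA-256"

  ## Incoming messages are parsed as InfluxDB line protocol.
  data_format = "influx"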

Datadog

This plugin writes to the Datadog Metrics API, enabling users to send metrics for monitoring and performance analysis. Using a Datadog API key, the plugin can be configured to connect to Datadog's v1 API. It supports various configuration options, including connection timeouts, HTTP proxy settings, and data compression methods, so it adapts to different deployment environments. Its ability to convert count metrics into rates improves integration alongside the Datadog Agent, which is especially useful for applications that depend on real-time performance metrics.
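
For orientation, a minimal sketch of this output is shown below, assuming a placeholder API key; it enables zlib compression and sets rate_interval to the Datadog Agent's default 10s flush interval so converted rates line up with Agent-submitted metrics (every option shown is documented in the full configuration that follows):

[[outputs.datadog]]
  ## Placeholder API key; replace with your own.
  apikey = "my-secret-key"

  ## Compress request payloads.
  compression = "zlib"

  ## Convert statsd-style counts to rates, matching the
  ## Datadog Agent's default 10s interval.
  rate_interval = "10s"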

Configuration

Kafka


[[inputs.kafka_consumer]]
  ## Kafka brokers.
  brokers = ["localhost:9092"]

  ## Set the minimal supported Kafka version. Should be a dot-separated
  ## version string such as "0.10.2.0" (the default) or "2.6.0". This setting
  ## enables the use of new Kafka features and APIs. Must be 0.10.2.0 or
  ## greater. Please check the list of supported versions at
  ## https://pkg.go.dev/github.com/Shopify/sarama#SupportedVersions
  ##   ex: kafka_version = "2.6.0"
  ##   ex: kafka_version = "0.10.2.0"
  # kafka_version = "0.10.2.0"

  ## Topics to consume.
  topics = ["telegraf"]

  ## Topic regular expressions to consume. Matches will be added to topics.
  ## Example: topic_regexps = [ "*test", "metric[0-9A-z]*" ]
  # topic_regexps = [ ]

  ## When set, this tag will be added to all metrics with the topic as the value.
  # topic_tag = ""

  ## The list of Kafka message headers that should be passed as metric tags.
  ## Works only for Kafka version 0.11+; on lower versions the message
  ## headers are not available.
  # msg_headers_as_tags = []

  ## The name of the Kafka message header whose value should override the
  ## metric name. If the same header is specified both here and in the
  ## msg_headers_as_tags option, it will be excluded from the
  ## msg_headers_as_tags list.
  # msg_header_as_metric_name = ""

  ## Set metric(s) timestamp using the given source.
  ## Available options are:
  ##   metric -- do not modify the metric timestamp
  ##   inner  -- use the inner message timestamp (Kafka v0.10+)
  ##   outer  -- use the outer (compressed) block timestamp (Kafka v0.10+)
  # timestamp_source = "metric"

  ## Optional client ID.
  # client_id = "Telegraf"

  ## Optional TLS config.
  # enable_tls = false
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification.
  # insecure_skip_verify = false

  ## Period between keep-alive probes.
  ## Defaults to the OS configuration if not specified or zero.
  # keep_alive_period = "15s"

  ## SASL authentication credentials. These settings should typically be used
  ## with TLS encryption enabled.
  # sasl_username = "kafka"
  # sasl_password = "secret"

  ## Optional SASL mechanism:
  ## one of OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
  ## (defaults to PLAIN).
  # sasl_mechanism = ""

  ## Used if sasl_mechanism is GSSAPI.
  # sasl_gssapi_service_name = ""
  ## One of KRB5_USER_AUTH or KRB5_KEYTAB_AUTH.
  # sasl_gssapi_auth_type = "KRB5_USER_AUTH"
  # sasl_gssapi_kerberos_config_path = "/"
  # sasl_gssapi_realm = "realm"
  # sasl_gssapi_key_tab_path = ""
  # sasl_gssapi_disable_pafxfast = false

  ## Used if sasl_mechanism is OAUTHBEARER.
  # sasl_access_token = ""

  ## SASL protocol version. When connecting to Azure EventHub, set to 0.
  # sasl_version = 1

  ## Disable full Kafka metadata fetch.
  # metadata_full = false

  ## Name of the consumer group.
  # consumer_group = "telegraf_metrics_consumers"

  ## Compression codec represents the various compression codecs recognized by
  ## Kafka in messages.
  ##   0 : None
  ##   1 : Gzip
  ##   2 : Snappy
  ##   3 : LZ4
  ##   4 : ZSTD
  # compression_codec = 0

  ## Initial offset position; one of "oldest" or "newest".
  # offset = "oldest"

  ## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
  # balance_strategy = "range"

  ## Maximum number of retries for metadata operations, including connecting.
  ## Sets the Sarama library's Metadata.Retry.Max config value. If 0 or
  ## unset, uses the Sarama default of 3.
  # metadata_retry_max = 0

  ## Type of retry backoff. Valid options: "constant", "exponential".
  # metadata_retry_type = "constant"

  ## Amount of time to wait before retrying. When metadata_retry_type is
  ## "constant", each retry is delayed this amount. When "exponential", the
  ## first retry is delayed this amount, and subsequent delays are doubled.
  ## If 0 or unset, uses the Sarama default of 250 ms.
  # metadata_retry_backoff = 0

  ## Maximum amount of time to wait before retrying when metadata_retry_type is
  ## "exponential". Ignored for other retry types. If 0, there is no backoff
  ## limit.
  # metadata_retry_max_duration = 0

  ## When set to true, this turns each bootstrap broker address into a set of
  ## IPs, then does a reverse lookup on each one to get its canonical hostname.
  ## This list of hostnames then replaces the original address list.
  # resolve_canonical_bootstrap_servers_only = false

  ## Strategy for making the connection to Kafka brokers. Valid options:
  ## "startup", "defer". If set to "defer", the plugin is allowed to start
  ## before making a connection. This is useful if the broker may be down
  ## when Telegraf is started, but any typos in the broker setting will then
  ## cause connection failures without warning at startup.
  # connection_strategy = "startup"

  ## Maximum length of a message to consume, in bytes (default 0/unlimited);
  ## larger messages are dropped.
  max_message_len = 1000000

  ## Max undelivered messages.
  ## This plugin uses tracking metrics, which ensure messages are delivered
  ## to outputs before being acknowledged to the original broker, so that
  ## data is not lost. This option sets the maximum number of messages to
  ## read from the broker that have not yet been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too
  ## high can result in a constant stream of data batches to the output,
  ## while setting it too low may mean the broker's messages are never flushed.
  # max_undelivered_messages = 1000

  ## Maximum amount of time the consumer should take to process messages. If
  ## the debug log prints messages from Sarama about 'abandoning subscription
  ## to [topic] because consuming was taking too long', increase this value to
  ## longer than the time taken by the output plugin(s).
  ##
  ## Note that the effective timeout could be between 'max_processing_time' and
  ## '2 * max_processing_time'.
  # max_processing_time = "100ms"

  ## The default number of message bytes to fetch from the broker in each
  ## request (default 1MB). This should be larger than the majority of
  ## your messages, or else the consumer will spend a lot of time
  ## negotiating sizes and not actually consuming. Similar to the JVM's
  ## `fetch.message.max.bytes`.
  # consumer_fetch_default = "1MB"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options; read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
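
With data_format = "influx" as above, each Kafka message payload is parsed as InfluxDB line protocol. As an illustration (the measurement, tag, and field names here are made up), a message like the following becomes a metric named cpu_load with a host tag and a value field:

cpu_load,host=server01 value=0.64 1712345678000000000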

Datadog

[[outputs.datadog]]
  ## Datadog API key
  apikey = "my-secret-key"

  ## Connection timeout.
  # timeout = "5s"

  ## Write URL override; useful for debugging.
  ## This plugin only supports the v1 API currently due to the authentication
  ## method used.
  # url = "https://app.datadoghq.com/api/v1/series"

  ## Set http_proxy
  # use_system_proxy = false
  # http_proxy_url = "http://localhost:8888"

  ## Override the default (none) compression used to send data.
  ## Supports: "zlib", "none"
  # compression = "none"

  ## When non-zero, converts count metrics submitted by inputs.statsd
  ## into rates, while dividing the metric value by this interval.
  ## Note that in order for metrics to be submitted simultaneously alongside
  ## a Datadog agent, rate_interval has to match the interval used by the
  ## agent - which defaults to 10s.
  # rate_interval = 0s
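
Putting the two halves together, a minimal end-to-end pipeline might look like the following sketch, assuming a local broker and a placeholder API key: Telegraf consumes line-protocol messages from the telegraf topic and forwards them to Datadog. Run it with `telegraf --config telegraf.conf`.

## Consume line-protocol messages from Kafka...
[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]
  topics = ["telegraf"]
  data_format = "influx"

## ...and forward the resulting metrics to Datadog.
[[outputs.datadog]]
  ## Placeholder API key; replace with your own.
  apikey = "my-secret-key"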

Input and Output Integration Examples

Kafka

  1. Real-Time Data Processing: Use the Kafka plugin to feed real-time data from Kafka topics into your monitoring system. This is especially useful for applications that need immediate feedback on performance metrics or user activity, enabling businesses to react faster to changing conditions in their environment.

  2. Dynamic Metrics Collection: Use this plugin to dynamically adjust which metrics are captured based on events occurring in Kafka. For example, by integrating with other services, users can have the plugin reconfigure itself on the fly, ensuring that the metrics collected always match the needs of the business or application.

  3. Centralized Logging and Monitoring: Implement a centralized logging system that uses the Kafka Consumer plugin to aggregate logs from multiple services into a unified monitoring dashboard (a configuration sketch follows this list). This setup helps identify issues across services and improves overall system observability and troubleshooting.

  4. Anomaly Detection System: Combine Kafka with machine learning algorithms for real-time anomaly detection. By continuously analyzing streaming data, this setup can automatically identify unusual patterns, triggering alerts and mitigating potential problems more effectively.
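
For the centralized logging scenario in item 3, a minimal sketch might look like this; the topic names are hypothetical, the records are assumed to already be in line protocol, and topic_tag (documented in the configuration above) records which service each metric came from so a dashboard can split them by source:

[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]
  ## Hypothetical per-service log topics.
  topics = ["svc-auth-logs", "svc-billing-logs"]
  ## Tag each metric with the topic it was consumed from.
  topic_tag = "topic"
  data_format = "influx"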

Datadog

  1. Real-Time Infrastructure Monitoring: Use the Datadog plugin to monitor server metrics in real time by sending CPU usage and memory statistics directly to Datadog (a configuration sketch follows this list). This integration lets IT teams visualize and analyze system performance in a centralized dashboard and respond proactively to emerging issues such as resource bottlenecks or server overload.

  2. Application Performance Tracking: Use this plugin to submit application-specific metrics, such as request counts and error rates, to Datadog. By integrating with application monitoring tools, teams can correlate infrastructure metrics with application performance, gaining insights that help optimize code performance and improve the user experience.

  3. Anomaly Detection for Metrics: Configure the Datadog plugin to send metrics that can trigger alerts and notifications based on unusual patterns detected by Datadog's machine learning features. This proactive monitoring helps teams react quickly to potential outages or performance degradation before customers are affected.

  4. Integration with Cloud Services: By using the Datadog plugin to send metrics from cloud resources, IT teams gain visibility into the performance of their cloud applications. Monitoring metrics such as latency and error rates helps ensure that service level agreements (SLAs) are met and aids in optimizing resource allocation across cloud environments.
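
As a sketch of the infrastructure-monitoring scenario in item 1, the configuration below pairs Telegraf's standard cpu and mem input plugins with the Datadog output; the API key is a placeholder, and CPU and memory statistics are collected on the agent's normal interval and submitted to Datadog:

## Collect host CPU and memory statistics...
[[inputs.cpu]]
[[inputs.mem]]

## ...and submit them to Datadog.
[[outputs.datadog]]
  ## Placeholder API key; replace with your own.
  apikey = "my-secret-key"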

Feedback

Thank you for being part of our community! If you have any general feedback or have found any errors on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB community Slack.

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports configuration options for various authentication methods and data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows metrics to be created from those messages. It supports various configurations, including different Kafka settings and message-handling options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin allows metrics to be read from AWS Kinesis streams. It supports multiple input data formats and offers checkpointing with DynamoDB for reliable message processing.

View Integration