Input and Output Integration Overview
The AMQP Consumer input plugin lets you ingest data from AMQP 0-9-1 compliant message brokers such as RabbitMQ, enabling seamless data collection for monitoring and analysis.
The Graylog plugin lets you send Telegraf metrics to a Graylog server, using the GELF format for structured logging.
Integration Details
AMQP
This plugin provides a consumer for AMQP 0-9-1, of which RabbitMQ is a prominent implementation. AMQP, the Advanced Message Queuing Protocol, was originally developed to enable reliable, interoperable messaging between different systems on a network. The plugin reads metrics from a topic exchange using a configured queue and binding key, offering a flexible and efficient way to collect data from AMQP-compliant messaging systems. This lets users leverage existing RabbitMQ deployments to monitor their applications effectively, capturing detailed metrics for analysis and alerting.
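As a rough illustration of how data reaches this plugin, the sketch below publishes a single metric in InfluxDB line protocol to a RabbitMQ topic exchange using the github.com/rabbitmq/amqp091-go client. The broker URL, credentials, routing key, and sample metric are assumptions chosen to line up with the example configuration further down; adjust them to your environment.

package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Connect to a local RabbitMQ broker; adjust the URL and credentials as needed.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Declare the durable topic exchange the consumer reads from ("telegraf" in the config below).
	if err := ch.ExchangeDeclare("telegraf", "topic", true, false, false, false, nil); err != nil {
		log.Fatal(err)
	}

	// Publish one metric in InfluxDB line protocol; with binding_key = "#",
	// the consumer's queue receives messages regardless of routing key.
	msg := amqp.Publishing{
		ContentType: "text/plain",
		Body:        []byte("app_requests,host=web01 count=42i,latency_ms=12.5"),
	}
	if err := ch.Publish("telegraf", "metrics.web01", false, false, msg); err != nil {
		log.Fatal(err)
	}
}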
Graylog
The Graylog plugin is designed to send metrics to a Graylog instance using GELF (Graylog Extended Log Format). GELF helps standardize logging data, making it easier for systems to send and analyze logs. The plugin follows the GELF specification, which imposes requirements on specific fields in the payload. Notably, the timestamp must be in UNIX format; when a timestamp is present, the plugin forwards it to Graylog unchanged, and when it is omitted, the plugin generates one automatically. In addition, any extra field not explicitly defined in the specification is prefixed with an underscore, which keeps the data organized and compliant with GELF requirements. This capability is particularly valuable for users monitoring applications and infrastructure in real time, as it enables seamless integration and improved visibility across multiple systems.
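To make the field mapping concrete, here is a minimal sketch (not taken from the plugin's source) of what a single metric could look like as a GELF 1.1 payload; the host, metric name, tag, and field values are purely illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

func main() {
	// Required GELF 1.1 fields: version, host, short_message. The timestamp
	// is UNIX seconds (fractional part for sub-second precision) and is
	// forwarded unchanged when present; everything outside the spec gets a "_" prefix.
	payload := map[string]interface{}{
		"version":       "1.1",
		"host":          "web01",
		"short_message": "telegraf",
		"timestamp":     float64(time.Now().UnixNano()) / 1e9,
		// Metric name, tags, and fields become underscore-prefixed extra fields.
		"_name":       "cpu",
		"_cpu":        "cpu-total",
		"_usage_idle": 87.3,
	}

	b, err := json.Marshal(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}

Note the UNIX timestamp in seconds and the underscore prefix on every field that is not part of the GELF specification.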
Configuration
AMQP
[[inputs.amqp_consumer]]
## Brokers to consume from. If multiple brokers are specified a random broker
## will be selected anytime a connection is established. This can be
## helpful for load balancing when not using a dedicated load balancer.
brokers = ["amqp://localhost:5672/influxdb"]
## Authentication credentials for the PLAIN auth_method.
# username = ""
# password = ""
## Name of the exchange to declare. If unset, no exchange will be declared.
exchange = "telegraf"
## Exchange type; common types are "direct", "fanout", "topic", "header", "x-consistent-hash".
# exchange_type = "topic"
## If true, exchange will be passively declared.
# exchange_passive = false
## Exchange durability can be either "transient" or "durable".
# exchange_durability = "durable"
## Additional exchange arguments.
# exchange_arguments = { }
# exchange_arguments = {"hash_property" = "timestamp"}
## AMQP queue name.
queue = "telegraf"
## AMQP queue durability can be "transient" or "durable".
queue_durability = "durable"
## If true, queue will be passively declared.
# queue_passive = false
## Additional arguments when consuming from Queue
# queue_consume_arguments = { }
# queue_consume_arguments = {"x-stream-offset" = "first"}
## A binding between the exchange and queue using this binding key is
## created. If unset, no binding is created.
binding_key = "#"
## Maximum number of messages server should give to the worker.
# prefetch_count = 50
## Max undelivered messages
## This plugin uses tracking metrics, which ensure messages are read to
## outputs before acknowledging them to the original broker to ensure data
## is not lost. This option sets the maximum messages to read from the
## broker that have not been written by an output.
##
## This value needs to be picked with awareness of the agent's
## metric_batch_size value as well. Setting max undelivered messages too high
## can result in a constant stream of data batches to the output, while
## setting it too low may prevent the broker's messages from ever being flushed.
# max_undelivered_messages = 1000
## Timeout for establishing the connection to a broker
# timeout = "30s"
## Auth method. PLAIN and EXTERNAL are supported
## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
## described here: https://rabbitmq.cn/plugins.html
# auth_method = "PLAIN"
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Content encoding for message payloads, can be set to
## "gzip", "identity" or "auto"
## - Use "gzip" to decode gzip
## - Use "identity" to apply no encoding
## - Use "auto" determine the encoding using the ContentEncoding header
# content_encoding = "identity"
## Maximum size of decoded message.
## Acceptable units are B, KiB, KB, MiB, MB...
## Without quotes and units, interpreted as size in bytes.
# max_decompression_size = "500MB"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
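When content_encoding is left at "auto" (or set to "gzip"), compressed payloads can be decoded on the consumer side. The sketch below, again assuming the github.com/rabbitmq/amqp091-go client and a local broker, gzip-compresses a small batch of line protocol metrics and sets the ContentEncoding header so the plugin can detect the compression.

package main

import (
	"bytes"
	"compress/gzip"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Gzip-compress a small batch of line protocol metrics.
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write([]byte("mem,host=web01 used_percent=61.2\ncpu,host=web01 usage_user=3.4\n")); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}

	// The ContentEncoding header lets content_encoding = "auto" on the
	// consumer recognize and decompress the payload.
	msg := amqp.Publishing{
		ContentType:     "text/plain",
		ContentEncoding: "gzip",
		Body:            buf.Bytes(),
	}
	if err := ch.Publish("telegraf", "metrics.web01", false, false, msg); err != nil {
		log.Fatal(err)
	}
}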
Graylog
[[outputs.graylog]]
## Endpoints for your graylog instances.
servers = ["udp://127.0.0.1:12201"]
## Connection timeout.
# timeout = "5s"
## The field to use as the GELF short_message, if unset the static string
## "telegraf" will be used.
## example: short_message_field = "message"
# short_message_field = ""
## According to the GELF payload specification, additional field names must be prefixed
## with an underscore. Previous versions did not prefix custom field 'name' with underscore.
## Set to true for backward compatibility.
# name_field_no_prefix = false
## Connection retry options
## Attempt to connect to the endpoints if the initial connection fails.
## If 'false', Telegraf will give up after 3 connection attempts and will
## exit with an error. If set to 'true', the plugin will keep retrying the
## unconnected endpoints indefinitely.
# connection_retry = false
## Time to wait between connection retry attempts.
# connection_retry_wait_time = "15s"
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
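Before pointing Telegraf at a Graylog server, it can help to verify the endpoint with a hand-built GELF message. This is a minimal sketch assuming a GELF UDP input listening on 127.0.0.1:12201, as in the configuration above; the message contents are illustrative.

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Open a UDP "connection" to the GELF input configured above.
	conn, err := net.Dial("udp", "127.0.0.1:12201")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// A single uncompressed GELF 1.1 datagram with a UNIX timestamp.
	msg := fmt.Sprintf(`{"version":"1.1","host":"web01","short_message":"telegraf connectivity check","timestamp":%.3f,"_source":"manual-test"}`,
		float64(time.Now().UnixNano())/1e9)
	if _, err := conn.Write([]byte(msg)); err != nil {
		log.Fatal(err)
	}
}

If the message shows up in Graylog's search, the endpoint and network path are working and the plugin output can be enabled with confidence.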
Input and Output Integration Examples
AMQP
- Integrating application metrics with AMQP: Use the AMQP Consumer plugin to collect application metrics published to a RabbitMQ exchange. By configuring the plugin to listen on a specific queue, teams gain insight into application performance, tracking request rates, error counts, and latency metrics in real time. This setup not only helps detect anomalies but also provides valuable data for capacity planning and system optimization.
- Event-driven monitoring: Configure the AMQP Consumer to trigger specific monitoring events when certain conditions are met in an application. For example, when a message indicating a high error rate is received, the plugin can feed that data into monitoring tools to generate alerts or scaling events. This integration improves responsiveness to issues and automates parts of the operational workflow.
- Cross-platform data aggregation: Use the AMQP Consumer plugin to consolidate metrics from applications distributed across different platforms. With RabbitMQ as a centralized message broker, organizations can unify their monitoring data, enabling comprehensive analysis and dashboards through Telegraf while maintaining visibility across heterogeneous environments.
- Real-time log processing: Extend the AMQP Consumer to capture log data sent to a RabbitMQ exchange and process it in real time for monitoring and alerting. By analyzing log patterns, trends, and anomalies as they occur, this use case ensures operational issues are detected and resolved promptly.
Graylog
- Enhanced log management for cloud applications: Use the Graylog Telegraf plugin to aggregate logs from cloud-deployed applications running across multiple servers. By integrating this plugin, teams can centralize their logging data, making it easier to troubleshoot issues, monitor application performance, and stay compliant with logging standards.
- Real-time security monitoring: Use the Graylog plugin to collect security-related metrics and logs and send them to a Graylog server for real-time analysis. By correlating logs from various sources across the infrastructure, security teams can quickly identify anomalies, track potential breaches, and respond to incidents promptly.
- Dynamic alerting and notification systems: Implement the Graylog plugin to strengthen alerting mechanisms within your infrastructure. By sending metrics to Graylog, teams can set up dynamic alerts based on log patterns or unexpected behavior, enabling proactive monitoring and rapid incident response.
- Cross-platform log consolidation: Use the Graylog plugin to consolidate logs across diverse environments, including on-premises, hybrid, and cloud deployments. By standardizing logging on the GELF format, organizations ensure consistent monitoring and troubleshooting practices regardless of where their services are hosted.