Input and output integration overview
The AMQP Consumer input plugin lets you ingest data from AMQP 0-9-1 compliant message brokers such as RabbitMQ, enabling seamless data collection for monitoring and analysis.
The Telegraf SQL plugin lets you store Telegraf metrics directly in a MySQL database, making it easier to analyze and visualize the collected metrics.
Integration details
AMQP
This plugin provides a consumer for AMQP 0-9-1, of which RabbitMQ is a prominent implementation. AMQP, the Advanced Message Queuing Protocol, was originally developed to enable reliable, interoperable messaging between different systems on a network. The plugin reads metrics from a topic exchange through a configured queue and binding key, offering a flexible and efficient way to collect data from AMQP-compliant messaging systems. This lets users leverage an existing RabbitMQ deployment to monitor their applications effectively, capturing detailed metrics for analysis and alerting.
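To see the flow end to end, here is a minimal publisher sketch in Go that sends one metric in influx line protocol to the topic exchange this plugin consumes from. It assumes Telegraf is already running with the sample configuration below (broker at amqp://localhost:5672/influxdb, exchange "telegraf", binding_key "#"); the routing key metrics.app and the metric itself are made up for illustration.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Connect to the same broker URL as the sample configuration.
	conn, err := amqp.Dial("amqp://localhost:5672/influxdb")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// One metric encoded as influx line protocol (matches data_format = "influx").
	body := fmt.Sprintf("app_requests,service=checkout count=42i %d", time.Now().UnixNano())

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The sample binding_key "#" matches any routing key on the "telegraf" exchange.
	err = ch.PublishWithContext(ctx, "telegraf", "metrics.app", false, false,
		amqp.Publishing{ContentType: "text/plain", Body: []byte(body)})
	if err != nil {
		log.Fatal(err)
	}
}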
MySQL
Telegraf's SQL output plugin is designed to write metric data to a SQL database by dynamically creating tables and columns based on the incoming metrics. When configured for MySQL, the plugin uses go-sql-driver/mysql, which requires the ANSI_QUOTES SQL mode to be enabled so that quoted identifiers are handled correctly. With this dynamic schema approach, each metric is stored in its own table whose structure is derived from its fields and tags, providing a detailed, timestamped record of system performance. The plugin's flexibility lets it handle high-throughput environments, making it well suited to scenarios that call for robust, fine-grained metric logging and historical data analysis.
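As a concrete illustration of the dynamic schema, this Go sketch expands the default table_template the way the plugin would for a hypothetical metric named cpu with a host tag and a usage_idle field. The column types follow the default conversions listed further down; this mirrors the mechanism for clarity and is not the plugin's actual code.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Default template from the configuration below.
	template := "CREATE TABLE {TABLE}({COLUMNS})"

	// Under ANSI_QUOTES, identifiers are double-quoted. Columns are derived
	// from the metric: one timestamp column, one per tag, one per field.
	columns := []string{
		`"timestamp" TIMESTAMP`, // timestamp_column
		`"host" TEXT`,           // tag -> TEXT
		`"usage_idle" DOUBLE`,   // float field -> DOUBLE
	}

	stmt := strings.NewReplacer(
		"{TABLE}", `"cpu"`, // table is named after the metric
		"{COLUMNS}", strings.Join(columns, ", "),
	).Replace(template)

	fmt.Println(stmt)
	// CREATE TABLE "cpu"("timestamp" TIMESTAMP, "host" TEXT, "usage_idle" DOUBLE)
}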
Configuration
AMQP
[[inputs.amqp_consumer]]
## Brokers to consume from. If multiple brokers are specified a random broker
## will be selected anytime a connection is established. This can be
## helpful for load balancing when not using a dedicated load balancer.
brokers = ["amqp://localhost:5672/influxdb"]
## Authentication credentials for the PLAIN auth_method.
# username = ""
# password = ""
## Name of the exchange to declare. If unset, no exchange will be declared.
exchange = "telegraf"
## Exchange type; common types are "direct", "fanout", "topic", "header", "x-consistent-hash".
# exchange_type = "topic"
## If true, exchange will be passively declared.
# exchange_passive = false
## Exchange durability can be either "transient" or "durable".
# exchange_durability = "durable"
## Additional exchange arguments.
# exchange_arguments = { }
# exchange_arguments = {"hash_property" = "timestamp"}
## AMQP queue name.
queue = "telegraf"
## AMQP queue durability can be "transient" or "durable".
queue_durability = "durable"
## If true, queue will be passively declared.
# queue_passive = false
## Additional arguments when consuming from Queue
# queue_consume_arguments = { }
# queue_consume_arguments = {"x-stream-offset" = "first"}
## A binding between the exchange and queue using this binding key is
## created. If unset, no binding is created.
binding_key = "#"
## Maximum number of messages server should give to the worker.
# prefetch_count = 50
## Max undelivered messages
## This plugin uses tracking metrics, which ensure messages are read to
## outputs before acknowledging them to the original broker to ensure data
## is not lost. This option sets the maximum messages to read from the
## broker that have not been written by an output.
##
## This value needs to be picked with awareness of the agent's
## metric_batch_size value as well. Setting max undelivered messages too high
## can result in a constant stream of data batches to the output, while
## setting it too low may prevent the broker's messages from ever being flushed.
# max_undelivered_messages = 1000
## Timeout for establishing the connection to a broker
# timeout = "30s"
## Auth method. PLAIN and EXTERNAL are supported
## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
## described here: https://www.rabbitmq.com/plugins.html
# auth_method = "PLAIN"
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Content encoding for message payloads, can be set to
## "gzip", "identity" or "auto"
## - Use "gzip" to decode gzip
## - Use "identity" to apply no encoding
## - Use "auto" determine the encoding using the ContentEncoding header
# content_encoding = "identity"
## Maximum size of decoded message.
## Acceptable units are B, KiB, KB, MiB, MB...
## Without quotes and units, interpreted as size in bytes.
# max_decompression_size = "500MB"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
MySQL
[[outputs.sql]]
## Database driver
## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
## sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
driver = "mysql"
## Data source name
## The format of the data source name is different for each database driver.
## See the plugin readme for details.
data_source_name = "username:password@tcp(host:port)/dbname"
## Timestamp column name
timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE}({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL
init_sql = "SET sql_mode='ANSI_QUOTES';"
## Maximum amount of time a connection may be idle. "0s" means connections are
## never closed due to idle time.
connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections
## are never closed due to age.
connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
## NOTE: Due to the way TOML is parsed, tables must be at the END of the
## plugin definition, otherwise additional config options are read as part of the
## table
## Metric type to SQL type conversion
## The values on the left are the data types Telegraf has and the values on
## the right are the data types Telegraf will use when sending to a database.
##
## The database values used must be data types the destination database
## understands. It is up to the user to ensure that the selected data type is
## available in the database they are using. Refer to your database
## documentation for what data types are available and supported.
#[outputs.sql.convert]
# integer = "INT"
# real = "DOUBLE"
# text = "TEXT"
# timestamp = "TIMESTAMP"
# defaultvalue = "TEXT"
# unsigned = "UNSIGNED"
# bool = "BOOL"
# ## This setting controls the behavior of the unsigned value. By default the
# ## setting will take the integer value and append the unsigned value to it. The other
# ## option is "literal", which will use the actual value the user provides to
# ## the unsigned option. This is useful for a database like ClickHouse where
# ## the unsigned value should use a value like "uint64".
# # conversion_style = "unsigned_suffix"
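Once metrics are flowing, they can be read back with the same go-sql-driver/mysql driver the plugin uses. The following Go sketch is a hypothetical read-back, reusing the DSN placeholders from the configuration above and assuming a cpu table with host and usage_idle columns as in the schema example earlier; it sets ANSI_QUOTES on its own session so the double-quoted identifiers parse.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Same DSN shape as data_source_name above (placeholders kept as-is).
	db, err := sql.Open("mysql", "username:password@tcp(host:port)/dbname")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Keep a single pooled connection so the SET below applies to the
	// connection that Query uses; a real client would put sql_mode in the DSN.
	db.SetMaxOpenConns(1)

	// Mirror the plugin's init_sql so quoted identifiers are accepted.
	if _, err := db.Exec("SET sql_mode='ANSI_QUOTES'"); err != nil {
		log.Fatal(err)
	}

	rows, err := db.Query(`SELECT "timestamp", "host", "usage_idle" FROM "cpu" ORDER BY "timestamp" DESC LIMIT 10`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var ts, host string
		var idle float64
		if err := rows.Scan(&ts, &host, &idle); err != nil {
			log.Fatal(err)
		}
		fmt.Println(ts, host, idle)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}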
Input and output integration examples
AMQP
- Integrating application metrics with AMQP: Use the AMQP Consumer plugin to collect application metrics published to a RabbitMQ exchange. By configuring the plugin to listen on a specific queue, teams gain insight into application performance, tracking request rates, error counts, and latency metrics in real time. This setup not only helps with anomaly detection but also supplies valuable data for capacity planning and system optimization.
- Event-driven monitoring: Configure the AMQP Consumer to trigger specific monitoring events when certain conditions are met in an application. For example, when a message indicating a high error rate arrives, the plugin can feed that data into monitoring tools that generate alerts or scaling events. This integration shortens response times and automates parts of the operational workflow.
- Cross-platform data aggregation: Use the AMQP Consumer plugin to consolidate metrics from applications spread across different platforms. With RabbitMQ as a centralized message broker, organizations can unify their monitoring data for comprehensive analysis and dashboarding through Telegraf, maintaining visibility across heterogeneous environments.
- Real-time log processing: Extend the AMQP Consumer to capture log data sent to a RabbitMQ exchange, processing logs in real time for monitoring and alerting. Analyzing log patterns, trends, and anomalies this way ensures operational issues are detected and resolved quickly.
MySQL
- Real-time web analytics storage: Use the plugin to capture website performance metrics and store them in MySQL. This setup lets teams monitor user interactions, analyze traffic patterns, and dynamically adjust site features based on real-time data insights.
- IoT device monitoring: Use the plugin to collect metrics from a network of IoT sensors and log them to a MySQL database. This use case supports continuous monitoring of device health and performance, enabling predictive maintenance and immediate response to anomalies.
- Financial transaction logging: Record high-frequency financial transaction data with precise timestamps. This approach supports robust audit trails, real-time fraud detection, and comprehensive historical analysis for compliance and reporting purposes.
- Application performance benchmarking: Integrate the plugin with application performance monitoring systems to log metrics to MySQL. This enables detailed benchmarking and trend analysis over time, helping organizations identify performance bottlenecks and optimize resource allocation effectively.