AMQP and PostgreSQL Integration

Powered by Telegraf, the open source data connector built by InfluxData, for powerful performance and easy integration.

Note

This is not the recommended configuration for large-scale real-time queries. For query and compression optimization, high-speed ingest, and high availability, you may want to consider AMQP and InfluxDB.

Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data. InfluxDB is the #1 time series platform, built to scale with Telegraf.

See Ways to Get Started

Input and Output Integration Overview

The AMQP Consumer input plugin lets you ingest data from AMQP 0-9-1 compliant message brokers, such as RabbitMQ, enabling seamless data collection for monitoring and analytics.

The Telegraf PostgreSQL plugin lets you write metrics to a PostgreSQL database efficiently while automatically managing the database schema.

Integration Details

AMQP

This plugin provides a consumer for AMQP 0-9-1, of which RabbitMQ is a prominent implementation. AMQP, the Advanced Message Queuing Protocol, was originally developed to enable reliable, interoperable messaging between different systems on a network. The plugin reads metrics from a topic exchange using a configured queue and binding key, providing a flexible and efficient way to collect data from AMQP-compliant messaging systems. This lets users leverage existing RabbitMQ deployments to monitor their applications effectively, capturing detailed metrics for analysis and alerting.
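To illustrate how the consumer fits into a pipeline, here is a minimal producer sketch that publishes one metric in InfluxDB line protocol to the exchange declared in the configuration below. It assumes a local RabbitMQ broker with an "influxdb" vhost, the "telegraf" topic exchange, and the pika client library; adjust the names to match your own setup.

import pika  # pip install pika

# Connect to the broker referenced in the sample configuration (vhost "influxdb" is an assumption).
connection = pika.BlockingConnection(pika.URLParameters("amqp://localhost:5672/influxdb"))
channel = connection.channel()

# Declare the same durable topic exchange the Telegraf consumer binds its queue to.
channel.exchange_declare(exchange="telegraf", exchange_type="topic", durable=True)

# Publish one metric in InfluxDB line protocol; the consumer's data_format = "influx"
# parses it into a Telegraf metric. Any routing key matches the "#" binding key.
channel.basic_publish(
    exchange="telegraf",
    routing_key="telegraf.metrics",
    body="cpu,host=web01 usage_idle=92.5",
)

connection.close()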

PostgreSQL

The PostgreSQL plugin enables users to write metrics to a PostgreSQL database, or a compatible database, with robust schema management that automatically adds missing columns. The plugin is designed to integrate with monitoring solutions, letting users store and manage time series data efficiently. It offers configurable options for connection settings, concurrency, and error handling, and supports advanced features such as JSONB storage for tags and fields, foreign-key tags, templated schema modifications, and unsigned integer data types via the pguint extension.
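Because the plugin creates and extends tables on its own, it can be useful to inspect the schema it has produced. The following sketch assumes the psycopg2 driver, a local "telegraf" database, and a "cpu" measurement table created by the plugin; all of these names are placeholders for your environment.

import psycopg2  # pip install psycopg2-binary

# Connection parameters are assumptions; use the same database the output plugin writes to.
conn = psycopg2.connect(host="localhost", dbname="telegraf", user="telegraf", password="secret")
cur = conn.cursor()

# The plugin creates one table per measurement; "cpu" is a hypothetical example.
cur.execute(
    """
    SELECT column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = 'public' AND table_name = %s
    ORDER BY ordinal_position
    """,
    ("cpu",),
)
for name, dtype in cur.fetchall():
    print(f"{name}: {dtype}")

cur.close()
conn.close()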

Configuration

AMQP

[[inputs.amqp_consumer]]
  ## Brokers to consume from.  If multiple brokers are specified a random broker
  ## will be selected anytime a connection is established.  This can be
  ## helpful for load balancing when not using a dedicated load balancer.
  brokers = ["amqp://localhost:5672/influxdb"]

  ## Authentication credentials for the PLAIN auth_method.
  # username = ""
  # password = ""

  ## Name of the exchange to declare.  If unset, no exchange will be declared.
  exchange = "telegraf"

  ## Exchange type; common types are "direct", "fanout", "topic", "header", "x-consistent-hash".
  # exchange_type = "topic"

  ## If true, exchange will be passively declared.
  # exchange_passive = false

  ## Exchange durability can be either "transient" or "durable".
  # exchange_durability = "durable"

  ## Additional exchange arguments.
  # exchange_arguments = { }
  # exchange_arguments = {"hash_property" = "timestamp"}

  ## AMQP queue name.
  queue = "telegraf"

  ## AMQP queue durability can be "transient" or "durable".
  queue_durability = "durable"

  ## If true, queue will be passively declared.
  # queue_passive = false

  ## Additional arguments when consuming from Queue
  # queue_consume_arguments = { }
  # queue_consume_arguments = {"x-stream-offset" = "first"}

  ## A binding between the exchange and queue using this binding key is
  ## created.  If unset, no binding is created.
  binding_key = "#"

  ## Maximum number of messages server should give to the worker.
  # prefetch_count = 50

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output. While
  ## setting it too low may never flush the broker's messages.
  # max_undelivered_messages = 1000

  ## Timeout for establishing the connection to a broker
  # timeout = "30s"

  ## Auth method. PLAIN and EXTERNAL are supported
  ## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
  ## described here: https://rabbitmq.cn/plugins.html
  # auth_method = "PLAIN"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Content encoding for message payloads, can be set to
  ## "gzip", "identity" or "auto"
  ## - Use "gzip" to decode gzip
  ## - Use "identity" to apply no encoding
  ## - Use "auto" determine the encoding using the ContentEncoding header
  # content_encoding = "identity"

  ## Maximum size of decoded message.
  ## Acceptable units are B, KiB, KB, MiB, MB...
  ## Without quotes and units, interpreted as size in bytes.
  # max_decompression_size = "500MB"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
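
The content_encoding option above also accepts compressed payloads. As a rough sketch, and again assuming the pika client library, a producer can gzip the line-protocol body and advertise that in the message properties so that content_encoding = "auto" (or "gzip") on the consumer side decodes it:

import gzip
import pika  # pip install pika

connection = pika.BlockingConnection(pika.URLParameters("amqp://localhost:5672/influxdb"))
channel = connection.channel()

# Compress the payload and mark it via the AMQP content_encoding property so the
# consumer's "auto" setting can pick the right decoder.
payload = gzip.compress(b"cpu,host=web01 usage_idle=92.5")
channel.basic_publish(
    exchange="telegraf",
    routing_key="telegraf.metrics",
    body=payload,
    properties=pika.BasicProperties(content_encoding="gzip"),
)

connection.close()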

PostgreSQL

# Publishes metrics to a postgresql database
[[outputs.postgresql]]
  ## Specify connection address via the standard libpq connection string:
  ##   host=... user=... password=... sslmode=... dbname=...
  ## Or a URL:
  ##   postgres://[user[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
  ## See https://postgresql.ac.cn/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
  ##
  ## All connection parameters are optional. Environment vars are also supported.
  ## e.g. PGPASSWORD, PGHOST, PGUSER, PGDATABASE
  ## All supported vars can be found here:
  ##  https://postgresql.ac.cn/docs/current/libpq-envars.html
  ##
  ## Non-standard parameters:
  ##   pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
  ##   pool_min_conns (default: 0) - Minimum size of connection pool.
  ##   pool_max_conn_lifetime (default: 0s) - Maximum age of a connection before closing.
  ##   pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
  ##   pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
  # connection = ""

  ## Postgres schema to use.
  # schema = "public"

  ## Store tags as foreign keys in the metrics table. Default is false.
  # tags_as_foreign_keys = false

  ## Suffix to append to table name (measurement name) for the foreign tag table.
  # tag_table_suffix = "_tag"

  ## Deny inserting metrics if the foreign tag can't be inserted.
  # foreign_tag_constraint = false

  ## Store all tags as a JSONB object in a single 'tags' column.
  # tags_as_jsonb = false

  ## Store all fields as a JSONB object in a single 'fields' column.
  # fields_as_jsonb = false

  ## Name of the timestamp column
  ## NOTE: Some tools (e.g. Grafana) require the default name so be careful!
  # timestamp_column_name = "time"

  ## Type of the timestamp column
  ## Currently, "timestamp without time zone" and "timestamp with time zone"
  ## are supported
  # timestamp_column_type = "timestamp without time zone"

  ## Templated statements to execute when creating a new table.
  # create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }})''',
  # ]

  ## Templated statements to execute when adding columns to a table.
  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped. Points
  ## containing fields for which there is no column will have the field omitted.
  # add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## Templated statements to execute when creating a new tag table.
  # tag_table_create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }}, PRIMARY KEY (tag_id))''',
  # ]

  ## Templated statements to execute when adding columns to a tag table.
  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped.
  # tag_table_add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## The postgres data type to use for storing unsigned 64-bit integer values (Postgres does not have a native
  ## unsigned 64-bit integer type).
  ## The value can be one of:
  ##   numeric - Uses the PostgreSQL "numeric" data type.
  ##   uint8 - Requires pguint extension (https://github.com/petere/pguint)
  # uint64_type = "numeric"

  ## When using pool_max_conns>1, and a temporary error occurs, the query is retried with an incremental backoff. This
  ## controls the maximum backoff duration.
  # retry_max_backoff = "15s"

  ## Approximate number of tag IDs to store in in-memory cache (when using tags_as_foreign_keys).
  ## This is an optimization to skip inserting known tag IDs.
  ## Each entry consumes approximately 34 bytes of memory.
  # tag_cache_size = 100000

  ## Enable & set the log level for the Postgres driver.
  # log_level = "warn" # trace, debug, info, warn, error, none
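
Before pointing the plugin at a database, the target database and role have to exist. Below is a minimal setup sketch, assuming the psycopg2 driver and a local superuser connection; the role name, database name, and passwords are placeholders. The plugin's connection option can then point at this database, for example host=localhost user=telegraf dbname=telegraf.

import psycopg2  # pip install psycopg2-binary

# Connect as a superuser; CREATE DATABASE cannot run inside a transaction block,
# so autocommit is required here.
conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres", password="secret")
conn.autocommit = True
cur = conn.cursor()

cur.execute("CREATE ROLE telegraf LOGIN PASSWORD 'telegraf-secret'")
cur.execute("CREATE DATABASE telegraf OWNER telegraf")

cur.close()
conn.close()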

Input and Output Integration Examples

AMQP

  1. Integrating Application Metrics with AMQP: Use the AMQP Consumer plugin to collect application metrics published to a RabbitMQ exchange. By configuring the plugin to listen on a specific queue, teams gain insight into application performance, tracking request rates, error counts, and latency metrics in real time. This setup not only aids anomaly detection but also provides valuable data for capacity planning and system optimization (see the sketch after this list).

  2. Event-Driven Monitoring: Configure the AMQP Consumer to trigger specific monitoring events when certain conditions are met in your application. For example, when a message indicating a high error rate arrives, the plugin can feed that data into monitoring tools that generate alerts or scaling events. This integration improves responsiveness to issues and automates parts of the operational workflow.

  3. Cross-Platform Data Aggregation: Use the AMQP Consumer plugin to consolidate metrics from applications spread across different platforms. With RabbitMQ as the centralized message broker, organizations can unify their monitoring data, enabling comprehensive analysis and dashboards through Telegraf while maintaining visibility across heterogeneous environments.

  4. Real-Time Log Processing: Extend the AMQP Consumer to capture log data sent to a RabbitMQ exchange and process it in real time for monitoring and alerting. This ensures that operational issues are detected and resolved quickly by analyzing log patterns, trends, and anomalies as they occur.
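As a sketch of the first scenario, an application can push its own counters to the exchange the consumer reads from. The field names, routing key, and broker details below are illustrative only, and the pika client library is assumed:

import pika  # pip install pika

connection = pika.BlockingConnection(pika.URLParameters("amqp://localhost:5672/influxdb"))
channel = connection.channel()

# One line-protocol point carrying request rate, error count, and latency for a service.
point = "app_metrics,service=checkout,host=web01 request_rate=120i,error_count=3i,latency_ms=42.7"
channel.basic_publish(exchange="telegraf", routing_key="telegraf.app", body=point)

connection.close()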

PostgreSQL

  1. Real-Time Analytics with Complex Queries: Use the PostgreSQL plugin to store metrics from various sources in a PostgreSQL database and run real-time analytics with complex queries. Because the data is relational and spread across multiple tables, data scientists and analysts can uncover patterns and trends while taking advantage of PostgreSQL's powerful query optimizer. In particular, users can build sophisticated reports with JOIN operations across metric tables, revealing insights that would otherwise remain hidden (a query sketch follows this list).

  2. Integrating with TimescaleDB for Time Series Data: Use the PostgreSQL plugin against a TimescaleDB instance to process and analyze time series data efficiently. By creating hypertables, users gain better performance and partitioning along the time dimension. This integration lets users run analytical queries over large volumes of time series data while retaining the full power of PostgreSQL's SQL, ensuring reliable and efficient metric analysis.

  3. Data Versioning and Historical Analysis: Use the PostgreSQL plugin as part of a strategy for maintaining different versions of your metrics. Users can set up an immutable table structure in which older versions of tables are retained, enabling straightforward historical analysis. This approach not only offers insight into how the data evolves but also helps satisfy data retention policies by keeping the historical integrity of the dataset intact.

  4. Dynamic Schema Management for Evolving Metrics: Use the plugin's templating capabilities to create schemas that adapt as metrics change. This lets organizations evolve their data structures along with their metrics, adding the necessary columns while honoring data integrity policies. By relying on templated SQL statements, users can extend the database without manual intervention, supporting agile data management practices.
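As a sketch of the first scenario, the query below joins two hypothetical measurement tables written by the plugin ("cpu" and "mem") on time and host to correlate CPU and memory behavior. Table names, column names, and connection details are assumptions, and the psycopg2 driver is assumed:

import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(host="localhost", dbname="telegraf", user="telegraf", password="secret")
cur = conn.cursor()

# Correlate CPU idle time with memory usage per host over the last hour.
cur.execute(
    """
    SELECT c.time, c.host, c.usage_idle, m.used_percent
    FROM cpu AS c
    JOIN mem AS m ON m.time = c.time AND m.host = c.host
    WHERE c.time > now() - interval '1 hour'
    ORDER BY c.time DESC
    LIMIT 20
    """
)
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()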

Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports configuration options for various authentication methods and data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and allows the creation of metrics based on those messages. It supports various configurations, including different Kafka settings and message processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin lets you read metrics from AWS Kinesis streams. It supports multiple input data formats and provides checkpointing with DynamoDB for reliable message processing.

View Integration