AMQP and TimescaleDB Integration

Powerful performance and easy integration, powered by Telegraf, the open source data connector built by InfluxData.

Info: This is not the recommended configuration for real-time queries at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider the AMQP and InfluxDB integration.

Input and Output Integration Overview

The AMQP Consumer input plugin allows you to ingest data from an AMQP 0-9-1 compliant message broker, such as RabbitMQ, enabling seamless data collection for monitoring and analytics purposes.

This output plugin provides a reliable and efficient mechanism for routing metrics collected by Telegraf directly into TimescaleDB. By leveraging PostgreSQL's robust ecosystem together with TimescaleDB's time-series optimizations, it supports high-performance data ingestion and advanced querying capabilities.

Integration Details

AMQP

This plugin provides a consumer for AMQP 0-9-1, of which RabbitMQ is a prominent implementation. AMQP, the Advanced Message Queuing Protocol, was originally developed to enable reliable, interoperable messaging between different systems on a network. The plugin reads metrics from a topic exchange using a configured queue and binding key, offering a flexible and efficient way to collect data from AMQP-compliant messaging systems. This allows users to leverage existing RabbitMQ deployments to monitor their applications effectively, capturing detailed metrics for analysis and alerting.
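
As a quick smoke test of this flow, you can publish a metric in InfluxDB line protocol to the exchange the consumer reads from. The following is a minimal sketch using the Python pika client; the broker URL, exchange, and "#" binding match the sample configuration further down, while the script name and the routing key telegraf.cpu are arbitrary illustrative choices:

# publish_test_metric.py -- hypothetical helper, not part of Telegraf.
# Publishes one test metric to the exchange the amqp_consumer plugin reads from.
import pika

# Broker URL matches the sample config below (vhost "influxdb").
params = pika.URLParameters("amqp://localhost:5672/influxdb")
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Declare the same durable topic exchange the consumer binds its queue to.
channel.exchange_declare(exchange="telegraf", exchange_type="topic", durable=True)

# One metric in InfluxDB line protocol. Any routing key works here because
# the sample binding key "#" matches everything.
channel.basic_publish(
    exchange="telegraf",
    routing_key="telegraf.cpu",
    body="cpu,host=server01 usage_idle=87.5",
)
connection.close()

If Telegraf is running with the configuration below, this message is parsed by the influx data format parser and emitted as an ordinary metric.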

TimescaleDB

TimescaleDB is an open source time-series database built as an extension to PostgreSQL, designed to handle large-scale, time-oriented data efficiently. Launched in 2017, it emerged in response to the growing need for a robust, scalable solution that could manage vast volumes of data with high insert rates and complex queries. By leveraging PostgreSQL's familiar SQL interface and enhancing it with purpose-built time-series capabilities, TimescaleDB quickly gained popularity among developers looking to add time-series functionality to existing relational databases. Its hybrid approach lets users benefit from PostgreSQL's flexibility, reliability, and ecosystem while enjoying optimized performance for time-series data.

The database is particularly effective in environments that demand both fast ingestion of data points and complex analytical queries over historical periods. TimescaleDB offers innovative features such as hypertables, which transparently partition data into manageable chunks, and built-in continuous aggregates. These capabilities significantly improve query speed and resource efficiency.
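
The sketch below illustrates both features by hand using Python and psycopg2: it converts a plain table into a hypertable and defines a continuous aggregate over it. The connection settings, the cpu table, and its columns are assumptions for illustration only; in this integration the output plugin creates the metric tables itself.

# Illustrative sketch; assumes the timescaledb extension is installed and an
# empty table cpu(time timestamptz, host text, usage_idle double precision).
import psycopg2

conn = psycopg2.connect("host=localhost dbname=metrics user=postgres")
conn.autocommit = True  # continuous aggregates cannot be created inside a transaction
cur = conn.cursor()

# Hypertable: transparently partitions the table into time-based chunks.
cur.execute("SELECT create_hypertable('cpu', 'time', if_not_exists => TRUE)")

# Continuous aggregate: incrementally maintained 5-minute averages.
cur.execute("""
    CREATE MATERIALIZED VIEW cpu_5m
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('5 minutes', time) AS bucket,
           host,
           avg(usage_idle) AS avg_idle
    FROM cpu
    GROUP BY time_bucket('5 minutes', time), host
""")

cur.close()
conn.close()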

Configuration

AMQP

[[inputs.amqp_consumer]]
  ## Brokers to consume from.  If multiple brokers are specified a random broker
  ## will be selected anytime a connection is established.  This can be
  ## helpful for load balancing when not using a dedicated load balancer.
  brokers = ["amqp://localhost:5672/influxdb"]

  ## Authentication credentials for the PLAIN auth_method.
  # username = ""
  # password = ""

  ## Name of the exchange to declare.  If unset, no exchange will be declared.
  exchange = "telegraf"

  ## Exchange type; common types are "direct", "fanout", "topic", "header", "x-consistent-hash".
  # exchange_type = "topic"

  ## If true, exchange will be passively declared.
  # exchange_passive = false

  ## Exchange durability can be either "transient" or "durable".
  # exchange_durability = "durable"

  ## Additional exchange arguments.
  # exchange_arguments = { }
  # exchange_arguments = {"hash_property" = "timestamp"}

  ## AMQP queue name.
  queue = "telegraf"

  ## AMQP queue durability can be "transient" or "durable".
  queue_durability = "durable"

  ## If true, queue will be passively declared.
  # queue_passive = false

  ## Additional arguments when consuming from Queue
  # queue_consume_arguments = { }
  # queue_consume_arguments = {"x-stream-offset" = "first"}

  ## A binding between the exchange and queue using this binding key is
  ## created.  If unset, no binding is created.
  binding_key = "#"

  ## Maximum number of messages server should give to the worker.
  # prefetch_count = 50

  ## Max undelivered messages
  ## This plugin uses tracking metrics, which ensure messages are read to
  ## outputs before acknowledging them to the original broker to ensure data
  ## is not lost. This option sets the maximum messages to read from the
  ## broker that have not been written by an output.
  ##
  ## This value needs to be picked with awareness of the agent's
  ## metric_batch_size value as well. Setting max undelivered messages too high
  ## can result in a constant stream of data batches to the output, while
  ## setting it too low may prevent the broker's messages from ever being flushed.
  # max_undelivered_messages = 1000

  ## Timeout for establishing the connection to a broker
  # timeout = "30s"

  ## Auth method. PLAIN and EXTERNAL are supported
  ## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
  ## described here: https://www.rabbitmq.com/plugins.html
  # auth_method = "PLAIN"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Content encoding for message payloads, can be set to
  ## "gzip", "identity" or "auto"
  ## - Use "gzip" to decode gzip
  ## - Use "identity" to apply no encoding
  ## - Use "auto" determine the encoding using the ContentEncoding header
  # content_encoding = "identity"

  ## Maximum size of decoded message.
  ## Acceptable units are B, KiB, KB, MiB, MB...
  ## Without quotes and units, interpreted as size in bytes.
  # max_decompression_size = "500MB"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
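
With data_format = "influx", each message body is parsed as InfluxDB line protocol, so a payload such as the following made-up example becomes a single metric with a measurement, tags, fields, and an optional nanosecond timestamp:

cpu,host=server01,region=us-west usage_idle=87.5,usage_user=9.2 1700000000000000000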

TimescaleDB

# Publishes metrics to a TimescaleDB database
[[outputs.postgresql]]
  ## Specify connection address via the standard libpq connection string:
  ##   host=... user=... password=... sslmode=... dbname=...
  ## Or a URL:
  ##   postgres://[user[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
  ## See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
  ##
  ## All connection parameters are optional. Environment vars are also supported.
  ## e.g. PGPASSWORD, PGHOST, PGUSER, PGDATABASE
  ## All supported vars can be found here:
  ##  https://www.postgresql.org/docs/current/libpq-envars.html
  ##
  ## Non-standard parameters:
  ##   pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
  ##   pool_min_conns (default: 0) - Minimum size of connection pool.
  ##   pool_max_conn_lifetime (default: 0s) - Maximum connection age before closing.
  ##   pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
  ##   pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
  # connection = ""

  ## Postgres schema to use.
  # schema = "public"

  ## Store tags as foreign keys in the metrics table. Default is false.
  # tags_as_foreign_keys = false

  ## Suffix to append to table name (measurement name) for the foreign tag table.
  # tag_table_suffix = "_tag"

  ## Deny inserting metrics if the foreign tag can't be inserted.
  # foreign_tag_constraint = false

  ## Store all tags as a JSONB object in a single 'tags' column.
  # tags_as_jsonb = false

  ## Store all fields as a JSONB object in a single 'fields' column.
  # fields_as_jsonb = false

  ## Name of the timestamp column
  ## NOTE: Some tools (e.g. Grafana) require the default name so be careful!
  # timestamp_column_name = "time"

  ## Type of the timestamp column
  ## Currently, "timestamp without time zone" and "timestamp with time zone"
  ## are supported
  # timestamp_column_type = "timestamp without time zone"

  ## Templated statements to execute when creating a new table.
  # create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }})''',
  # ]
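
  ## When writing to TimescaleDB, these templates are commonly extended so that
  ## each newly created table is also converted into a hypertable. The lines
  ## below are a sketch, assuming the timescaledb extension is installed in
  ## the target database:
  # create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }})''',
  #   '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '1 week', if_not_exists => true)''',
  # ]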

  ## Templated statements to execute when adding columns to a table.
  ## Set to an empty list to disable. Points containing tags for which there is
  ## no column will be skipped. Points containing fields for which there is no
  ## column will have the field omitted.
  # add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## Templated statements to execute when creating a new tag table.
  # tag_table_create_templates = [
  #   '''CREATE TABLE {{ .table }} ({{ .columns }}, PRIMARY KEY (tag_id))''',
  # ]

  ## Templated statements to execute when adding columns to a tag table.
  ## Set to an empty list to disable. Points containing tags for which there is
  ## no column will be skipped.
  # tag_table_add_column_templates = [
  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
  # ]

  ## The postgres data type to use for storing unsigned 64-bit integer values
  ## (Postgres does not have a native unsigned 64-bit integer type).
  ## The value can be one of:
  ##   numeric - Uses the PostgreSQL "numeric" data type.
  ##   uint8 - Requires pguint extension (https://github.com/petere/pguint)
  # uint64_type = "numeric"

  ## When using pool_max_conns > 1, and a temporary error occurs, the query is
  ## retried with an incremental backoff. This controls the maximum duration.
  # retry_max_backoff = "15s"

  ## Approximate number of tag IDs to store in in-memory cache (when using
  ## tags_as_foreign_keys). This is an optimization to skip inserting known
  ## tag IDs. Each entry consumes approximately 34 bytes of memory.
  # tag_cache_size = 100000

  ## Cut column names at the given length to not exceed PostgreSQL's
  ## 'identifier length' limit (default: no limit)
  ## (see https://www.postgresql.org/docs/current/limits.html)
  ## Be careful to not create duplicate column names!
  # column_name_length_limit = 0

  ## Enable & set the log level for the Postgres driver.
  # log_level = "warn" # trace, debug, info, warn, error, none
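
Once metrics are flowing, a quick way to verify ingestion is to query one of the newly created tables. Below is a minimal Python sketch; the metrics database name, the cpu table, its host column, and the usage_idle field are assumptions (they correspond to Telegraf's default CPU input), and the time column matches the plugin default above.

import psycopg2

conn = psycopg2.connect("host=localhost dbname=metrics user=postgres")
cur = conn.cursor()

# 1-minute averages over the last hour, using TimescaleDB's time_bucket().
cur.execute("""
    SELECT time_bucket('1 minute', time) AS minute,
           host,
           avg(usage_idle) AS avg_idle
    FROM cpu
    WHERE time > now() - INTERVAL '1 hour'
    GROUP BY time_bucket('1 minute', time), host
    ORDER BY minute DESC
""")
for minute, host, avg_idle in cur.fetchall():
    print(minute, host, avg_idle)

cur.close()
conn.close()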

Input and Output Integration Examples

AMQP

  1. Integrating application metrics with AMQP: Use the AMQP Consumer plugin to collect application metrics published to a RabbitMQ exchange. By configuring the plugin to listen on a specific queue, teams can gain insight into application performance, tracking request rates, error counts, and latency metrics in real time. This setup not only aids anomaly detection but also provides valuable data for capacity planning and system optimization.

  2. Event-driven monitoring: Configure the AMQP Consumer to trigger specific monitoring events whenever certain conditions are met in an application. For example, if a message indicating a high error rate is received, the plugin can feed this data into monitoring tools to generate alerts or scaling events. This integration improves responsiveness to issues and automates parts of the operational workflow.

  3. Cross-platform data aggregation: Leverage the AMQP Consumer plugin to consolidate metrics from applications distributed across different platforms. By using RabbitMQ as a centralized message broker, organizations can unify their monitoring data for comprehensive analysis and dashboarding through Telegraf, maintaining visibility across heterogeneous environments.

  4. Real-time log processing: Extend the AMQP Consumer to capture log data sent to a RabbitMQ exchange, processing it in real time for monitoring and alerting. By analyzing log patterns, trends, and anomalies as they occur, this application ensures operational issues are detected and resolved quickly.

TimescaleDB

  1. Real-time IoT data ingestion: Use the plugin to collect and store sensor data from thousands of IoT devices in real time. This setup facilitates immediate analysis, helping organizations monitor operational efficiency and respond quickly to changing conditions.

  2. Cloud application performance monitoring: Leverage the plugin to feed detailed performance metrics from distributed cloud applications into TimescaleDB. This integration supports real-time dashboards and alerting, enabling teams to rapidly identify and mitigate performance bottlenecks.

  3. Historical data analysis and reporting: Implement a system that stores long-term metrics in TimescaleDB for comprehensive historical analysis. This approach lets businesses perform trend analysis, generate detailed reports, and make data-driven decisions based on archived time-series data.

  4. Adaptive alerting and anomaly detection: Integrate the plugin with automated anomaly detection workflows. By continuously streaming metrics into TimescaleDB, machine learning models can analyze data patterns and trigger alerts when anomalies occur, enhancing system reliability and proactive maintenance.

