Input and Output Integration Overview
This plugin receives traces, metrics, and logs from OpenTelemetry clients and agents over gRPC, enabling comprehensive observability of your applications.
This output plugin provides a reliable, efficient mechanism for routing metrics collected by Telegraf directly into TimescaleDB. Building on PostgreSQL's robust ecosystem and TimescaleDB's time-series optimizations, it supports high-performance data ingestion and advanced query capabilities.
Integration Details
OpenTelemetry
The OpenTelemetry plugin is designed to receive telemetry data such as traces, metrics, and logs over gRPC from clients and agents that implement OpenTelemetry. Unlike standard input plugins that collect metrics at a fixed interval, this plugin starts a gRPC service that listens for incoming telemetry. The OpenTelemetry ecosystem helps developers observe and understand application performance by providing a vendor-neutral way to instrument, generate, collect, and export telemetry data. Key features of the plugin include a configurable connection timeout, an adjustable maximum message size for incoming data, and options for choosing which span, log, and profile attributes are used as tags on incoming metrics. With this flexibility, organizations can tailor telemetry collection to their exact observability requirements and ensure the data integrates seamlessly into systems such as InfluxDB.
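The receiver is only half of a Telegraf pipeline; the incoming telemetry still has to be routed to an output. The snippet below is a minimal sketch of such a pairing with the InfluxDB v2 output. The URL, token, organization, and bucket values are placeholders for illustration, not settings taken from this page.

[[inputs.opentelemetry]]
## Listen for OTLP traces, metrics, and logs on the default gRPC port.
service_address = "0.0.0.0:4317"
## Tag spans with the service and span name to make them easier to query.
span_dimensions = ["service.name", "span.name"]

[[outputs.influxdb_v2]]
## Placeholder connection details -- replace with your own deployment.
urls = ["http://localhost:8086"]
token = "$INFLUX_TOKEN"
organization = "example-org"
bucket = "telemetry"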
TimescaleDB
TimescaleDB is an open-source time-series database built as an extension of PostgreSQL, designed to handle large volumes of time-oriented data efficiently. Launched in 2017, it grew out of the increasing need for a robust, scalable solution that could sustain high insert rates and complex queries over massive datasets. By building on PostgreSQL's familiar SQL interface and extending it with purpose-built time-series features, TimescaleDB quickly gained popularity among developers who want time-series capabilities inside an existing relational database. Its hybrid approach lets users keep PostgreSQL's flexibility, reliability, and ecosystem while getting performance optimized for time-series workloads.
The database is particularly effective in environments that demand both fast ingestion of data points and complex analytical queries over historical periods. TimescaleDB offers innovative features such as hypertables, which transparently partition data into manageable chunks, and built-in continuous aggregates. These capabilities can markedly improve query speed and resource efficiency.
Configuration
OpenTelemetry
[[inputs.opentelemetry]]
## Override the default (0.0.0.0:4317) destination OpenTelemetry gRPC service
## address:port
# service_address = "0.0.0.0:4317"
## Override the default (5s) new connection timeout
# timeout = "5s"
## gRPC Maximum Message Size
# max_msg_size = "4MB"
## Override the default span attributes to be used as line protocol tags.
## These are always included as tags:
## - trace ID
## - span ID
## Common attributes can be found here:
## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
# span_dimensions = ["service.name", "span.name"]
## Override the default log record attributes to be used as line protocol tags.
## These are always included as tags, if available:
## - trace ID
## - span ID
## Common attributes can be found here:
## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
## When using InfluxDB for both logs and traces, be certain that log_record_dimensions
## matches the span_dimensions value.
# log_record_dimensions = ["service.name"]
## Override the default profile attributes to be used as line protocol tags.
## These are always included as tags, if available:
## - profile_id
## - address
## - sample
## - sample_name
## - sample_unit
## - sample_type
## - sample_type_unit
## Common attributes can be found here:
## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
# profile_dimensions = []
## Override the default (prometheus-v1) metrics schema.
## Supports: "prometheus-v1", "prometheus-v2"
## For more information about the alternatives, read the Prometheus input
## plugin notes.
# metrics_schema = "prometheus-v1"
## Optional TLS Config.
## For advanced options: https://github.com/influxdata/telegraf/blob/v1.18.3/docs/TLS.md
##
## Set one or more allowed client CA certificate file names to
## enable mutually authenticated TLS connections.
# tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
## Add service certificate and key.
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
TimescaleDB
# Publishes metrics to a TimescaleDB database
[[outputs.postgresql]]
## Specify connection address via the standard libpq connection string:
## host=... user=... password=... sslmode=... dbname=...
## Or a URL:
## postgres://[user[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
## See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
##
## All connection parameters are optional. Environment vars are also supported.
## e.g. PGPASSWORD, PGHOST, PGUSER, PGDATABASE
## All supported vars can be found here:
## https://www.postgresql.org/docs/current/libpq-envars.html
##
## Non-standard parameters:
## pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
## pool_min_conns (default: 0) - Minimum size of connection pool.
## pool_max_conn_lifetime (default: 0s) - Maximum connection age before closing.
## pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
## pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
# connection = ""
## Postgres schema to use.
# schema = "public"
## Store tags as foreign keys in the metrics table. Default is false.
# tags_as_foreign_keys = false
## Suffix to append to table name (measurement name) for the foreign tag table.
# tag_table_suffix = "_tag"
## Deny inserting metrics if the foreign tag can't be inserted.
# foreign_tag_constraint = false
## Store all tags as a JSONB object in a single 'tags' column.
# tags_as_jsonb = false
## Store all fields as a JSONB object in a single 'fields' column.
# fields_as_jsonb = false
## Name of the timestamp column
## NOTE: Some tools (e.g. Grafana) require the default name so be careful!
# timestamp_column_name = "time"
## Type of the timestamp column
## Currently, "timestamp without time zone" and "timestamp with time zone"
## are supported
# timestamp_column_type = "timestamp without time zone"
## Templated statements to execute when creating a new table.
# create_templates = [
# '''CREATE TABLE {{ .table }} ({{ .columns }})''',
# ]
## Templated statements to execute when adding columns to a table.
## Set to an empty list to disable. Points containing tags for which there is
## no column will be skipped. Points containing fields for which there is no
## column will have the field omitted.
# add_column_templates = [
# '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
# ]
## Templated statements to execute when creating a new tag table.
# tag_table_create_templates = [
# '''CREATE TABLE {{ .table }} ({{ .columns }}, PRIMARY KEY (tag_id))''',
# ]
## Templated statements to execute when adding columns to a tag table.
## Set to an empty list to disable. Points containing tags for which there is
## no column will be skipped.
# tag_table_add_column_templates = [
# '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
# ]
## The postgres data type to use for storing unsigned 64-bit integer values
## (Postgres does not have a native unsigned 64-bit integer type).
## The value can be one of:
## numeric - Uses the PostgreSQL "numeric" data type.
## uint8 - Requires pguint extension (https://github.com/petere/pguint)
# uint64_type = "numeric"
## When using pool_max_conns > 1, and a temporary error occurs, the query is
## retried with an incremental backoff. This controls the maximum duration.
# retry_max_backoff = "15s"
## Approximate number of tag IDs to store in in-memory cache (when using
## tags_as_foreign_keys). This is an optimization to skip inserting known
## tag IDs. Each entry consumes approximately 34 bytes of memory.
# tag_cache_size = 100000
## Cut column names at the given length to not exceed PostgreSQL's
## 'identifier length' limit (default: no limit)
## (see https://www.postgresql.org/docs/current/limits.html)
## Be careful to not create duplicate column names!
# column_name_length_limit = 0
## Enable & set the log level for the Postgres driver.
# log_level = "warn" # trace, debug, info, warn, error, none
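To get TimescaleDB-specific behavior rather than plain PostgreSQL tables, the create_templates option shown above is usually overridden so that each new measurement table is converted into a hypertable. The following is a sketch only: it assumes the timescaledb extension is installed in the target database, uses a placeholder connection string, and picks an illustrative seven-day chunk interval. Further templated statements (for example compression or retention policies) can be appended to the same list.

[[outputs.postgresql]]
## Placeholder connection string -- point this at your TimescaleDB instance.
connection = "host=localhost user=postgres dbname=telegraf"
## Create each measurement table, then convert it into a hypertable.
create_templates = [
    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '7 days')''',
]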
Input and Output Integration Examples
OpenTelemetry
- Unified monitoring across services: Use the OpenTelemetry plugin to collect and consolidate telemetry from the many microservices in a Kubernetes environment. By instrumenting each service with OpenTelemetry, you get a holistic view of application performance and dependencies, which speeds up troubleshooting and improves the reliability of complex systems.
- Enhanced debugging with traces: Capture end-to-end traces of requests as they flow through multiple services. For example, when a user initiates a transaction that fans out to several backend services, the plugin records detailed traces that highlight performance bottlenecks, giving developers the insight they need to debug issues and optimize their code.
- Dynamic load testing and performance monitoring: Use the plugin during load-testing phases to collect real-time metrics and traces under simulated peak load. This approach helps assess the resilience of application components and identify potential performance degradation before it reaches production, ensuring a smooth user experience.
- Integrated logging and metrics for real-time monitoring: Combine the OpenTelemetry plugin with a logging pipeline to collect logs alongside metric data and build a robust observability platform; for example, integrate it into a CI/CD pipeline to monitor builds and deployments while collecting logs that help diagnose failures in real time (a sketch of matching span and log dimensions follows this list).
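When traces and logs from the same services are stored side by side, as in the last example above, the configuration comments earlier recommend keeping log_record_dimensions in sync with span_dimensions so both signals carry the same tag keys. A minimal sketch; the chosen dimensions are illustrative:

[[inputs.opentelemetry]]
## Use the same attribute set for spans and log records so they can be joined on identical tags.
span_dimensions = ["service.name", "span.name"]
log_record_dimensions = ["service.name", "span.name"]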
TimescaleDB
- Real-time IoT data ingestion: Use the plugin to collect and store sensor data from thousands of IoT devices in real time; a sketch of the relevant agent batching settings follows this list. This setup enables immediate analysis, helping organizations monitor operational efficiency and respond quickly to changing conditions.
- Cloud application performance monitoring: Feed detailed performance metrics from distributed cloud applications into TimescaleDB. This integration supports real-time dashboards and alerting, letting teams identify and mitigate performance bottlenecks quickly.
- Historical data analysis and reporting: Store long-term metrics in TimescaleDB for in-depth historical analysis. This approach lets businesses run trend analyses, generate detailed reports, and make data-driven decisions from archived time-series data.
- Adaptive alerting and anomaly detection: Integrate the plugin with automated anomaly-detection workflows. By continuously streaming metrics into TimescaleDB, machine-learning models can analyze the data for unusual patterns and trigger alerts when anomalies occur, improving system reliability and enabling proactive maintenance.
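For the high-rate ingestion scenarios above, throughput often depends as much on Telegraf's agent-level batching as on the output plugin itself. The values below are illustrative starting points only; size them to your device count, metric cardinality, and memory budget.

[agent]
## Collect and flush frequently so TimescaleDB receives data in near real time.
interval = "10s"
flush_interval = "10s"
## Larger batches reduce per-insert overhead; the buffer absorbs short output stalls.
metric_batch_size = 5000
metric_buffer_limit = 100000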