Input and Output Integration Overview
The Docker input plugin lets you collect metrics from Docker containers using the Docker Engine API, enhancing visibility into and monitoring of containerized applications.
The InfluxDB plugin writes metrics to the InfluxDB HTTP service, enabling efficient storage and retrieval of time series data.
Integration details
Docker
Telegraf's Docker input plugin collects valuable metrics from the Docker Engine API, offering insight into running containers. The plugin uses the official Docker client to interact with the Engine API, allowing users to monitor container states, resource allocation, and performance metrics. With options to filter containers by name and state, along with customizable tags, it supports flexible monitoring of containerized applications across diverse environments, whether on local systems or on orchestration platforms such as Kubernetes. It also raises security considerations, since it requires access to the Docker daemon, and calls for careful configuration when deployed in containerized environments.
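For example, a minimal sketch of name- and state-based filtering (the glob pattern and state value are illustrative assumptions, not required settings):

[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"
  # Illustrative filters: only running containers whose names start with "web-"
  container_name_include = ["web-*"]
  container_state_include = ["running"]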
InfluxDB
The InfluxDB Telegraf plugin sends metrics to the InfluxDB HTTP API, facilitating structured storage and querying of time series data. The plugin integrates seamlessly with InfluxDB and provides essential features such as token-based authentication and support for multiple InfluxDB cluster nodes, ensuring reliable and scalable data ingestion. Through its configuration options, users can specify the organization, the target bucket, and HTTP-specific settings, giving them flexibility in how data is sent and stored. The plugin also supports secret management for sensitive data, improving security in production environments. It is especially valuable in modern observability stacks where real-time analytics and time series storage are essential.
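Note that token-based authentication, organizations, and buckets are InfluxDB 2.x concepts; in Telegraf they are configured through the outputs.influxdb_v2 plugin rather than the outputs.influxdb plugin shown in the configuration below. A minimal sketch, assuming a local InfluxDB 2.x instance (the organization and bucket names are placeholders):

[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8086"]
  token = "$INFLUX_TOKEN"    # read the API token from an environment variable
  organization = "my-org"    # placeholder organization name
  bucket = "telegraf"        # placeholder target bucket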
Configuration
Docker
[[inputs.docker]]
## Docker Endpoint
## To use TCP, set endpoint = "tcp://[ip]:[port]"
## To use environment variables (ie, docker-machine), set endpoint = "ENV"
endpoint = "unix:///var/run/docker.sock"
## Set to true to collect Swarm metrics (desired_replicas, running_replicas)
## Note: configure this in one of the manager nodes in a Swarm cluster.
## configuring in multiple Swarm managers results in duplication of metrics.
gather_services = false
## Only collect metrics for these containers. Values will be appended to
## container_name_include.
## Deprecated (1.4.0), use container_name_include
container_names = []
## Set the source tag for the metrics to the container ID hostname, e.g. the first 12 chars
source_tag = false
## Containers to include and exclude. Collect all if empty. Globs accepted.
container_name_include = []
container_name_exclude = []
## Container states to include and exclude. Globs accepted.
## When empty only containers in the "running" state will be captured.
# container_state_include = []
# container_state_exclude = []
## Objects to include for disk usage query
## Allowed values are "container", "image", "volume"
## When empty disk usage is excluded
storage_objects = []
## Timeout for docker list, info, and stats commands
timeout = "5s"
## Whether to report for each container per-device blkio (8:0, 8:1...),
## network (eth0, eth1, ...) and cpu (cpu0, cpu1, ...) stats or not.
## Usage of this setting is discouraged since it will be deprecated in favor of 'perdevice_include'.
## Default value is 'true' for backwards compatibility, please set it to 'false' so that 'perdevice_include' setting
## is honored.
perdevice = true
## Specifies for which classes a per-device metric should be issued
## Possible values are 'cpu' (cpu0, cpu1, ...), 'blkio' (8:0, 8:1, ...) and 'network' (eth0, eth1, ...)
## Please note that this setting has no effect if 'perdevice' is set to 'true'
# perdevice_include = ["cpu"]
## Whether to report for each container total blkio and network stats or not.
## Usage of this setting is discouraged since it will be deprecated in favor of 'total_include'.
## Default value is 'false' for backwards compatibility, please set it to 'true' so that 'total_include' setting
## is honored.
total = false
## Specifies for which classes a total metric should be issued. Total is an aggregate of the 'perdevice' values.
## Possible values are 'cpu', 'blkio' and 'network'
## Total 'cpu' is reported directly by Docker daemon, and 'network' and 'blkio' totals are aggregated by this plugin.
## Please note that this setting has no effect if 'total' is set to 'false'
# total_include = ["cpu", "blkio", "network"]
## docker labels to include and exclude as tags. Globs accepted.
## Note that an empty array for both will include all labels as tags
docker_label_include = []
docker_label_exclude = []
## Which environment variables should we use as a tag
tag_env = ["JAVA_HOME", "HEAP_SIZE"]
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
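Because perdevice and total are slated for deprecation, the comments above suggest the forward-compatible combination sketched here, which lets the per-class settings take effect:

[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"
  # Disable the legacy flags so the per-class settings below are honored
  perdevice = false
  total = true
  perdevice_include = ["cpu"]
  total_include = ["cpu", "blkio", "network"]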
InfluxDB
[[outputs.influxdb]]
## The full HTTP or UDP URL for your InfluxDB instance.
##
## Multiple URLs can be specified for a single cluster; only ONE of the
## URLs will be written to each interval.
# urls = ["unix:///var/run/influxdb.sock"]
# urls = ["udp://127.0.0.1:8089"]
# urls = ["http://127.0.0.1:8086"]
## Local address to bind when connecting to the server
## If empty or not set, the local address is automatically chosen.
# local_address = ""
## The target database for metrics; will be created as needed.
## For UDP url endpoint database needs to be configured on server side.
# database = "telegraf"
## The value of this tag will be used to determine the database. If this
## tag is not set the 'database' option is used as the default.
# database_tag = ""
## If true, the 'database_tag' will not be included in the written metric.
# exclude_database_tag = false
## If true, no CREATE DATABASE queries will be sent. Set to true when using
## Telegraf with a user without permissions to create databases or when the
## database already exists.
# skip_database_creation = false
## Name of existing retention policy to write to. Empty string writes to
## the default retention policy. Only takes effect when using HTTP.
# retention_policy = ""
## The value of this tag will be used to determine the retention policy. If this
## tag is not set the 'retention_policy' option is used as the default.
# retention_policy_tag = ""
## If true, the 'retention_policy_tag' will not be included in the written metric.
# exclude_retention_policy_tag = false
## Write consistency (clusters only), can be: "any", "one", "quorum", "all".
## Only takes effect when using HTTP.
# write_consistency = "any"
## Timeout for HTTP messages.
# timeout = "5s"
## HTTP Basic Auth
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
## HTTP User-Agent
# user_agent = "telegraf"
## UDP payload size is the maximum packet size to send.
# udp_payload = "512B"
## Optional TLS Config for use on HTTP connections.
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## HTTP Proxy override; if unset, the standard proxy environment
## variables are consulted to determine which proxy, if any, should be used.
# http_proxy = "http://corporate.proxy:3128"
## Additional HTTP headers
# http_headers = {"X-Special-Header" = "Special-Value"}
## HTTP Content-Encoding for write request body, can be set to "gzip" to
## compress body or "identity" to apply no encoding.
# content_encoding = "gzip"
## When true, Telegraf will output unsigned integers as unsigned values,
## i.e.: "42u". You will need a version of InfluxDB supporting unsigned
## integer values. Enabling this option will result in field type errors if
## existing data has been written.
# influx_uint_support = false
## When true, Telegraf will omit the timestamp on data to allow InfluxDB
## to set the timestamp of the data during ingestion. This is generally NOT
## what you want as it can lead to data points captured at different times
## getting omitted due to similar data.
# influx_omit_timestamp = false
Input and Output Integration Examples
Docker
- Monitor the performance of containerized applications: Use the Docker input plugin to track the CPU, memory, disk I/O, and network activity of applications running in Docker containers. By collecting these metrics, DevOps teams can proactively manage resource allocation, troubleshoot performance bottlenecks, and ensure optimal application performance across environments.
- Integrate with Kubernetes: Use this plugin to collect metrics from Docker containers orchestrated by Kubernetes. By filtering out unnecessary Kubernetes labels and focusing on key metrics, teams can streamline their monitoring setup and build dashboards that give insight into the overall health of the microservices running in a Kubernetes cluster (see the label-filtering sketch after this list).
- Capacity planning and resource optimization: Use the metrics collected by the Docker input plugin for capacity planning of Docker deployments. Analyzing usage patterns helps identify underutilized resources and over-provisioned containers, guiding decisions to scale up or down based on actual usage trends.
- Automated alerting on container anomalies: Set up alerting rules based on the metrics collected through the Docker plugin to notify teams of abnormal spikes in resource usage or of service outages. This proactive approach to monitoring helps maintain service reliability and keeps containerized applications performing well.
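As referenced above, a sketch of label filtering for a Kubernetes environment (the label names and glob patterns are illustrative assumptions; kubelet-managed Docker containers typically carry io.kubernetes.* labels):

[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"
  # Keep only pod identity labels as tags; drop verbose annotation labels
  docker_label_include = ["io.kubernetes.pod.name", "io.kubernetes.pod.namespace"]
  docker_label_exclude = ["annotation.*"]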
InfluxDB
- Real-time system monitoring: Use the InfluxDB plugin to capture and store metrics from various system components, such as CPU usage, memory consumption, and disk I/O. By pushing these metrics into InfluxDB, you can build a real-time dashboard that visualizes system performance; a minimal end-to-end configuration sketch follows this list. This setup not only helps identify performance bottlenecks but also supports proactive capacity planning through analysis of long-term trends.
- Performance tracking for web applications: Automatically collect metrics related to web application performance, such as request durations, error rates, and user interactions, and push them to InfluxDB. With this plugin in your monitoring stack, you can use the stored metrics to generate reports and analyses that clarify user behavior and application efficiency, guiding development and optimization work.
- IoT data aggregation: Use the InfluxDB Telegraf plugin to collect sensor data from a variety of IoT devices and store it in a centralized InfluxDB instance. This use case lets you analyze trends and patterns in environmental or machine data over time, supporting smarter decision-making and predictive maintenance strategies. By integrating IoT data into InfluxDB, organizations can harness historical data analysis to drive innovation and operational efficiency.
- Analyzing historical metrics for forecasting: Configure the InfluxDB plugin to send historical metric data to InfluxDB and use it to drive forecasting models. By analyzing past performance metrics, you can build predictive models that anticipate future trends and demand. This is particularly useful for business intelligence, helping organizations prepare for fluctuations in resource needs based on historical usage patterns.