Input and Output Integration Overview
This plugin collects monitoring data from Google Cloud services through the Stackdriver Monitoring API. It is designed to help users monitor the performance and health of their cloud infrastructure by gathering the relevant metrics.
The Telegraf Elasticsearch plugin seamlessly sends metrics to an Elasticsearch server. The plugin handles template creation and dynamic index management, and supports various Elasticsearch-specific features to ensure data is formatted correctly for storage and retrieval.
Integration Details
Google Cloud Stackdriver
The Stackdriver Telegraf plugin lets users query time series data from Google Cloud Monitoring using the Cloud Monitoring API v3. With this plugin, users can easily integrate Google Cloud monitoring metrics into their monitoring stacks. The API provides rich insight into resources and applications running in Google Cloud, including performance, uptime, and operational metrics. The plugin supports a variety of configuration options to filter and refine the retrieved data, so users can tailor their monitoring setup to their specific needs. This integration makes it easier to maintain the health and performance of cloud resources and helps teams make data-driven decisions based on historical and current performance statistics.
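As a quick orientation before the full reference configuration below, the following minimal sketch narrows collection to Compute Engine metrics and keeps only two block devices. The project ID and device names are placeholder assumptions; only options that appear in the reference configuration are used.
[[inputs.stackdriver]]
## Placeholder project ID -- replace with your own GCP project
project = "my-gcp-project"
## Collect only Compute Engine time series
metric_type_prefix_include = [
"compute.googleapis.com/",
]
## Stackdriver metrics update at most once per minute
interval = "1m"
## Illustrative filter: keep only two block devices
[[inputs.stackdriver.filter.metric_labels]]
key = "device_name"
value = 'one_of("sda", "sdb")'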
Elasticsearch
This plugin writes metrics to Elasticsearch, a distributed, RESTful search and analytics engine capable of storing large volumes of data in near real time. It is designed to work with Elasticsearch versions 5.x through 7.x and uses dynamic templates to manage data type mappings correctly. The plugin supports advanced features such as template management, dynamic index naming, and integration with OpenSearch. It also allows configuration of authentication and health monitoring for Elasticsearch nodes.
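As a minimal sketch of these ideas (the full reference configuration follows in the Configuration section), the block below writes to a single node and relies on the plugin's daily index naming and managed template; the node URL is a placeholder.
[[outputs.elasticsearch]]
## Placeholder endpoint -- replace with your Elasticsearch node
urls = [ "http://localhost:9200" ]
timeout = "5s"
## One index per day, resolved from each metric's timestamp
index_name = "telegraf-%Y.%m.%d"
## Let Telegraf create and maintain its recommended index template
manage_template = true
template_name = "telegraf"
## Periodically verify that the node is reachable
health_check_interval = "10s"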
Configuration
Google Cloud Stackdriver
[[inputs.stackdriver]]
## GCP Project
project = "erudite-bloom-151019"
## Include timeseries that start with the given metric type.
metric_type_prefix_include = [
"compute.googleapis.com/",
]
## Exclude timeseries that start with the given metric type.
# metric_type_prefix_exclude = []
## Most metrics are updated no more than once per minute; it is recommended
## to override the agent level interval with a value of 1m or greater.
interval = "1m"
## Maximum number of API calls to make per second. The quota for accounts
## varies, it can be viewed on the API dashboard:
## https://cloud.google.com/monitoring/quotas#quotas_and_limits
# rate_limit = 14
## The delay and window options control the number of points selected on
## each gather. When set, metrics are gathered between:
## start: now() - delay - window
## end: now() - delay
#
## Collection delay; if set too low metrics may not yet be available.
# delay = "5m"
#
## If unset, the window will start at 1m and be updated dynamically to span
## the time between calls (approximately the length of the plugin interval).
# window = "1m"
## TTL for cached list of metric types. This is the maximum amount of time
## it may take to discover new metrics.
# cache_ttl = "1h"
## If true, raw bucket counts are collected for distribution value types.
## For a more lightweight collection, you may wish to disable and use
## distribution_aggregation_aligners instead.
# gather_raw_distribution_buckets = true
## Aggregate functions to be used for metrics whose value type is
## distribution. These aggregate values are recorded in addition to raw
## bucket counts, if they are enabled.
##
## For a list of aligner strings see:
## https://cloud.google.com/monitoring/api/ref_v3/rpc/google.monitoring.v3#aligner
# distribution_aggregation_aligners = [
# "ALIGN_PERCENTILE_99",
# "ALIGN_PERCENTILE_95",
# "ALIGN_PERCENTILE_50",
# ]
## Filters can be added to reduce the number of time series matched. All
## functions are supported: starts_with, ends_with, has_substring, and
## one_of. Only the '=' operator is supported.
##
## The logical operators when combining filters are defined statically using
## the following values:
## filter ::= <resource_labels> {AND <metric_labels> AND <user_labels> AND <system_labels>}
## resource_labels ::= [<resource_label> {OR <resource_label>}]
## metric_labels ::= [<metric_label> {OR <metric_label>}]
## user_labels ::= [<user_label> {OR <user_label>}]
## system_labels ::= [<system_label> {OR <system_label>}]
##
## For more details, see https://cloud.google.com/monitoring/api/v3/filters
#
## Resource labels refine the time series selection with the following expression:
## resource.labels.<key> = <value>
# [[inputs.stackdriver.filter.resource_labels]]
# key = "instance_name"
# value = 'starts_with("localhost")'
#
## Metric labels refine the time series selection with the following expression:
## metric.labels.<key> = <value>
# [[inputs.stackdriver.filter.metric_labels]]
# key = "device_name"
# value = 'one_of("sda", "sdb")'
#
## User labels refine the time series selection with the following expression:
## metadata.user_labels."<key>" = <value>
# [[inputs.stackdriver.filter.user_labels]]
# key = "environment"
# value = 'one_of("prod", "staging")'
#
## System labels refine the time series selection with the following expression:
## metadata.system_labels."<key>" = <value>
# [[inputs.stackdriver.filter.system_labels]]
# key = "machine_type"
# value = 'starts_with("e2-")'
Elasticsearch
[[outputs.elasticsearch]]
## The full HTTP endpoint URL for your Elasticsearch instance
## Multiple urls can be specified as part of the same cluster;
## only ONE of the urls will be written to in each interval
urls = [ "http://node1.es.example.com:9200" ] # required.
## Elasticsearch client timeout, defaults to "5s" if not set.
timeout = "5s"
## Set to true to ask Elasticsearch for a list of all cluster nodes,
## so it is not necessary to list all nodes in the urls config option
enable_sniffer = false
## Set to true to enable gzip compression
enable_gzip = false
## Set the interval to check if the Elasticsearch nodes are available
## Setting to "0s" will disable the health check (not recommended in production)
health_check_interval = "10s"
## Set the timeout for periodic health checks.
# health_check_timeout = "1s"
## HTTP basic authentication details.
# username = "telegraf"
# password = "mypassword"
## HTTP bearer token authentication details
# auth_bearer_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
## Index Config
## The target index for metrics (Elasticsearch will create it if it does not exist).
## You can use the date specifiers below to create indexes per time frame.
## The metric timestamp will be used to decide the destination index name
# %Y - year (2016)
# %y - last two digits of year (00..99)
# %m - month (01..12)
# %d - day of month (e.g., 01)
# %H - hour (00..23)
# %V - week of the year (ISO week) (01..53)
## Additionally, you can specify a tag name using the notation {{tag_name}}
## which will be used as part of the index name. If the tag does not exist,
## the default tag value will be used.
# index_name = "telegraf-{{host}}-%Y.%m.%d"
# default_tag_value = "none"
index_name = "telegraf-%Y.%m.%d" # required.
## Optional Index Config
## Set to true if Telegraf should use the "create" OpType while indexing
# use_optype_create = false
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Template Config
## Set to true if you want telegraf to manage its index template.
## If enabled it will create a recommended index template for telegraf indexes
manage_template = true
## The template name used for telegraf indexes
template_name = "telegraf"
## Set to true if you want telegraf to overwrite an existing template
overwrite_template = false
## If set to true, a unique ID hash will be sent as a sha256(concat(timestamp,measurement,series-hash)) string;
## this enables resending and updating metric points while avoiding duplicate metrics with different IDs
force_document_id = false
## Specifies the handling of NaN and Inf values.
## This option can have the following values:
## none -- do not modify field-values (default); will produce an error if NaNs or infs are encountered
## drop -- drop fields containing NaNs or infs
## replace -- replace with the value in "float_replacement_value" (default: 0.0)
## NaNs and inf will be replaced with the given number, -inf with the negative of that number
# float_handling = "none"
# float_replacement_value = 0.0
## Pipeline Config
## To use an ingest pipeline, set this to the name of the pipeline you want to use.
# use_pipeline = "my_pipeline"
## Additionally, you can specify a tag name using the notation {{tag_name}}
## which will be used as part of the pipeline name. If the tag does not exist,
## the default pipeline will be used as the pipeline. If no default pipeline is set,
## no pipeline is used for the metric.
# use_pipeline = "{{es_pipeline}}"
# default_pipeline = "my_pipeline"
#
## Custom HTTP headers
## To pass custom HTTP headers, define them in the section below
# [outputs.elasticsearch.headers]
# "X-Custom-Header" = "custom-value"
## Template Index Settings
## Overrides the template settings.index section with any provided options.
## Defaults provided here in the config
# template_index_settings = {
# refresh_interval = "10s",
# mapping.total_fields.limit = 5000,
# auto_expand_replicas = "0-1",
# codec = "best_compression"
# }
Input and Output Integration Examples
Google Cloud Stackdriver
- Integrating cloud metrics into custom dashboards: With this plugin, teams can pull metrics from Google Cloud into personalized dashboards for real-time monitoring of application performance and resource utilization. By customizing how cloud metrics are visualized, operations teams can easily spot trends and anomalies and manage issues proactively before they escalate (see the configuration sketch after this list).
- Automated alerting and analysis: Users can set up automated alerting mechanisms that use the plugin's metrics to track resource thresholds. This lets teams respond quickly to performance degradation or outages through immediate notifications, shortening mean time to recovery and keeping operations running smoothly.
- Cross-platform resource comparison: The plugin can pull metrics from a variety of Google Cloud services and compare them with on-premises resources. This cross-platform visibility helps organizations make informed decisions about resource allocation and scaling strategies, and optimize cloud spending relative to on-premises infrastructure.
- Historical data analysis for capacity planning: By collecting metrics over time, the plugin enables thorough capacity planning. Understanding past performance trends makes it possible to forecast resource needs accurately, leading to better budgeting and investment decisions.
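As a sketch of the dashboard and capacity-planning scenarios above, the configuration below summarizes load-balancer latency distributions as percentiles and forwards them to InfluxDB for dashboarding and long-term trend analysis. The project ID and metric prefix are illustrative, and the outputs.influxdb_v2 destination is an assumption about the rest of the pipeline rather than part of the plugins documented on this page.
[[inputs.stackdriver]]
## Placeholder project ID
project = "my-gcp-project"
interval = "1m"
## Illustrative prefix: Cloud Load Balancing metrics
metric_type_prefix_include = [
"loadbalancing.googleapis.com/",
]
## Skip raw buckets and record percentile summaries instead
gather_raw_distribution_buckets = false
distribution_aggregation_aligners = [
"ALIGN_PERCENTILE_99",
"ALIGN_PERCENTILE_50",
]

[[outputs.influxdb_v2]]
## Assumed destination for dashboards and historical capacity analysis
urls = ["http://localhost:8086"]
token = "${INFLUX_TOKEN}"
organization = "example-org"
bucket = "gcp-metrics"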
Elasticsearch
- Time-based indexing: Use this plugin to store metrics in Elasticsearch, indexing each metric by its collection time. For example, CPU metrics can be stored in a daily index named telegraf-2023.01.01, which makes time-based queries and retention policies straightforward (a configuration sketch follows this list).
- Dynamic template management: Use the template management feature to automatically create custom templates tailored to your metrics. This lets you define how different fields are indexed and analyzed without manually configuring Elasticsearch, ensuring an optimal data structure for querying.
- OpenSearch compatibility: If you are using AWS OpenSearch, you can configure this plugin to work seamlessly by activating compatibility mode, keeping your existing Elasticsearch clients functional and compatible with newer cluster setups.
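A sketch of the time-based indexing and template-management scenarios above, using only options from the reference configuration; the endpoint and the host tag are illustrative.
[[outputs.elasticsearch]]
## Placeholder endpoint
urls = [ "http://node1.es.example.com:9200" ]
## One index per host per day, e.g. telegraf-web01-2023.01.01;
## metrics without a host tag fall back to "none"
index_name = "telegraf-{{host}}-%Y.%m.%d"
default_tag_value = "none"
## Let the plugin create its recommended template and replace any existing one
manage_template = true
template_name = "telegraf"
overwrite_template = true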