Input and Output Integration Overview
Collect metrics from Azure resources using the Azure Monitor API.
The OpenSearch output plugin allows users to send metrics directly to an OpenSearch instance over HTTP, facilitating effective data management and analysis within the OpenSearch ecosystem.
Integration Details
Azure Monitor
The Azure Monitor Telegraf plugin is designed to collect metrics from a wide range of Azure resources through the Azure Monitor API. Users must provide credentials such as the client_id, client_secret, tenant_id, and subscription_id to authenticate against and access their Azure resources. The plugin can collect metrics from individual resources as well as from resource groups or entire subscriptions, allowing flexible, scalable metric collection tailored to user needs. It is well suited to organizations running on Azure cloud infrastructure, offering insight into resource performance and utilization over time and supporting proactive management and optimization of cloud resources.
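A minimal sketch of such a configuration is shown below, assuming the default Azure credentials chain (environment variables, managed identity, or Azure CLI) is available so that client_secret can be omitted; the resource ID, the "Percentage CPU" metric, and the "Average" aggregation are illustrative placeholders, not required values. The full annotated reference configuration follows in the Configuration section.
# Minimal sketch: collect CPU metrics from one virtual machine,
# authenticating via the default Azure credentials chain (no client_secret).
[[inputs.azure_monitor]]
  subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
  client_id = "00000000-0000-0000-0000-000000000000"        # placeholder
  tenant_id = "00000000-0000-0000-0000-000000000000"        # placeholder

  [[inputs.azure_monitor.resource_target]]
    # Resource ID without the leading '/subscriptions/<id>/' prefix (illustrative)
    resource_id = "resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm"
    metrics = [ "Percentage CPU" ]
    aggregations = [ "Average" ]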
OpenSearch
The OpenSearch Telegraf plugin integrates with the OpenSearch database over HTTP, enabling streamlined collection and storage of metrics. Designed for OpenSearch 2.x, the plugin offers robust functionality while remaining compatible with 1.x through the original Elasticsearch plugin. It creates and manages indexes in OpenSearch, automatically handling index templates and ensuring that data is structured for efficient analysis. The plugin supports configuration options such as index names, authentication, health checks, and value handling, so it can be tailored to different operational requirements. These capabilities make it essential for organizations looking to harness OpenSearch for metrics storage and querying.
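As a quick orientation before the full reference configuration below, a minimal sketch might look like the following; the endpoint URL, the daily index pattern, and the commented-out credentials are illustrative placeholders, not defaults.
# Minimal sketch: write metrics to a single OpenSearch node with a daily index.
[[outputs.opensearch]]
  urls = ["http://localhost:9200"]                         # placeholder endpoint
  index_name = "telegraf-{{.Time.Format \"2006-01-02\"}}"  # e.g. telegraf-2023-07-27
  # username = "telegraf"                                  # placeholder credentials
  # password = "changeme"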
Configuration
Azure Monitor
# Gather Azure resources metrics from Azure Monitor API
[[inputs.azure_monitor]]
# can be found under Overview->Essentials in the Azure portal for your application/service
subscription_id = "<>"
# can be obtained by registering an application under Azure Active Directory
client_id = "<>"
# can be obtained by registering an application under Azure Active Directory.
# If not specified Default Azure Credentials chain will be attempted:
# - Environment credentials (AZURE_*)
# - Workload Identity in Kubernetes cluster
# - Managed Identity
# - Azure CLI auth
# - Developer Azure CLI auth
client_secret = "<>"
# can be found under Azure Active Directory->Properties
tenant_id = "<>"
# Define the optional Azure cloud option e.g. AzureChina, AzureGovernment or AzurePublic. The default is AzurePublic.
# cloud_option = "AzurePublic"
# resource target #1 to collect metrics from
[[inputs.azure_monitor.resource_target]]
# can be found under Overview->Essentials->JSON View in the Azure portal for your application/service
# must start with 'resourceGroups/...' (the '/subscriptions/xxxxxxxx-xxxx-xxxx-xxx-xxxxxxxxxxxx'
# prefix must be removed from the beginning of the Resource ID property value)
resource_id = "<>"
# the metric names to collect
# leave the array empty to use all metrics available to this resource
metrics = [ "<>", "<>" ]
# metrics aggregation type value to collect
# can be 'Total', 'Count', 'Average', 'Minimum', 'Maximum'
# leave the array empty to collect all aggregation types values for each metric
aggregations = [ "<>", "<>" ]
# resource target #2 to collect metrics from
[[inputs.azure_monitor.resource_target]]
resource_id = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# resource group target #1 to collect metrics from resources under it with resource type
[[inputs.azure_monitor.resource_group_target]]
# the resource group name
resource_group = "<>"
# defines the resources to collect metrics from
[[inputs.azure_monitor.resource_group_target.resource]]
# the resource type
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# defines the resources to collect metrics from
[[inputs.azure_monitor.resource_group_target.resource]]
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# resource group target #2 to collect metrics from resources under it with resource type
[[inputs.azure_monitor.resource_group_target]]
resource_group = "<>"
[[inputs.azure_monitor.resource_group_target.resource]]
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# subscription target #1 to collect metrics from resources under it with resource type
[[inputs.azure_monitor.subscription_target]]
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# subscription target #2 to collect metrics from resources under it with resource type
[[inputs.azure_monitor.subscription_target]]
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
OpenSearch
[[outputs.opensearch]]
## URLs
## The full HTTP endpoint URL for your OpenSearch instance. Multiple URLs can
## be specified as part of the same cluster, but only one URL is used to
## write during each interval.
urls = ["http://node1.os.example.com:9200"]
## Index Name
## Target index name for metrics (OpenSearch will create it if it does not exist).
## This is a Golang template (see https://pkg.go.dev/text/template)
## You can also specify
## metric name (`{{.Name}}`), tag value (`{{.Tag "tag_name"}}`), field value (`{{.Field "field_name"}}`),
## and the timestamp (`{{.Time.Format "xxxxxxxxx"}}`).
## If the tag does not exist, the default tag value will be an empty string "".
## For example: "telegraf-{{.Time.Format \"2006-01-02\"}}-{{.Tag \"host\"}}" would set it to telegraf-2023-07-27-HostName
index_name = ""
## Timeout
## OpenSearch client timeout
# timeout = "5s"
## Sniffer
## Set to true to ask OpenSearch for a list of all cluster nodes,
## so it is not necessary to list all nodes in the urls config option
# enable_sniffer = false
## GZIP Compression
## Set to true to enable gzip compression
# enable_gzip = false
## Health Check Interval
## Set the interval to check if the OpenSearch nodes are available
## Setting to "0s" will disable the health check (not recommended in production)
# health_check_interval = "10s"
## Set the timeout for periodic health checks.
# health_check_timeout = "1s"
## HTTP basic authentication details.
# username = ""
# password = ""
## HTTP bearer token authentication details
# auth_bearer_token = ""
## Optional TLS Config
## Set to true/false to enforce TLS being enabled/disabled. If not set,
## enable TLS only if any of the other options are specified.
# tls_enable =
## Trusted root certificates for server
# tls_ca = "/path/to/cafile"
## Used for TLS client certificate authentication
# tls_cert = "/path/to/certfile"
## Used for TLS client certificate authentication
# tls_key = "/path/to/keyfile"
## Send the specified TLS server name via SNI
# tls_server_name = "kubernetes.example.com"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Template Config
## Manage templates
## Set to true if you want telegraf to manage its index template.
## If enabled it will create a recommended index template for telegraf indexes
# manage_template = true
## Template Name
## The template name used for telegraf indexes
# template_name = "telegraf"
## Overwrite Templates
## Set to true if you want telegraf to overwrite an existing template
# overwrite_template = false
## Document ID
## If set to true, a unique ID hash will be sent as a
## sha256(concat(timestamp,measurement,series-hash)) string. This enables
## resending or updating metric points without creating duplicate metrics
## with different IDs
# force_document_id = false
## Value Handling
## Specifies the handling of NaN and Inf values.
## This option can have the following values:
## none -- do not modify field-values (default); will produce an error
## if NaNs or infs are encountered
## drop -- drop fields containing NaNs or infs
## replace -- replace with the value in "float_replacement_value" (default: 0.0)
## NaNs and inf will be replaced with the given number, -inf with the negative of that number
# float_handling = "none"
# float_replacement_value = 0.0
## Pipeline Config
## To use an ingest pipeline, set this to the name of the pipeline you want to use.
# use_pipeline = "my_pipeline"
## Pipeline Name
## Additionally, you can specify a tag name using the notation (`{{.Tag "tag_name"}}`)
## which will be used as the pipeline name (e.g. "{{.Tag \"os_pipeline\"}}").
## If the tag does not exist, the default pipeline will be used as the pipeline.
## If no default pipeline is set, no pipeline is used for the metric.
# default_pipeline = ""
Input and Output Integration Examples
Azure Monitor
- Dynamic Resource Monitoring: Use the Azure Monitor plugin to dynamically collect metrics from Azure resources based on specific criteria, such as tags or resource types. Organizations can automate onboarding and removing resource metrics, enabling better performance tracking and optimization based on resource utilization patterns. A minimal sketch of resource group and subscription targets follows this list.
- Multi-Cloud Monitoring Integration: Combine metrics collected from Azure Monitor with those of other cloud providers in a centralized monitoring solution. This allows organizations to view and analyze performance data across multiple cloud deployments, providing a comprehensive picture of resource performance and cost while streamlining operations.
- Anomaly Detection and Alerting: Feed the metrics collected through the Azure Monitor plugin into machine learning algorithms to detect anomalies in resource utilization. By establishing baseline performance metrics and automatically alerting on deviations, organizations can mitigate risk and address performance issues before they escalate.
- Historical Performance Analysis: Use the collected Azure metrics for historical analysis by feeding the data into a data warehousing solution. This lets organizations track trends over time, enabling detailed reporting and decision-making based on historical performance data.
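For the dynamic resource monitoring scenario above, a minimal sketch of collecting by resource type across a resource group and across a whole subscription might look like the following; the resource group name and resource types are illustrative assumptions, and the empty metrics/aggregations arrays request everything available, as described in the reference configuration.
[[inputs.azure_monitor]]
  subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
  client_id = "00000000-0000-0000-0000-000000000000"        # placeholder
  client_secret = "<>"                                       # placeholder
  tenant_id = "00000000-0000-0000-0000-000000000000"        # placeholder

  # All storage accounts in one resource group (illustrative names)
  [[inputs.azure_monitor.resource_group_target]]
    resource_group = "example-rg"
    [[inputs.azure_monitor.resource_group_target.resource]]
      resource_type = "Microsoft.Storage/storageAccounts"
      metrics = []        # empty = all metrics available to this resource type
      aggregations = []   # empty = all aggregation types

  # All virtual machines in the subscription (illustrative resource type)
  [[inputs.azure_monitor.subscription_target]]
    resource_type = "Microsoft.Compute/virtualMachines"
    metrics = []
    aggregations = []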
OpenSearch
- Dynamic Indexing for Time-Series Data: Use the OpenSearch Telegraf plugin to create indexes for time-series metrics dynamically, ensuring data is stored in an organized way that favors time-based queries. By defining index patterns with Go templates, users can create daily or monthly indexes, greatly simplifying data management and long-term retrieval and improving analytics performance; see the template sketch after this list.
- Centralized Logging for Multi-Tenant Applications: Deploy the OpenSearch plugin in a multi-tenant application where each tenant's logs are sent to a separate index. This enables targeted analysis and monitoring per tenant while preserving data isolation. By leveraging the index-name templating feature (also sketched after this list), users can automatically create tenant-specific indexes, which streamlines the process and strengthens the security and accessibility of tenant data.
- Integration with Machine Learning for Anomaly Detection: Pair the OpenSearch plugin with machine learning tools to automatically detect anomalies in metric data. By configuring the plugin to send real-time metrics to OpenSearch, users can apply machine learning models to the incoming data streams to identify outliers or unusual patterns, enabling proactive monitoring and rapid remediation.
- Enhanced Monitoring Dashboards with OpenSearch: Use the metrics collected in OpenSearch to build real-time dashboards that provide insight into system performance. By feeding metrics into OpenSearch, organizations can visualize key performance indicators with OpenSearch Dashboards, enabling operations teams to quickly assess health and performance and make data-driven decisions.
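For the dynamic indexing and multi-tenant scenarios above, hedged sketches of index_name templates might look like the following; the "tenant" tag, the index prefixes, and the endpoint are illustrative assumptions, not defaults.
# Daily index per day of ingestion, e.g. "telegraf-2023-07-27"
[[outputs.opensearch]]
  urls = ["http://node1.os.example.com:9200"]
  index_name = "telegraf-{{.Time.Format \"2006-01-02\"}}"

# Monthly per-tenant index, routing on an assumed "tenant" tag,
# e.g. "logs-acme-2023-07"; a missing tag renders as an empty string.
[[outputs.opensearch]]
  urls = ["http://node1.os.example.com:9200"]
  index_name = "logs-{{.Tag \"tenant\"}}-{{.Time.Format \"2006-01\"}}"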
Feedback
Thank you for being part of our community! If you have any general feedback or find any errors on these pages, we welcome and encourage your comments. Please submit your feedback in the InfluxDB Community Slack.