Input and Output Integration Overview
Collect metrics from Azure resources using the Azure Monitor API.
Telegraf's SQL plugin sends collected metrics to a SQL database using a simple table schema and dynamic column generation. When configured for ClickHouse, it adjusts the DSN format and type-conversion settings to ensure seamless data integration.
Integration Details
Azure Monitor
The Azure Monitor Telegraf plugin is designed to collect metrics from a variety of Azure resources using the Azure Monitor API. Users must supply credentials such as `client_id`, `client_secret`, `tenant_id`, and `subscription_id` to authenticate and gain access to their Azure resources. In addition, the plugin can collect metrics from individual resources as well as from resource groups or entire subscriptions, so metric collection can be scoped flexibly and scaled to match user needs. It is a strong fit for organizations built on Azure cloud infrastructure, offering insight into resource performance and utilization over time and supporting proactive management and optimization of cloud resources.
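As a concrete illustration (a minimal sketch, not taken from the sample configuration below), the following authenticates with a service principal and collects CPU metrics from a single virtual machine. All IDs, the resource group name, and the VM name are hypothetical placeholders.
[[inputs.azure_monitor]]
# hypothetical service-principal credentials - replace with your own values
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "11111111-1111-1111-1111-111111111111"
client_secret = "example-app-secret"
tenant_id = "22222222-2222-2222-2222-222222222222"
# collect CPU metrics from one (hypothetical) virtual machine
[[inputs.azure_monitor.resource_target]]
resource_id = "resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm"
metrics = [ "Percentage CPU" ]
aggregations = [ "Average", "Maximum" ]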
ClickHouse
Telegraf's SQL plugin writes metric data to a SQL database by dynamically creating tables and columns based on the incoming metrics. When configured for ClickHouse, it uses the clickhouse-go v1.5.4 driver, which relies on a distinct DSN format and a dedicated set of type-conversion rules that map Telegraf data types directly onto ClickHouse native types. This approach ensures optimal storage and retrieval performance in high-throughput environments, making it well suited to real-time analytics and large-scale data warehousing. Dynamic schema creation and precise type mapping enable detailed time-series logging, which is essential for monitoring modern distributed systems.
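As an informal sketch of what this looks like in practice (the host, credentials, and database below are assumptions, not values from this page), the DSN carries the target database and login as clickhouse-go v1.5.4 query parameters, and each Telegraf measurement then becomes its own table using the type mapping shown in the configuration section:
[[outputs.sql]]
driver = "clickhouse"
# hypothetical DSN in clickhouse-go v1.5.4 form: TCP endpoint plus query
# parameters for username, password, and target database
data_source_name = "tcp://clickhouse.example.com:9000?username=telegraf&password=secret&database=metrics"
# with the conversion rules shown later on this page, a "cpu" measurement
# becomes a "cpu" table: the timestamp column is DateTime, tag columns are
# String, and numeric fields map to Float64, Int64, or UInt64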
Configuration
Azure Monitor
# Gather Azure resources metrics from Azure Monitor API
[[inputs.azure_monitor]]
# can be found under Overview->Essentials in the Azure portal for your application/service
subscription_id = "<>"
# can be obtained by registering an application under Azure Active Directory
client_id = "<>"
# can be obtained by registering an application under Azure Active Directory.
# If not specified Default Azure Credentials chain will be attempted:
# - Environment credentials (AZURE_*)
# - Workload Identity in Kubernetes cluster
# - Managed Identity
# - Azure CLI auth
# - Developer Azure CLI auth
client_secret = "<>"
# can be found under Azure Active Directory->Properties
tenant_id = "<>"
# Define the optional Azure cloud option e.g. AzureChina, AzureGovernment or AzurePublic. The default is AzurePublic.
# cloud_option = "AzurePublic"
# resource target #1 to collect metrics from
[[inputs.azure_monitor.resource_target]]
# can be found under Overview->Essentials->JSON View in the Azure portal for your application/service
# must start with 'resourceGroups/...' ('/subscriptions/xxxxxxxx-xxxx-xxxx-xxx-xxxxxxxxxxxx'
# must be removed from the beginning of Resource ID property value)
resource_id = "<>"
# the metric names to collect
# leave the array empty to use all metrics available to this resource
metrics = [ "<>", "<>" ]
# metrics aggregation type value to collect
# can be 'Total', 'Count', 'Average', 'Minimum', 'Maximum'
# leave the array empty to collect all aggregation types values for each metric
aggregations = [ "<>", "<>" ]
# resource target #2 to collect metrics from
[[inputs.azure_monitor.resource_target]]
resource_id = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# resource group target #1 to collect metrics from resources under it with resource type
[[inputs.azure_monitor.resource_group_target]]
# the resource group name
resource_group = "<>"
# defines the resources to collect metrics from
[[inputs.azure_monitor.resource_group_target.resource]]
# the resource type
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# defines the resources to collect metrics from
[[inputs.azure_monitor.resource_group_target.resource]]
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# resource group target #2 to collect metrics from resources under it with resource type
[[inputs.azure_monitor.resource_group_target]]
resource_group = "<>"
[[inputs.azure_monitor.resource_group_target.resource]]
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# subscription target #1 to collect metrics from resources under it with resource type
[[inputs.azure_monitor.subscription_target]]
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
# subscription target #2 to collect metrics from resources under it with resource type
[[inputs.azure_monitor.subscription_target]]
resource_type = "<>"
metrics = [ "<>", "<>" ]
aggregations = [ "<>", "<>" ]
ClickHouse
[[outputs.sql]]
## Database driver
## Valid options include mssql, mysql, pgx, sqlite, snowflake, clickhouse
driver = "clickhouse"
## Data source name
## For ClickHouse, the DSN follows the clickhouse-go v1.5.4 format.
## Example DSN: "tcp://localhost:9000?debug=true"
data_source_name = "tcp://localhost:9000?debug=true"
## Timestamp column name
timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE} ({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL (optional)
init_sql = ""
## Maximum amount of time a connection may be idle. "0s" means connections are never closed due to idle time.
connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections are never closed due to age.
connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
## Metric type to SQL type conversion for ClickHouse.
## The conversion maps Telegraf metric types to ClickHouse native data types.
[outputs.sql.convert]
conversion_style = "literal"
integer = "Int64"
text = "String"
timestamp = "DateTime"
defaultvalue = "String"
unsigned = "UInt64"
bool = "UInt8"
real = "Float64"
Input and Output Integration Examples
Azure Monitor
- Dynamic Resource Monitoring: Use the Azure Monitor plugin to dynamically collect metrics from Azure resources based on specific criteria, such as tags or resource types (see the sketch after this list). Organizations can automate onboarding and offboarding of resource metrics, enabling better performance tracking and optimization based on resource-utilization patterns.
- Multi-Cloud Monitoring Integration: Integrate the metrics collected from Azure Monitor with other cloud providers in a centralized monitoring solution. This lets organizations view and analyze performance data across multiple cloud deployments, providing a comprehensive picture of resource performance and cost and streamlining operations.
- Anomaly Detection and Alerting: Combine the metrics collected through the Azure Monitor plugin with machine-learning algorithms to detect anomalies in resource utilization. By establishing baseline performance metrics and automatically alerting on deviations, organizations can reduce risk and address performance issues before they escalate.
- Historical Performance Analysis: Use the collected Azure metrics for historical analysis by feeding the data into a data warehouse. This lets organizations track trends over time and supports detailed reporting and decision-making based on historical performance data.
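A rough sketch of the dynamic-monitoring idea above: instead of listing resources one by one, a resource group target inside `[[inputs.azure_monitor]]` discovers every resource of a given type automatically. The resource group name, resource type, and metric names below are hypothetical.
[[inputs.azure_monitor.resource_group_target]]
resource_group = "production-rg"
# every storage account in this group is discovered and scraped automatically
[[inputs.azure_monitor.resource_group_target.resource]]
resource_type = "Microsoft.Storage/storageAccounts"
metrics = [ "UsedCapacity", "Transactions" ]
aggregations = [ "Average", "Total" ]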
ClickHouse
- Real-Time Analytics on High-Volume Data: Use the plugin to feed streaming metrics from large-scale systems into ClickHouse. This setup supports extremely fast query performance and near-real-time analytics, ideal for monitoring high-traffic applications.
- Time-Series Data Warehousing: Integrate the plugin with ClickHouse to build a robust time-series data warehouse (see the sketch after this list). This lets organizations store detailed historical metrics and run complex queries for trend analysis and capacity planning.
- Scalable Monitoring in Distributed Environments: Rely on the plugin to dynamically create a table per metric type in ClickHouse, making it easier to manage and query data from a large number of distributed systems without defining schemas up front.
- Optimized Storage for IoT Deployments: Deploy the plugin to ingest data from IoT sensors into ClickHouse. Its efficient schema creation and native type mapping help handle very large data volumes, enabling real-time monitoring and predictive maintenance.
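One practical note on the warehousing scenario above: ClickHouse normally requires a table engine in CREATE TABLE, so the default table_template is typically overridden. The template below is a hedged sketch rather than the plugin's default; the engine choice and ORDER BY key are assumptions to adapt to your own retention and query patterns.
[[outputs.sql]]
driver = "clickhouse"
data_source_name = "tcp://localhost:9000?database=telegraf"
timestamp_column = "timestamp"
# assumed override: give every auto-created table a MergeTree engine ordered
# by the timestamp column so time-range scans stay fast
table_template = "CREATE TABLE {TABLE} ({COLUMNS}) ENGINE = MergeTree() ORDER BY (timestamp)"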
Feedback
Thank you for being part of our community! If you have any general feedback or find any errors on these pages, we welcome and encourage your input. Please submit your feedback in the InfluxDB Community Slack.