Input and Output Integration Overview
This plugin pulls metric statistics from Amazon CloudWatch, simplifying the process of monitoring and analyzing AWS resources.
The Telegraf SQL plugin lets you store Telegraf metrics directly in a MySQL database, making it easier to analyze and visualize the collected metrics.
Integration Details
Amazon CloudWatch
The Amazon CloudWatch plugin lets users pull detailed metric statistics from Amazon's CloudWatch service. As a monitoring solution, CloudWatch enables users to track a wide range of metrics for AWS resources and applications, improving operational and performance insight. The plugin uses a structured authentication approach, combining STS (Security Token Service), shared credentials, environment variables, and EC2 instance profiles to prioritize security and flexibility and to ensure robust access control over AWS resources. Key features include the ability to define specific metric namespaces and aggregation periods, and to optionally include linked accounts for cross-account monitoring. An important aspect of the plugin is its support for both sparse and dense metric formats, allowing different output structures depending on user preference. As a result, it supports a variety of cloud monitoring and analytics use cases by delivering comprehensive, timely data directly from CloudWatch.
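As a minimal sketch of how these options fit together (not part of the official sample configuration; the role ARN, session name, and load-balancer name are placeholders), the following collects ELB Latency statistics through an assumed STS role and sets the metric format explicitly:

[[inputs.cloudwatch]]
region = "us-east-1"
# Authenticate by assuming an IAM role via STS (placeholder ARN)
role_arn = "arn:aws:iam::123456789012:role/telegraf-cloudwatch"
role_session_name = "telegraf"
period = "5m"
delay = "5m"
interval = "5m"
namespaces = ["AWS/ELB"]
# metric_format selects between the "sparse" and "dense" output structures
metric_format = "sparse"
# Optionally restrict collection to specific metrics and dimensions
[[inputs.cloudwatch.metrics]]
names = ["Latency"]
[[inputs.cloudwatch.metrics.dimensions]]
name = "LoadBalancerName"
value = "my-load-balancer"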
MySQL
Telegraf's SQL output plugin is designed to write metric data seamlessly to a SQL database by dynamically creating tables and columns based on the incoming metrics. When configured for MySQL, the plugin uses go-sql-driver/mysql, which requires the ANSI_QUOTES SQL mode to be enabled so that quoted identifiers are handled correctly. This dynamic schema-creation approach ensures that each metric is stored in its own table, with a structure derived from its fields and tags, providing a detailed, timestamped record of system performance. The plugin's flexibility allows it to handle high-throughput environments, making it well suited to scenarios that require robust, fine-grained metric logging and historical data analysis.
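As a minimal MySQL-focused sketch (the host, credentials, and database name are placeholders), the essential pieces are the go-sql-driver/mysql data source name and the ANSI_QUOTES initialization statement, which lets MySQL accept the double-quoted identifiers the plugin generates:

[[outputs.sql]]
driver = "mysql"
# go-sql-driver/mysql DSN: user:password@tcp(host:port)/database (placeholders)
data_source_name = "telegraf:secret@tcp(127.0.0.1:3306)/telegraf"
# Required so quoted identifiers in the generated CREATE TABLE and INSERT
# statements are treated as identifiers rather than string literals
init_sql = "SET sql_mode='ANSI_QUOTES';"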
Configuration
Amazon CloudWatch
[[inputs.cloudwatch]]
region = "us-east-1"
# access_key = ""
# secret_key = ""
# token = ""
# role_arn = ""
# web_identity_token_file = ""
# role_session_name = ""
# profile = ""
# shared_credential_file = ""
# include_linked_accounts = false
# endpoint_url = ""
# use_system_proxy = false
# http_proxy_url = "http://localhost:8888"
period = "5m"
delay = "5m"
interval = "5m"
# recently_active = "PT3H"
# cache_ttl = "1h"
namespaces = ["AWS/ELB"]
# metric_format = "sparse"
# ratelimit = 25
# timeout = "5s"
# batch_size = 500
# statistic_include = ["average", "sum", "minimum", "maximum", "sample_count"]
# statistic_exclude = []
# [[inputs.cloudwatch.metrics]]
# names = ["Latency", "RequestCount"]
# [[inputs.cloudwatch.metrics.dimensions]]
# name = "LoadBalancerName"
# value = "p-example"
MySQL
[[outputs.sql]]
## Database driver
## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
## sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
driver = "mysql"
## Data source name
## The format of the data source name is different for each database driver.
## See the plugin readme for details.
data_source_name = "username:password@tcp(host:port)/dbname"
## Timestamp column name
timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE}({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL
init_sql = "SET sql_mode='ANSI_QUOTES';"
## Maximum amount of time a connection may be idle. "0s" means connections are
## never closed due to idle time.
connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections
## are never closed due to age.
connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
## NOTE: Due to the way TOML is parsed, tables must be at the END of the
## plugin definition, otherwise additional config options are read as part of the
## table
## Metric type to SQL type conversion
## The values on the left are the data types Telegraf has and the values on
## the right are the data types Telegraf will use when sending to a database.
##
## The database values used must be data types the destination database
## understands. It is up to the user to ensure that the selected data type is
## available in the database they are using. Refer to your database
## documentation for what data types are available and supported.
#[outputs.sql.convert]
# integer = "INT"
# real = "DOUBLE"
# text = "TEXT"
# timestamp = "TIMESTAMP"
# defaultvalue = "TEXT"
# unsigned = "UNSIGNED"
# bool = "BOOL"
# ## This setting controls the behavior of the unsigned value. By default the
# ## setting will take the integer value and append the unsigned value to it. The other
# ## option is "literal", which will use the actual value the user provides to
# ## the unsigned option. This is useful for a database like ClickHouse where
# ## the unsigned value should use a value like "uint64".
# # conversion_style = "unsigned_suffix"
Input and Output Integration Examples
Amazon CloudWatch
- Cross-Account Monitoring: Monitor resources across multiple AWS accounts by enabling the include_linked_accounts option. This lets companies that manage several AWS accounts aggregate metrics into a central monitoring dashboard, providing a unified view of all metrics while ensuring secure data access and compliance through appropriate role management (a configuration sketch follows this list).
- Dynamic Alerting System: Integrate this plugin with alerting tools to build an automated system that triggers alerts when CloudWatch metrics cross defined thresholds. For example, if a latency metric exceeds a specified limit, an alert can be sent to the relevant team, enabling a proactive response to performance issues and reducing downtime.
- Cost Management Dashboard: Use the metrics collected by the plugin to build a cost-management dashboard that visualizes AWS service usage over time. By correlating these metrics with billing data, organizations can identify high-cost services and take informed steps to optimize resource usage and spending.
- Application Performance Benchmarking: Use metrics collected from applications running on AWS to benchmark performance. For example, by tracking latency and request-count metrics for an ELB, developers can assess how application changes affect performance and make data-driven optimization decisions.
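A rough configuration sketch for the cross-account scenario above, assuming a central monitoring account whose member accounts are linked through CloudWatch cross-account observability (the role ARN and namespaces are placeholders):

[[inputs.cloudwatch]]
region = "us-east-1"
# Placeholder role in the central monitoring account
role_arn = "arn:aws:iam::123456789012:role/central-monitoring"
role_session_name = "telegraf-cross-account"
# Also return metrics from linked source accounts
include_linked_accounts = true
period = "5m"
delay = "5m"
interval = "5m"
namespaces = ["AWS/EC2", "AWS/ELB"]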
MySQL
- Real-Time Web Analytics Storage: Use the plugin to capture website performance metrics and store them in MySQL. This setup lets teams monitor user interactions, analyze traffic patterns, and dynamically adjust site features based on real-time insight.
- IoT Device Monitoring: Collect metrics from a network of IoT sensors and log them to a MySQL database. This use case supports continuous monitoring of device health and performance, enabling predictive maintenance and immediate response to anomalies.
- Financial Transaction Logging: Record high-frequency financial transaction data with precise timestamps. This approach supports robust audit trails, real-time fraud detection, and comprehensive historical analysis for compliance and reporting (see the type-mapping sketch after this list).
- Application Performance Benchmarking: Integrate the plugin with application performance monitoring systems to log metrics to MySQL. This enables detailed benchmarking and long-term trend analysis, helping organizations identify performance bottlenecks and optimize resource allocation effectively.
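For the financial-logging scenario, one possible adjustment (an illustrative assumption, not a documented default) is to override the type mapping so MySQL columns keep sub-second timestamps and large integer values; verify the retained precision against your MySQL and driver versions. The section below would sit at the end of the [[outputs.sql]] block shown in the Configuration section, per the TOML note there:

[outputs.sql.convert]
integer = "BIGINT"
real = "DOUBLE"
text = "TEXT"
# DATETIME(6) stores microsecond precision in MySQL
timestamp = "DATETIME(6)"
defaultvalue = "TEXT"
unsigned = "UNSIGNED"
bool = "BOOL"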