Input and output integration overview
The Kinesis plugin enables you to read data from Kinesis data streams, with support for a variety of data formats and configuration options.
Telegraf's SQL plugin sends collected metrics to an SQL database using a simple table schema and dynamically generated columns. When configured for ClickHouse, it adjusts its DSN format and type-conversion settings to ensure seamless data integration.
Integration details
Kinesis
The Kinesis Telegraf plugin is designed to read from Amazon Kinesis data streams, enabling users to collect metrics in real time. As a service input plugin, it operates by listening for incoming data rather than polling on an interval. Its configuration specifies a range of options, including the AWS region, stream name, authentication credentials, and data format. It supports tracking undelivered messages to prevent data loss, and users can rely on DynamoDB to checkpoint the last processed record. The plugin is particularly useful for applications that need reliable, scalable stream processing alongside their other monitoring requirements.
ClickHouse
Telegraf's SQL plugin writes metric data to an SQL database by dynamically creating tables and columns based on the incoming metrics. When configured for ClickHouse, it uses the clickhouse-go v1.5.4 driver, which relies on a distinct DSN format and a dedicated set of type-conversion rules that map Telegraf's data types directly to ClickHouse native types. This approach ensures optimal storage and retrieval performance in high-throughput environments, making it well suited for real-time analytics and large-scale data warehousing. Dynamic schema creation and precise type mapping allow detailed time-series records to be captured, which is essential for monitoring modern distributed systems.
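To make the dynamic schema creation concrete, the following minimal sketch (in Python, purely illustrative) shows roughly how the table_template and the [outputs.sql.convert] mapping from the configuration below could expand for a hypothetical cpu metric with one tag and one float field. The plugin performs this substitution internally in Go; the metric, tag, and field names here are invented for the example.

# Illustrative expansion of the SQL plugin's table_template.
# The metric name ("cpu"), tag ("host"), and field ("usage_idle") are
# hypothetical; column types follow the convert mapping shown below
# (timestamp -> DateTime, text -> String, real -> Float64).
columns = {
    "timestamp": "DateTime",
    "host": "String",
    "usage_idle": "Float64",
}

table = '"cpu"'  # {TABLE}: the table name as a quoted identifier
column_defs = ", ".join(f'"{name}" {ctype}' for name, ctype in columns.items())

table_template = "CREATE TABLE {TABLE} ({COLUMNS})"
ddl = table_template.replace("{TABLE}", table).replace("{COLUMNS}", column_defs)
print(ddl)
# CREATE TABLE "cpu" ("timestamp" DateTime, "host" String, "usage_idle" Float64)

Note that a production ClickHouse table normally also requires an ENGINE clause (for example, ENGINE = MergeTree() ORDER BY timestamp appended to table_template); the generic template shown below does not include one.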
Configuration
Kinesis
# Configuration for the AWS Kinesis input.
[[inputs.kinesis_consumer]]
## Amazon REGION of kinesis endpoint.
region = "ap-southeast-2"
## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
## 5) environment variables
## 6) shared credentials file
## 7) EC2 Instance Profile
# access_key = ""
# secret_key = ""
# token = ""
# role_arn = ""
# web_identity_token_file = ""
# role_session_name = ""
# profile = ""
# shared_credential_file = ""
## Endpoint to make requests against. The correct endpoint is automatically
## determined, and this option should only be set if you wish to override
## the default.
## ex: endpoint_url = "http://localhost:8000"
# endpoint_url = ""
## Kinesis StreamName must exist prior to starting telegraf.
streamname = "StreamName"
## Shard iterator type (only 'TRIM_HORIZON' and 'LATEST' currently supported)
# shard_iterator_type = "TRIM_HORIZON"
## Max undelivered messages
## This plugin uses tracking metrics, which ensure messages are read to
## outputs before acknowledging them to the original broker to ensure data
## is not lost. This option sets the maximum messages to read from the
## broker that have not been written by an output.
##
## This value needs to be picked with awareness of the agent's
## metric_batch_size value as well. Setting max undelivered messages too high
## can result in a constant stream of data batches to the output, while
## setting it too low may never flush the broker's messages.
# max_undelivered_messages = 1000
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
##
## The content encoding of the data from Kinesis.
## If you are processing a CloudWatch Logs Kinesis stream, set this to "gzip",
## as AWS compresses CloudWatch log data before it is sent to Kinesis. (AWS
## also base64-encodes the gzipped bytes before pushing them to the stream;
## the base64 decoding is done automatically by the Go SDK as data is read
## from Kinesis.)
##
# content_encoding = "identity"
## Optional
## Configuration for a dynamodb checkpoint
[inputs.kinesis_consumer.checkpoint_dynamodb]
## unique name for this consumer
app_name = "default"
table_name = "default"
ClickHouse
[[outputs.sql]]
## Database driver
## Valid options include mssql, mysql, pgx, sqlite, snowflake, clickhouse
driver = "clickhouse"
## Data source name
## For ClickHouse, the DSN follows the clickhouse-go v1.5.4 format.
## Example DSN: "tcp://localhost:9000?debug=true"
data_source_name = "tcp://localhost:9000?debug=true"
## Timestamp column name
timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
table_template = "CREATE TABLE {TABLE} ({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL (optional)
init_sql = ""
## Maximum amount of time a connection may be idle. "0s" means connections are never closed due to idle time.
connection_max_idle_time = "0s"
## Maximum amount of time a connection may be reused. "0s" means connections are never closed due to age.
connection_max_lifetime = "0s"
## Maximum number of connections in the idle connection pool. 0 means unlimited.
connection_max_idle = 2
## Maximum number of open connections to the database. 0 means unlimited.
connection_max_open = 0
## Metric type to SQL type conversion for ClickHouse.
## The conversion maps Telegraf metric types to ClickHouse native data types.
[outputs.sql.convert]
conversion_style = "literal"
integer = "Int64"
text = "String"
timestamp = "DateTime"
defaultvalue = "String"
unsigned = "UInt64"
bool = "UInt8"
real = "Float64"
Input and output integration examples
Kinesis
- Real-time data processing with Kinesis: integrate the Kinesis plugin with a monitoring dashboard to analyze incoming metrics in real time. For example, an application can consume logs from multiple services and present them visually, enabling operations teams to quickly spot trends and react to emerging anomalies.
- Serverless log aggregation: use the plugin in a serverless architecture where Kinesis streams aggregate logs from various microservices. The plugin can derive metrics that help detect issues across the system and automate alerting through third-party integrations, allowing teams to minimize downtime and improve reliability.
- Dynamic scaling based on stream metrics: implement a solution in which stream metrics consumed by the Kinesis plugin drive dynamic resource adjustments. For example, a spike in the number of records processed can trigger a corresponding scale-up to handle the increased load, ensuring optimal resource allocation and performance.
- Data pipeline to S3 with checkpointing: build a robust data pipeline in which Kinesis stream data is processed through the Telegraf Kinesis plugin, with checkpoints stored in DynamoDB. This approach ensures data consistency and reliability by tracking the state of processed data, enabling seamless integration with downstream data lakes or storage solutions.
ClickHouse
- Real-time analytics for high-volume data: use the plugin to feed streaming metrics from large-scale systems into ClickHouse. This setup supports ultra-fast query performance and near-real-time analytics, ideal for monitoring high-traffic applications.
- Time-series data warehousing: integrate the plugin with ClickHouse to build a powerful time-series data warehouse, letting organizations store detailed historical metrics and run complex queries for trend analysis and capacity planning.
- Scalable monitoring in distributed environments: let the plugin dynamically create a table per metric type in ClickHouse, making it easier to manage and query data from large numbers of distributed systems without defining a schema in advance.
- Optimized storage for IoT deployments: deploy the plugin to ingest data from IoT sensors into ClickHouse. Its efficient schema creation and native type mapping make it practical to handle huge data volumes, enabling real-time monitoring and predictive maintenance.