Input and Output Integration Overview
The gNMI (gRPC Network Management Interface) input plugin collects telemetry data from network devices using the gNMI Subscribe method. It supports TLS for secure authentication and data transport.
This plugin enables Telegraf to send metrics to Cortex using the Prometheus remote write protocol, allowing seamless ingestion into Cortex's scalable, multi-tenant time series storage.
Integration Details
gNMI
This input plugin is vendor-agnostic and can be used with any platform that supports the gNMI specification. It consumes telemetry data via the gNMI Subscribe method, enabling real-time monitoring of network devices.
Cortex
With Telegraf's HTTP output plugin and the prometheusremotewrite data format, you can send metrics directly to Cortex, a horizontally scalable, long-term storage backend for Prometheus. Cortex supports multi-tenancy and accepts remote write requests in the Prometheus protobuf format. By using Telegraf as the collection agent and Remote Write as the transport mechanism, organizations can extend observability to sources that Prometheus does not natively cover (such as Windows hosts, SNMP-enabled devices, or custom application metrics), while taking advantage of Cortex's high availability and long-term retention.
Configuration
gNMI
[[inputs.gnmi]]
## Address and port of the gNMI GRPC server
addresses = ["10.49.234.114:57777"]
## define credentials
username = "cisco"
password = "cisco"
## gNMI encoding requested (one of: "proto", "json", "json_ietf", "bytes")
# encoding = "proto"
## redial in case of failures after
# redial = "10s"
## gRPC Keepalive settings
## See https://pkg.go.dev/google.golang.org/grpc/keepalive
## The client will ping the server to see if the transport is still alive if it has
## not seen any activity for the given time.
## If not set, none of the keep-alive settings (including those below) will be applied.
## If set below 10 seconds, the gRPC library will enforce a minimum value of 10s instead.
# keepalive_time = ""
## Timeout for seeing any activity after the keep-alive probe was
## sent. If no activity is seen the connection is closed.
# keepalive_timeout = ""
## gRPC Maximum Message Size
# max_msg_size = "4MB"
## Enable to get the canonical path as field-name
# canonical_field_names = false
## Remove leading slashes and dots in field-name
# trim_field_names = false
## Guess the path-tag if an update does not contain a prefix-path
## Supported values are
## none -- do not add a 'path' tag
## common path -- use the common path elements of all fields in an update
## subscription -- use the subscription path
# path_guessing_strategy = "none"
## Prefix tags from path keys with the path element
# prefix_tag_key_with_path = false
## Optional client-side TLS to authenticate the device
## Set to true/false to enforce TLS being enabled/disabled. If not set,
## enable TLS only if any of the other options are specified.
# tls_enable =
## Trusted root certificates for server
# tls_ca = "/path/to/cafile"
## Used for TLS client certificate authentication
# tls_cert = "/path/to/certfile"
## Used for TLS client certificate authentication
# tls_key = "/path/to/keyfile"
## Password for the key file if it is encrypted
# tls_key_pwd = ""
## Send the specified TLS server name via SNI
# tls_server_name = "kubernetes.example.com"
## Minimal TLS version to accept by the client
# tls_min_version = "TLS12"
## List of ciphers to accept, by default all secure ciphers will be accepted
## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
## Use "all", "secure" and "insecure" to add all support ciphers, secure
## suites or insecure suites respectively.
# tls_cipher_suites = ["secure"]
## Renegotiation method, "never", "once" or "freely"
# tls_renegotiation_method = "never"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## gNMI subscription prefix (optional, can usually be left empty)
## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
# origin = ""
# prefix = ""
# target = ""
## Vendor specific options
## This defines what vendor specific options to load.
## * Juniper Header Extension (juniper_header): some sensors are directly managed by
## Linecard, which adds the Juniper GNMI Header Extension. Enabling this
## allows the decoding of the Extension header if present. Currently this knob
## adds component, component_id & sub_component_id as additional tags
# vendor_specific = []
## YANG model paths for decoding IETF JSON payloads
## Model files are loaded recursively from the given directories. Disabled if
## no models are specified.
# yang_model_paths = []
## Define additional aliases to map encoding paths to measurement names
# [inputs.gnmi.aliases]
# ifcounters = "openconfig:/interfaces/interface/state/counters"
[[inputs.gnmi.subscription]]
## Name of the measurement that will be emitted
name = "ifcounters"
## Origin and path of the subscription
## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
##
## origin usually refers to a (YANG) data model implemented by the device
## and path to a specific substructure inside it that should be subscribed
## to (similar to an XPath). YANG models can be found e.g. here:
## https://github.com/YangModels/yang/tree/master/vendor/cisco/xr
origin = "openconfig-interfaces"
path = "/interfaces/interface/state/counters"
## Subscription mode ("target_defined", "sample", "on_change") and interval
subscription_mode = "sample"
sample_interval = "10s"
## Suppress redundant transmissions when measured values are unchanged
# suppress_redundant = false
## If suppression is enabled, send updates at least every X seconds anyway
# heartbeat_interval = "60s"
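As a hedged illustration of the subscription modes above, the sketch below adds an event-driven subscription alongside the sampled one. The oper-status path is a standard OpenConfig path, but whether a device honors the heartbeat on an on_change subscription depends on its gNMI implementation.
## Sketch: an additional event-driven subscription. "on_change" emits an
## update only when the value changes; the heartbeat asks the device to
## resend the value periodically even if it is unchanged.
[[inputs.gnmi.subscription]]
name = "ifstate"
origin = "openconfig-interfaces"
path = "/interfaces/interface/state/oper-status"
subscription_mode = "on_change"
heartbeat_interval = "60s"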
Cortex
[[outputs.http]]
## Cortex Remote Write endpoint
url = "http://cortex.example.com/api/v1/push"
## Use POST to send data
method = "POST"
## Send metrics using Prometheus remote write format
data_format = "prometheusremotewrite"
## Optional HTTP headers for authentication
# [outputs.http.headers]
# X-Scope-OrgID = "your-tenant-id"
# Authorization = "Bearer YOUR_API_TOKEN"
## Optional TLS configuration
# tls_ca = "/path/to/ca.pem"
# tls_cert = "/path/to/cert.pem"
# tls_key = "/path/to/key.pem"
# insecure_skip_verify = false
## Request timeout
timeout = "10s"
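For context, here is a minimal end-to-end sketch that pairs the output above with Telegraf's standard cpu input; the endpoint URL and tenant ID are placeholders, and real deployments typically add the authentication and TLS options shown in the reference configuration.
# Minimal sketch: collect host CPU metrics and push them to Cortex
# (URL and tenant ID are placeholders).
[[inputs.cpu]]
percpu = false
totalcpu = true

[[outputs.http]]
url = "http://cortex.example.com/api/v1/push"
method = "POST"
data_format = "prometheusremotewrite"
timeout = "10s"
[outputs.http.headers]
X-Scope-OrgID = "team-a"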
Input and Output Integration Examples
gNMI
- Monitoring Cisco devices: Use the gNMI plugin to collect telemetry data from Cisco IOS XR, NX-OS, or IOS XE devices for performance monitoring.
- Real-time network insight: With the gNMI plugin, network administrators can gain insight into real-time metrics such as interface statistics and CPU usage.
- Secure data collection: Configure the gNMI plugin with TLS settings to ensure secure communication when collecting sensitive telemetry data from devices (see the sketch after this list).
- Flexible data handling: Use the subscription options to tailor the collected telemetry to your specific needs or requirements.
- Error handling: The plugin includes troubleshooting options for common issues such as missing metric names or TLS handshake failures.
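The following sketch fleshes out the secure data collection scenario; the device address, credentials, and certificate paths are placeholders. Per the reference configuration above, supplying any TLS option enables TLS on the connection.
# Sketch: gNMI input with mutual TLS (address, credentials, and certificate
# paths are placeholders).
[[inputs.gnmi]]
addresses = ["router1.example.com:57400"]
username = "telegraf"
password = "changeme"
## Verify the device against a trusted CA and present a client certificate.
tls_ca = "/etc/telegraf/tls/ca.pem"
tls_cert = "/etc/telegraf/tls/client.pem"
tls_key = "/etc/telegraf/tls/client-key.pem"
tls_min_version = "TLS12"

[[inputs.gnmi.subscription]]
name = "ifcounters"
origin = "openconfig-interfaces"
path = "/interfaces/interface/state/counters"
subscription_mode = "sample"
sample_interval = "10s"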
Cortex
- Unified multi-tenant monitoring: Use Telegraf to collect metrics from different teams or environments and push them to Cortex with separate X-Scope-OrgID headers. This isolates ingestion and querying per tenant, which is ideal for managed services and platform teams.
- Extending Prometheus coverage to edge devices: Deploy Telegraf on edge or IoT devices to collect system metrics and send them to a central Cortex cluster. This approach provides consistent observability even for environments without local Prometheus scrapers.
- Global service observability with federated tenants: Aggregate metrics from global infrastructure by configuring Telegraf agents to push data to regional Cortex clusters, each tagged with a tenant identifier. Cortex handles deduplication and centralized access across regions.
- Custom application telemetry pipelines: Collect application-specific telemetry via Telegraf's exec or http input plugins and forward it to Cortex. This lets DevOps teams monitor application-specific KPIs in a scalable, query-efficient format while keeping metrics logically grouped by tenant or service (see the sketch after this list).
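As a sketch of the last scenario, the configuration below feeds application KPIs from a custom script into Cortex under its own tenant; the script path, its output format, and the tenant ID are assumptions for illustration.
# Sketch: application telemetry via the exec input, forwarded to Cortex
# (script path and tenant ID are hypothetical).
[[inputs.exec]]
## Run a custom script that prints metrics in InfluxDB line protocol.
commands = ["/usr/local/bin/app_kpis.sh"]
timeout = "5s"
data_format = "influx"

[[outputs.http]]
url = "http://cortex.example.com/api/v1/push"
method = "POST"
data_format = "prometheusremotewrite"
[outputs.http.headers]
## Keep this application's metrics isolated under their own tenant.
X-Scope-OrgID = "app-team"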