Syslog and Elasticsearch Integration

Powered by Telegraf, the open source data connector built by InfluxData, for powerful performance and simple integration.

info

This is not the recommended configuration for real-time queries at scale. For query and compression optimization, high-speed ingest, and high availability, you may want to consider the Syslog and InfluxDB integration instead.


Powerful Performance, Limitless Scale

Collect, organize, and act on massive volumes of high-velocity data. Any data is more valuable when you think of it as time series data. With InfluxDB, the #1 time series platform, built to scale with Telegraf.

See Ways to Get Started

Input and Output Integration Overview

The Syslog plugin collects syslog messages from a wide range of sources over standard network protocols. This capability is essential for environments that need effective system monitoring and logging.

The Telegraf Elasticsearch plugin sends metrics to an Elasticsearch server. It handles template creation and dynamic index management, and supports a range of Elasticsearch-specific features to ensure data is formatted correctly for storage and retrieval.

Integration Details

Syslog

Telegraf's Syslog plugin captures syslog messages transmitted over protocols such as TCP, UDP, and TLS. It supports both RFC 5424 (the newer syslog protocol) and the older RFC 3164 (BSD syslog protocol). The plugin runs as a service input, starting a listener for incoming syslog messages; unlike traditional plugins, service inputs may not work with standard interval settings or CLI options such as `--once`. It provides options for network configuration, socket permissions, message handling, and connection handling. In addition, integration with Rsyslog allows log messages to be forwarded to the listener, making the plugin a practical tool for collecting and relaying system logs in real time and feeding them into monitoring and logging pipelines.
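
As a minimal sketch, the listener below assumes a fleet of legacy BSD-style devices sending datagrams to the traditional UDP port 514; the address, syslog standard, and best-effort setting are illustrative and should be adapted to your environment.

[[inputs.syslog]]
  ## Listen for UDP datagrams on the traditional syslog port (binding below 1024 needs elevated privileges).
  server = "udp://:514"
  ## RFC 3164 (BSD syslog) parsing is only available over UDP.
  syslog_standard = "RFC3164"
  ## Accept slightly malformed messages instead of rejecting them outright.
  best_effort = true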

Elasticsearch

This plugin writes metrics to Elasticsearch, a distributed, RESTful search and analytics engine capable of storing large volumes of data in near real time. It is designed to work with Elasticsearch versions 5.x through 7.x and uses dynamic templates to manage data type mappings correctly. The plugin supports advanced features such as template management, dynamic index naming, and OpenSearch integration, and it also allows configuration of authentication and health monitoring for Elasticsearch nodes.
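
As a minimal sketch (assuming a single node at http://localhost:9200; the endpoint and the commented-out credentials are placeholders), a daily-indexed output can be as small as:

[[outputs.elasticsearch]]
  ## Hypothetical single-node endpoint; replace with your own cluster URL.
  urls = ["http://localhost:9200"]
  ## One index per day, named from each metric's timestamp.
  index_name = "telegraf-%Y.%m.%d"
  ## Let Telegraf create and maintain its recommended index template.
  manage_template = true
  ## Uncomment if the cluster requires HTTP basic authentication.
  # username = "telegraf"
  # password = "examplepassword"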

Configuration

Syslog

[[inputs.syslog]]
  ## Protocol, address and port to host the syslog receiver.
  ## If no host is specified, then localhost is used.
  ## If no port is specified, 6514 is used (RFC5425#section-4.1).
  ##   ex: server = "tcp://localhost:6514"
  ##       server = "udp://:6514"
  ##       server = "unix:///var/run/telegraf-syslog.sock"
  ## When using tcp, consider using 'tcp4' or 'tcp6' to force the usage of IPv4
  ## or IPV6 respectively. There are cases, where when not specified, a system
  ## may force an IPv4 mapped IPv6 address.
  server = "tcp://127.0.0.1:6514"

  ## Permission for unix sockets (only available on unix sockets)
  ## This setting may not be respected by some platforms. To safely restrict
  ## permissions it is recommended to place the socket into a previously
  ## created directory with the desired permissions.
  ##   ex: socket_mode = "777"
  # socket_mode = ""

  ## Maximum number of concurrent connections (only available on stream sockets like TCP)
  ## Zero means unlimited.
  # max_connections = 0

  ## Read timeout (only available on stream sockets like TCP)
  ## Zero means unlimited.
  # read_timeout = "0s"

  ## Optional TLS configuration (only available on stream sockets like TCP)
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key  = "/etc/telegraf/key.pem"
  ## Enables client authentication if set.
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Maximum socket buffer size (in bytes when no unit specified)
  ## For stream sockets, once the buffer fills up, the sender will start
  ## backing up. For datagram sockets, once the buffer fills up, metrics will
  ## start dropping. Defaults to the OS default.
  # read_buffer_size = "64KiB"

  ## Period between keep alive probes (only applies to TCP sockets)
  ## Zero disables keep alive probes. Defaults to the OS configuration.
  # keep_alive_period = "5m"

  ## Content encoding for message payloads
  ## Can be set to "gzip" for compressed payloads or "identity" for no encoding.
  # content_encoding = "identity"

  ## Maximum size of decoded packet (in bytes when no unit specified)
  # max_decompression_size = "500MB"

  ## Framing technique used for messages transport
  ## Available settings are:
  ##   octet-counting  -- see RFC5425#section-4.3.1 and RFC6587#section-3.4.1
  ##   non-transparent -- see RFC6587#section-3.4.2
  # framing = "octet-counting"

  ## The trailer to be expected in case of non-transparent framing (default = "LF").
  ## Must be one of "LF", or "NUL".
  # trailer = "LF"

  ## Whether to parse in best effort mode or not (default = false).
  ## By default best effort parsing is off.
  # best_effort = false

  ## The RFC standard to use for message parsing
  ## By default RFC5424 is used. RFC3164 only supports UDP transport (no streaming support)
  ## Must be one of "RFC5424", or "RFC3164".
  # syslog_standard = "RFC5424"

  ## Character to prepend to SD-PARAMs (default = "_").
  ## A syslog message can contain multiple parameters and multiple identifiers within structured data section.
  ## Eg., [id1 name1="val1" name2="val2"][id2 name1="val1" nameA="valA"]
  ## For each combination a field is created.
  ## Its name is created concatenating identifier, sdparam_separator, and parameter name.
  # sdparam_separator = "_"

Elasticsearch

[[outputs.elasticsearch]]
  ## The full HTTP endpoint URL for your Elasticsearch instance
  ## Multiple urls can be specified as part of the same cluster,
  ## this means that only ONE of the urls will be written to each interval
  urls = [ "http://node1.es.example.com:9200" ] # required.
  ## Elasticsearch client timeout, defaults to "5s" if not set.
  timeout = "5s"
  ## Set to true to ask Elasticsearch a list of all cluster nodes,
  ## thus it is not necessary to list all nodes in the urls config option
  enable_sniffer = false
  ## Set to true to enable gzip compression
  enable_gzip = false
  ## Set the interval to check if the Elasticsearch nodes are available
  ## Setting to "0s" will disable the health check (not recommended in production)
  health_check_interval = "10s"
  ## Set the timeout for periodic health checks.
  # health_check_timeout = "1s"
  ## HTTP basic authentication details
  # username = "telegraf"
  # password = "mypassword"
  ## HTTP bearer token authentication details
  # auth_bearer_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"

  ## Index Config
  ## The target index for metrics (Elasticsearch will create it if it does not exist).
  ## You can use the date specifiers below to create indexes per time frame.
  ## The metric timestamp will be used to decide the destination index name
  # %Y - year (2016)
  # %y - last two digits of year (00..99)
  # %m - month (01..12)
  # %d - day of month (e.g., 01)
  # %H - hour (00..23)
  # %V - week of the year (ISO week) (01..53)
  ## Additionally, you can specify a tag name using the notation {{tag_name}}
  ## which will be used as part of the index name. If the tag does not exist,
  ## the default tag value will be used.
  # index_name = "telegraf-{{host}}-%Y.%m.%d"
  # default_tag_value = "none"
  index_name = "telegraf-%Y.%m.%d" # required.

  ## Optional Index Config
  ## Set to true if Telegraf should use the "create" OpType while indexing
  # use_optype_create = false

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Template Config
  ## Set to true if you want telegraf to manage its index template.
  ## If enabled it will create a recommended index template for telegraf indexes
  manage_template = true
  ## The template name used for telegraf indexes
  template_name = "telegraf"
  ## Set to true if you want telegraf to overwrite an existing template
  overwrite_template = false
  ## If set to true a unique ID hash will be sent as sha256(concat(timestamp,measurement,series-hash)) string
  ## it will enable data resend and update metric points avoiding duplicated metrics with different id's
  force_document_id = false

  ## Specifies the handling of NaN and Inf values.
  ## This option can have the following values:
  ##    none    -- do not modify field-values (default); will produce an error if NaNs or infs are encountered
  ##    drop    -- drop fields containing NaNs or infs
  ##    replace -- replace with the value in "float_replacement_value" (default: 0.0)
  ##               NaNs and inf will be replaced with the given number, -inf with the negative of that number
  # float_handling = "none"
  # float_replacement_value = 0.0

  ## Pipeline Config
  ## To use an ingest pipeline, set this to the name of the pipeline you want to use.
  # use_pipeline = "my_pipeline"
  ## Additionally, you can specify a tag name using the notation {{tag_name}}
  ## which will be used as part of the pipeline name. If the tag does not exist,
  ## the default pipeline will be used as the pipeline. If no default pipeline is set,
  ## no pipeline is used for the metric.
  # use_pipeline = "{{es_pipeline}}"
  # default_pipeline = "my_pipeline"
  #
  # Custom HTTP headers
  # To pass custom HTTP headers, define them in the section below
  # [outputs.elasticsearch.headers]
  #    "X-Custom-Header" = "custom-value"

  ## Template Index Settings
  ## Overrides the template settings.index section with any provided options.
  ## Defaults provided here in the config
  # template_index_settings = {
  #   refresh_interval = "10s",
  #   mapping.total_fields.limit = 5000,
  #   auto_expand_replicas = "0-1",
  #   codec = "best_compression"
  # }
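
Putting the two plugins together, a minimal end-to-end sketch of a telegraf.conf that listens for syslog and writes everything to a daily Elasticsearch index might look like the following; the endpoint, port, and index name are placeholders, not values from this page.

[[inputs.syslog]]
  ## Accept RFC 5424 messages from forwarders over TCP.
  server = "tcp://0.0.0.0:6514"

[[outputs.elasticsearch]]
  ## Placeholder endpoint; point this at your own cluster.
  urls = ["http://node1.es.example.com:9200"]
  ## Store each day's syslog records in their own index.
  index_name = "syslog-%Y.%m.%d"
  manage_template = true
  health_check_interval = "10s"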

Input and Output Integration Examples

Syslog

  1. Centralized log management: Use the Syslog plugin to aggregate log messages from multiple servers into a central logging system (see the sketch after this list). Collecting syslog data from different sources in one place helps you monitor overall system health, troubleshoot effectively, and maintain an audit trail.

  2. Real-time alerting: Integrate the Syslog plugin with alerting tools to trigger real-time notifications when specific log patterns or errors are detected. For example, a critical system error appearing in the logs can send an alert to the operations team, minimizing downtime and enabling proactive maintenance.

  3. Security monitoring: Use the Syslog plugin for security monitoring by capturing logs from firewalls, intrusion detection systems, and other security devices. This improves security visibility and helps investigate potentially malicious activity by analyzing the captured syslog data.

  4. Application performance tracking: Use the Syslog plugin to monitor application performance by collecting logs from your applications. Analyzing application behavior and performance trends helps optimize application processes and keep operations running smoothly.
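
As a sketch of the centralized setup from the first example (the addresses and the collector tag are assumptions, not values from this page), a single agent can run several listeners side by side:

[global_tags]
  ## Hypothetical tag identifying which aggregator received each message.
  collector = "central-syslog-01"

[[inputs.syslog]]
  ## Reliable transport for servers that speak RFC 5424 over TCP.
  server = "tcp://0.0.0.0:6514"

[[inputs.syslog]]
  ## Datagram listener for appliances that only emit classic BSD syslog.
  server = "udp://:514"
  syslog_standard = "RFC3164"
  best_effort = true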

Elasticsearch

  1. Time-based indexing: Use this plugin to store metrics in Elasticsearch with each metric indexed by the time it was collected. For example, CPU metrics can be stored in a daily index named `telegraf-2023.01.01`, which keeps time-based queries and retention policies straightforward (a short sketch follows this list).

  2. Dynamic template management: Use the template management feature to automatically create index templates tailored to your metrics. This lets you define how different fields are indexed and analyzed without configuring Elasticsearch by hand, ensuring an optimal data structure for querying.

  3. OpenSearch compatibility: If you are running AWS OpenSearch, you can configure this plugin to work with it by enabling compatibility mode, so your existing Elasticsearch clients remain functional and compatible with newer cluster setups.
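
A short sketch of the time-based, per-host indexing described in the first example; the tag name host and the fallback value are assumptions, and any tag present on your metrics works the same way.

[[outputs.elasticsearch]]
  urls = ["http://node1.es.example.com:9200"]
  ## Expands to e.g. "telegraf-web01-2023.01.01" from the metric's host tag and timestamp.
  index_name = "telegraf-{{host}}-%Y.%m.%d"
  ## Fallback used when a metric carries no host tag, so the index name stays valid.
  default_tag_value = "unknown"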


Related Integrations

HTTP and InfluxDB Integration

The HTTP plugin collects metrics from one or more HTTP(S) endpoints. It supports a variety of authentication methods and configuration options for data formats.

View Integration

Kafka and InfluxDB Integration

This plugin reads messages from Kafka and creates metrics from them. It supports a variety of configurations, including different Kafka settings and message-processing options.

View Integration

Kinesis and InfluxDB Integration

The Kinesis plugin reads metrics from AWS Kinesis streams. It supports multiple input data formats and provides checkpointing via DynamoDB for reliable message processing.

View Integration