ELK Installation, Debugging, and Problems Encountered


1. Preparation:

Edit /etc/security/limits.conf and append the following at the end of the file. If Elasticsearch is started by a non-root user, you can replace * with that user, e.g. es soft nofile 65536:

# Limit on the number of files each process can open
* soft nofile 65536
* hard nofile 65536

Edit /etc/security/limits.d/20-nproc.conf

Append the following at the end of the file:

# Limit on the number of files each process can open
* soft nofile 65536
* hard nofile 65536
# OS-level limit on the number of processes each user may create
* hard nproc 4096
# Note: * stands for all Linux user names

Edit /etc/sysctl.conf

Add the following to the file:

# Number of VMAs (virtual memory areas) a process may own; the default is 65536
vm.max_map_count=655360
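To apply the kernel setting without a reboot and verify the limits, a quick check (standard commands; the nofile limit only shows up in a fresh login session):

sysctl -p                  # reload /etc/sysctl.conf
sysctl vm.max_map_count    # should print vm.max_map_count = 655360
ulimit -n                  # run in a new session; should print 65536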

2. Installing the ELK components:

Import the official GPG key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

vim /etc/yum.repos.d/elasticsearch.repo

[elasticsearch-7.x]

name=Elasticsearch repository for 7.x packages

baseurl=https://artifacts.elastic.co/packages/7.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

yum install elasticsearch kibana logstash -y

systemctl restart elasticsearch

systemctl enable elasticsearch

Set passwords for the built-in users:

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

The elastic user is the superuser.
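A quick way to confirm the password took effect (curl prompts for the password you chose):

curl -u elastic http://localhost:9200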

systemctl restart kibana

systemctl enable kibana

systemctl restart logstash.service

systemctl enable logstash.service

vim /etc/elasticsearch/elasticsearch.yml

path.data: /data/elk/elasticsearch
#path.data: /var/lib/elasticsearch

# Path to log files:
#
path.logs: /data/elk/elasticsearch/logs
#path.logs: /var/log/elasticsearch

# Set the cluster name
cluster.name: elasticsearch
# Name of the current node
node.name: node-1

# Bind address: 0.0.0.0 for external access, otherwise bind to localhost
network.host: 0.0.0.0
http.port: 9200
# Allow cross-origin requests
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Require a password for access
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: cert/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: cert/elastic-certificates.p12

# Required when bootstrapping a new cluster
cluster.initial_master_nodes: ["node-1"]
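The keystore/truststore referenced above (cert/elastic-certificates.p12, resolved relative to /etc/elasticsearch) does not exist by default. A minimal sketch for generating it with the bundled certutil tool (the cert/ directory name comes from the config above; accept the default file names when prompted):

cd /usr/share/elasticsearch
bin/elasticsearch-certutil ca                                # writes elastic-stack-ca.p12
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12    # writes elastic-certificates.p12
mkdir -p /etc/elasticsearch/cert
mv elastic-certificates.p12 /etc/elasticsearch/cert/
chown -R root:elasticsearch /etc/elasticsearch/cert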

vim /etc/kibana/kibana.yml

server.port: 5601
# Bind address
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.8.6:9200"]

# Credentials for connecting to Elasticsearch; configure them here, set the actual passwords with setup-passwords
elasticsearch.username: "elastic"
elasticsearch.password: "elastic"
xpack.monitoring.enabled: true
# Use Chinese for the UI
i18n.locale: "zh-CN"

vim /etc/logstash/logstash.yml

path.data: /data/elk/logstash
http.host: "0.0.0.0"
#http.port: 5044

# Monitoring requires authentication; configure it here, set the password later
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: elastic
xpack.monitoring.elasticsearch.hosts: ["http://192.168.8.6:9200"]

api.http.port: 9600-9700

log.level: debug
#log.level: info
path.logs: /data/elk/logstash/logs

vim /etc/logstash/pipelines.yml

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
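The pipeline loads its configuration files from /etc/logstash/conf.d/. For a first smoke test before wiring up Beats, a minimal config can go there (a sketch; test.conf is a hypothetical file name):

# /etc/logstash/conf.d/test.conf
input { stdin { } }
output { stdout { codec => rubydebug } }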

Settings that control which features are available in the web UI (kibana.yml):

xpack.apm.enabled: false disables the X-Pack APM feature.

xpack.graph.enabled: false disables the X-Pack Graph feature.

xpack.ml.enabled: false disables the X-Pack machine learning features.

xpack.monitoring.enabled: false disables the X-Pack monitoring features.

xpack.reporting.enabled: false disables the X-Pack reporting features.

xpack.security.enabled: false disables the X-Pack security features. (This caused errors here.)

xpack.watcher.enabled: false disables Watcher.

https://www.elastic.co/guide/en/kibana/current/settings-xpack-kb.html

During debugging you may want the ES Head plugin. If it cannot connect to Elasticsearch, check the following:

  1. Confirm the Elasticsearch service is running: run curl -X GET "localhost:9200" to check that Elasticsearch is up, and check whether the ES Head UI is reachable from a browser.

  2. Check the ES Head plugin configuration: make sure the plugin is configured correctly, in particular the Elasticsearch server address and port. These settings can be viewed and changed in the plugin's configuration files.

  3. Check network connectivity and firewall settings: make sure the network connection is healthy, and if a firewall is running on the machine, that it allows ES Head to talk to the Elasticsearch server.

  4. Update the ES Head plugin: if you are on an old version, try upgrading to the latest one and make sure it is compatible with the Elasticsearch version you are running.
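If the plugin is not installed yet, a typical from-source install looks like this (assuming git and a Node.js/npm toolchain are available):

git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install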

cd elasticsearch-head

Edit Gruntfile.js (vi Gruntfile.js) and set the connect server options:

connect: {
			server: {
				options: {
					port: 9100,
					hostname: "192.168.8.6",	
					base: '.',
					keepalive: true
				}
			}
		}

Change the default elasticsearch-head connection address:

cd elasticsearch-head/_site/

vi app.js

init: function(parent) {
			this._super();
			this.prefs = services.Preferences.instance();
			this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.8.6:9200";
			if( this.base_uri.charAt( this.base_uri.length - 1 ) !== "/" ) {
				// XHR request fails if the URL is not ending with a "/"
				this.base_uri += "/";
			}

There are two ways to start it:

1:

cd elasticsearch-head

npm run start

2:

cd elasticsearch-head

node_modules/grunt/bin/grunt server

Installing Filebeat on Debian

Fetch the Filebeat package signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Install the apt HTTPS transport:

apt-get install apt-transport-https

Add the package source:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Update the package lists and install Filebeat:

sudo apt-get update && sudo apt-get install filebeat

Check the Filebeat version:

filebeat version

Filebeat configuration notes:

type specifies the input type. Here it is log (the default), which reads log files; it can also be set to stdin to read standard input.

enabled: true enables this hand-written input configuration instead of configuring Filebeat through modules; without it, Filebeat falls back to its default module behavior.

encoding: gbk sets a Chinese character encoding; see the characters encoding list in the docs for the full set.

paths specifies the log files to monitor, either as full file paths or as glob patterns, for example:

- /data/nginx/logs/*.log picks up every file ending in .log in the /data/nginx/logs directory. Note the dash "-": each path entry must be indented one level under paths, or Filebeat will fail to start, and the dash must not be preceded by a tab; indent with spaces.

- /var/log/*/*.log picks up files ending in ".log" in the subdirectories of /var/log, but does not match ".log" files directly inside /var/log itself.

name sets the host name attached to the logs Filebeat collects; if left empty, the server's hostname is used. Here it is set to the IP, to tell the logs of multiple hosts apart.

logging.level: debug sets the Filebeat log level.

include_lines: ['request', 'response'] keeps only the lines matching these keywords.

monitoring.enabled: false is the Filebeat monitoring switch (on by default in this setup).

multiline.type: pattern selects the multiline mode; pattern means matching by regular expression. The options below are combined in the sketch that follows.

multiline.pattern: '^[[:space:]]+(at|.{3})[[:space:]]+\b|^Caused by:'

multiline.negate: false: depending on how the other multiline options are set, lines matching the regular expression are treated either as continuations of the previous line or as the start of a new multiline event.

multiline.match: after
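Putting the options above together, a hand-configured input might look like this sketch (the nginx path reuses the example above; name and logging.level sit at the top level of filebeat.yml):

filebeat.inputs:
  - type: log
    enabled: true
    encoding: gbk
    paths:
      - /data/nginx/logs/*.log
    include_lines: ['request', 'response']
    multiline.type: pattern
    multiline.pattern: '^[[:space:]]+(at|.{3})[[:space:]]+\b|^Caused by:'
    multiline.negate: false
    multiline.match: after

name: 192.168.8.106
logging.level: debug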

After much effort the index finally had data in it, but the data was garbled...

[2023-10-08T16:30:50,921][WARN ][logstash.codecs.line ][main][73bbfd3e8a1ad5e719cadc016524498db78b02b7eb1ac2e3ddc8ca7d9c5e649b] Received an event that has a different character encoding than you configured. {:text=>"\"\x8CT\xAE\xD7c~s\xF5i\xFA\xD5\u001D[3O\xDF\xF5\u001Ei\xD0\xC6\xE8Hv<\x93\xEFϚcnA\x83\x94\xFA\xC6\xEFp\xF9۪4\xF6t:\x9D\xFE\xBE\xE2+\a\x8A\xEF\xBBf]\xDF\xE3n\xF6\xFAA{jI}\xFC\xFDŹ\xDB\xF6\xE1e\xEB\xE2\x95\xE2\xAD=\xC5}OY\x97\x82e]5\xCB\xCF\xE6\xAC\u000F\xED\xB9s\xE4\xA83\xB3\xDF:u\xCE\xDA7\xE5_\xB3*\xAB=\x8B2\x88c\xA2*XsUZ)# \u0014\xD0\u0004\x8CI", :expected_charset=>"UTF-8"}

Many posts online said this was a charset problem, so I tried changing the codec charset:

charset => "GBK"
charset => "UTF-8"
charset => "GB2312"

and so on, but none of it helped; the output was still garbled...

After a lot of back and forth it turned out that this setup needs SSL configured; see: https://boke.wsfnk.com/archives/330.html

Environment: ELK on 192.168.8.6 (CentOS 7.8); Filebeat on 192.168.8.105 and 192.168.8.106 (Debian 10).

On CentOS:

cp /etc/pki/tls/openssl.cnf /etc/pki/tls/openssl.cnf_bak

vi /etc/pki/tls/openssl.cnf

# Under [ v3_ca ], add subjectAltName = IP:192.168.8.6

[ v3_ca ]

subjectKeyIdentifier=hash

subjectAltName = IP:192.168.8.6
(If the log server sits on an internal network and data must be shipped in from outside, use the public egress IP here instead and open the port.)

After adding it, run the following from /etc/pki/tls (the -keyout and -out paths are relative):

openssl req -subj '/CN=192.168.8.6/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash.key -out certs/logstash.crt

This produces /etc/pki/tls/certs/logstash.crt and /etc/pki/tls/private/logstash.key.

Likewise for Filebeat, on Debian:

cd /etc/ssl && cp openssl.cnf openssl.cnf_bak && vim openssl.cnf

[ v3_ca ]

# Extensions for a typical CA
# PKIX recommendation.
subjectKeyIdentifier=hash
subjectAltName = IP:192.168.8.106

openssl req -subj '/CN=192.168.8.106/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/filebeat.key -out certs/filebeat.crt

This produces /etc/ssl/certs/filebeat.crt and /etc/ssl/private/filebeat.key.

Next, copy logstash.crt into the /etc/ssl/certs certificate directory on each Filebeat server, and copy each Filebeat's certificate into the certificate directory on the Logstash server:
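For example, from the Logstash host (a sketch assuming root SSH access to the Filebeat hosts; the target file name matches the per-beat naming used below):

scp /etc/pki/tls/certs/logstash.crt root@192.168.8.106:/etc/ssl/certs/
scp root@192.168.8.106:/etc/ssl/certs/filebeat.crt /etc/pki/tls/certs/filebeat-gs1.crt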

cd /etc/pki/tls/certs && ls

logstash.crt beat2.crt filebeat-gs1.crt

When the two certificates were named filebeat-gs1.crt and filebeat-gs2.crt, one of the beats failed to deliver data; giving them clearly distinct names fixed it.

Then edit /etc/logstash/conf.d/filebeat-logstash.conf:

input {
  beats {
    port => 5045
    type => "gs1"
    ssl => true
    ssl_certificate_authorities => ["/etc/pki/tls/certs/filebeat-gs1.crt"]
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
    ssl_verify_mode => "force_peer"
    codec => json {
      charset => "UTF-8"
    }
  }
}
input {
  beats {
    port => 5046
    type => "gs2"
    ssl => true
    ssl_certificate_authorities => ["/etc/pki/tls/certs/beat2.crt"]
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
    ssl_verify_mode => "force_peer"
    codec => json {
      charset => "UTF-8"
    }
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  mutate {
    # Nested fields need bracket notation in field references
    remove_field => ["[beat][hostname]", "version", "_type"]
    #remove_field => ["[beat][name]"]
  }
}
output {
  if [type] == "gs1" {
    elasticsearch {
      index => "gs1_log_%{+yyyy.MM.dd}"
      hosts => "192.168.8.6:9200"
      codec => plain {
        charset => "UTF-8"
      }
      user => "elastic"
      password => "elastic"
    }
  }
  if [type] == "gs2" {
    elasticsearch {
      index => "gs2_log_%{+yyyy.MM.dd}"
      hosts => "192.168.8.6:9200"
      codec => plain {
        charset => "UTF-8"
      }
      user => "elastic"
      password => "elastic"
    }
  }
}

That is roughly the Logstash configuration; tuning is a separate topic.

filebeat.yml looks roughly as follows (the snippet is partial; the actual log paths are replaced by a placeholder):

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /path/to/your/app/*.log   # placeholder; the original paths were not shown
      #- c:\programdata\elasticsearch\logs\*
    fields:
      type: gs1
      ip: 192.168.8.106
    # fields_under_root belongs at the same level as fields, not inside it
    fields_under_root: true

    #multiline.pattern: '^instance_id'
    #multiline.negate: true
    #multiline.match: after
    #multiline.max_lines: 500

    #multiline.pattern: '^[I,]|^[D,]|^[E,]|^[W,]|^[#]'
    #multiline.max_lines: 100
    #multiline.timeout: 20s

#monitoring.enabled: true
#inputs.container.enabled: true
#xpack.monitoring.elasticsearch.username: elastic
#xpack.monitoring.elasticsearch.password: elastic

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

# The Logstash hosts
output.logstash:
  hosts: ["192.168.8.6:5045"]
  ssl.certificate_authorities: ["/etc/ssl/certs/logstash.crt"]
  ssl.certificate: "/etc/ssl/certs/filebeat.crt"
  ssl.key: "/etc/ssl/private/filebeat.key"

With that, the data finally came through correctly.
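Before restarting in production it is worth validating the config file and the TLS connection to Logstash; Filebeat ships with built-in checks for both:

filebeat test config
filebeat test output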

Getting Filebeat to re-read data from scratch:

To make Filebeat read its log files from the beginning again, the flow is: stop filebeat -> delete the registry file -> start filebeat.

Stop Filebeat, then delete the registry file in which Filebeat records its read positions:

find / -name registry
/var/lib/filebeat/registry

rm -rf /var/lib/filebeat/registry

Then start Filebeat again.
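In one go (assuming Filebeat runs under the package's systemd unit):

systemctl stop filebeat
rm -rf /var/lib/filebeat/registry
systemctl start filebeat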

3. Problems encountered:

3.1. Certificate error during installation:

Err:20 https://packages.sury.org/php buster Release

Certificate verification failed: The certificate is NOT trusted. The certificate chain uses expired certificate. Could not handshake: Error in the certificate verification. [IP: 143.244.50.90 443]

Get:21 https://artifacts.elastic.co/packages/7.x/apt stable/main amd64 Packages [118 kB]

Reading package lists… Done

E: The repository 'https://packages.sury.org/php buster Release' does not have a Release file.

N: Updating from such a repository can't be done securely, and is therefore disabled by default.

N: See apt-secure(8) manpage for repository creation and user configuration details.

The fix was to reinstall the CA certificates and refresh the package lists:

apt-get install ca-certificates
sudo apt-get install --reinstall ca-certificates
sudo apt-get update

3.2. Logstash error:

Your settings are invalid. Reason: Path "/data/elk/logstash/dead_letter_queue" must be a writable directory. It is not writable.

An unexpected error occurred! {:error=>java.nio.file.AccessDeniedException: /data/elk/logstash/.lock, :

This is a permissions problem; give the Logstash user write access to those paths.
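For example (assuming Logstash runs as the logstash user, the package default):

mkdir -p /data/elk/logstash
chown -R logstash:logstash /data/elk/logstash
systemctl restart logstash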

3.3. Logstash error:

syslog listener died {:protocol=>:tcp, :address=>"0.0.0.0:5044", :exception=>#<Errno::EADDRINUSE: Address already in use - bind(2) for "0.0.0.0" port 5044>, :backtrace=>["org/jruby/ext/socket/RubyTCPServer.java:123:in `initialize'", "org/jruby/RubyIO.java:876:in `new'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:208:in `tcp_listener'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:172:in `server'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-syslog-3.6.0/lib/logstash/inputs/syslog.rb:156:in `block in run'"]}

"Address already in use" means another process already holds the port; in addition, programs started by a non-root user are not allowed to listen on ports 1-1024. Either change the user the service runs as to root, or move the listener to a free port above 1024.
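To see which process holds the port before deciding (either tool works on CentOS 7, if installed):

ss -lntp | grep 5044
lsof -i :5044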

vim /etc/systemd/system/logstash.service

[Service]

Type=simple

User=root

Group=root

...
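After editing the unit file, reload systemd and restart the service so the change takes effect:

systemctl daemon-reload
systemctl restart logstash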

To check the license afterwards:

curl --user elastic:changeme 'http://127.0.0.1:9200/_xpack/license'

{
  "license" : {
    "status" : "active",
    "uid" : "ad2ea283-fbdf-483a-9c53-e09f43c14a97",
    "type" : "basic",
    "issue_date" : "2023-09-19T07:37:00.727Z",
    "issue_date_in_millis" : 1695109020727,
    "max_nodes" : 1000,
    "issued_to" : "elasticsearch",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}

3.4. Missing configuration

Sometimes a copy-paste goes wrong, e.g. input is pasted as nput with the leading i missing, or the format is otherwise broken, and the pipeline fails to load:

[ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:189:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:392:in `block in converge_state'"]}

To test a pipeline configuration:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/log.conf -t

Using bundled JDK: /usr/share/logstash/jdk

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console

Problems like these can all be traced through the logs. The biggest problem in this installation was the garbled encoding; these notes record the process.


