ELK + Filebeat cluster deployment
ELK overview
1. Elasticsearch
Elasticsearch is a real-time distributed search and analytics engine that lets you explore your data at a speed and scale never possible before. It is used for full-text search, structured search, analytics, and any combination of the three.
2. Logstash
Logstash is a powerful data-processing tool. It can transport data, parse and format it, and emit formatted output, and it offers a rich plugin ecosystem; it is commonly used for log processing.
3. Kibana
Kibana is a free and open tool that provides a friendly web interface for analyzing the logs that Logstash feeds into Elasticsearch, helping you aggregate, analyze, and search important log data. Official download page: https://www.elastic.co/cn/downloads/
Note: adjust the IP addresses in the configuration files below to match your actual environment.
Environment: three Linux servers, all running the same OS
elk-node1 192.168.243.162 data + master node (elasticsearch, logstash, kibana, filebeat)
elk-node2 192.168.243.163 data node (elasticsearch, filebeat)
elk-node3 192.168.243.164 data node (elasticsearch, filebeat)
Edit the hosts file (identical on all three hosts)
vim /etc/hosts
192.168.243.162 elk-node1
192.168.243.163 elk-node2
192.168.243.164 elk-node3
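A quick optional check that host-name resolution works on every node (the loop below is only an illustration):
for h in elk-node1 elk-node2 elk-node3; do getent hosts $h; done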
Install JDK 11 (binary install)
Skip this step if Java is already installed
{{
cd /home/tools &&
wget https://download.java.net/java/GA/jdk11/13/GPL/openjdk-11.0.1_linux-x64_bin.tar.gz
Extract to the target directory
mkdir -p /usr/local/jdk
tar -xzvf openjdk-11.0.1_linux-x64_bin.tar.gz -C /usr/local/jdk
Configure the environment variables (append to /etc/profile)
vim /etc/profile
JAVA_HOME=/usr/local/jdk/jdk-11.0.1
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
Reload the environment variables
source /etc/profile
Alternatively, install from the yum repository
yum -y install java
java -version
}}
Adjust the kernel parameters
Increase the maximum number of virtual memory map areas
vim /etc/sysctl.conf
Append the following at the end:
vm.max_map_count=262144
sysctl -p
Raise the per-user resource limits
vim /etc/security/limits.conf
Append the following at the end:
* soft nofile 1000000
* hard nofile 1000000
* soft nproc 1000000
* hard nproc 1000000
* soft memlock unlimited
* hard memlock unlimited
cd /etc/security/limits.d
vi 20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 4096
root soft nproc unlimited
Replace the * with the actual user name (esyonghu below is just a placeholder):
esyonghu soft nproc 4096
root soft nproc unlimited
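An optional sanity check that the new limits are in effect (open a fresh login session first; the expected values are the ones configured above):
sysctl vm.max_map_count   # expect 262144
ulimit -n                 # expect 1000000
ulimit -u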
Install dependency packages and set up the repo
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools vim lrzsz tree screen lsof tcpdump wget ntpdate
vim /etc/yum.repos.d/elastic.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum repolist
#Deploy the Elasticsearch cluster (run on all nodes)
yum -y install elasticsearch
grep "^[a-Z]" /etc/elasticsearch/elasticsearch.yml
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk
node.name: elk-node1  ## set to the local hostname on each node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
transport.tcp.compress: true
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300 ## only configured on the other nodes
discovery.seed_hosts: ["192.168.243.162", "192.168.243.163","192.168.243.164"]
cluster.initial_master_nodes: ["192.168.243.162", "192.168.243.163","192.168.243.164"]
discovery.zen.minimum_master_nodes: 2 # guards against split-brain: minimum number of master-eligible nodes, usually (master-eligible nodes / 2) + 1 (legacy setting, ignored by 7.x)
node.master: true
node.data: true
xpack.security.enabled: true
http.cors.enabled: true ##
http.cors.allow-origin: "*" ## allow cross-origin access so the head plugin can reach ES
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
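Apart from node.name, the same elasticsearch.yml can be reused on every node; as a minimal illustration, elk-node2 would only change lines such as:
node.name: elk-node2
discovery.seed_hosts, cluster.initial_master_nodes and the xpack/ssl settings stay identical across the three nodes.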
In production Elasticsearch is very resource-hungry, so raise the JVM heap from the default 1 GB according to the memory actually available:
vim /etc/elasticsearch/jvm.options
#modify these two lines
-Xms4g #minimum heap size: 4 GB
-Xmx4g #maximum heap size: 4 GB
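On newer 7.x packages the heap can also be set through a drop-in file instead of editing jvm.options in place (a sketch, assuming your package ships the jvm.options.d directory):
cat > /etc/elasticsearch/jvm.options.d/heap.options <<'EOF'
-Xms4g
-Xmx4g
EOF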
Configure TLS and authentication -- this step is for security and may be skipped (if you skip it, also remove the xpack.security.* and ssl lines from elasticsearch.yml above)
{{
Configure TLS on the Elasticsearch master node.
cd /usr/share/elasticsearch/
./bin/elasticsearch-certutil ca ## press Enter at every prompt
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
ll
-rw------- 1 root root 3443 Jun 28 16:46 elastic-certificates.p12
-rw------- 1 root root 2527 Jun 28 16:43 elastic-stack-ca.p12
#####give the generated files to the elasticsearch group
chgrp elasticsearch /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elastic-stack-ca.p12
#####set permissions 640 on the two files
chmod 640 /usr/share/elasticsearch/elastic-certificates.p12 /usr/share/elasticsearch/elastic-stack-ca.p12
######move the two files into the Elasticsearch config directory
mv /usr/share/elasticsearch/elastic-* /etc/elasticsearch/
Copy the TLS files to the other nodes' config directories (repeat for elk-node3, 192.168.243.164)
scp /etc/elasticsearch/elastic-certificates.p12 root@192.168.243.163:/etc/elasticsearch/
scp /etc/elasticsearch/elastic-stack-ca.p12 root@192.168.243.163:/etc/elasticsearch/
}}
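scp does not preserve the group ownership set above, so you will likely need to repeat the ownership/permission fix on each receiving node (an optional sketch):
ls -l /etc/elasticsearch/elastic-*
chgrp elasticsearch /etc/elasticsearch/elastic-certificates.p12 /etc/elasticsearch/elastic-stack-ca.p12
chmod 640 /etc/elasticsearch/elastic-certificates.p12 /etc/elasticsearch/elastic-stack-ca.p12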
Start the service and verify the cluster
Start Elasticsearch on the master node first, then on the other nodes
systemctl start elasticsearch
Set passwords -- this guide uses 123456 for every built-in user
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Verify the cluster -- ## open in a browser (with security enabled you will be prompted for the elastic user's credentials)
http://192.168.243.163:9200/_cluster/health?pretty
The response looks like this:
{
"cluster_name" : "my-elk",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,##節點數
"number_of_data_nodes" : 3, ##數據節點數
"active_primary_shards" : 4,
"active_shards" : 8,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
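Since X-Pack security is enabled, command-line checks need credentials; a quick sketch using the password chosen above:
curl -u elastic:123456 'http://192.168.243.162:9200/_cat/nodes?v'
curl -u elastic:123456 'http://192.168.243.162:9200/_cluster/health?pretty'
All three nodes should be listed, with one of them marked as the elected master.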
#Deploy Kibana
Install from the yum repo ## can be done on any node
yum -y install kibana
Edit the Kibana configuration file
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
server.name: "elk-node1"
elasticsearch.hosts: ["http://192.168.243.162:9200","http://192.168.243.163:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "123456"
i18n.locale: "en"
Start the service
systemctl start kibana
Open http://192.168.243.162:5601/ in a browser and log in with the elastic user.
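An optional, browser-free check that Kibana is up (the status endpoint may ask for the same credentials), plus enabling it at boot:
curl -I -u elastic:123456 http://192.168.243.162:5601/api/status
systemctl enable kibana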
Install Logstash
Deploy it on the master node
yum -y install logstash ## install from the yum repo
## or install from the tarball
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.4.1.tar.gz
mkdir -p /home/elk && tar -zvxf logstash-7.4.1.tar.gz -C /home/elk
mkdir -p /data/logstash/{logs,data}
Edit the pipeline configuration file
vim /etc/logstash/conf.d/logstash_debug.conf
egrep -v "#|^$" /etc/logstash/conf.d/logstash_debug.conf
input {
beats {
port => 5044
}
}
filter {
grok {
match => {
"message" => "(?<temMsg>(?<=logBegin ).*?(?=logEnd))"
}
overwrite => ["temMsg"]
}
grok {
match => {
"temMsg" => "(?<reqId>(?<=reqId:).*?(?=,operatName))"
}
overwrite => ["reqId"]
}
grok {
match => {
"temMsg" => "(?<operatName>(?<=operatName:).*?(?=,operatUser))"
}
overwrite => ["operatName"]
}
grok {
match => {
"temMsg" => "(?<operatUser>(?<=operatUser:).*?(?=,userType))"
}
overwrite => ["operatUser"]
}
grok {
match => {
"temMsg" => "(?<userType>(?<=userType:).*?(?=,requestTime))"
}
overwrite => ["userType"]
}
grok {
match => {
"temMsg" => "(?<requestTime>(?<=requestTime:).*?(?=,method))"
}
overwrite => ["requestTime"]
}
grok {
match => {
"temMsg" => "(?<method>(?<=method:).*?(?=,params))"
}
overwrite => ["method"]
}
grok {
match => {
"temMsg" => "(?<params>(?<=params:).*?(?=,operatIp))"
}
overwrite => ["params"]
}
grok {
match => {
"temMsg" => "(?<operatIp>(?<=operatIp:).*?(?=,executionTime))"
}
overwrite => ["operatIp"]
}
grok {
match => {
"temMsg" => "(?<executionTime>(?<=executionTime:).*?(?=,operatDesc))"
}
overwrite => ["executionTime"]
}
grok {
match => {
"temMsg" => "(?<operatDesc>(?<=operatDesc:).*?(?=result))"
}
overwrite => ["operatDesc"]
}
grok {
match => {
"temMsg" => "(?<result>(?<=result:).*?(?=,siteCode))"
}
overwrite => ["result"]
}
grok {
match => {
"temMsg" => "(?<siteCode>(?<=siteCode:).*?(?=,module))"
}
overwrite => ["siteCode"]
}
grok {
match => {
"temMsg" => "(?<module>(?<=module:).*?(?= ))"
}
overwrite => ["module"]
}
grok {
match => [
"message", "%{NOTSPACE:temMsg}"
]
}
json {
source => "temMsg"
# field_split => ","
# value_split => ":"
# remove_field => [ "@timestamp","message","path","@version","path","host" ]
}
urldecode {
all_fields => true
}
mutate {
rename => {"temMsg" => "message"}
remove_field => [ "message" ]
}
}
output {
elasticsearch {
hosts => ["192.168.243.162:9200","192.168.243.163:9200","192.168.243.164:9200"]
user => "elastic"
password => "123456"
index => "logstash-%{+YYYY.MM.dd}"
}
}
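The grok look-arounds above assume application log lines shaped roughly like the following (a purely illustrative, hypothetical example; the field order must match the patterns): "logBegin reqId:...,operatName:...,operatUser:...,userType:...,requestTime:...,method:...,params:...,operatIp:...,executionTime:...,operatDesc:...,result:...,siteCode:...,module:... logEnd". Before starting the service, the pipeline syntax can be checked with Logstash's built-in test flag:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_debug.conf --config.test_and_exit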
vim /etc/logstash/logstash.yml
http.host: "elk-node1"
path.data: /data/logstash/data
path.logs: /data/logstash/logs
xpack.monitoring.enabled: true # show Logstash in Kibana's monitoring UI
xpack.monitoring.elasticsearch.hosts: ["192.168.243.162:9200","192.168.243.163:9200","192.168.243.164:9200"]
Start the Logstash service
systemctl start logstash
Or start it directly from the binary:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash_debug.conf
Deploy Filebeat
yum -y install filebeat
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
enabled: true
paths:
- /srv/docker/produce/*/*/cloud*.log
include_lines: [".*logBegin.*",".*logEnd.*"]
# multiline.pattern: ^\[
# multiline.negate: true
# multiline.match: after
filebeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
setup.template.settings:
index.number_of_shards: 1
setup.kibana:
hosts: ["192.168.243.162:5601"]
output.logstash:
hosts: ["192.168.243.162:5044"]
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
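Filebeat ships with built-in self-tests that are worth running before starting the service, for example:
filebeat test config
filebeat test output
The second command confirms that the Logstash endpoint at 192.168.243.162:5044 is reachable.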
Start Filebeat
systemctl start filebeat
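Once logs start flowing, a quick sketch to confirm that the daily index is being created (credentials as set earlier):
curl -u elastic:123456 'http://192.168.243.162:9200/_cat/indices/logstash-*?v'
The new index should also show up in Kibana, where an index pattern can be created to browse the logs.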