Today I would like to share what I know about how ELK writes the corresponding key-value information of logs. The content is detailed and the logic is clear, and since most people are probably not yet very familiar with this area, I am sharing this article for your reference. I hope you get something out of it after reading; let's take a look together.
ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana, all of which are open-source software. A newer addition is Filebeat, a lightweight log collection and shipping agent; Filebeat uses few resources and is well suited to collecting logs on individual servers and forwarding them to Logstash.
Logstash is an open-source tool for collecting, analyzing, and storing logs. Kibana 4 is the web interface for searching and viewing the logs that Logstash has indexed. Both tools are built on Elasticsearch.
● Logstash: the Logstash service component, which processes incoming logs.
● Elasticsearch: stores all the logs.
● Kibana 4: the web interface for searching and visualizing logs, reverse-proxied through nginx.
● Logstash Forwarder: installed on every server that will send logs to logstash; it acts as the log forwarding agent and communicates with the Logstash service over the lumberjack network protocol.
Note: logstash-forwarder is being replaced by Beats; watch for follow-up content. Later setups will move to logstash + elasticsearch + beats.
The ELK architecture is as follows:
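As a rough text sketch of the data flow, using the components, addresses, and ports configured later in this article:

logstash-forwarder (on each monitored server, reads the nginx access logs)
        |  lumberjack protocol + SSL, port 5043
        v
Logstash (grok filter)  -- optionally via a Redis list "logstash:redis" used as a queue --
        |
        v
Elasticsearch (port 9200, indices logstash-<type>-<date>)
        ^
        |
Kibana 4 (port 5601)  <--  nginx reverse proxy (elk.sudo.com, HTTP basic auth)  <--  browser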
Software used:
elasticsearch-1.7.2.tar.gz
kibana-4.1.2-linux-x64.tar.gz
logstash-1.5.6-1.noarch.rpm
logstash-forwarder-0.4.0-1.x86_64.rpm

Single-machine mode
#OS: CentOS release 6.5 (Final)

#Base and JDK
groupadd elk
useradd -g elk elk
passwd elk
yum install vim lsof man wget ntpdate vixie-cron -y
crontab -e
*/1 * * * * /usr/sbin/ntpdate time.windows.com > /dev/null 2>&1
service crond restart
#Disable selinux and stop iptables
sed -i "s#SELINUX=enforcing#SELINUX=disabled#" /etc/selinux/config
service iptables stop
reboot
tar -zxvf jdk-8u92-linux-x64.tar.gz -C /usr/local/
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_92
export JRE_HOME=/usr/local/jdk1.8.0_92/jre
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
source /etc/profile

#Elasticsearch
#(For a cluster, also install elasticsearch on the other servers and configure the same cluster name with a different node name on each.)
RPM install:
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
wget -c https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.noarch.rpm
rpm -ivh elasticsearch-1.7.2.noarch.rpm
tar install:
wget -c https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.tar.gz
tar zxvf elasticsearch-1.7.2.tar.gz -C /usr/local/
cd /usr/local/elasticsearch-1.7.2/
mkdir -p /data/{db,logs}
vim config/elasticsearch.yml
#cluster.name: elasticsearch
#node.name: "es-node1"
#node.master: true
#node.data: true
path.data: /data/db
path.logs: /data/logs
network.host: 192.168.28.131

#Plugin installation
cd /usr/local/elasticsearch-1.7.2/
bin/plugin -install mobz/elasticsearch-head
#https://github.com/mobz/elasticsearch-head
bin/plugin -install lukas-vlcek/bigdesk
bin/plugin install lmenezes/elasticsearch-kopf
#If this complains that the version is too low, the workaround is to download the plugin manually instead of using the plugin install command:
cd /usr/local/elasticsearch-1.7.2/plugins
wget https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip
unzip master.zip
mv elasticsearch-kopf-master kopf
#The steps above are exactly equivalent to the plugin install command.
cd /usr/local/
chown elk:elk elasticsearch-1.7.2/ -R
chown elk:elk /data/* -R

supervisord installation:
yum install supervisor -y
#Append the following elasticsearch program section at the end of the file:
vim /etc/supervisord.conf
[program:elasticsearch]
directory = /usr/local/elasticsearch-1.7.2/
;command = su -c "/usr/local/elasticsearch-1.7.2/bin/elasticsearch" elk
command = /usr/local/elasticsearch-1.7.2/bin/elasticsearch
numprocs = 1
autostart = true
startsecs = 5
autorestart = true
startretries = 3
user = elk
;stdout_logfile_maxbytes = 200MB
;stdout_logfile_backups = 20
;stdout_logfile = /var/log/pvs_elasticsearch_stdout.log
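Once Elasticsearch is running (it is started further below, either under supervisord or manually as the elk user), a quick sanity check is to query the node over HTTP. A minimal sketch, assuming the supervisor package provides the usual CentOS 6 init script named supervisord and that Elasticsearch listens on 192.168.28.131:9200 as configured above:

#start supervisord and check the elasticsearch program it manages
service supervisord start && chkconfig supervisord on
supervisorctl status elasticsearch

#confirm the node answers and check cluster health
curl http://192.168.28.131:9200/
curl http://192.168.28.131:9200/_cluster/health?pretty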
#Kibana (pay attention to matching versions)
https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
tar zxvf kibana-4.1.2-linux-x64.tar.gz -C /usr/local/
cd /usr/local/kibana-4.1.2-linux-x64
vim config/kibana.yml
port: 5601
host: "192.168.28.131"
elasticsearch_url: "http://192.168.28.131:9200"
./bin/kibana -l /var/log/kibana.log    #start the service; from Kibana 4.0 on it starts as a socket service
#cd /etc/init.d && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
#cd /etc/default && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default
#Edit the relevant settings and add execute permission, or do it as follows:
cat >> /etc/init.d/kibana <<EOF
        "$KIBANA_LOG" 2>&1 &
        sleep 2
        pidofproc node > $PID_FILE
        RETVAL=$?
        [[ $? -eq 0 ]] && success || failure
        echo
        [ $RETVAL = 0 ] && touch $LOCK_FILE
        return $RETVAL
    fi
}

reload() {
    echo "Reload command is not implemented for this service."
    return $RETVAL
}

stop() {
    echo -n "Stopping $DESC : "
    killproc -p $PID_FILE $DAEMON
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status -p $PID_FILE $DAEMON
    RETVAL=$?
    ;;
  restart)
    stop
    start
    ;;
  reload)
    reload
    ;;
  *)
    # Invalid Arguments, print the following message.
    echo "Usage: $0 {start|stop|status|restart}" >&2
    exit 2
    ;;
esac
EOF
chmod +x kibana
mv kibana /etc/init.d/

#Nginx
yum install nginx -y
vim /etc/nginx/conf.d/elk.conf
server {
    server_name elk.sudo.com;
    auth_basic "Restricted Access";
    auth_basic_user_file passwd;
    location / {
        proxy_pass http://192.168.28.131:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
#htpasswd setup:
yum install httpd-tools -y
echo -n 'sudo:' >> /etc/nginx/passwd                 #add the user
openssl passwd elk.sudo.com >> /etc/nginx/passwd     #add the password
cat /etc/nginx/passwd                                #check the result
chkconfig nginx on && service nginx start

#Logstash--Setup
rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch
vi /etc/yum.repos.d/logstash.repo
[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
yum install logstash -y

#Create the SSL certificate (generated on the logstash server). There are two ways to create it: bind it to an IP address or to an FQDN (DNS name). Either one will do.
#1. IP address: set the parameter below under the [ v3_ca ] section; 192.168.28.131 is the address of the logstash server.
vi /etc/pki/tls/openssl.cnf
subjectAltName = IP: 192.168.28.131
cd /etc/pki/tls
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
#Set -days to a large value so the certificate does not expire too soon.
#2. FQDN
#No changes to the openssl.cnf file are needed.
cd /etc/pki/tls
openssl req -subj '/CN=logstash.sudo.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
#logstash.sudo.com is a domain name I use only for my own testing, so there is no need to add an A record for logstash.sudo.com.

#Logstash-Config
#Add the GeoIP data source:
#wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
#gzip -d GeoLiteCity.dat.gz && mv GeoLiteCity.dat /etc/logstash/.
Logstash configuration files set their parameters in JSON-style syntax and live in the /etc/logstash/conf.d directory. A configuration consists of three parts: input, filter, and output.
First, create 01-lumberjack-input.conf to set up the lumberjack input, the protocol used by Logstash-Forwarder.
vi /etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Then create 11-nginx.conf to filter nginx logs.
vi /etc/logstash/conf.d/11-nginx.conf
filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:size}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
#    geoip {
#      source => "clientip"
#      add_tag => [ "geoip" ]
#      fields => ["country_name", "country_code2", "region_name", "city_name", "real_region_name", "latitude", "longitude"]
#      remove_field => [ "[geoip][longitude]", "[geoip][latitude]" ]
#    }
  }
}
This filter looks for logs tagged with type "nginx" (as defined on the Logstash-forwarder side) and tries to parse the incoming nginx log lines with "grok", making them structured and queryable. The type must match what logstash-forwarder sends.
Also pay attention to the nginx log format; the default log_format is used here.
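For illustration only, here is a made-up access log line in the default nginx log_format main (it is not taken from this article) together with the main fields the grok pattern above would extract from it:

192.168.28.1 - - [10/Oct/2016:13:55:36 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0" "-"

clientip => 192.168.28.1          remote_user => -
timestamp => 10/Oct/2016:13:55:36 +0800
method => GET                     request => /index.html
proto => HTTP                     httpversion => 1.1
status => 200                     size => 612
referrer => "-"                   agent => "Mozilla/5.0"
xforwardedfor => "-"

If a field comes out empty or the whole line ends up tagged _grokparsefailure, the pattern and the actual log format do not agree.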
#When nginx acts as a load-balancing reverse proxy, the format can be changed to the following:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $upstream_response_time $request_time $body_bytes_sent '
                '"$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$request_body" '
                '$scheme $upstream_addr';
If the log format is different, the grok match rule has to be rewritten. Patterns can be debugged with the online tool at http://grokdebug.herokuapp.com/. In most cases where ELK shows no data, the mistake is here.
#Grok Debug -- http://grokdebug.herokuapp.com/
If grok does not match the log, do not move on; keep testing until the match succeeds. The grok pattern reference at http://grokdebug.herokuapp.com/patterns# is very helpful when writing match rules later.
Finally, create a file to define the output.
vi /etc/logstash/conf.d/99-lumberjack-output.conf
output {
  if "_grokparsefailure" in [tags] {
    file {
      path => "/var/log/logstash/grokparsefailure-%{type}-%{+YYYY.MM.dd}.log"
    }
  }
  elasticsearch {
    host => "192.168.28.131"
    protocol => "http"
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
    workers => 5
    template_overwrite => true
  }
  #stdout { codec => rubydebug }
}
This stores the structured logs in elasticsearch and writes logs that do not match grok to a file. Note that filter files added later should be named so they sort between 01 and 99, because logstash reads its configuration files in order.
While debugging, do not store logs in elasticsearch yet; print them to stdout instead so problems are easier to track down. Also read the logs often: many errors show up there, which makes it easy to locate where things go wrong.
Before starting the logstash service, it is best to test the configuration files:
# /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/*
Configuration OK
A single file can also be tested by name. Keep fixing until it reports OK; otherwise the logstash service will not start. Finally, start the logstash service.

#logstash-forwarder
The public certificate (logstash-forwarder.crt) created when installing logstash must be copied to every logstash-forwarder server (every server whose logs are to be monitored).
wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
rpm -ivh logstash-forwarder-0.4.0-1.x86_64.rpm
vi /etc/logstash-forwarder.conf
{
  "network": {
    "servers": [ "192.168.28.131:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 30
  },
  "files": [
    {
      "paths": [ "/var/log/nginx/*-access.log" ],
      "fields": { "type": "nginx" }
    }
  ]
}
The configuration file is JSON; if the format is wrong, the logstash-forwarder service will not start. After that, start the logstash-forwarder service.

echo -e "192.168.28.131 Test1\n192.168.28.130 Test2\n192.168.28.138 Test3" >> /etc/hosts    #without these entries elasticsearch fails to start with an error (it cannot resolve Test*)
su - elk
cd /usr/local/elasticsearch-1.7.2
nohup ./bin/elasticsearch &    (this can also be managed by supervisord and started at boot together with the other services)
On the elk server:
service logstash restart
service kibana restart
Visit http://elk.sudo.com:9200/ to check whether the startup succeeded.
On the client:
service nginx start && service logstash-forwarder start

#Using redis to store logs (as a queue): create the corresponding configuration files
vi /etc/logstash/conf.d/redis-input.conf
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:size}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
    #test
  }
}
output {
  #### push the received logs into the redis message queue ####
  redis {
    host => "127.0.0.1"
    port => 6379
    data_type => "list"
    key => "logstash:redis"
  }
}

vi /etc/logstash/conf.d/redis-output.conf
input {
  # read from redis
  redis {
    data_type => "list"
    key => "logstash:redis"
    host => "192.168.28.131"    #redis-server
    port => 6379
    #threads => 5
  }
}
output {
  elasticsearch {
    host => "192.168.28.131"
    protocol => "http"
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
    workers => 36
    template_overwrite => true
  }
  #stdout { codec => rubydebug }
}
# /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/*
Configuration OK
Log in to redis and query it: you can see that the corresponding key-value information of the logs has already been written.
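A minimal sketch of that check with redis-cli, assuming it is run against the redis host and using the list key logstash:redis from the configuration above (the prompt lines are illustrative):

redis-cli -h 192.168.28.131 -p 6379
192.168.28.131:6379> llen logstash:redis         #number of queued log events
192.168.28.131:6379> lrange logstash:redis 0 0   #peek at the first queued event: a JSON document containing the grok-extracted key-value fields

Once the redis-output.conf pipeline drains the list into Elasticsearch, the same key-value fields become searchable in Kibana.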
That is all of the content of this article on "How ELK writes the corresponding key-value information of logs". Thank you for reading! I hope you got a lot out of it. New articles on different topics are published every day; if you would like to learn more, please follow the Yisu Cloud industry news channel.