This post explains how to import nginx logs into elasticsearch. The method is simple and practical; let's walk through it.
The nginx logs are collected by filebeat and shipped to logstash; logstash then writes them into elasticsearch. filebeat is responsible only for collection, while logstash handles formatting the log, replacing and splitting fields, and creating the elasticsearch index the events are written into.
1. Configure the nginx log format
```nginx
log_format main '$remote_addr $http_x_forwarded_for [$time_local] $server_name $request '
                '$status $body_bytes_sent $http_referer '
                '"$http_user_agent" '
                '"$connection" '
                '"$http_cookie" '
                '$request_time '
                '$upstream_response_time';
```
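For reference, an access log entry produced by this format looks like the following (all values hypothetical):

```
192.168.1.10 1.2.3.4, 5.6.7.8 [20/May/2015:21:05:56 +0000] blog.cnfol.com GET /index.html HTTP/1.1 200 2326 http://blog.cnfol.com/ "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" "1234" "uid=42; sid=abc" 0.005 0.003
```

This is the line shape the grok pattern in the logstash pipeline has to match; note that `$http_x_forwarded_for` can hold a comma-separated list of IPs when the request passes through a CDN.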
2. Install and configure filebeat, and enable its nginx module
```shell
tar -zxvf filebeat-6.2.4-linux-x86_64.tar.gz -C /usr/local
cd /usr/local; ln -s filebeat-6.2.4-linux-x86_64 filebeat
cd /usr/local/filebeat
```
Enable the nginx module:

```shell
./filebeat modules enable nginx
```
List the available modules:

```shell
./filebeat modules list
```
Create the configuration file:

```yaml
# /usr/local/filebeat/blog_module_logstash.yml
filebeat.modules:
- module: nginx
  access:
    enabled: true
    var.paths: ["/home/weblog/blog.cnfol.com_access.log"]
  #error:
  #  enabled: true
  #  var.paths: ["/home/weblogerr/blog.cnfol.com_error.log"]
output.logstash:
  hosts: ["192.168.15.91:5044"]
```
Start filebeat:

```shell
./filebeat -c blog_module_logstash.yml -e
```
3. Configure logstash

```shell
tar -zxvf logstash-6.2.4.tar.gz -C /usr/local
cd /usr/local; ln -s logstash-6.2.4 logstash
cd /usr/local/logstash
```

Next, create a pipeline file for the nginx logs.
logstash's built-in pattern directory:

```
vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns
```
Edit grok-patterns and add a pattern that matches one or more IPs:

```
forword (?:%{IPV4}[,]?[ ]?)+|%{WORD}
```
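To see what this custom pattern accepts, it can be translated into plain Python `re`; the `IPV4` and `WORD` sub-patterns below are simplified stand-ins for grok's stricter built-ins:

```python
import re

# Simplified stand-ins for grok's built-in %{IPV4} and %{WORD}
# (the real IPV4 pattern additionally rejects octets above 255).
IPV4 = r"(?:\d{1,3}\.){3}\d{1,3}"
WORD = r"\b\w+\b"

# Equivalent of the custom "forword" pattern: one or more
# comma-separated IPv4 addresses, or a single bare word.
FORWORD = rf"(?:{IPV4}[,]?[ ]?)+|{WORD}"

print(re.fullmatch(FORWORD, "1.2.3.4") is not None)           # True
print(re.fullmatch(FORWORD, "1.2.3.4, 5.6.7.8") is not None)  # True  (CDN-style list)
print(re.fullmatch(FORWORD, "unknown") is not None)           # True  (word fallback)
```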
Official grok reference: #
Create the logstash pipeline configuration file:
```
# test_pipline2.conf
#input { stdin {} }
# Receive events from filebeat
input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
filter {
  # Add a debug switch
  mutate { add_field => { "[@metadata][debug]" => true } }
  grok {
    # Parse the nginx access log (earlier pattern drafts left commented out)
    #match => { "message" => "%{nginxaccess_test2}" }
    #match => { "message" => '%{IPORHOST:clientip} (?<http_x_forwarded_for>[^\#]*) \[%{HTTPDATE:[@metadata][webtime]}\] %{NOTSPACE:hostname} %{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion} %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{NOTSPACE:referrer}|-)"|%{NOTSPACE:referrer}|-) (?:"(?<http_user_agent>[^#]*)") (?:"(?:%{NUMBER:connection}|-)"|%{NUMBER:connection}|-) (?:"(?<cookies>[^#]*)") %{NUMBER:request_time:float} (?:%{NUMBER:upstream_response_time:float}|-)' }
    #match => { "message" => '(?:%{IPORHOST:clientip}|-) (?:%{two_ip:http_x_forwarded_for}|%{IPV4:http_x_forwarded_for}|-) \[%{HTTPDATE:[@metadata][webtime]}\] (?:%{HOSTNAME:hostname}|-) %{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion} %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{NOTSPACE:referrer}|-)"|%{NOTSPACE:referrer}|-) %{QS:agent} (?:"(?:%{NUMBER:connection}|-)"|%{NUMBER:connection}|-) (?:"(?<cookies>[^#]*)") %{NUMBER:request_time:float} (?:%{NUMBER:upstream_response_time:float}|-)' }
    match => { "message" => '(?:%{IPORHOST:clientip}|-) %{forword:http_x_forwarded_for} \[%{HTTPDATE:[@metadata][webtime]}\] (?:%{HOSTNAME:hostname}|-) %{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion} %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{NOTSPACE:referrer}|-)"|%{NOTSPACE:referrer}|-) %{QS:agent} (?:"(?:%{NUMBER:connection}|-)"|%{NUMBER:connection}|-) %{QS:cookie} %{NUMBER:request_time:float} (?:%{NUMBER:upstream_response_time:float}|-)' }
  }
  # Copy the default @timestamp (when beats collected the event) into a new
  # field @read_timestamp, shifted to UTC+8
  ruby {
    #code => "event.set('@read_timestamp',event.get('@timestamp'))"
    code => "event.set('@read_timestamp',event.get('@timestamp').time.localtime + 8*60*60)"
  }
  # Parse the nginx log timestamp, e.g. 20/May/2015:21:05:56 +0000
  date {
    locale => "en"
    match => ["[@metadata][webtime]","dd/MMM/yyyy:HH:mm:ss Z"]
  }
  # Convert bytes from string to integer
  mutate { convert => { "bytes" => "integer" } }
  # Parse the cookie field into JSON (disabled)
  #mutate {
  #  gsub => ["cookies",'\;',',']
  #}
  # Behind a CDN, http_x_forwarded_for carries several IPs; the first one is
  # the client's real IP
  if [http_x_forwarded_for] =~ ", " {
    ruby {
      code => 'event.set("http_x_forwarded_for", event.get("http_x_forwarded_for").split(",")[0])'
    }
  }
  # Resolve the IP to a geographic location, keeping only the coordinates,
  # country, city and region
  geoip {
    source => "http_x_forwarded_for"
    fields => ["location","country_name","city_name","region_name"]
  }
  # Parse the agent field into browser, OS version, etc.
  useragent {
    source => "agent"
    target => "useragent"
  }
  # Fields to drop
  #mutate { remove_field => ["message"] }
  # Derive the index-name prefix from the log file name
  ruby {
    code => 'event.set("[@metadata][index_pre]",event.get("source").split("/")[-1])'
  }
  # Format @timestamp as e.g. 2019.04.23
  ruby {
    code => 'event.set("[@metadata][index_day]",event.get("@timestamp").time.localtime.strftime("%Y.%m.%d"))'
  }
  # Set the default output index name
  mutate {
    add_field => {
      #"[@metadata][index]" => "%{[@metadata][index_pre]}_%{+YYYY.MM.dd}"
      "[@metadata][index]" => "%{[@metadata][index_pre]}_%{[@metadata][index_day]}"
    }
  }
  # Parse the cookies field into JSON (disabled)
  # mutate {
  #   gsub => [
  #     "cookies", ";", ",",
  #     "cookies", "=", ":"
  #   ]
  #   #split => { "cookies" => "," }
  # }
  # json_encode { source => "cookies" target => "cookies_json" }
  # mutate {
  #   gsub => [
  #     "cookies_json", ',', '","',
  #     "cookies_json", ':', '":"'
  #   ]
  # }
  # json { source => "cookies_json" target => "cookies2" }
  # If grok parsing failed, route the event to a separate failure index;
  # otherwise drop the raw message field
  if "_grokparsefailure" in [tags] {
  #if "_dateparsefailure" in [tags] {
    mutate {
      replace => {
        #"[@metadata][index]" => "%{[@metadata][index_pre]}_failure_%{+YYYY.MM.dd}"
        "[@metadata][index]" => "%{[@metadata][index_pre]}_failure_%{[@metadata][index_day]}"
      }
    }
  } else {
    mutate { remove_field => ["message"] }
  }
}
output {
  if [@metadata][debug] {
    # Print to stdout with rubydebug, including the metadata
    stdout { codec => rubydebug { metadata => true } }
  } else {
    # Print a "." per event
    stdout { codec => dots }
    # Write to the specified elasticsearch index
    elasticsearch {
      hosts => ["192.168.15.160:9200"]
      index => "%{[@metadata][index]}"
      document_type => "doc"
    }
  }
}
```

Note that grok pattern names are case-sensitive (`%{IPORHOST}`, `%{NUMBER}`, …), fields under `@metadata` are referenced as `[@metadata][field]`, and the date filter's `dd/MMM/yyyy:HH:mm:ss Z` and ruby's `%Y.%m.%d` must use these exact casings to parse and format correctly.
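The two ruby filters in the pipeline (picking the first X-Forwarded-For IP, and building the index name from the log file name plus the day) can be sketched in Python to show what they compute. The file path and timestamp below are hypothetical, and the sketch assumes the server clock is effectively UTC+8, as the pipeline's timezone shift suggests:

```python
from datetime import datetime, timezone, timedelta

def first_forwarded_ip(xff: str) -> str:
    # Behind a CDN the header holds "client, proxy1, proxy2";
    # the first entry is the real client IP.
    return xff.split(",")[0].strip()

def index_name(source_path: str, ts_utc: datetime) -> str:
    # Index prefix = log file name; day = event time shifted to UTC+8,
    # formatted as YYYY.MM.dd, mirroring strftime("%Y.%m.%d").
    index_pre = source_path.split("/")[-1]
    local = ts_utc.astimezone(timezone(timedelta(hours=8)))
    return f"{index_pre}_{local.strftime('%Y.%m.%d')}"

print(first_forwarded_ip("1.2.3.4, 5.6.7.8"))  # 1.2.3.4
print(index_name("/home/weblog/blog.cnfol.com_access.log",
                 datetime(2019, 4, 23, 20, 0, tzinfo=timezone.utc)))
# blog.cnfol.com_access.log_2019.04.24
```

The failure branch works the same way, only with `_failure_` spliced between the prefix and the day.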
Start logstash:

```shell
nohup bin/logstash -f test_pipline2.conf &
```