

Deploying an ELKB 5.2.2 Cluster



ELKB 5.2.2 cluster deployment

I have worked with ELK 1.4, 2.0, 2.4, 5.0, and 5.2 over time. The earlier versions never left much of an impression; only with 5.2 did things start to click, which shows how slow real understanding can be. This document tries to be detailed, even though I have come to prefer terse write-ups. On to the content (note: the configuration below is the pre-optimization configuration; it is fine for normal use, but needs tuning under heavy load — see the optimization section at the end).

Notes:

This is a major version upgrade with many changes. The main deployment changes:

1. filebeat now outputs directly to kafka, dropping unnecessary fields such as the beat.* ones
2. elasticsearch cluster layout reworked: 3 master nodes and 6 data nodes
3. the logstash filter adds urldecode so the url, referrer, and agent fields display Chinese correctly
4. the logstash filter adds geoip to resolve client IPs to region and city
5. logstash mutate rewrites strings and removes unnecessary fields such as the kafka-related ones
6. the elasticsearch-head plugin now needs a separate node.js deployment and can no longer be installed into elasticsearch itself
7. nginx logs gain the request parameters and the request method

I. Architecture

Candidate architectures:

filebeat--elasticsearch--kibana

filebeat--logstash--kafka--logstash--elasticsearch--kibana

filebeat--kafka--logstash--elasticsearch--kibana

Since filebeat 5.2.2 supports many outputs (logstash, elasticsearch, kafka, redis, syslog, file, etc.), the following layout was chosen for resource efficiency and high-concurrency capacity:

filebeat(18)--kafka(3)--logstash(3)--elasticsearch(6)--kibana(3)--nginx load balancer

In total: 3 physical machines and 12 VMs, all running CentOS 6.8, laid out as follows:

Server 1 (192.168.188.186)
    kafka1           32G RAM    700G disk    4 CPU
    logstash         8G RAM     100G disk    4 CPU
    elasticsearch2   40G RAM    1.4T disk    8 CPU
    elasticsearch3   40G RAM    1.4T disk    8 CPU
Server 2 (192.168.188.187)
    kafka2           32G RAM    700G disk    4 CPU
    logstash         8G RAM     100G disk    4 CPU
    elasticsearch4   40G RAM    1.4T disk    8 CPU
    elasticsearch5   40G RAM    1.4T disk    8 CPU
Server 3 (192.168.188.188)
    kafka3           32G RAM    700G disk    4 CPU
    logstash         8G RAM     100G disk    4 CPU
    elasticsearch6   40G RAM    1.4T disk    8 CPU
    elasticsearch7   40G RAM    1.4T disk    8 CPU
Disk partitioning
    logstash (100G):        8G swap, 200M /boot, rest /
    kafka (700G):           8G swap, 200M /boot, 30G /, rest /data
    elasticsearch (1.4T):   8G swap, 200M /boot, 30G /, rest /data
IP allocation
    elasticsearch2-7    192.168.188.191-196
    kibana1-3           192.168.188.191/193/195
    kafka1-3            192.168.188.237-239
    logstash1-3         192.168.188.197/198/240

II. Environment preparation

yum -y remove java-1.6.0-openjdk
yum -y remove java-1.7.0-openjdk
yum -y remove perl-*
yum -y remove sssd-*
yum -y install  java-1.8.0-openjdk
java -version
yum update
reboot

Set up /etc/hosts (kafka needs it):

cat /etc/hosts

192.168.188.191   ES191 (master and data)
192.168.188.192   ES192 (data)
192.168.188.193   ES193 (master and data)
192.168.188.194   ES194 (data)
192.168.188.195   ES195 (master and data)
192.168.188.196   ES196 (data)
192.168.188.237   kafka237
192.168.188.238   kafka238
192.168.188.239   kafka239
192.168.188.197   logstash297
192.168.188.198   logstash298
192.168.188.240   logstash340

III. Deploying the elasticsearch cluster

mkdir /data/esnginx

mkdir /data/eslog

rpm -ivh /srv/elasticsearch-5.2.2.rpm 

chkconfig --add elasticsearch

chkconfig postfix off

rpm -ivh /srv/kibana-5.2.2-x86_64.rpm 

chown  elasticsearch:elasticsearch /data/eslog -R

chown  elasticsearch:elasticsearch /data/esnginx -R

Configuration file (3 master + 6 data):

[root@ES191 elasticsearch]# cat elasticsearch.yml|grep -Ev '^#|^$'

cluster.name: nginxlog
node.name: ES191
node.master: true
node.data: true
node.attr.rack: r1
path.data: /data/esnginx
path.logs: /data/eslog
bootstrap.memory_lock: true
network.host: 192.168.188.191
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.188.191","192.168.188.192","192.168.188.193","192.168.188.194","192.168.188.195","192.168.188.196"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 5
gateway.recover_after_time: 5m
gateway.expected_nodes: 6
cluster.routing.allocation.same_shard.host: true 
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
indices.recovery.max_bytes_per_sec: 30mb
http.cors.enabled: true
http.cors.allow-origin: "*"
bootstrap.system_call_filter: false   # needed on kernels without seccomp (older than 3.5, e.g. CentOS 6); not needed on CentOS 7's 3.10 kernel
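
With three master-eligible nodes (ES191/ES193/ES195), discovery.zen.minimum_master_nodes follows the usual quorum rule to guard against split-brain:

minimum_master_nodes = floor(master_eligible / 2) + 1 = floor(3 / 2) + 1 = 2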

Pay particular attention to:

/etc/security/limits.conf
elasticsearch  soft  memlock  unlimited
elasticsearch  hard  memlock  unlimited
elasticsearch  soft  nofile   65536
elasticsearch  hard  nofile   131072
elasticsearch  soft  nproc    2048
elasticsearch  hard  nproc    4096
/etc/elasticsearch/jvm.options
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms20g
-Xmx20g

Start the cluster

service elasticsearch start

Health check

http://192.168.188.191:9200/_cluster/health?pretty=true
{
  "cluster_name" : "nginxlog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 6,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
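
Since bootstrap.memory_lock is enabled, it is worth confirming the lock actually took effect; a quick check (any node's address works):

curl 'http://192.168.188.191:9200/_nodes?filter_path=**.mlockall&pretty'

Every node should report "mlockall" : true; false usually means the limits.conf entries above were not applied before startup.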

The elasticsearch-head plugin

http://192.168.188.215:9100/

Point it at any node, e.g. 192.168.188.191:9200 above.
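
In 5.x, head can no longer be installed as a site plugin and runs as a standalone Node.js app instead; a minimal install sketch, assuming node.js and npm are already present on 192.168.188.215:

git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start    # serves on port 9100

The http.cors.* lines in elasticsearch.yml above are what allow head to query the cluster.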

Shard settings

The official advice is to set these when the index is created:

curl -XPUT 'http://192.168.188.193:9200/_all/_settings?preserve_existing=true' -d '{

  "index.number_of_replicas" : "1",

  "index.number_of_shards" : "6"

}'

This did not take effect; it later turned out the shard settings can be specified when the template is created. For now the defaults of 1 replica and 5 shards remain.
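
For reference, baking the shard count into the template instead would just add two keys to the settings block of the nginx_template file shown in Part V; a sketch keeping the current defaults:

  "settings" : {
    "index.refresh_interval" : "10s",
    "index.number_of_shards" : "5",
    "index.number_of_replicas" : "1"
  },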

Other errors (for reference only; the optimization section has the fix)

bootstrap.system_call_filter: false   # for "system call filters failed to install"

See https://www.elastic.co/guide/en/elasticsearch/reference/current/system-call-filter-check.html

[WARN ][o.e.b.JNANatives ] unable to install syscall filter: 

java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in

IV. Deploying the kafka cluster

1. zookeeper cluster

wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
tar zxvf zookeeper-3.4.10.tar.gz -C /usr/local/
ln -s /usr/local/zookeeper-3.4.10/ /usr/local/zookeeper
mkdir -p /data/zookeeper/data/
vim /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/data/zookeeper/data
clientPort=2181
server.1=192.168.188.237:2888:3888
server.2=192.168.188.238:2888:3888
server.3=192.168.188.239:2888:3888
vim /data/zookeeper/data/myid
1
/usr/local/zookeeper/bin/zkServer.sh start
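
myid must be unique per node and match the server.N lines above: 1 on kafka237, 2 on kafka238, 3 on kafka239. Once all three nodes are up, each can be verified with:

/usr/local/zookeeper/bin/zkServer.sh status    # one node reports "leader", the other two "follower"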

2. kafka cluster

wget http://mirrors.hust.edu.cn/apache/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz

tar zxvf kafka_2.11-0.10.0.1.tgz -C /usr/local/

ln -s /usr/local/kafka_2.11-0.10.0.1 /usr/local/kafka

A diff of server.properties and zookeeper.properties against the previous version shows little change; they can mostly be reused as-is.

vim /usr/local/kafka/config/server.properties

broker.id=237
port=9092
host.name=192.168.188.237
num.network.threads=4
 
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafkalog
num.partitions=3
num.recovery.threads.per.data.dir=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
zookeeper.connection.timeout.ms=6000
producer.type=async
broker.list=192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092

mkdir /data/kafkalog

Adjust the heap size

vim /usr/local/kafka/bin/kafka-server-start.sh

    export KAFKA_HEAP_OPTS="-Xmx16G -Xms16G"

Start kafka

/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

Create the front-end topics, one per nginx group:

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx1-168 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx2-178 --replication-factor 1 --partitions 3 --zookeeper  192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx3-188 --replication-factor 1 --partitions 3 --zookeeper  192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

Check the topics:

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper  192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

ngx1-168

ngx2-178

ngx3-188
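
Partition and leader placement can be inspected per topic as well:

/usr/local/kafka/bin/kafka-topics.sh --describe --topic ngx1-168 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181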

3. Start on boot

cat /etc/rc.local

/usr/local/zookeeper/bin/zkServer.sh start

/usr/local/kafka/bin/kafka-server-start.sh  -daemon /usr/local/kafka/config/server.properties 

Note: if startup lives in rc.local and java was not installed as the yum openjdk-1.8.0 package, JAVA_HOME must be set there explicitly, otherwise the java environment is missing and the zookeeper and kafka services fail to start: /etc/profile, where the java environment is normally configured, is only sourced after rc.local has already run. A sketch follows.
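
A sketch of such an rc.local (the JDK path is hypothetical; use wherever the JDK was actually unpacked):

export JAVA_HOME=/usr/local/jdk1.8.0_121   # hypothetical path to a manually installed JDK
export PATH=$JAVA_HOME/bin:$PATH
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties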

V. Deploying and configuring logstash

Installation

rpm -ivh logstash-5.2.2.rpm

mkdir /usr/share/logstash/config

# 1. Copy the configuration files into the logstash home

cp -r /etc/logstash/* /usr/share/logstash/config

# 2. Configure the config path

vim /usr/share/logstash/config/logstash.yml

Before:

path.config: /etc/logstash/conf.d

After:

path.config: /usr/share/logstash/config/conf.d

# 3. Edit startup.options

Before:

LS_SETTINGS_DIR=/etc/logstash

After:

LS_SETTINGS_DIR=/usr/share/logstash/config

Changes to startup.options only take effect after running /usr/share/logstash/bin/system-install.

Configuration

On the consumer side, each of the three logstash instances handles one topic:

in-kafka-ngx1-out-es.conf

in-kafka-ngx2-out-es.conf

in-kafka-ngx3-out-es.conf

[root@logstash297 conf.d]# cat in-kafka-ngx1-out-es.conf 

input {    
  kafka {    
  bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"    
  group_id => "ngx1"
  topics => ["ngx1-168"]
  codec => "json"
  consumer_threads => 3  
  decorate_events => true
  }    
} 
filter {
  mutate {
    gsub => ["message", "\\x", "%"]
    remove_field => ["kafka"]
  }
  json {
    source => "message"
    remove_field => ["message"]
  }
  geoip {
    source => "clientRealIp"
  }
  urldecode {
    all_fields => true
  }
}
output {    
  elasticsearch {
    hosts => ["192.168.188.191:9200","192.168.188.192:9200","192.168.188.193:9200","192.168.188.194:9200","192.168.188.195:9200","192.168.188.196:9200"]
    index => "filebeat-%{type}-%{+YYYY.MM.dd}"
    manage_template => true
    template_overwrite => true
    template_name => "nginx_template"
    template => "/usr/share/logstash/templates/nginx_template"
    flush_size => 50000
    idle_flush_time => 10
  }
}
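
The pipeline definition can be syntax-checked before starting (-t is short for --config.test_and_exit):

/usr/share/logstash/bin/logstash -t -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf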

The nginx template

[root@logstash297 logstash]# cat /usr/share/logstash/templates/nginx_template 

{
  "template" : "filebeat-*",
  "settings" : {
    "index.refresh_interval" : "10s"
  },
  "mappings" : {
    "_default_" : {
       "_all" : {"enabled" : true, "omit_norms" : true},
       "dynamic_templates" : [ 
       {
         "string_fields" : {
           "match_pattern": "regex",
           "match" : "(agent)|(status)|(url)|(clientRealIp)|(referrer)|(upstreamhost)|(http_host)|(request)|(request_method)|(upstreamstatus)",
           "match_mapping_type" : "string",
           "mapping" : {
             "type" : "string", "index" : "analyzed", "omit_norms" : true,
               "fields" : {
                 "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 512}
               }
           }
         }
       } ],
       "properties": {
         "@version": { "type": "string", "index": "not_analyzed" },
         "geoip"  : {
           "type": "object",
             "dynamic": true,
             "properties": {
               "location": { "type": "geo_point" }
             }
         }
       }
    }
  }
}

Start

/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf  &

logstash is left to start on boot by default.

Reference

/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.5/DEVELOPER.md

Error handling

[2017-05-08T12:24:30,388][ERROR][logstash.inputs.kafka    ] Unknown setting 'zk_connect' for kafka

[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka    ] Unknown setting 'topic_id' for kafka

[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka    ] Unknown setting 'reset_beginning' for kafka

[2017-05-08T12:24:30,395][ERROR][logstash.agent           ] Cannot load an invalid configuration {:reason=>"Something is wrong with your configuration."}

zk_connect, topic_id, and reset_beginning are options from the logstash 2.x kafka input that were removed in the 5.x plugin; the input above already uses their 5.x counterparts (bootstrap_servers, topics).

Verify via the logs

[root@logstash297 conf.d]# cat /var/log/logstash/logstash-plain.log 

[2017-05-09T10:43:20,832][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.188.191:9200/, http://192.168.188.192:9200/, http://192.168.188.193:9200/, http://192.168.188.194:9200/, http://192.168.188.195:9200/, http://192.168.188.196:9200/]}}
[2017-05-09T10:43:20,838][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.191:9200/, :path=>"/"}
[2017-05-09T10:43:20,919][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x59d1baad URL:http://192.168.188.191:9200/>}
[2017-05-09T10:43:20,920][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.192:9200/, :path=>"/"}
[2017-05-09T10:43:20,922][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x39defbff URL:http://192.168.188.192:9200/>}
[2017-05-09T10:43:20,924][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.193:9200/, :path=>"/"}
[2017-05-09T10:43:20,927][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x6e2b7f40 URL:http://192.168.188.193:9200/>}
[2017-05-09T10:43:20,927][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.194:9200/, :path=>"/"}
[2017-05-09T10:43:20,929][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x208910a2 URL:http://192.168.188.194:9200/>}
[2017-05-09T10:43:20,930][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.195:9200/, :path=>"/"}
[2017-05-09T10:43:20,932][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x297a8bbd URL:http://192.168.188.195:9200/>}
[2017-05-09T10:43:20,933][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.196:9200/, :path=>"/"}
[2017-05-09T10:43:20,935][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x3ac661af URL:http://192.168.188.196:9200/>}
[2017-05-09T10:43:20,936][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/usr/share/logstash/templates/nginx_template"}
[2017-05-09T10:43:20,970][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"filebeat-*", "settings"=>{"index.refresh_interval"=>"10s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"string_fields"=>{"match_pattern"=>"regex", "match"=>"(agent)|(status)|(url)|(clientRealIp)|(referrer)|(upstreamhost)|(http_host)|(request)|(request_method)", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "ignore_above"=>512}}}}}]}}}}
[2017-05-09T10:43:20,974][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/nginx_template
[2017-05-09T10:43:21,009][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x65ed1af5 URL://192.168.188.191:9200>, #<URI::Generic:0x2d2a52a6 URL://192.168.188.192:9200>, #<URI::Generic:0x6e79e44b URL://192.168.188.193:9200>, #<URI::Generic:0x531436ae URL://192.168.188.194:9200>, #<URI::Generic:0x5e23a48b URL://192.168.188.195:9200>, #<URI::Generic:0x2163628b URL://192.168.188.196:9200>]}
[2017-05-09T10:43:21,010][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.4-java/vendor/GeoLite2-City.mmdb"}
[2017-05-09T10:43:21,022][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-05-09T10:43:21,037][INFO ][logstash.pipeline        ] Pipeline main started
[2017-05-09T10:43:21,086][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

VI. Deploying and configuring filebeat

Installation

rpm -ivh filebeat-5.2.2-x86_64.rpm

The nginx log format must be JSON:

  log_format access '{ "@timestamp": "$time_iso8601", '
                         '"clientRealIp": "$clientRealIp", '
                         '"size": $body_bytes_sent, '
                         '"request": "$request", '
                         '"method": "$request_method", '
                         '"responsetime": $request_time, '
                         '"upstreamhost": "$upstream_addr", '
                         '"http_host": "$host", '
                         '"url": "$uri", '
                         '"referrer": "$http_referer", '
                         '"agent": "$http_user_agent", '
                         '"status": "$status"} ';

Configure filebeat

vim /etc/filebeat/filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /data/wwwlogs/*.log
  document_type: ngx1-168
  tail_files: true
  json.keys_under_root: true
  json.add_error_key: true
output.kafka:
  enabled: true
  hosts: ["192.168.188.237:9092","192.168.188.238:9092","192.168.188.239:9092"]
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  worker: 3
processors:
- drop_fields:
    fields: ["input_type", "beat.hostname", "beat.name", "beat.version", "offset", "source"] 
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  rotateeverybytes: 10485760 # = 10MB
  keepfiles: 7
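
The config can be tested before starting; the 5.x packages support a -configtest flag (binary path per the RPM layout):

/usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml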

Detailed filebeat configuration is in the official docs:

https://www.elastic.co/guide/en/beats/filebeat/5.2/index.html

Using kafka as the log output:

https://www.elastic.co/guide/en/beats/filebeat/5.2/kafka-output.html

output.kafka:

  # initial brokers for reading cluster metadata

  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]

  # message topic selection + partitioning

  topic: '%{[type]}'

  partition.round_robin:

    reachable_only: false

  required_acks: 1

  compression: gzip

  max_message_bytes: 1000000

Start

chkconfig filebeat on

/etc/init.d/filebeat start

Error handling

[root@localhost ~]# tail -f /var/log/filebeat/filebeat

2017-05-09T15:21:39+08:00 ERR Error decoding JSON: invalid character 'x' in string escape code

$uri is useful when nginx modifies or rewrites the URL, but for log output $request_uri can be used instead; unless business logic depends on the rewritten value, it is a drop-in replacement (and avoids the \x escapes that broke the JSON decode above).
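
The corresponding one-line change in the log_format above:

    '"url": "$request_uri", '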

Reference

http://www.mamicode.com/info-detail-1368765.html

VII. Verification

1. Watch a kafka console consumer

/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic ngx1-168 

2. Check index and shard information in elasticsearch-head

VIII. Deploying and configuring kibana

1. Configure and start

cat /etc/kibana/kibana.yml

server.port: 5601

server.host: "192.168.188.191"

elasticsearch.url: "http://192.168.188.191:9200"

chkconfig --add kibana

/etc/init.d/kibana start

2. Field format

{
  "_index": "filebeat-ngx1-168-2017.05.10",
  "_type": "ngx1-168",
  "_id": "AVvvtIJVy6ssC9hG9dKY",
  "_score": null,
  "_source": {
    "request": "GET /qiche/奧迪A3/ HTTP/1.1",
    "agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36",
    "geoip": {
      "city_name": "Jinhua",
      "timezone": "Asia/Shanghai",
      "ip": "122.226.77.150",
      "latitude": 29.1068,
      "country_code2": "CN",
      "country_name": "China",
      "continent_code": "AS",
      "country_code3": "CN",
      "region_name": "Zhejiang",
      "location": [
        119.6442,
        29.1068
      ],
      "longitude": 119.6442,
      "region_code": "33"
    },
    "method": "GET",
    "type": "ngx1-168",
    "http_host": "www.niubi.com",
    "url": "/qiche/奧迪A3/",
    "referrer": "http://www.niubi.com/qiche/奧迪S6/",
    "upstreamhost": "172.17.4.205:80",
    "@timestamp": "2017-05-10T08:14:00.000Z",
    "size": 10027,
    "beat": {},
    "@version": "1",
    "responsetime": 0.217,
    "clientRealIp": "122.226.77.150",
    "status": "200"
  },
  "fields": {
    "@timestamp": [
      1494404040000
    ]
  },
  "sort": [
    1494404040000
  ]
}

3. Visualizations and dashboards

1) Add AutoNavi (Gaode) map tiles

Edit the kibana config file kibana.yml and append:

tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'

Adjust the ES template: geo-points are not handled by dynamic mapping, so they must be declared explicitly.

To map geoip.location as type geo_point, add an entry to the template's properties as shown:

       "properties": {

         "@version": { "type": "string", "index": "not_analyzed" },

         "geoip"  : {

           "type": "object",

             "dynamic": true,

             "properties": {

               "location": { "type": "geo_point" }

             }

         }

       }

4. Installing the x-pack plugin

References

https://www.elastic.co/guide/en/x-pack/5.2/installing-xpack.html#xpack-installing-offline

https://www.elastic.co/guide/en/x-pack/5.2/setting-up-authentication.html#built-in-users

Remember to change the default passwords:

http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/1.json

http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/2.json

http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/3.json

Or:

curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'

{

  "password": "elasticpassword"

}

'

curl -XPUT 'localhost:9200/_xpack/security/user/kibana/_password?pretty' -H 'Content-Type: application/json' -d'

{

  "password": "kibanapassword"

}

'

curl -XPUT 'localhost:9200/_xpack/security/user/logstash_system/_password?pretty' -H 'Content-Type: application/json' -d'

{

  "password": "logstashpassword"

}

'

Below is the official x-pack install/upgrade/removal documentation. In the end x-pack was not installed, since the registered (free) version only provides monitoring.

Installing X-Pack on Offline Machines
The plugin install scripts require direct Internet access to download and install X-Pack. If your server doesn’t have Internet access, you can manually download and install X-Pack.
To install X-Pack on a machine that doesn’t have Internet access:
Manually download the X-Pack zip file: https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.2.2.zip (sha1)
Transfer the zip file to a temporary directory on the offline machine. (Do NOT put the file in the Elasticsearch plugins directory.)
Run bin/elasticsearch-plugin install from the Elasticsearch install directory and specify the location of the X-Pack zip file. For example:
bin/elasticsearch-plugin install file:///path/to/file/x-pack-5.2.2.zip
Note
You must specify an absolute path to the zip file after the file:// protocol.
Run bin/kibana-plugin install from the Kibana install directory and specify the location of the X-Pack zip file. (The plugins for Elasticsearch, Kibana, and Logstash are included in the same zip file.) For example:
bin/kibana-plugin install file:///path/to/file/x-pack-5.2.2.zip
Run bin/logstash-plugin install from the Logstash install directory and specify the location of the X-Pack zip file. (The plugins for Elasticsearch, Kibana, and Logstash are included in the same zip file.) For example:
bin/logstash-plugin install file:///path/to/file/x-pack-5.2.2.zip
Enabling and Disabling X-Pack Features
By default, all X-Pack features are enabled. You can explicitly enable or disable X-Pack features in elasticsearch.yml and kibana.yml:
Setting                     Description
xpack.security.enabled      Set to false to disable X-Pack security. Configure in both elasticsearch.yml and kibana.yml.
xpack.monitoring.enabled    Set to false to disable X-Pack monitoring. Configure in both elasticsearch.yml and kibana.yml.
xpack.graph.enabled         Set to false to disable X-Pack graph. Configure in both elasticsearch.yml and kibana.yml.
xpack.watcher.enabled       Set to false to disable Watcher. Configure in elasticsearch.yml only.
xpack.reporting.enabled     Set to false to disable X-Pack reporting. Configure in kibana.yml only.

IX. Nginx load balancing

1. Configure the load balancer

[root@localhost ~]# cat /usr/local/nginx/conf/nginx.conf

server {
    listen      5601;
    server_name 192.168.188.215;
    index index.html index.htm index.shtml;
    location / {
        allow 192.168.188.0/24;
        deny all;
        proxy_pass http://kibanangx_niubi_com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        auth_basic "Please input Username and Password";
        auth_basic_user_file /usr/local/nginx/conf/.pass_file_elk;
    }
    access_log /data/wwwlogs/access_kibanangx.niubi.com.log access;
}
upstream kibanangx_niubi_com {
    ip_hash;
    server 192.168.188.191:5601;
    server 192.168.188.193:5601;
    server 192.168.188.195:5601;
}
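
The basic-auth file referenced above can be created with htpasswd from the httpd-tools package (user name and password here are placeholders):

htpasswd -bc /usr/local/nginx/conf/.pass_file_elk elkadmin 'S0mePassw0rd'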

2. Access

http://192.168.188.215:5601/app/kibana#

-------------------------------------------------------------------------------------------------

Optimization notes

ELKB 5.2 cluster optimization plan

I. Optimization results

Before

Log collection sustained 10,000 requests/s with latency within 10s (data refreshed at the default 10s).

After

Log collection sustains 30,000 requests/s with latency within 10s (data refreshed at the default 10s); estimated headroom up to 50,000 requests/s.

Drawback: CPU is the bottleneck; dashboards doing aggregations over large time ranges can time out while rendering. The elasticsearch schema and search syntax also leave room for further optimization.

II. Optimization steps

1. Re-plan memory and CPU

1) es         16 CPU, 48G RAM

2) kafka       8 CPU, 16G RAM

3) logstash   16 CPU, 12G RAM

2. kafka optimization

Watch consumer lag in kafka-manager; the kafka heap size needs changing, as does one kafka-related parameter on the logstash side (below).

1) Adjust the JVM heap

vi /usr/local/kafka/bin/kafka-server-start.sh

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then

    export KAFKA_HEAP_OPTS="-Xmx8G -Xms8G"

    export JMX_PORT="8999"

fi

2) Broker parameters

These are all value changes in server.properties.

Network and I/O thread tuning:

# threads the broker uses to handle network requests (default 3; can be set to the CPU core count)

num.network.threads=4   

# threads the broker uses for disk I/O (around 2x the CPU core count)

num.io.threads=8     

3) Install kafka monitoring (kafka-manager)

/data/scripts/kafka-manager-1.3.3.4/bin/kafka-manager

http://192.168.188.215:8099/clusters/ngxlog/consumers

3. logstash optimization

logstash needs changes in the following files:

1) Adjust JVM parameters

vi /usr/share/logstash/config/jvm.options 

-Xms2g

-Xmx6g

2) Edit logstash.yml

vi /usr/share/logstash/config/logstash.yml

path.data: /var/lib/logstash

pipeline.workers: 16            # number of CPU cores

pipeline.output.workers: 4      # equivalent to the workers setting of the elasticsearch output

pipeline.batch.size: 5000       # sized from qps, load, etc.

pipeline.batch.delay: 5

path.config: /usr/share/logstash/config/conf.d

path.logs: /var/log/logstash

3) Edit the matching logstash .conf files

The input file:

vi /usr/share/logstash/config/in-kafka-ngx12-out-es.conf 

input {
  kafka {
  bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"
  group_id => "ngx1"
  topics => ["ngx1-168"]
  codec => "json"
  consumer_threads => 3
  auto_offset_reset => "latest"   # added
  # decorate_events => true       # removed
  }
}

The filter file:

filter {
  mutate {
    gsub => ["message", "\\x", "%"]   # unescape: the url field's encoding differs from request etc.; needed for Chinese display
    # remove_field => ["kafka"]       # removed: with decorate_events off, the kafka.{} field is no longer added
  }
  # json / geoip / urldecode blocks unchanged
}

The output file, before:

    flush_size => 50000

    idle_flush_time => 10

After (flush once 80,000 events accumulate, or every 4 seconds):

    flush_size => 80000

    idle_flush_time => 4

logstash output after startup (pipeline.max_inflight is 80,000):

[2017-05-16T10:07:02,552][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>5000, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>80000}
[2017-05-16T10:07:02,553][WARN ][logstash.pipeline        ] CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 80000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 5000), or changing the number of pipeline workers (currently 16)

4. elasticsearch optimization

1) Adjust JVM parameters

vi /etc/elasticsearch/jvm.options

Set to 24g, at most 50% of the VM's memory:

-Xms24g

-Xmx24g

2) Change the GC algorithm (tentative, needs observation; not recommended if you are unsure of these parameters)

elasticsearch defaults to the CMS GC. With a heap over ~6G, CMS can struggle and stop-the-world pauses become likely, so G1 GC is suggested.

Comment out:

JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"

JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"

JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75"

JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

Replace with:

JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"

JAVA_OPTS="$JAVA_OPTS -XX:MaxGCPauseMillis=200"
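
The JAVA_OPTS style above dates from older elasticsearch releases; with the 5.2 RPM the equivalent change would be made in /etc/elasticsearch/jvm.options, roughly:

#-XX:+UseConcMarkSweepGC
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200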

3) Install Cerebro, an elasticsearch cluster monitoring tool

https://github.com/lmenezes/cerebro

Cerebro is a third-party elasticsearch cluster management tool that makes it easy to inspect cluster state:

https://github.com/lmenezes/cerebro/releases/download/v0.6.5/cerebro-0.6.5.tgz

After installation, access it at:

http://192.168.188.215:9000/

4) elasticsearch search parameter tuning (the hard part)

There turned out to be little to do here: the defaults are already good, and bulk, refresh, and similar settings are already covered in the configs above.

5) elasticsearch cluster role optimization

es191, es193, es195 become master + ingest nodes only.

es192, es194, es196 become data nodes only (each pair of VMs above shares one RAID5 array; having every node act as a data node performed poorly).

Two more data nodes were added, which greatly improved aggregation performance. A config sketch for the two roles follows.
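
The role split itself is plain elasticsearch.yml configuration; a sketch of the two flavors:

# es191 / es193 / es195: master + ingest, no data
node.master: true
node.data: false
node.ingest: true

# es192 / es194 / es196 and the added nodes: data only
node.master: false
node.data: true
node.ingest: false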

5. filebeat optimization

1) Ship input as JSON so logstash does not have to decode it, easing back-end load:

  json.keys_under_root: true

  json.add_error_key: true

2) Drop unnecessary fields:

vim /etc/filebeat/filebeat.yml 

processors:

- drop_fields:

    fields: ["input_type", "beat.hostname", "beat.name", "beat.version", "offset", "source"]

3) Delete old indexes from cron

Indexes are kept for 5 days:

cat /data/scripts/delindex.sh 

#!/bin/bash
OLDDATE=`date -d  -5days  +%Y.%m.%d`
echo  $OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx1-168-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx2-178-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx3-188-$OLDDATE
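
Scheduled via cron, for example (the run time is arbitrary):

0 2 * * * /bin/bash /data/scripts/delindex.sh >/dev/null 2>&1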
