This experiment uses 3 virtual machines:
192.168.209.168
192.168.209.169
192.168.209.170
cp /usr/elasticsearch-6.2.3/config/elasticsearch.yml /usr/elasticsearch-6.2.3/config/elasticsearch.yml.bak
vi /usr/elasticsearch-6.2.3/config/elasticsearch.yml
cluster.name: ES_Cluster_Pcdog
node.name: 192.168.209.168
path.data: /usr/elasticsearch-6.2.3/data
path.logs: /usr/elasticsearch-6.2.3/logs
network.host: 192.168.209.168
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.209.168","192.168.209.169","192.168.209.170"]
discovery.zen.minimum_master_nodes: 3
discovery.zen.fd.ping_timeout: 120s
discovery.zen.fd.ping_retries: 6
discovery.zen.fd.ping_interval: 30s
cluster.routing.allocation.cluster_concurrent_rebalance: 40
cluster.routing.allocation.node_concurrent_recoveries: 40
cluster.routing.allocation.node_initial_primaries_recoveries: 40
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
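One hedged note on the config above (not from the original post): with 3 master-eligible nodes, the value usually recommended for discovery.zen.minimum_master_nodes is the quorum, (3 / 2) + 1 = 2; with it set to 3, the cluster cannot elect a master once any single node is down. The commonly recommended setting would look like:
# quorum of 3 master-eligible nodes = (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2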
Check the configuration, filtering out comment lines:
cat elasticsearch.yml | grep -v "^#"
Start node 168:
./bin/elasticsearch
The cluster status is yellow, because the other 2 nodes are not up yet...
[2018-04-20T11:36:16,000][INFO ][o.e.c.r.a.AllocationService] [192.168.209.168] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2018.04.18][1]] ...]).
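As an extra sanity check (not in the original post), the cluster health can also be queried over HTTP; the IP and the default port 9200 follow the config above:
curl 'http://192.168.209.168:9200/_cluster/health?pretty'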
Copy the configuration file to the other 2 nodes:
scp elasticsearch.yml 192.168.209.169:/usr/elasticsearch-6.2.3/config
scp elasticsearch.yml 192.168.209.170:/usr/elasticsearch-6.2.3/config
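One thing to watch: node.name and network.host in the copied file still point to 192.168.209.168 and must be changed on each target node. A minimal sketch with sed, assuming it is run on 192.168.209.169 (use .170 on the third node):
sed -i -e 's/^node.name: .*/node.name: 192.168.209.169/' -e 's/^network.host: .*/network.host: 192.168.209.169/' /usr/elasticsearch-6.2.3/config/elasticsearch.yml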
Then start elasticsearch on each of the other nodes (node 002 and node 003), as sketched below.
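A sketch of the start commands on the two remaining nodes, assuming the same install path as node 168 (the original runs the process in the foreground; adding -d would daemonize it):
# on 192.168.209.169 and 192.168.209.170
cd /usr/elasticsearch-6.2.3 && ./bin/elasticsearch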
On the master node, in the directory /usr/elasticsearch-6.2.3/logs, you can watch the cluster log:
tail -f ES_Cluster_Pcdog.log
Damn, the log isn't updating at all. Checking the other 2 nodes, both report errors. Could it be a problem with the cloned machines? Let me fix this.
[2018-04-20T11:49:10,794][INFO ][o.e.d.z.ZenDiscovery ] [192.168.209.169] failed to send join request to master [{192.168.209.168}{-aU5102ETMW8isf85FWEHA}{V4flu0jLRfixBA0Yf9740w}{192.168.209.168}{192.168.209.168:9300}], reason [RemoteTransportException[[192.168.209.168][192.168.209.168:9300][internal:discovery/zen/join]]; nested: IllegalArgumentException[can't add node {192.168.209.169}{-aU5102ETMW8isf85FWEHA}{1IesX0gaSZWz2_O3eKDvfw}{192.168.209.169}{192.168.209.169:9300}, found existing node {192.168.209.168}{-aU5102ETMW8isf85FWEHA}{V4flu0jLRfixBA0Yf9740w}{192.168.209.168}{192.168.209.168:9300} with the same id but is a different node instance]; ]
It turns out the data directory still holds data from the original (cloned) node, so delete it:
[pactera@ELK_002 data]$ pwd
/usr/elasticsearch-6.2.3/data
[pactera@ELK_002 data]$ rm -rf nodes/
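Since both cloned nodes reported the same "same id" error, the same cleanup presumably applies on 192.168.209.170 as well (a sketch, assuming an identical layout), followed by a restart of elasticsearch on both nodes:
# on 192.168.209.170
rm -rf /usr/elasticsearch-6.2.3/data/nodes/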
All 3 nodes are now healthy:
node 168: sees 169 as the elected master
node 169: status changes from yellow to green
node 170: sees 169 as the elected master
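To double-check which node holds the master role and that all three have joined, the _cat APIs can be queried from any node (a sketch, not from the original post):
curl 'http://192.168.209.168:9200/_cat/nodes?v'
curl 'http://192.168.209.168:9200/_cat/master?v'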
Later I need to study how the master election process works. In this cluster all 3 nodes are both master-eligible and data nodes; in actual production the roles should be separated into dedicated master nodes plus multiple data nodes.
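For reference, role separation in 6.x comes down to two flags per node; a minimal sketch of the production layout hinted at above (an assumption, not part of this experiment):
# dedicated master-eligible node
node.master: true
node.data: false
# data-only node
node.master: false
node.data: true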