
Hadoop NameNode HA: Principles and Configuration

Published: 2020-10-04 18:37:02 · Author: AricaCui · Category: Big Data

Architecture


In a typical HA cluster, two separate machines are configured as NameNodes. At any point in time, exactly one of the NameNodes is in an Active state, and the other is in a Standby state. The Active NameNode is responsible for all client operations in the cluster, while the Standby simply acts as a slave, maintaining enough state to provide a fast failover if necessary.

In order for the Standby node to keep its state synchronized with the Active node, both nodes communicate with a group of separate daemons called “JournalNodes” (JNs). When any namespace modification is performed by the Active node, it durably logs a record of the modification to a majority of these JNs. The Standby node is capable of reading the edits from the JNs, and is constantly watching them for changes to the edit log. As the Standby node sees the edits, it applies them to its own namespace. In the event of a failover, the Standby will ensure that it has read all of the edits from the JournalNodes before promoting itself to the Active state. This ensures that the namespace state is fully synchronized before a failover occurs.

In order to provide a fast failover, it is also necessary that the Standby node have up-to-date information regarding the location of blocks in the cluster. In order to achieve this, the DataNodes are configured with the location of both NameNodes, and send block location information and heartbeats to both.

It is vital for the correct operation of an HA cluster that only one of the NameNodes be Active at a time. Otherwise, the namespace state would quickly diverge between the two, risking data loss or other incorrect results. In order to ensure this property and prevent the so-called “split-brain scenario,” the JournalNodes will only ever allow a single NameNode to be a writer at a time. During a failover, the NameNode which is to become active will simply take over the role of writing to the JournalNodes, which will effectively prevent the other NameNode from continuing in the Active state, allowing the new Active to safely proceed with failover.
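With a fencing method in place (the sshfence entry in the hdfs-site.xml shown later in this article), an administrator can exercise exactly this hand-off by hand. As a usage sketch, assuming the NameNode IDs nn1 and nn2 defined in that configuration:

hdfs haadmin -failover nn1 nn2

This asks nn2 to take over as Active; if nn1 cannot be transitioned to Standby gracefully, the fencing method is run against it first, so at no point are there two writers to the JournalNodes.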

Hardware Resources


In order to deploy an HA cluster, you should prepare the following:

  • NameNode machines - the machines on which you run the Active and Standby NameNodes should have equivalent hardware to each other, and equivalent hardware to what would be used in a non-HA cluster.

  • JournalNode machines - the machines on which you run the JournalNodes. The JournalNode daemon is relatively lightweight, so these daemons may reasonably be collocated on machines with other Hadoop daemons, for example NameNodes, the JobTracker, or the YARN ResourceManager. Note: there must be at least 3 JournalNode daemons, since edit log modifications must be written to a majority of JNs; this allows the system to tolerate the failure of a single machine. You may also run more than 3 JournalNodes, but to actually increase the number of failures the system can tolerate, you should run an odd number of JNs (i.e., 3, 5, 7, etc.). When running with N JournalNodes, the system can tolerate at most (N - 1) / 2 failures and continue to function normally (see the quick check after this list).
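As a quick check of this majority rule (plain shell arithmetic, not a Hadoop command):

for n in 3 4 5 7; do echo "$n JNs tolerate $(( (n - 1) / 2 )) failure(s)"; done

This prints 1, 1, 2, and 3 — note that a 4th JournalNode buys no extra fault tolerance over 3, since 4 JNs still need 3 live nodes for a majority, which is why odd counts are recommended.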

Note that, in an HA cluster, the Standby NameNode also performs checkpoints of the namespace state, and thus it is not necessary to run a Secondary NameNode, CheckpointNode, or BackupNode in an HA cluster. In fact, to do so would be an error. This also allows one who is reconfiguring a non-HA-enabled HDFS cluster to be HA-enabled to reuse the hardware which they had previously dedicated to the Secondary NameNode.

Deployment


hdfs-site.xml

<!-- logical name for this nameservice -->
<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
</property>

<!-- the two NameNode IDs within nameservice "mycluster" -->
<property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
</property>

<!-- RPC address of each NameNode -->
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>192.168.153.201:8020</value>
</property>

<property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>192.168.153.205:8020</value>
</property>

<!-- HTTP (web UI) address of each NameNode -->
<property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>192.168.153.201:50070</value>
</property>

<property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>192.168.153.205:50070</value>
</property>

<!-- qjournal URI listing the JournalNode group that stores the shared edit log -->
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://192.168.153.202:8485;192.168.153.203:8485;192.168.153.204:8485/mycluster</value>
</property>

<!-- Java class HDFS clients use to determine which NameNode is currently Active -->
<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<!-- fencing method used to cut off the old Active NameNode during a failover -->
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>

<!-- SSH private key the sshfence method uses to log in to the old Active -->
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/centos/.ssh/id_rsa</value>
</property>

<!-- local directory where each JournalNode daemon stores its edits -->
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/centos/hadoop/hdfs/journal</value>
</property>
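With hdfs-site.xml in place, a typical bring-up for a fresh cluster looks like the following. This is a sketch assuming Hadoop 2.x command scripts (the 50070 web port above suggests that generation) and the machine roles configured above:

# on each JournalNode machine (192.168.153.202/203/204)
hadoop-daemon.sh start journalnode

# on nn1 (192.168.153.201): format and start the first NameNode
hdfs namenode -format
hadoop-daemon.sh start namenode

# on nn2 (192.168.153.205): copy over nn1's metadata, then start
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode

If you are converting an existing non-HA cluster rather than formatting a new one, skip the format step and instead run hdfs namenode -initializeSharedEdits on the already-formatted NameNode, which seeds the JournalNodes from the local edits directory.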

core-site.xml

<!-- clients address the cluster by nameservice, not by a single NameNode host -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
</property>
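Because fs.defaultFS names the nameservice rather than a single host, clients locate the current Active NameNode through the failover proxy provider configured above. With the configuration in this article failover is administrator-driven, and both NameNodes come up as Standby, so one must be promoted by hand. A usage sketch:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
hdfs haadmin -transitionToActive nn1
hdfs dfs -ls hdfs://mycluster/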

Notes


1. Currently, only a maximum of two NameNodes may be configured per nameservice.

