
Sample Code for a Semi-Automated Script to Quickly Deploy a Multi-Node Hadoop Test Environment

Published: 2021-12-09 15:59:15  Source: 億速云  Views: 144  Author: 小新  Column: Internet Technology

Here I'd like to share with you the sample code for a semi-automated script that quickly deploys a multi-node Hadoop test environment. I hope you all get something out of this article; let's look into it together!

This semi-automated deployment consists of two scripts: hdp_ini.sh (environment initialization) and hdp_bld.sh (Hadoop installation).
After running the first script, review the results and adjust them manually where needed.
Then set up passwordless SSH as described in the second script, and run the second script. At that point the test environment is essentially deployed.
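
For reference, the passwordless SSH setup that the second script expects might look like this minimal sketch (assuming the hadoop user "hdpadm" created in hdp_bld.sh and the three hostnames defined in hdp_ini.sh):

# Run as the hadoop user "hdpadm" on each node: generate a key pair without a passphrase
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# Push the public key to every node in the cluster, including the local one
for host in hdp01.dbinterest.local hdp02.dbinterest.local hdp03.dbinterest.local
do
        ssh-copy-id -i ~/.ssh/id_rsa.pub hdpadm@$host
done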

hdp_ini.sh - Run on all the nodes in your cluster env!

#!/bin/bash

# Name: hdp_ini.sh
# Purpose: For Fast INITIALIZATION of the testing env, to save time for life :) 
# Author: Stone@dbinterest.com + Many guys on the internet
# Time: 03/09/2012
# User: root
# Reminder:  1) Remember to add executable permission to the script before running it,
#               e.g.: chmod a+x hdp_ini.sh OR chmod 744 hdp_ini.sh
#            2) Change the $HOSTNAME2BE and $HOST_IP variables on each node accordingly.
# Attention: 1) You might need to run this script TWICE for the server IP address to take
#               effect if the ifconfig output shows a device other than "eth0"!!!
#            2) Please execute the script from the console inside the node, not from an SSH
#               tool like Xshell, SecureCRT, etc., to avoid losing the connection.


############################################################################
### Different env variables setting. Please change this part according to
### your specific env ...
############################################################################

export HOST01='192.168.1.121 hdp01.dbinterest.local hdp01'
export HOST02='192.168.1.122 hdp02.dbinterest.local hdp02'
export HOST03='192.168.1.123 hdp03.dbinterest.local hdp03'

export BAKDIR='/root/bak_dir'

export HOSTSFILE='/etc/hosts'
export NETWORKFILE='/etc/sysconfig/network'

export CURRENT_HOSTNAME=`hostname`

### Modify hostname according to your node in the cluster

#export HOSTNAME2BE='hdp01.dbinterest.local'
#export HOSTNAME2BE='hdp02.dbinterest.local'
export HOSTNAME2BE='hdp03.dbinterest.local'

export ETH0STRING=`ifconfig | grep eth0`

export HWADDRSS=`ifconfig | grep HWaddr | awk '{print $5}'`
export IFCFG_FILE='/etc/sysconfig/network-scripts/ifcfg-eth0'

### Modify host IP address according to your node in the cluster

#export HOST_IP='192.168.1.121'
#export HOST_IP='192.168.1.122'
export HOST_IP='192.168.1.123'

export GATEWAYIP='192.168.1.1'
export DNSIP01='8.8.8.8'
export DNSIP02='8.8.4.4'

export FILE70='/etc/udev/rules.d/70-persistent-net.rules'

export DATETIME=`date +%Y%m%d%H%M%S`

export FQDN=`hostname -f`

### Make the backup directory for the different files
if [ -d $BAKDIR ];
        then
                echo "The backup directory $BAKDIR exists!"
        else
                echo "Making the backup directory $BAKDIR..."
                mkdir $BAKDIR
fi                

############################################################################
### Config the hosts file "/etc/hosts"
############################################################################

if [ -f $HOSTSFILE ];
        then
                cp $HOSTSFILE $BAKDIR/hosts_$DATETIME.bak
                echo '127.0.0.1   localhost localhost.localdomain' > $HOSTSFILE
                echo '::1         localhost6 localhost6.localdomain6' >> $HOSTSFILE
                echo "$HOST01" >> $HOSTSFILE
                echo "$HOST02" >> $HOSTSFILE
                echo "$HOST03" >> $HOSTSFILE
        else
                echo "File $HOSTSFILE does not exists"
fi

############################################################################
### Config the network file "/etc/sysconfig/network"
############################################################################

if [ -f $NETWORKFILE ];
        then
                cp $NETWORKFILE $BAKDIR/network_$DATETIME.bak
                echo 'NETWORKING=yes' > $NETWORKFILE
                echo "HOSTNAME=$HOSTNAME2BE" >> $NETWORKFILE
        else
                echo "File $NETWORKFILE does not exists"
fi

############################################################################
### Config the ifcfg-eth0 file "/etc/sysconfig/network-scripts/ifcfg-eth0"
############################################################################

if [ -f $IFCFG_FILE ];
        then
                cp $IFCFG_FILE $BAKDIR/ifcfg_file_$DATETIME.bak
                echo 'DEVICE=eth0' > $IFCFG_FILE
                echo 'BOOTPROTO=static' >> $IFCFG_FILE
                echo "HWADDR=$HWADDRSS" >> $IFCFG_FILE
                echo "IPADDR=$HOST_IP" >> $IFCFG_FILE
                echo 'NETMASK=255.255.255.0' >> $IFCFG_FILE
                echo "GATEWAY=$GATEWAYIP" >> $IFCFG_FILE
                echo "DNS1=$DNSIP01" >> $IFCFG_FILE
                echo "DNS2=$DNSIP02" >> $IFCFG_FILE
                echo 'ONBOOT=yes' >> $IFCFG_FILE
fi

echo ''
echo "DEFAULT hostname is $CURRENT_HOSTNAME."
echo "Hostname is going to be changed to $HOSTNAME2BE..."
if [ "$CURRENT_HOSTNAME" != "$HOSTNAME2BE" ];
        then
                hostname $HOSTNAME2BE
        else
                echo "The hostname is already configured correctly!"
fi                                


############################################################################
### Check the current config setting for the different files
############################################################################
echo ''
echo -e "Current fully qualified domain name is: \n $FQDN"
echo "Current config setting for $HOSTSFILE, $NETWORKFILE and $IFCFG_FILE"
echo ''
echo $HOSTSFILE
cat $HOSTSFILE
echo ''
echo $NETWORKFILE
cat $NETWORKFILE
echo ''
echo $IFCFG_FILE
cat $IFCFG_FILE

############################################################################
### Stop iptables and disable SELinux. The SELinux change takes effect after a reboot!
############################################################################
echo ''
echo "Stopping Ipstables and SELinux ..."
service iptables stop
chkconfig iptables off

sed -i.bak 's/=enforcing/=disabled/g' /etc/selinux/config


############################################################################
### Restarting the network ...
############################################################################

echo ''
echo "Restarting network ..."
service network restart

############################################################################
### For machines copied/cloned in a VMware env, the network device is renamed
### from "eth0" to "eth2" after the 1st copy, then "eth3" after the 2nd, then
### "eth4", and so on. For a consistent test env, all of them are changed
### back to "eth0" ...
############################################################################

if [ -z "$ETH0STRING" ];
        then
                echo "Network device eth0 does NOT exist!!!"
                if [ -f $FILE70 ];
                        then
                                echo "Now deleting the file $FILE70... and rebooting..."
                                cp $FILE70 $BAKDIR/file70_$DATETIME.bak
                                rm /etc/udev/rules.d/70-persistent-net.rules
                                reboot
                fi
        else
                echo "Network device eth0 exists."
fi
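
Once hdp_ini.sh (and any reboot it triggers) has finished, a quick manual sanity check on each node might look like the following sketch (assuming the same RHEL/CentOS-era tools the script itself uses):

hostname -f                        # should print the FQDN set in $HOSTNAME2BE
grep hdp /etc/hosts                # all three cluster nodes should be listed
ifconfig eth0 | grep 'inet addr'   # should show the static IP set in $HOST_IP
service iptables status           # the firewall should be reported as stopped
getenforce                         # should print "Disabled" after the reboot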

hdp_bld.sh - Just run on the Master node.

#!/bin/bash

# Name: hdp_bld.sh
# Purpose: For Fast Hadoop Installation of the testing env, to save time for life :) 
# Author: Stone@dbinterest.com + Many guys on the internet
# Time: 04/09/2012
# User: hadoop user "hdpadm"
# Attention: Passwordless SSH access needs to be set up first between the nodes
#            for the script to work!

############################################################################
### Different env variables setting. Please change this part according to
### your specific env ...
############################################################################
export HDPPKG='hadoop-1.0.3.tar.gz'
export HDPLOC='/home/hdpadm/hadoop'
export HDPADMHM='/home/hdpadm'
export MASTER01='hdp01.dbinterest.local'
export SLAVE01='hdp02.dbinterest.local'
export SLAVE02='hdp03.dbinterest.local'
export HDPLINK='http://archive.apache.org/dist/hadoop/core/stable/hadoop-1.0.3.tar.gz'

export SLAVES="hdp02.dbinterest.local hdp03.dbinterest.local"
export USER='hdpadm'

############################################################################
### For this script to run, the hadoop user "hdpadm" should be set up first,
### and passwordless SSH should be configured for all the nodes ...
############################################################################
#/usr/sbin/groupadd hdpadm
#/usr/sbin/useradd hdpadm -g hdpadm

# Run as new user "hdpadm" on each node
# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# eg: ssh-copy-id -i ~/.ssh/id_rsa.pub hdp01.dbinterest.local
# Syntax: ssh-copy-id [-i [identity_file]] [user@]machine
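
# (Optional sanity check, a minimal sketch: run from the master as "hdpadm" to
#  confirm passwordless SSH works; BatchMode makes ssh fail instead of prompting.)
# for h in hdp01 hdp02 hdp03; do ssh -o BatchMode=yes $h hostname; done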

############################################################################
### Get the Hadoop the packages and prepare the installation
############################################################################
if [ ! -d $HDPLOC ];
        then
                mkdir $HDPLOC
fi
cd $HDPLOC


if [ ! -f "$HDPLOC/$HDPPKG" ];
        then
                echo "Getting the Hadoop package and preparing the installation..."
                wget $HDPLINK -O $HDPLOC/$HDPPKG
                tar xvzf $HDPLOC/$HDPPKG -C $HDPLOC
                rm -f $HDPLOC/$HDPPKG
fi

############################################################################
### Hadoop config Step by Step 
############################################################################

# Config the profile
echo "Configuring the profile..."
echo '' >> $HDPADMHM/.bash_profile
echo "#Added Configurations for Hadoop" >> $HDPADMHM/.bash_profile
if [ $(getconf LONG_BIT) == 64 ]; then
  echo "export JAVA_HOME=/usr/jdk64/jdk1.6.0_31" >> $HDPADMHM/.bash_profile
else
  echo "export JAVA_HOME=/usr/jdk32/jdk1.6.0_31" >> $HDPADMHM/.bash_profile
fi

echo "export HADOOP_HOME=/home/hdpadm/hadoop/hadoop-1.0.3" >> $HDPADMHM/.bash_profile
echo "export PATH=\$PATH:\$HADOOP_HOME/bin:\$JAVA_HOME/bin" >> $HDPADMHM/.bash_profile
echo "export HADOOP_HOME_WARN_SUPPRESS=1" >> $HDPADMHM/.bash_profile

#hadoop core-site.xml
echo "Configuring the core-site.xml file..."
echo "<?xml version='1.0'?>" > $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "<?xml-stylesheet type='text/xsl' href='configuration.xsl'?>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "<configuration>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "    <name>fs.default.name</name>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "    <value>hdfs://$MASTER01:9000</value>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "    <name>hadoop.tmp.dir</name>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "    <value>$HDPLOC/tmp</value>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "</configuration>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml

#hadoop hdfs-site.xml
echo "Configuring the hdfs-site.xml..."
echo "<?xml version='1.0'?>" > $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "<?xml-stylesheet type='text/xsl' href='configuration.xsl'?>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "<configuration>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <name>dfs.name.dir</name>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <value>$HDPLOC/name</value>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <name>dfs.data.dir</name>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <value>$HDPLOC/data</value>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <name>dfs.replication</name>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <value>2</value>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "</configuration>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml

#hadoop mapred-site.xml
echo "Configuring mapred-site.xml file..."
echo "<?xml version='1.0'?>" > $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "<?xml-stylesheet type='text/xsl' href='configuration.xsl'?>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "<configuration>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "    <name>mapred.job.tracker</name>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "    <value>$MASTER01:9001</value>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "</configuration>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml

echo "Configuring the masters, slaves and hadoop-env.sh files..."
#hadoop "masters" config
echo "$MASTER01" > $HDPLOC/hadoop-1.0.3/conf/masters

#hadoop "slaves" config
echo "$SLAVE01" > $HDPLOC/hadoop-1.0.3/conf/slaves
echo "$SLAVE02" >> $HDPLOC/hadoop-1.0.3/conf/slaves

#hadoop "hadoop-env.sh" config
echo "export JAVA_HOME=/usr/jdk64/jdk1.6.0_31" > $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh
echo 'export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh 
echo 'export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh 
echo 'export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh 
echo 'export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh 
echo 'export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh 

# Copy the config files and hadoop folder from "master" to all "slaves".
# (The profile is read automatically at each slave's next login, so there is
# no need to source it over SSH.)

for slave in $SLAVES
do
        echo "------Copying profile and hadoop directory-------"
        scp $HDPADMHM/.bash_profile $USER@$slave:$HDPADMHM/.bash_profile
        scp -r $HDPLOC $USER@$slave:/home/hdpadm
done

source $HDPADMHM/.bash_profile
#hadoop namenode -format
#$HADOOP_HOME/bin/start-all.sh
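
The last two commented-out lines are the remaining manual steps. Bringing the cluster up and verifying it might look like this sketch (standard Hadoop 1.x commands, run as "hdpadm" on the master):

# One-time formatting of the NameNode (master only; destroys any existing HDFS metadata)
hadoop namenode -format

# Start the HDFS and MapReduce daemons across the whole cluster
$HADOOP_HOME/bin/start-all.sh

# Verify: the master should show NameNode, SecondaryNameNode and JobTracker;
# each slave should show DataNode and TaskTracker
jps

# Confirm that both DataNodes registered with the NameNode
hadoop dfsadmin -report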

Having read this article, I trust you now have some understanding of the sample code for a semi-automated script that quickly deploys a multi-node Hadoop test environment. To learn more, follow the 億速云 industry news channel. Thank you all for reading!
