
Hadoop hive sqoop zookeeper hb

Published: 2020-07-12 09:12:35  Source: web  Reads: 1983  Author: linuxblind  Category: Relational databases

6. Problems and Solutions

1. Problem:

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Cause: the bundled native libraries are 32-bit and do not support a 64-bit platform.

Solution: rebuild the native libraries as 64-bit (note that the build fails on JDK 1.8).

# yum install cmake lzo-devel zlib-devel gcc gcc-c++ autoconf automake libtool ncurses-devel openssl-devel

Install Maven

# wget http://mirror.cc.columbia.edu/pub/software/apache/maven/maven-3/3.2.3/binaries/apache-maven-3.2.3-bin.tar.gz

# tar zxf apache-maven-3.2.3-bin.tar.gz -C /usr/local

# cd /usr/local

# ln -s apache-maven-3.2.3 maven

# vim /etc/profile

export MAVEN_HOME=/usr/local/maven

export PATH=${MAVEN_HOME}/bin:${PATH}

# source /etc/profile
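As a quick sanity check that Maven is now on the PATH (the exact version output will vary):

# mvn -version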

Install Ant

# wget http://apache.dataguru.cn//ant/binaries/apache-ant-1.9.4-bin.tar.gz

# tar zxf apache-ant-1.9.4-bin.tar.gz -C /usr/local

# vim /etc/profile

export ANT_HOME=/usr/local/apache-ant-1.9.4

export PATH=$PATH:$ANT_HOME/bin

# source /etc/profile

Install FindBugs

# wget http://prdownloads.sourceforge.net/findbugs/findbugs-2.0.3.tar.gz?download

# tar zxf findbugs-2.0.3.tar.gz -C /usr/local

# vim /etc/profile

export FINDBUGS_HOME=/usr/local/findbugs-2.0.3

export PATH=$PATH:$FINDBUGS_HOME/bin

# source /etc/profile

Install protobuf

# wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz

# tar zxf protobuf-2.5.0.tar.gz

# cd protobuf-2.5.0

# ./configure && make && make install
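Hadoop 2.5 expects protoc 2.5.0 specifically, so it is worth confirming which version ended up on the PATH before starting the build:

# protoc --version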

Download the Hadoop source

# wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.5.0/hadoop-2.5.0-src.tar.gz

# tar zxf hadoop-2.5.0-src.tar.gz

# cd hadoop-2.5.0-src

# mvn clean install -DskipTests

# mvn package -Pdist,native -DskipTests -Dtar

Replace the old native libraries

# mv /data/hadoop-2.5.0/lib/native /data/hadoop-2.5.0/lib/native_old

# cp -r /data/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0/lib/native \
  /data/hadoop-2.5.0/lib/native

# bin/hdfs getconf -namenodes
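To confirm that the rebuilt native libraries are actually being picked up, Hadoop ships a checknative tool (output will vary by build):

# bin/hadoop checknative -a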

References:

http://www.tuicool.com/articles/zaY7Rz

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/NativeLibraries.html#Supported_Platforms

 

2. Problem:

A WARN hdfs.DFSClient: DataStreamer Exception appears, and afterwards

sbin/stop-dfs.sh => namenode1: no datanode to stop

or hadoop dfsadmin -report returns no information about the cluster's file system.

Cause: after the file system is reformatted, the new namespaceID generated by the namenode no longer matches the namespaceID held by the datanodes.

Solution: before reformatting the namenode, first delete everything under the data directory configured by dfs.data.dir.
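A minimal sketch of that cleanup, assuming dfs.data.dir points at /var/hadoop/tmp/dfs/data (the path used later in these notes) and that losing the HDFS data is acceptable:

# sbin/stop-dfs.sh
# rm -rf /var/hadoop/tmp/dfs/data/* #run on every datanode
# bin/hadoop namenode -format
# sbin/start-dfs.sh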

 

3. Problem:

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in

Cause: every namenode format creates a new namespaceID, but the directories configured by dfs.data.dir still hold the ID from the previous format, so they no longer match the ID under the dfs.name.dir directory. Formatting clears the namenode's data but does not clear the datanodes' data, so startup fails; the fix is to clear the dfs.data.dir directories before every format.

Solution: clear dfs.data.dir on every datanode, then rerun the HDFS format command:

# bin/hadoop namenode -format
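To see the mismatch directly, you can compare the namespaceID recorded on each side (the exact paths depend on dfs.name.dir and dfs.data.dir; the ones below follow the layout used later in these notes and are otherwise an assumption):

# cat /var/hadoop/tmp/dfs/name/current/VERSION
# cat /var/hadoop/tmp/dfs/data/current/VERSION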

 

MapReduce tutorial blog: http://www.cnblogs.com/xia520pi/archive/2012/05/16/2504205.html

 

4. Problem:

[root@namenode1 hadoop]# hadoop fs -put README.txt /

15/01/04 21:50:49 WARN hdfs.DFSClient:DataStreamer Exception

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 6 datanode(s) running and no node(s) are excluded in this operation.

Cause: the following settings in hdfs-site.xml were misconfigured (adjust the values below to your environment):

<property>

       <name>dfs.block.size</name>

       <value>268435456</value>

       <description>The default block size for new files</description>

  </property>

 

  <property>

       <name>dfs.datanode.max.xcievers</name>

        <value>10240</value>

       <description>

           An Hadoop HDFS datanode has an upper bound on the number of files that it will serve at any one time.

       </description>

  </property>

 

  <property>

       <name>dfs.datanode.du.reserved</name>

       <value>32212254720</value>

       <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.</description>

  </property>

 

Solution: correct the settings above and restart HDFS.
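A minimal restart-and-verify sketch, assuming the corrected hdfs-site.xml has already been copied to every node:

# sbin/stop-dfs.sh && sbin/start-dfs.sh
# hadoop dfsadmin -report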

 

5. Problem:

Hive fails to start and the logs report a conflict between SLF4J bindings.

Cause: conflicting SLF4J bindings.

Solution:

# mv /var/data/hive-1.40/lib/hive-jdbc-0.14.0-standalone.jar /opt/

If Hive still will not start, check the following (a sketch of steps 2 and 3 follows this list):
1. Look through hive-site.xml for properties whose values contain "system:java.io.tmpdir".
2. Create the directory /var/data/hive/iotmp.
3. Point those properties at the directory created in step 2.
Start Hive again; it should now come up.
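One hedged way to apply steps 2 and 3 in a single pass (back up hive-site.xml first; the sed pattern assumes the default values are written literally as ${system:java.io.tmpdir}):

# mkdir -p /var/data/hive/iotmp
# sed -i 's|\${system:java.io.tmpdir}|/var/data/hive/iotmp|g' conf/hive-site.xml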

 

6. Problem:

HADOOP: Error Launching job : org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=1024

 

Cause: the MapReduce job requests 1536 MB by default, but the configured maximum allocation is smaller. The relevant mapred-site.xml settings:

<property>

            <name>mapreduce.map.memory.mb</name>

            <value>512</value>

    </property>

    <property>

           <name>mapreduce.map.java.opts</name>

           <value>-Xmx410m</value>

    </property>

    <property>

           <name>mapreduce.reduce.memory.mb</name>

           <value>512</value>

    </property>

    <property>

           <name>mapreduce.reduce.java.opts</name>

           <value>-Xmx410m</value>

    </property>

Here 512 is the value of yarn.scheduler.maximum-allocation-mb in yarn-site.xml, and 1536 is the default value of the yarn.app.mapreduce.am.resource.mb parameter in mapred-site.xml; the job runs once yarn.scheduler.maximum-allocation-mb is at least as large as yarn.app.mapreduce.am.resource.mb.

Solution:

Raise the parameters above to 2048 and add more memory to the nodes.
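For example, a consistent pair of settings might look like the following (a sketch rather than this cluster's exact values; yarn.scheduler.maximum-allocation-mb lives in yarn-site.xml, yarn.app.mapreduce.am.resource.mb in mapred-site.xml):

<!-- yarn-site.xml -->
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
</property>

<!-- mapred-site.xml -->
<property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1536</value>
</property>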

 

7. Problem:

Hadoop: java.lang.IncompatibleClassChangeError:

Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

Cause: the Sqoop build does not match the Hadoop version.

Solution: rebuild Sqoop, as follows.

How to build Sqoop:

 

Step 1:

Additionally, building the documentation requires these tools:

* asciidoc
* make
* python 2.5+
* xmlto
* tar
* gzip
yum -y install git
yum -y install asciidoc
yum -y install make
yum -y install xmlto
yum -y install tar
yum -y install gzip

 

Step 2:

Download the required packages:

wget http://dist.codehaus.org/jetty/jetty-6.1.26/jetty-6.1.26.zip

wget http://mirrors.cnnic.cn/apache/sqoop/1.4.5/sqoop-1.4.5.tar.gz

 

mv jetty-6.1.26.zip /root/.m2/repository/org/mortbay/jetty/jetty/6.1.26/

 

Step 3:

Unpack the source and edit the following files:

tar -zxvf sqoop-1.4.5.tar.gz; cd sqoop-1.4.5

 

Edit build.xml so that the hadoopversion=200 block reads as follows:

<elseif>

    <equals arg1="${hadoopversion}" arg2="200" />

    <then>

        <property name="hadoop.version" value="2.5.0" />

        <property name="hbase94.version" value="0.94.2" />

        <property name="zookeeper.version" value="3.4.6" />

        <property name="hadoop.version.full" value="2.5.0" />

        <property name="hcatalog.version" value="0.13.0" />

        <property name="hbasecompatprofile" value="2" />

        <property name="avrohadoopprofile" value="2" />

    </then>

</elseif>

 

On lines 550 and 568 of build.xml, change debug="${javac.debug}"> to debug="${javac.debug}" includeantruntime="on">

 

Edit src/test/org/apache/sqoop/TestExportUsingProcedure.java:

On line 244, change

sql.append(StringUtils.repeat("?", ",  ",

to:

sql.append(StringUtils.repeat("?,",

 

After making the changes above, run: ant package

If the build succeeds it ends with: BUILD SUCCESSFUL

 

Step 4: package the Sqoop distribution we need

A successful build produces sqoop-1.4.5.bin__hadoop-2.5.0 under the sqoop-1.4.5/build directory by default.

tar -zcf sqoop-1.4.5.bin__hadoop-2.5.0.tar.gz sqoop-1.4.5.bin__hadoop-2.5.0

 

Done. Reference: http://www.aboutyun.com/thread-8462-1-1.html

 

8. Problem:

Running the command:

# sqoop export --connect jdbc:mysql://10.40.214.9:3306/emails \

--username hive --password hive --table izhenxin \

--export-dir /user/hive/warehouse/maillog.db/izhenxin_total

 

Caused by: java.lang.RuntimeException: Can't parse input data: '@QQ.com'

        at izhenxin.__loadFromFields(izhenxin.java:378)

        at izhenxin.parse(izhenxin.java:306)

        at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)

        ... 10 more

Caused by: java.lang.NumberFormatException: For input string: "@QQ.com"

15/01/19 23:15:21 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 46.0078 seconds (0 bytes/sec)

15/01/19 23:15:21 INFO mapreduce.ExportJobBase: Exported 0 records.

15/01/19 23:15:21 ERROR tool.ExportTool: Error during export: Export job failed!

Cause: the --export-dir was not the full path to the data file. The full path is actually:

# hadoop fs -ls /user/hive/warehouse/maillog.db/izhenxin_total/

Found 1 items

-rw-r--r--   2 root supergroup       2450 2015-01-19 23:50 /user/hive/warehouse/maillog.db/izhenxin_total/000000_0
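Before rerunning the export it can help to peek at the file and confirm the field delimiter Hive actually wrote (assumed below to be a tab, matching the --input-fields-terminated-by option used next):

# hadoop fs -cat /user/hive/warehouse/maillog.db/izhenxin_total/000000_0 | head -3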

Solution:

# sqoop export --connect jdbc:mysql://10.40.214.9:3306/emails --username hive --password hive --table izhenxin --export-dir /user/hive/warehouse/maillog.db/izhenxin_total/000000_0 --input-fields-terminated-by '\t'

 

This still fails:

mysql> create table izhenxin(id int(10) unsigned NOT NULL AUTO_INCREMENT, mail_domain varchar(32) DEFAULT NULL, sent_number int, bounced_number int, deffered_number int, PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='sent mail';   ## the original table

## Solution: drop the table above, then create the one below so its columns match the Hive table layout

mysql> create table izhenxin(mail_domain varchar(32) DEFAULT NULL, sent_number int, bounced_number int, deffered_number int) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='sent mail';

## Final output:

15/01/20 00:05:51 INFO mapreduce.ExportJobBase: Transferred 6.9736 KB in 26.4035 seconds (270.4564 bytes/sec)

15/01/20 00:05:51 INFO mapreduce.ExportJobBase: Exported 132 records.

 

mysql> select count(1) from izhenxin;

+----------+

| count(1) |

+----------+

|      132 |

+----------+

1 row in set (0.00 sec)

Done!

 

9. Problem:

15/01/27 10:48:56 INFO mapreduce.Job: Task Id : attempt_1420738964879_0244_m_000003_0, Status : FAILED

AttemptID:attempt_1420738964879_0244_m_000003_0 Timed out after 600 secs

15/01/27 10:48:57 INFO mapreduce.Job:  map 75% reduce 0%

15/01/27 10:49:08 INFO mapreduce.Job:  map 100% reduce 0%

15/01/27 10:59:26 INFO mapreduce.Job: Task Id : attempt_1420738964879_0244_m_000003_1, Status : FAILED

AttemptID:attempt_1420738964879_0244_m_000003_1 Timed out after 600 secs

15/01/27 10:59:27 INFO mapreduce.Job:  map 75% reduce 0%

15/01/27 10:59:38 INFO mapreduce.Job:  map 100% reduce 0%

15/01/27 11:09:55 INFO mapreduce.Job: Task Id : attempt_1420738964879_0244_m_000003_2, Status : FAILED

AttemptID:attempt_1420738964879_0244_m_000003_2 Timed out after 600 secs

 

Cause: the task attempts are timing out.

 

Solution:

vim mapred-site.xml

<property>

 <name>mapred.task.timeout</name>

 <value>1800000</value> <!-- 30 minutes -->

</property>

 

Method 2:

Configuration conf = new Configuration();

long milliSeconds = 1000*60*60; // default is 600000; any value can be used

conf.setLong("mapred.task.timeout", milliSeconds);

 

Method 3:

set mapred.tasktracker.expiry.interval=1800000;

set mapred.task.timeout=1800000;
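A fourth option, when the job driver goes through ToolRunner, is to override the setting per job on the command line (a sketch; the jar and class names below are placeholders):

# hadoop jar my-job.jar com.example.MyJob -Dmapred.task.timeout=1800000 <input> <output>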


Problem:

15/02/01 03:03:37 ERROR manager.SqlManager: Error reading from database: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@4c0f73a3 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.

java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@4c0f73a3 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.

 

Solution: replace mysql-connector-java-5.1.18-bin.jar with mysql-connector-java-5.1.32-bin.jar.
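A sketch of the swap (the Sqoop lib path below is an assumption; use wherever the old connector jar actually lives, then rerun the job):

# mv /var/data/sqoop-1.4.5/lib/mysql-connector-java-5.1.18-bin.jar /opt/
# cp mysql-connector-java-5.1.32-bin.jar /var/data/sqoop-1.4.5/lib/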

 

Problem:

On 2015-04-24 the OpenStack virtual machines all went down, leaving Hadoop in an abnormal state and the datanodes unable to start.

Solution:

Reformat the namenode, then delete hdfs/data and grant write permission on it:

/var/data/hadoop/bin/hadoop namenode -format

rm -rf /var/hadoop/tmp/dfs/data #run this and the next command on every node

chmod -R 777 /var/hadoop/tmp/dfs/data

/var/data/hadoop/sbin/hadoop-daemons.sh start datanode

 

hdfs haadmin -transitionToActive namenode1  # if both namenodes are in standby state, use this command to promote one to active
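To check each namenode's HA state before and after the transition (namenode1 follows the naming above; a second ID such as namenode2 is an assumption and should match the dfs.ha.namenodes.* setting in hdfs-site.xml):

# hdfs haadmin -getServiceState namenode1
# hdfs haadmin -getServiceState namenode2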

