
apache hadoop HA installation

DBILITY 2018. 4. 17. 22:34

Let's work through this step by step, following the manuals below and the book [시작하세요! 하둡 프로그래밍 (Beginning Hadoop Programming) by 정재화 (Jung Jae-hwa)].

If you have an in-house or private Maven repository, pointing the build at it (via the pom or your Maven settings) shortens the build time considerably.
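For example, a mirror entry in ~/.m2/settings.xml avoids touching the Hadoop pom itself — a minimal sketch, where repo.example.com is a placeholder for your own Nexus/Artifactory host:

[root@big-master ~]# vi ~/.m2/settings.xml
<settings>
  <mirrors>
    <!-- hypothetical in-house mirror; replace the URL with your repository -->
    <mirror>
      <id>internal-mirror</id>
      <mirrorOf>central</mirrorOf>
      <url>http://repo.example.com/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
:wq!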

 

https://zookeeper.apache.org/doc/r3.4.11/zookeeperStarted.html#sc_RunningReplicatedZooKeeper

https://zookeeper.apache.org/doc/r3.4.11/zookeeperAdmin.html#sc_advancedConfiguration

http://hadoop.apache.org/docs/r2.7.5/hadoop-project-dist/hadoop-common/SingleCluster.html#Setup_passphraseless_ssh

http://hadoop.apache.org/docs/r2.7.5/hadoop-project-dist/hadoop-common/ClusterSetup.html

http://hadoop.apache.org/docs/r2.7.5/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

http://hadoop.apache.org/docs/r2.7.5/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html

http://hadoop.apache.org/docs/r2.7.5/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html

https://blog.joeyandres.com/2017/10/23/automate-hadoop-cluster-with-systemd/

Prepare five CentOS 7 servers.

http://www.dbility.com/25

 

Install Java

http://www.dbility.com/248

 

Install Maven (master server only)

http://www.dbility.com/237

 

Install Protocol Buffers

http://www.dbility.com/236

 

Install ZooKeeper

http://www.dbility.com/249

 

#Run on every server: add the hadoop user and put it in datagroup
[root@big-master ~]# adduser hadoop; echo 'hadoop' | passwd --stdin hadoop; usermod -G datagroup hadoop
Changing password for user hadoop.
passwd: all authentication tokens updated successfully.
[root@big-master ~]# ssh big-slave1 "adduser hadoop; echo 'hadoop' | passwd --stdin hadoop; usermod -G datagroup hadoop"
Changing password for user hadoop.
passwd: all authentication tokens updated successfully.
[root@big-master ~]# ssh big-slave2 "adduser hadoop; echo 'hadoop' | passwd --stdin hadoop; usermod -G datagroup hadoop"
Changing password for user hadoop.
passwd: all authentication tokens updated successfully.
[root@big-master ~]# ssh big-slave3 "adduser hadoop; echo 'hadoop' | passwd --stdin hadoop; usermod -G datagroup hadoop"
Changing password for user hadoop.
passwd: all authentication tokens updated successfully.
[root@big-master ~]# ssh big-slave4 "adduser hadoop; echo 'hadoop' | passwd --stdin hadoop; usermod -G datagroup hadoop"
Changing password for user hadoop.
passwd: all authentication tokens updated successfully.

[root@big-master ~]# su - hadoop

[hadoop@big-master ~]$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RdxFiyZUnMae62F4AiT3DjbrsIWcJ5KbZiyp1syn8II hadoop@big-master
The key's randomart image is:
+---[RSA 4096]----+
|         .o=.+o  |
|      . oo. *. . |
|       + .ooo..  |
|        =..oo    |
|     o +S* . .   |
|    o * + + =    |
|..+o + B   = .   |
|Eo=+*.. .   .    |
|o..*o            |
+----[SHA256]-----+

[hadoop@big-master ~]$ cd .ssh/
[hadoop@big-master .ssh]$ ls
id_rsa  id_rsa.pub
[hadoop@big-master .ssh]$ cp id_rsa.pub ./authorized_keys

[hadoop@big-master .ssh]$ ssh-copy-id hadoop@big-master
[hadoop@big-master .ssh]$ ssh-copy-id hadoop@big-slave1
[hadoop@big-master .ssh]$ ssh-copy-id hadoop@big-slave2
[hadoop@big-master .ssh]$ ssh-copy-id hadoop@big-slave3
[hadoop@big-master .ssh]$ ssh-copy-id hadoop@big-slave4
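
Before going further, it is worth confirming that passwordless ssh really works to every node — each iteration below should print the remote hostname without asking for a password (a quick check, not part of the original transcript):

[hadoop@big-master .ssh]$ for h in big-master big-slave1 big-slave2 big-slave3 big-slave4; do ssh $h hostname; done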

[hadoop@big-master .ssh]$ exit
logout
[root@big-master ~]# cd /usr/local/src/
[root@big-master src]# wget http://apache.tt.co.kr/hadoop/common/hadoop-2.7.6/hadoop-2.7.6-src.tar.gz
[root@big-master src]# wget https://dist.apache.org/repos/dist/release/hadoop/common/hadoop-2.7.6/hadoop-2.7.6-src.tar.gz.asc
[root@big-master src]# wget https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
[root@big-master src]# wget https://dist.apache.org/repos/dist/release/hadoop/common/hadoop-2.7.6/hadoop-2.7.6-src.tar.gz.mds
[root@big-master src]# gpg --import KEYS
[root@big-master src]# gpg --verify hadoop-2.7.6-src.tar.gz.asc
[root@big-master src]# sha1sum hadoop-2.7.6-src.tar.gz
d1390bec780b6695b2d18defa5e95296daa10220  hadoop-2.7.6-src.tar.gz
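The digest printed by sha1sum should match the SHA1 entry in the .mds file downloaded above; the .mds groups digests by algorithm in spaced uppercase hex, so compare by eye — something like the following, layout permitting:

[root@big-master src]# grep -i -A1 sha1 hadoop-2.7.6-src.tar.gz.mds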
[root@big-master src]# tar -zxpf hadoop-2.7.6-src.tar.gz
[root@big-master src]# cd hadoop-2.7.6-src
[root@big-master hadoop-2.7.6-src]# mvn clean package -Pdist,native -DskipTests -Dtar -Dmaven.javadoc.skip=true
#If the libraries are not already cached in a maven repository, the downloads take a considerable amount of time.
[INFO] Executing tasks

main:
     [exec] $ tar cf hadoop-2.7.6.tar hadoop-2.7.6
     [exec] $ gzip -f hadoop-2.7.6.tar
     [exec]
     [exec] Hadoop dist tar available at: /usr/local/src/hadoop-2.7.6-src/hadoop-dist/target/hadoop-2.7.6.tar.gz
     [exec]
[INFO] Executed tasks
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Skipping javadoc generation
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main 2.7.6 ........................... SUCCESS [02:43 min]
[INFO] Apache Hadoop Build Tools .......................... SUCCESS [01:43 min]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 39.549 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 19.611 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [  0.186 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 28.773 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 32.877 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [03:20 min]
[INFO] Apache Hadoop Auth ................................. SUCCESS [01:36 min]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 20.928 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [03:30 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [  1.465 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [ 15.461 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.078 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [01:24 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 11.995 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [01:08 min]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [  1.092 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [  0.078 s]
[INFO] hadoop-yarn ........................................ SUCCESS [  0.076 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [  5.390 s]
[INFO] hadoop-yarn-common ................................. SUCCESS [01:04 min]
[INFO] hadoop-yarn-server ................................. SUCCESS [  0.072 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [  1.430 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [  7.516 s]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [  0.731 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [  1.501 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [  5.819 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [  1.041 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [  1.226 s]
[INFO] hadoop-yarn-server-sharedcachemanager .............. SUCCESS [  0.902 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [  0.066 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [  0.619 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [  0.486 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [  0.072 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [  1.065 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [  4.533 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [  0.169 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [  3.875 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [  2.494 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [  0.746 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [  2.552 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [  1.503 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 27.856 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [  0.527 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [  1.270 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [  3.320 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [  9.886 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 14.867 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [  0.469 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [  0.908 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  1.046 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [  0.423 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [  0.217 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [  0.606 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [  9.236 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [  0.809 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [01:29 min]
[INFO] Apache Hadoop Azure support ........................ SUCCESS [ 18.418 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [  5.190 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.871 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [  1.788 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [  4.788 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.056 s]
[INFO] Apache Hadoop Distribution 2.7.6 ................... SUCCESS [ 14.387 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 24:48 min
[INFO] Finished at: 2018-04-18T20:19:24+09:00
[INFO] ------------------------------------------------------------------------

[root@big-master hadoop-2.7.6-src]# cd hadoop-dist/target/
[root@big-master target]# cp -rf hadoop-2.7.6 /bigdata/
[root@big-master target]# cd /bigdata/
[root@big-master bigdata]# chown -R hadoop.hadoop hadoop-2.7.6/
[root@big-master bigdata]# ln -s /bigdata/hadoop-2.7.6/ /hadoop
[root@big-master bigdata]# vi /etc/profile

export JAVA_HOME=/jdk
export HADOOP_HOME=/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"

export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
:wq!

[root@big-master bigdata]# rsync -az /etc/profile big-slave1:/etc/profile
[root@big-master bigdata]# rsync -az /etc/profile big-slave2:/etc/profile
[root@big-master bigdata]# rsync -az /etc/profile big-slave3:/etc/profile
[root@big-master bigdata]# rsync -az /etc/profile big-slave4:/etc/profile
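
To confirm the profile arrived and resolves correctly, each slave should echo /hadoop — a sanity check added here, assuming the remote login shell is bash:

[root@big-master bigdata]# for h in big-slave1 big-slave2 big-slave3 big-slave4; do ssh $h 'source /etc/profile; echo $HADOOP_HOME'; done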

[root@big-master bigdata]# su - hadoop
[hadoop@big-master ~]$ cd /bigdata/hadoop-2.7.6/etc/hadoop/
#Directories HDFS will use
[hadoop@big-master hadoop]$ mkdir -p /bigdata/repository/dfs/namenode
[hadoop@big-master hadoop]$ mkdir -p /bigdata/repository/dfs/datanode
[hadoop@big-master hadoop]$ mkdir -p /bigdata/repository/dfs/journalnode
[hadoop@big-master hadoop]$ mkdir -p /bigdata/repository/tmp
[hadoop@big-master hadoop]$ mkdir -p /bigdata/repository/yarn/nm-local-dir
[hadoop@big-master hadoop]$ mkdir -p /bigdata/repository/yarn/system/rmstore
[hadoop@big-master hadoop]$ chgrp datagroup /bigdata/repository/
#These can all be run in a single command per server (a loop alternative is sketched after these four commands)
[hadoop@big-master hadoop]$ ssh big-slave1 "mkdir -p /bigdata/repository/dfs/namenode; mkdir -p /bigdata/repository/dfs/datanode; \
mkdir -p /bigdata/repository/dfs/journalnode; mkdir -p /bigdata/repository/tmp; mkdir -p /bigdata/repository/yarn/nm-local-dir; \
mkdir -p /bigdata/repository/yarn/system/rmstore; chgrp datagroup /bigdata/repository"
[hadoop@big-master hadoop]$ ssh big-slave2 "mkdir -p /bigdata/repository/dfs/namenode; mkdir -p /bigdata/repository/dfs/datanode; \
mkdir -p /bigdata/repository/dfs/journalnode; mkdir -p /bigdata/repository/tmp; mkdir -p /bigdata/repository/yarn/nm-local-dir; \
mkdir -p /bigdata/repository/yarn/system/rmstore; chgrp datagroup /bigdata/repository"
[hadoop@big-master hadoop]$ ssh big-slave3 "mkdir -p /bigdata/repository/dfs/namenode; mkdir -p /bigdata/repository/dfs/datanode; \
mkdir -p /bigdata/repository/dfs/journalnode; mkdir -p /bigdata/repository/tmp; mkdir -p /bigdata/repository/yarn/nm-local-dir; \
mkdir -p /bigdata/repository/yarn/system/rmstore; chgrp datagroup /bigdata/repository"
[hadoop@big-master hadoop]$ ssh big-slave4 "mkdir -p /bigdata/repository/dfs/namenode; mkdir -p /bigdata/repository/dfs/datanode; \
mkdir -p /bigdata/repository/dfs/journalnode; mkdir -p /bigdata/repository/tmp; mkdir -p /bigdata/repository/yarn/nm-local-dir; \
mkdir -p /bigdata/repository/yarn/system/rmstore; chgrp datagroup /bigdata/repository"
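
The four per-slave commands above can also be collapsed into a loop with the same effect — a compact alternative, assuming bash on the remote side for the brace expansion:

[hadoop@big-master hadoop]$ for h in big-slave1 big-slave2 big-slave3 big-slave4; do \
ssh $h "mkdir -p /bigdata/repository/dfs/{namenode,datanode,journalnode} /bigdata/repository/tmp \
/bigdata/repository/yarn/nm-local-dir /bigdata/repository/yarn/system/rmstore; chgrp datagroup /bigdata/repository"; done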

[hadoop@big-master hadoop]$ vi hadoop-env.sh
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/jdk
export HADOOP_HOME=/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"

export HADOOP_PID_DIR=${HADOOP_HOME}/pids
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
:wq!

[hadoop@big-master hadoop]$ vi yarn-env.sh
export JAVA_HOME=/jdk
export HADOOP_HOME=/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export YARN_OPTS="$YARN_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
:wq!

[hadoop@big-master hadoop]$ vi slaves
big-slave1
big-slave2
big-slave3
big-slave4
:wq!

[hadoop@big-master hadoop]$ cp slaves include_datanode
[hadoop@big-master hadoop]$ cp slaves include_nodemanager
[hadoop@big-master hadoop]$ touch exclude_datanode
[hadoop@big-master hadoop]$ touch exclude_nodemanager
[hadoop@big-master hadoop]$ vi core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-cluster</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>big-master:2181,big-slave1:2181,big-slave2:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/bigdata/repository/tmp</value>
  </property>
</configuration>
:wq!

[hadoop@big-master hadoop]$ vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.hosts</name>
    <value>/hadoop/etc/hadoop/include_datanode</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/hadoop/etc/hadoop/exclude_datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/bigdata/repository/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/bigdata/repository/dfs/datanode</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/bigdata/repository/dfs/journalnode</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.hadoop-cluster</name>
    <value>namenode1,namenode2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoop-cluster.namenode1</name>
    <value>big-master:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoop-cluster.namenode2</name>
    <value>big-slave1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hadoop-cluster.namenode1</name>
    <value>big-master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hadoop-cluster.namenode2</name>
    <value>big-slave1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://big-master:8485;big-slave1:8485;big-slave2:8485/hadoop-cluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.hadoop-cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
:wq!

[hadoop@big-master hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@big-master hadoop]$ vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.child.java.opts</name>
    <value>-Djava.security.egd=file:/dev/../dev/urandom</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>big-master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>big-master:19888</value>
  </property>
</configuration>
:wq!

[hadoop@big-master hadoop]$ vi yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.nodes.include-path</name>
    <value>/hadoop/etc/hadoop/include_nodemanager</value>
  </property>
  <property>
    <name>yarn.resourcemanager.nodes.exclude-path</name>
    <value>/hadoop/etc/hadoop/exclude_nodemanager</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/bigdata/repository/yarn/nm-local-dir</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>big-master</value>
  </property>
<!--
  <property>
    <name>yarn.web-proxy.address</name>
    <value>0.0.0.0:8089</value>
  </property>
-->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-state-store.parent-path</name>
    <value>/bigdata/repository/yarn/system/rmstore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-cluster</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>big-master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>big-slave1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>big-master:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>big-slave1:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>big-master:2181,big-slave1:2181,big-slave2:2181</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://big-master:19888/jobhistory/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property> 
</configuration>
:wq!

[hadoop@big-master hadoop]$ cd /bigdata/
[hadoop@big-master bigdata]$ ls
apache-maven-3.5.3  hadoop-2.7.6  jdk1.8.0_162  repository  zookeeper-3.4.11
[hadoop@big-master bigdata]$ rsync -az /bigdata/hadoop-2.7.6/ big-slave1:/bigdata/hadoop-2.7.6/
[hadoop@big-master bigdata]$ rsync -az /bigdata/hadoop-2.7.6/ big-slave2:/bigdata/hadoop-2.7.6/
[hadoop@big-master bigdata]$ rsync -az /bigdata/hadoop-2.7.6/ big-slave3:/bigdata/hadoop-2.7.6/
[hadoop@big-master bigdata]$ rsync -az /bigdata/hadoop-2.7.6/ big-slave4:/bigdata/hadoop-2.7.6/
[hadoop@big-master bigdata]$ exit
logout
[root@big-master hadoop]#
[root@big-master hadoop]# ssh big-slave1 "ln -s /bigdata/hadoop-2.7.6 /hadoop"
[root@big-master hadoop]# ssh big-slave2 "ln -s /bigdata/hadoop-2.7.6 /hadoop"
[root@big-master hadoop]# ssh big-slave3 "ln -s /bigdata/hadoop-2.7.6 /hadoop"
[root@big-master hadoop]# ssh big-slave4 "ln -s /bigdata/hadoop-2.7.6 /hadoop"

[hadoop@big-master hadoop]$ hdfs zkfc -formatZK

[hadoop@big-master hadoop]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /hadoop/hadoop/hadoop-hadoop-journalnode-big-master.out

[hadoop@big-master hadoop]$ ssh big-slave1
[hadoop@big-slave1 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /hadoop/hadoop/hadoop-hadoop-journalnode-big-slave1.out
[hadoop@big-slave1 ~]$ exit
logout
Connection to big-slave1 closed.
[hadoop@big-master hadoop]$ ssh big-slave2
[hadoop@big-slave2 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /hadoop/hadoop/hadoop-hadoop-journalnode-big-slave2.out
[hadoop@big-slave2 ~]$ exit
logout
Connection to big-slave2 closed.

[hadoop@big-master hadoop]$ hdfs namenode -format
[hadoop@big-master hadoop]$ hadoop-daemon.sh start namenode
starting namenode, logging to /hadoop/hadoop/hadoop-hadoop-namenode-big-master.out

[hadoop@big-master hadoop]$ hadoop-daemon.sh start zkfc
starting zkfc, logging to /hadoop/hadoop/hadoop-hadoop-zkfc-big-master.out

# Note: use hadoop-daemons.sh (plural), not hadoop-daemon.sh
[hadoop@big-master hadoop]$ hadoop-daemons.sh start datanode
big-slave1: starting datanode, logging to /hadoop/hadoop/hadoop-hadoop-datanode-big-slave1.out
big-slave2: starting datanode, logging to /hadoop/hadoop/hadoop-hadoop-datanode-big-slave2.out
big-slave3: starting datanode, logging to /hadoop/hadoop/hadoop-hadoop-datanode-big-slave3.out
big-slave4: starting datanode, logging to /hadoop/hadoop/hadoop-hadoop-datanode-big-slave4.out
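
At this point jps should show NameNode, DFSZKFailoverController and JournalNode on big-master, and DataNode (plus JournalNode on big-slave1/big-slave2) on the slaves — a check added here; /jdk/bin/jps assumes the JDK symlink from the Java installation step:

[hadoop@big-master hadoop]$ jps
[hadoop@big-master hadoop]$ for h in big-slave1 big-slave2 big-slave3 big-slave4; do echo $h; ssh $h /jdk/bin/jps; done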

[hadoop@big-master hadoop]$ ssh big-slave1

[hadoop@big-slave1 ~]$ hdfs namenode -bootstrapStandby

[hadoop@big-slave1 ~]$ hadoop-daemon.sh start namenode
[hadoop@big-slave1 ~]$ hadoop-daemon.sh start zkfc
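
Both NameNodes should now report an HA state — one active, one standby (which one wins the initial election can vary):

[hadoop@big-slave1 ~]$ hdfs haadmin -getServiceState namenode1
[hadoop@big-slave1 ~]$ hdfs haadmin -getServiceState namenode2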

#The standby resource manager has to be started manually...
#Assuming a standby RM is up and running, the Standby automatically redirects all web requests to the Active, except for the “About” page.
[hadoop@big-master hadoop]$ ssh big-slave1
[hadoop@big-slave1 ~]$ vi /hadoop/etc/hadoop/yarn-site.xml
#Change big-master to big-slave1, then save
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>big-slave1</value>
  </property>
:wq!


[hadoop@big-master hadoop]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /hadoop/logs/yarn-hadoop-resourcemanager-big-master.out
big-slave1: starting nodemanager, logging to /hadoop/logs/yarn-hadoop-nodemanager-big-slave1.out
big-slave3: starting nodemanager, logging to /hadoop/logs/yarn-hadoop-nodemanager-big-slave3.out
big-slave2: starting nodemanager, logging to /hadoop/logs/yarn-hadoop-nodemanager-big-slave2.out
big-slave4: starting nodemanager, logging to /hadoop/logs/yarn-hadoop-nodemanager-big-slave4.out

[hadoop@big-slave1 ~]$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /hadoop/logs/yarn-hadoop-resourcemanager-big-slave1.out
[hadoop@big-slave1 ~]$
[hadoop@big-slave1 ~]$ yarn rmadmin -getServiceState rm1
active
[hadoop@big-slave1 ~]$ yarn rmadmin -getServiceState rm2
standby
[hadoop@big-slave1 ~]$ exit
[hadoop@big-master hadoop]$ yarn rmadmin -transitionToActive rm2 --forcemanual
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.

It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.

You may abort safely by answering 'n' or hitting ^C now.

Are you sure you want to continue? (Y or N) Y
18/04/19 11:48:34 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@8b87145
[hadoop@big-master hadoop]$ yarn rmadmin -getServiceState rm2
active
[hadoop@big-master hadoop]$ yarn rmadmin -getServiceState rm1
standby

[hadoop@big-master hadoop]$ mr-jobhistory-daemon.sh start historyserver
[hadoop@big-master hadoop]$ hdfs dfsadmin -report
[hadoop@big-master hadoop]$ hdfs dfsadmin -refreshNodes
[hadoop@big-master hadoop]$ start-balancer.sh -threshold 5
or
[hadoop@big-master hadoop]$ hdfs balancer -threshold 5
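Here -threshold 5 asks the balancer to bring every DataNode's utilization within 5 percentage points of the cluster average; per-node usage can be eyeballed before and after with something like:

[hadoop@big-master hadoop]$ hdfs dfsadmin -report | grep -E 'Name:|DFS Used%'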
#Script for systemd
[hadoop@big-master hadoop]$ vi /hadoop/sbin/hadoop-service.sh
#!/bin/bash

start() {
        source "/etc/profile"
        start-dfs.sh
        start-yarn.sh
        ssh big-slave1 "cd /hadoop/sbin ; ./yarn-daemon.sh start resourcemanager"
        mr-jobhistory-daemon.sh start historyserver
}

stop() {
        source "/etc/profile"
        mr-jobhistory-daemon.sh stop historyserver
        ssh big-slave1 "cd /hadoop/sbin ; ./yarn-daemon.sh stop resourcemanager"
        stop-yarn.sh
        stop-dfs.sh
}

case $1 in
        start|stop) "$1" ;;
esac

exit 0

:wq!

[hadoop@big-master hadoop]$ chmod 755 /hadoop/sbin/hadoop-service.sh
[hadoop@big-master hadoop]$ exit
logout
[root@big-master bigdata]# vi /etc/systemd/system/hadoop.service
[Unit]
Description = hadoop dfs/yarn ( ver. 2.7.6 )
After = network.target zookeeper.service
Requires = network.target

[Service]
Type=oneshot
User=hadoop
Group=hadoop
ExecStart = /hadoop/sbin/hadoop-service.sh start
ExecStop = /hadoop/sbin/hadoop-service.sh stop
RemainAfterExit=yes


[Install]
WantedBy=multi-user.target
:wq!
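After dropping in a new unit file, systemd needs to reload its configuration (easy to forget):

[root@big-master bigdata]# systemctl daemon-reload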
[root@big-master bigdata]# chmod 644 /etc/systemd/system/hadoop.service
[root@big-master bigdata]# systemctl stop hadoop.service
[root@big-master bigdata]# systemctl status hadoop.service
● hadoop.service - hadoop dfs/yarn ( ver. 2.7.6 )
   Loaded: loaded (/etc/systemd/system/hadoop.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

May 01 18:18:42 big-master hadoop-service.sh[13618]: big-slave4: stopping datanode
May 01 18:18:42 big-master hadoop-service.sh[13618]: big-slave2: stopping datanode
May 01 18:18:42 big-master hadoop-service.sh[13618]: big-slave3: stopping datanode
May 01 18:18:45 big-master hadoop-service.sh[13618]: Stopping journal nodes [big-master big-slave1 big-slave2]
May 01 18:18:50 big-master hadoop-service.sh[13618]: big-slave1: stopping journalnode
May 01 18:18:50 big-master hadoop-service.sh[13618]: big-slave2: stopping journalnode
May 01 18:18:50 big-master hadoop-service.sh[13618]: big-master: stopping journalnode
May 01 18:18:51 big-master hadoop-service.sh[13618]: Stopping ZK Failover Controllers on NN hosts [big-master big-slave1]
May 01 18:18:57 big-master hadoop-service.sh[13618]: big-slave1: stopping zkfc
May 01 18:18:57 big-master systemd[1]: Stopped hadoop dfs/yarn ( ver. 2.7.6 ).
[root@big-master bigdata]# journalctl -xe
-- A session with the ID 54 has been terminated.
May 01 18:18:57 big-master hadoop-service.sh[13618]: big-slave1: stopping zkfc
May 01 18:18:57 big-master sshd[14157]: Received disconnect from 192.168.100.180 port 56492:11: disconnected by user
May 01 18:18:57 big-master sshd[14157]: Disconnected from 192.168.100.180 port 56492
May 01 18:18:57 big-master hadoop-service.sh[13618]: big-master: stopping zkfc
May 01 18:18:57 big-master systemd[1]: Stopped hadoop dfs/yarn ( ver. 2.7.6 ).
-- Subject: Unit hadoop.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit hadoop.service has finished shutting down.
May 01 18:18:57 big-master sshd[14152]: pam_unix(sshd:session): session closed for user hadoop
May 01 18:18:57 big-master polkitd[684]: Unregistered Authentication Agent for unix-process:13612:642208 (system bus name :1.205, obje
May 01 18:18:57 big-master systemd-logind[688]: Removed session 58.
-- Subject: Session 58 has been terminated
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 58 has been terminated.
May 01 18:18:57 big-master systemd[1]: Removed slice User Slice of hadoop.
-- Subject: Unit user-1001.slice has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-1001.slice has finished shutting down.
May 01 18:18:57 big-master systemd[1]: Stopping User Slice of hadoop.
-- Subject: Unit user-1001.slice has begun shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-1001.slice has begun shutting down.
[root@big-master bigdata]# systemctl enable hadoop.service
Created symlink from /etc/systemd/system/multi-user.target.wants/hadoop.service to /etc/systemd/system/hadoop.service.
[root@big-master bigdata]# ls /etc/systemd/system
basic.target.wants                           default.target        hadoop.service           sysinit.target.wants        zookeeper.service
dbus-org.freedesktop.NetworkManager.service  default.target.wants  multi-user.target.wants  system-update.target.wants
dbus-org.freedesktop.nm-dispatcher.service   getty.target.wants    sockets.target.wants     zookeeper
#Reboot the system to confirm everything comes up at boot
[root@big-master bigdata]# systemctl reboot

#After the reboot, verify HA from the zookeeper cli
[zookeeper@big-master ~]$ zkCli.sh
Connecting to localhost:2181
2018-05-06 19:15:00,216 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0, built on 11/01/2017 18:06 GMT
2018-05-06 19:15:00,221 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=big-master
2018-05-06 19:15:00,221 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_162
2018-05-06 19:15:00,225 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2018-05-06 19:15:00,225 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/bigdata/jdk1.8.0_162/jre
2018-05-06 19:15:00,225 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/zookeeper/bin/../build/classes:/zookeeper/bin/../build/lib/*.jar:/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/zookeeper/bin/../lib/log4j-1.2.16.jar:/zookeeper/bin/../lib/jline-0.9.94.jar:/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/zookeeper/bin/../zookeeper-3.4.11.jar:/zookeeper/bin/../src/java/lib/*.jar:/zookeeper/bin/../conf::/jdk/lib:/bigdata/lib
2018-05-06 19:15:00,225 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2018-05-06 19:15:00,225 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2018-05-06 19:15:00,226 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<na>
2018-05-06 19:15:00,226 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2018-05-06 19:15:00,226 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2018-05-06 19:15:00,226 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-693.21.1.el7.x86_64
2018-05-06 19:15:00,226 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=zookeeper
2018-05-06 19:15:00,226 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/home/zookeeper
2018-05-06 19:15:00,227 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/home/zookeeper
2018-05-06 19:15:00,228 [myid:] - INFO  [main:ZooKeeper@441] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@69d0a921
Welcome to ZooKeeper!
2018-05-06 19:15:00,263 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1035] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2018-05-06 19:15:00,354 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@877] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2018-05-06 19:15:00,367 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1302] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10000004e850008, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[bigdata, zookeeper, yarn-leader-election, hadoop-ha]
[zk: localhost:2181(CONNECTED) 1] quit
Quitting...
2018-05-06 19:16:50,585 [myid:] - INFO  [main:ZooKeeper@687] - Session: 0x10000004e850008 closed
2018-05-06 19:16:50,587 [myid:] - INFO  [main-EventThread:ClientCnxn$EventThread@520] - EventThread shut down for session: 0x10000004e850008
[zookeeper@big-master ~]$
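
For reference, the HA znodes themselves can be inspected from the same CLI; with the default layout the active NameNode's record lives under /hadoop-ha/<nameservice> — a sketch, not from the original session:

[zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha
[hadoop-cluster]
[zk: localhost:2181(CONNECTED) 1] get /hadoop-ha/hadoop-cluster/ActiveBreadCrumb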

That was long..
