Setting Up a Fully Distributed Hadoop Cluster

Notes:

1. All steps run as the Moss user by default; wherever root privileges are needed, the step says so and uses sudo.

2. All three VMs need their hostname set and the following host mappings added (see the sketch after this list):

# /etc/hosts

192.168.137.51 hadoop51

192.168.137.52 hadoop52

192.168.137.53 hadoop53

3. Each VM needs a static IP (also sketched below).
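A minimal sketch for notes 2 and 3, assuming CentOS 7 with an ens33 interface on a VMware NAT network (the interface name, gateway, and DNS values here are assumptions; adjust them to your environment). Run on each node with its own hostname and IP:

# Set the hostname (use hadoop52 / hadoop53 on the other nodes)
sudo hostnamectl set-hostname hadoop51

# /etc/sysconfig/network-scripts/ifcfg-ens33 (edit as root) -- static IP settings
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.137.51
NETMASK=255.255.255.0
GATEWAY=192.168.137.2
DNS1=192.168.137.2

# Apply the new network settings
sudo systemctl restart network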

Cluster plan

          hadoop51                hadoop52                        hadoop53
HDFS      NameNode, DataNode      DataNode                        SecondaryNameNode, DataNode
YARN      NodeManager             ResourceManager, NodeManager    NodeManager

Configure the VM

Create module and software directories under /opt:
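One way to create them and hand ownership to the Moss user (the chown matches the Moss Moss ownership shown in the listing below):

sudo mkdir /opt/module /opt/software

sudo chown Moss:Moss /opt/module /opt/software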

[Moss@hadoop51 opt]$ ll
total 0
drwxr-xr-x. 4 Moss Moss 46 Aug 18 21:40 module // install directory
drwxr-xr-x. 2 Moss Moss 108 Aug 18 21:23 software // holds the installer packages

Put the installer packages hadoop-3.3.4.tar.gz and jdk-8u212-linux-x64.tar.gz into software:

[Moss@hadoop51 software]$ ll                                                                
total 881960
-rw-rw-r--. 1 Moss Moss 12649765 Aug 18 21:22 apache-zookeeper-3.7.1-bin.tar.gz
-rw-rw-r--. 1 Moss Moss 695457782 Aug 18 21:23 hadoop-3.3.4.tar.gz
-rw-rw-r--. 1 Moss Moss 195013152 Aug 18 21:23 jdk-8u212-linux-x64.tar.gz

Extract hadoop-3.3.4.tar.gz and jdk-8u212-linux-x64.tar.gz into module.

Command: tar -zxvf xxx.tar.gz -C ../module (working directory: software)
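Spelled out for the two archives (run from /opt/software):

tar -zxvf hadoop-3.3.4.tar.gz -C ../module

tar -zxvf jdk-8u212-linux-x64.tar.gz -C ../module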

[Moss@hadoop51 module]$ ll                                                                  
total 0
drwxr-xr-x. 12 Moss Moss 239 Aug 19 00:35 hadoop-3.3.4
drwxr-xr-x. 7 Moss Moss 245 Apr 2 2019 jdk1.8.0_212

Configure environment variables

We put these in a drop-in file instead of editing /etc/profile directly, following the Linux convention that custom settings belong under /etc/profile.d.

cd /etc/profile.d

sudo vim env.sh (any name works, as long as the suffix is .sh)

Add the following:

# JDK environment
export JAVA_HOME=/opt/module/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin
# Hadoop environment variables
export HADOOP_HOME=/opt/module/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Reload (new login shells pick the file up automatically, since /etc/profile sources every .sh file under /etc/profile.d):

source /etc/profile

Verify:

hadoop version

[Moss@hadoop51 profile.d]$ hadoop version
Hadoop 3.3.4
Source code repository https://github.com/apache/hadoop.git -r a585a73c3e02ac62350c136643a5e7f6095a3dbb
Compiled by stevel on 2022-07-29T12:32Z
Compiled with protoc 3.7.1
From source with checksum fb9dd8918a7b8a5b430d61af858f6ec
This command was run using /opt/module/hadoop-3.3.4/share/hadoop/common/hadoop-common-3.3.4.jar

java -version

[Moss@hadoop51 profile.d]$ java -version                                                    
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)

Clone this VM twice to create hadoop52 and hadoop53 (remember to change the static IP and hostname on each clone).

Configure passwordless SSH

Working directory: anywhere.

Run the following on all three VMs (accept the ssh-keygen defaults by pressing Enter at each prompt):

ssh-keygen -t rsa

ssh-copy-id hadoop51

ssh-copy-id hadoop52

ssh-copy-id hadoop53
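A quick check that key-based login works from the current node (it should print each hostname without prompting for a password):

for host in hadoop51 hadoop52 hadoop53; do ssh "$host" hostname; done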

Write a distribution (rsync) script

cd /home/Moss/bin (create the directory if it doesn't exist; Moss is my username)

[Moss@hadoop51 bin]$ ll                                                                     
total 12
-rwxr--r--. 1 Moss Moss 134 Aug 19 01:31 jpsall.sh
-rwxr--r--. 1 Moss Moss 1071 Aug 19 02:08 mycluster.sh
-rwxr--r--. 1 Moss Moss 653 Aug 19 00:33 my_rsync.sh

vim my_rsync.sh

#!/bin/bash
# Require at least one argument
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit
fi
# Distribution logic:
# loop over every other machine in the cluster
for host in hadoop52 hadoop53
do
    # loop over everything to be sent, one item at a time
    for file in "$@"
    do
        # only send things that actually exist
        if [ -e "$file" ]
        then
            # 1. resolve the parent directory (following symlinks)
            pdir=$(cd -P "$(dirname "$file")"; pwd)
            # 2. get the name of the item being sent
            fname=$(basename "$file")
            # 3. make sure the target directory exists on the remote host
            ssh "$host" "mkdir -p $pdir"
            # 4. send it (rsync only transfers differences)
            rsync -av "$pdir/$fname" "$host:$pdir"
        else
            echo "$file does not exist"
        fi
    done
done
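Usage: pass one or more paths as arguments. For example, to push this scripts directory itself to the other nodes:

my_rsync.sh /home/Moss/bin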

vim jpsall.sh

#!/bin/bash

for host in hadoop51 hadoop52 hadoop53
do
    echo "=============== $host ==============="
    ssh "$host" jps
done

Make the scripts executable (on CentOS the default ~/.bash_profile already adds $HOME/bin to the PATH, so the scripts can then be invoked by name):

chmod 744 jpsall.sh

chmod 744 my_rsync.sh

Cluster configuration

cd /opt/module/hadoop-3.3.4/etc/hadoop

Next we configure core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, and workers.

core-site.xml

  • designates the NameNode
  • sets the storage location for metadata and block data

vim core-site.xml

<configuration>
    <!-- Address of the NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop51:8020</value>
    </property>
    <!-- Base directory for Hadoop data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.3.4/data</value>
    </property>
    <!-- Static user for the HDFS web UI: Moss -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>Moss</value>
    </property>
</configuration>

hdfs-site.xml

  • sets the NameNode web UI address
  • designates the SecondaryNameNode node

vim hdfs-site.xml

<configuration>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop51:9870</value>
    </property>
    <!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop53:9868</value>
    </property>
</configuration>

yarn-site.xml

  • routes MapReduce through the shuffle service
  • sets the ResourceManager address
  • whitelists inherited environment variables
  • enables log aggregation

vim yarn-site.xml

<configuration>
    <!-- Use mapreduce_shuffle as the NodeManager auxiliary service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Host running the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop52</value>
    </property>
    <!-- Environment variables containers inherit -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Log server address (must match the JobHistoryServer on hadoop51 below) -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://hadoop51:19888/jobhistory/logs</value>
    </property>
    <!-- Keep aggregated logs for 7 days -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>

mapred-site.xml

  • runs MapReduce jobs on YARN
  • sets the history server addresses

vim mapred-site.xml

<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistoryServer RPC address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop51:10020</value>
    </property>
    <!-- JobHistoryServer web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop51:19888</value>
    </property>
</configuration>

workers

  • lists the worker (DataNode) nodes

vim workers (the file must contain no blank lines or trailing spaces, or Hadoop will misread the hostnames)

hadoop51

hadoop52

hadoop53

Sync the configuration to the other machines:

cd /opt/module/hadoop-3.3.4/etc

my_rsync.sh hadoop
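A quick spot check that the sync landed (it should print the three worker hostnames):

ssh hadoop52 cat /opt/module/hadoop-3.3.4/etc/hadoop/workers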

Start the cluster
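Before the very first start, format the NameNode on hadoop51 (run this exactly once; reformatting later destroys the cluster's metadata, so if you ever must redo it, stop all daemons and remove the data and logs directories on every node first):

[Moss@hadoop51 ~]$ hdfs namenode -format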

Write a cluster start/stop script:

cd /home/Moss/bin

vim mycluster.sh

#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit
fi

case $1 in
"start")
    echo " ============= starting the hadoop cluster ================"
    echo " --------------- starting hdfs ---------------"
    ssh hadoop51 "/opt/module/hadoop-3.3.4/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh hadoop52 "/opt/module/hadoop-3.3.4/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh hadoop51 "/opt/module/hadoop-3.3.4/bin/mapred --daemon start historyserver"
;;
"stop")
    echo " ============== stopping the hadoop cluster ================"
    echo " --------------- stopping historyserver ---------------"
    ssh hadoop51 "/opt/module/hadoop-3.3.4/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh hadoop52 "/opt/module/hadoop-3.3.4/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh hadoop51 "/opt/module/hadoop-3.3.4/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac

chmod 744 mycluster.sh

mycluster.sh start

[Moss@hadoop51 bin]$ mycluster.sh start
============= starting the hadoop cluster ================
--------------- starting hdfs ---------------
Starting namenodes on [hadoop51]
Starting datanodes
Starting secondary namenodes [hadoop53]
--------------- starting yarn ---------------
Starting resourcemanager
Starting nodemanagers
--------------- starting historyserver ---------------

jpsall.sh

[Moss@hadoop51 bin]$ jpsall.sh                                                              
=============== hadoop51 ===============
21040 NameNode
21465 NodeManager
21689 Jps
21627 JobHistoryServer
21196 DataNode
=============== hadoop52 ===============
19296 ResourceManager
19425 NodeManager
19763 Jps
19119 DataNode
=============== hadoop53 ===============
17793 NodeManager
17908 Jps
17598 DataNode
17710 SecondaryNameNode
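With every daemon accounted for, the web UIs make a final sanity check: the NameNode at http://hadoop51:9870 and the JobHistoryServer at http://hadoop51:19888 (both set in the configs above), plus the ResourceManager on hadoop52 at port 8088 (YARN's default web port). A quick probe from any node, each line printing 200 when the UI is up:

curl -s -o /dev/null -w "%{http_code}\n" http://hadoop51:9870   # NameNode UI
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop52:8088   # ResourceManager UI
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop51:19888  # JobHistoryServer UI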