
Before installing Zookeeper, make sure the hostnames (optional) and the /etc/hosts file have been updated on every node, and that the JDK is installed.
vi /etc/hosts
- 10.5.172.214 slave-01
- 10.5.172.215 slave-02
- 10.5.172.216 slave-03
Changes to /etc/hosts take effect immediately; if you also changed the hostname, run service network restart (or re-log in) so the new name is picked up.
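A quick sanity check of the prerequisites might look like the following (a minimal sketch; slave-02 is just one of the entries above, adjust to your own nodes):
ping -c 1 slave-02
java -version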
1. Install Zookeeper
Use the tar -zxvf command to extract the .tar.gz archive. In this setup Zookeeper is installed under /home/, so the extracted directory is /home/zookeeper-3.4.9. It is best to keep the Zookeeper installation path identical on all three nodes (slave-01, slave-02, slave-03).
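For example, assuming the downloaded release archive is named zookeeper-3.4.9.tar.gz and already sits in /home/ (an assumption about the file name and location):
cd /home/
tar -zxvf zookeeper-3.4.9.tar.gz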
2. Configure the Zookeeper environment variables
After Zookeeper is installed, the next step is to configure its environment variables and then run source /etc/profile to apply the changes, as shown below:
vi /etc/profile
- #ZOOKEEPER
- export ZOOKEEPER=/home/zookeeper-3.4.9
- export PATH=$PATH:$ZOOKEEPER/bin
source /etc/profile
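To confirm the variables took effect in the current shell, something like the following should print the install path and the location of the zkServer.sh script:
echo $ZOOKEEPER
which zkServer.sh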
3. Edit the Zookeeper configuration file
First make a copy of /home/zookeeper-3.4.9/conf/zoo_sample.cfg and rename it to zoo.cfg, then edit zoo.cfg so that it looks like the following:
cd zookeeper-3.4.9/conf/
cp zoo_sample.cfg zoo.cfg
- # The number of milliseconds of each tick
- tickTime=2000
- # The number of ticks that the initial
- # synchronization phase can take
- initLimit=10
- # The number of ticks that can pass between
- # sending a request and getting an acknowledgement
- syncLimit=5
- # the directory where the snapshot is stored.
- # do not use /tmp for storage, /tmp here is just
- # example sakes.
- dataDir=/home/sl/data
- dataLogDir=/home/sl/log
- # the port at which the clients will connect
- clientPort=2181
- # the maximum number of client connections.
- # increase this if you need to handle more clients
- #maxClientCnxns=60
- #
- # Be sure to read the maintenance section of the
- # administrator guide before turning on autopurge.
- #
- # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
- #
- # The number of snapshots to retain in dataDir
- #autopurge.snapRetainCount=3
- # Purge task interval in hours
- # Set to "0" to disable auto purge feature
- #autopurge.purgeInterval=1
- server.1=slave-01:3333:4444
- server.2=slave-02:3333:4444
- server.3=slave-03:3333:4444
server.A=B:C:D, where A is a number identifying which server this is; B is the server's IP address (or hostname); C is the port this server uses to exchange information with the cluster's Leader; and D is the port the servers use to talk to each other during leader election, i.e. when the current Leader goes down and a new one must be elected. In a pseudo-cluster configuration, B is the same for every instance, so each Zookeeper instance must be given different C and D port numbers to avoid conflicts.
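For example, a single-host pseudo-cluster could use entries like the following (a sketch only; the localhost addresses and port numbers are illustrative assumptions, and each instance would also need its own dataDir, clientPort and myid):
- server.1=localhost:2888:3888
- server.2=localhost:2889:3889
- server.3=localhost:2890:3890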
Create the directories referenced by dataDir and dataLogDir (do this on every node):
mkdir -p /home/sl/data
mkdir -p /home/sl/log
4. Create the myid file
Create a file named myid under the dataDir directory. On each machine, write into this file the number A from the matching server.A entry in zoo.cfg:
echo "1" > /home/sl/data/myid   # on slave-01
echo "2" > /home/sl/data/myid   # on slave-02
echo "3" > /home/sl/data/myid   # on slave-03
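If passwordless SSH from the current machine to all three nodes is already set up (an assumption, not part of the original steps), the three files can also be written in one pass:
for i in 1 2 3; do ssh slave-0$i "echo $i > /home/sl/data/myid"; done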
5. Start Zookeeper
Run zkServer.sh start to start Zookeeper. Note that, unlike starting Hadoop from the Master node, Zookeeper has to be started separately on every node; zkServer.sh stop stops it.
You can use the jps command to check whether Zookeeper started successfully (look for the QuorumPeerMain process), and zkServer.sh status to check each node's role in the cluster, as shown below (a sketch for starting and checking all nodes over SSH follows the sample output):
- #10.5.172.214
- JMX enabled by default
- Using config: /home/zookeeper-3.4.9/bin/../conf/zoo.cfg
- Mode: follower
- #10.5.172.215
- JMX enabled by default
- Using config: /home/zookeeper-3.4.9/bin/../conf/zoo.cfg
- Mode: leader
- #10.5.172.216
- JMX enabled by default
- Using config: /home/zookeeper-3.4.9/bin/../conf/zoo.cfg
- Mode: follower
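If passwordless SSH between the nodes is available (again an assumption), the whole cluster can be started and checked from a single machine; a minimal sketch, using the full script path in case /etc/profile is not sourced in non-interactive SSH sessions:
for h in slave-01 slave-02 slave-03; do ssh $h "/home/zookeeper-3.4.9/bin/zkServer.sh start"; done
for h in slave-01 slave-02 slave-03; do echo -n "$h: "; ssh $h "/home/zookeeper-3.4.9/bin/zkServer.sh status 2>&1 | grep Mode"; done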
While the cluster is starting up, you may see an exception like the following in zookeeper.out:
- java.net.ConnectException: Connection refused
- at java.net.PlainSocketImpl.socketConnect(Native Method)
- at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
- at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
- at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
- at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
- at java.net.Socket.connect(Socket.java:579)
- at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
- at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
- at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
- at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
- at java.lang.Thread.run(Thread.java:745)
This exception can be safely ignored: it only means that some of the other nodes in the cluster have not started Zookeeper yet. Once all nodes are up and the election completes, the messages stop.
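If the errors keep appearing even after every node has been started, it may be worth confirming that the quorum and election ports configured above are actually reachable between the nodes, for example with telnet (assuming it is installed; any port-checking tool will do):
telnet slave-02 3333
telnet slave-02 4444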