RocketMQ Master-Slave Mode
As discussed earlier, only RocketMQ's multi-master multi-slave deployment with asynchronous replication is suitable for production use, so that is the only scenario tested here. In addition, messages are consumed in Push ordered mode.
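For reference, a Push-mode ordered consumer against the old com.alibaba.rocketmq client (matching the alibaba-rocketmq installation paths used below) could look like the following minimal sketch; the consumer group name is a placeholder of mine:

import com.alibaba.rocketmq.client.consumer.DefaultMQPushConsumer;
import com.alibaba.rocketmq.client.consumer.listener.ConsumeOrderlyContext;
import com.alibaba.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
import com.alibaba.rocketmq.client.consumer.listener.MessageListenerOrderly;
import com.alibaba.rocketmq.common.message.MessageExt;

import java.util.List;

public class OrderedPushConsumer {
    public static void main(String[] args) throws Exception {
        // Consumer group name is a placeholder for this test
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("ha_test_consumer_group");
        consumer.setNamesrvAddr("192.168.232.23:9876");
        consumer.subscribe("TopicTest1", "*");
        // An orderly listener consumes one queue at a time under a lock,
        // so ordering holds per queue rather than across the whole topic
        consumer.registerMessageListener(new MessageListenerOrderly() {
            @Override
            public ConsumeOrderlyStatus consumeMessage(List<MessageExt> msgs,
                                                       ConsumeOrderlyContext context) {
                for (MessageExt msg : msgs) {
                    System.out.printf("queue=%d body=%s%n",
                            msg.getQueueId(), new String(msg.getBody()));
                }
                return ConsumeOrderlyStatus.SUCCESS;
            }
        });
        consumer.start();
        System.out.println("Consumer started.");
    }
}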
Suppose the cluster uses a 2-master, 2-slave layout. That means starting four Brokers, with configuration files as follows:
# broker-a, master
brokerName=broker-a
brokerId=0
listenPort=10911
storePathRootDir=/home/arnes/alibaba-rocketmq/data/store-a-async
storePathCommitLog=/home/arnes/alibaba-rocketmq/data/store-a-async/commitlog
brokerRole=ASYNC_MASTER

# broker-a, slave
brokerName=broker-a
brokerId=1
listenPort=10921
storePathRootDir=/home/arnes/alibaba-rocketmq/data/store-a-async-slave
storePathCommitLog=/home/arnes/alibaba-rocketmq/data/store-a-async-slave/commitlog
brokerRole=SLAVE

# broker-b, master
brokerName=broker-b
brokerId=0
listenPort=20911
storePathRootDir=/home/arnes/alibaba-rocketmq/data/store-b-async
storePathCommitLog=/home/arnes/alibaba-rocketmq/data/store-b-async/commitlog
brokerRole=ASYNC_MASTER

# broker-b, slave
brokerName=broker-b
brokerId=1
listenPort=20921
storePathRootDir=/home/arnes/alibaba-rocketmq/data/store-b-async-slave
storePathCommitLog=/home/arnes/alibaba-rocketmq/data/store-b-async-slave/commitlog
brokerRole=SLAVE
In addition, the settings shared by all four brokers are:
brokerClusterName=DefaultCluster
brokerIP1=192.168.232.23
namesrvAddr=192.168.232.23:9876
deleteWhen=04
fileReservedTime=120
flushDiskType=ASYNC_FLUSH
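Assuming the four broker configurations above are saved as broker-a-m.properties, broker-a-s.properties, broker-b-m.properties, and broker-b-s.properties (the file names are my assumption), the NameServer and Brokers can be started roughly like this:

$ nohup sh mqnamesrv &
$ nohup sh mqbroker -c broker-a-m.properties &
$ nohup sh mqbroker -c broker-a-s.properties &
$ nohup sh mqbroker -c broker-b-m.properties &
$ nohup sh mqbroker -c broker-b-s.properties &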
Everything else keeps its default value. With the NameServer and all four Brokers running, do a trial run of a Producer (a sketch of such a producer follows the route output below), then check the current route of TopicTest1:
$ sh mqadmin topicRoute -n 192.168.232.23:9876 -t TopicTest1
{
    "brokerDatas":[
        {
            "brokerAddrs":{
                0:"192.168.232.23:20911",
                1:"192.168.232.23:20921"
            },
            "brokerName":"broker-b"
        },
        {
            "brokerAddrs":{
                0:"192.168.232.23:10911",
                1:"192.168.232.23:10921"
            },
            "brokerName":"broker-a"
        }
    ],
    "filterServerTable":{},
    "queueDatas":[
        {
            "brokerName":"broker-a",
            "perm":6,
            "readQueueNums":4,
            "topicSynFlag":0,
            "writeQueueNums":4
        },
        {
            "brokerName":"broker-b",
            "perm":6,
            "readQueueNums":4,
            "topicSynFlag":0,
            "writeQueueNums":4
        }
    ]
}
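The trial-run Producer mentioned above can be a minimal ordered sender that pins each order key to a fixed queue via a MessageQueueSelector. This is a sketch against the same old client; the group name, tag, and message count are placeholders:

import com.alibaba.rocketmq.client.producer.DefaultMQProducer;
import com.alibaba.rocketmq.client.producer.MessageQueueSelector;
import com.alibaba.rocketmq.client.producer.SendResult;
import com.alibaba.rocketmq.common.message.Message;
import com.alibaba.rocketmq.common.message.MessageQueue;

import java.util.List;

public class OrderedProducer {
    public static void main(String[] args) throws Exception {
        // Producer group name is a placeholder for this test
        DefaultMQProducer producer = new DefaultMQProducer("ha_test_producer_group");
        producer.setNamesrvAddr("192.168.232.23:9876");
        producer.start();
        for (int i = 0; i < 1000; i++) {
            Message msg = new Message("TopicTest1", "TagA", ("order-" + i).getBytes());
            final int orderKey = i % 10; // messages sharing a key stay in one queue
            SendResult result = producer.send(msg, new MessageQueueSelector() {
                @Override
                public MessageQueue select(List<MessageQueue> mqs, Message m, Object arg) {
                    // pick a queue deterministically from the order key
                    return mqs.get((Integer) arg % mqs.size());
                }
            }, orderKey);
            System.out.println(result);
        }
        producer.shutdown();
    }
}

Because the selector is deterministic, messages that share an order key always land in the same queue, which is exactly what the ordered consumer relies on.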
The route output above shows that TopicTest1 lives on both Brokers, and that each Broker's slave is registered as well. Now for the failover test: start the Consumer and Producer processes (messages are again consumed in Push ordered mode), and while messages are being sent and received, stop broker-a's master process with kill -9 to simulate a sudden crash.
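One way to locate the master's PID is to grep for its configuration file; the file name broker-a-m.properties is the assumption introduced in the startup sketch above:

$ ps -ef | grep broker-a-m.properties | grep -v grep   # PID of broker-a's master
$ kill -9 <pid>

With the master gone, the route of TopicTest1 looks like this: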
$ sh mqadmin topicRoute -n 192.168.232.23:9876 -t TopicTest1
{
    "brokerDatas":[
        {
            "brokerAddrs":{
                0:"192.168.232.23:20911",
                1:"192.168.232.23:20921"
            },
            "brokerName":"broker-b"
        },
        {
            "brokerAddrs":{
                1:"192.168.232.23:10921"
            },
            "brokerName":"broker-a"
        }
    ],
    "filterServerTable":{},
    "queueDatas":[
        {
            "brokerName":"broker-a",
            "perm":6,
            "readQueueNums":4,
            "topicSynFlag":0,
            "writeQueueNums":4
        },
        {
            "brokerName":"broker-b",
            "perm":6,
            "readQueueNums":4,
            "topicSynFlag":0,
            "writeQueueNums":4
        }
    ]
}
broker-a's entry has been reduced to just the one slave node (brokerId 1): new sends can no longer reach broker-a, but consumers can keep reading the slave's copy of the data, so consumption continues. Next, restart broker-a's master node to simulate recovery.
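Restarting reuses the same configuration file as before (file name again an assumption):

$ nohup sh mqbroker -c broker-a-m.properties &

Then check the state of TopicTest1 once more: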
$ sh mqadmin topicRoute -n 192.168.232.23:9876 -t TopicTest1
{
    "brokerDatas":[
        {
            "brokerAddrs":{
                0:"192.168.232.23:20911",
                1:"192.168.232.23:20921"
            },
            "brokerName":"broker-b"
        },
        {
            "brokerAddrs":{
                0:"192.168.232.23:10911",
                1:"192.168.232.23:10921"
            },
            "brokerName":"broker-a"
        }
    ],
    "filterServerTable":{},
    "queueDatas":[
        {
            "brokerName":"broker-a",
            "perm":6,
            "readQueueNums":4,
            "topicSynFlag":0,
            "writeQueueNums":4
        },
        {
            "brokerName":"broker-b",
            "perm":6,
            "readQueueNums":4,
            "topicSynFlag":0,
            "writeQueueNums":4
        }
    ]
}
broker-a's master is registered again, and RocketMQ has fully recovered.
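As a final check, mqadmin's clusterList sub-command gives a one-screen view of all four broker instances and their roles:

$ sh mqadmin clusterList -n 192.168.232.23:9876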