Redis Cluster Environment Setup (Lab)
Environment information:
A Redis cluster needs at least three master nodes (an odd number of masters is recommended so failover votes can reach a majority),
and each master needs at least one replica, giving six nodes in total (3 masters and 3 slaves).
Node information (I prepared three hosts here; each host runs one master and one slave):
Node 1: 192.168.2.100:6379 master
Node 2: 192.168.2.100:6380 slave
Node 3: 192.168.2.200:6379 master
Node 4: 192.168.2.200:6380 slave
Node 5: 192.168.2.201:6379 master
Node 6: 192.168.2.201:6380 slave
Installation paths for master and slave:
master:/usr/local/redis-3.0.6-6379
slave:/usr/local/redis-3.0.6-6380
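The rest of the walkthrough assumes Redis 3.0.6 has already been compiled into those two directories on every host. If it has not, a minimal sketch of that step looks roughly like this (download URL and layout are assumptions; adjust to your environment):
# wget http://download.redis.io/releases/redis-3.0.6.tar.gz
# tar xzf redis-3.0.6.tar.gz && make -C redis-3.0.6    //build redis-server and redis-cli
# cp -r redis-3.0.6 /usr/local/redis-3.0.6-6379        //one copy per instance, each with its own redis.conf
# cp -r redis-3.0.6 /usr/local/redis-3.0.6-6380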
Master configuration file:
daemonize yes //run as a background daemon
pidfile /var/run/redis_6379.pid //pid file
port 6379 //port
bind 192.168.2.100 //defaults to 127.0.0.1; must be changed to an address reachable by the other nodes
logfile "/usr/local/redis-3.0.6-6379/redis_6379.log" //log file path
dir /usr/local/redis-3.0.6-6379/ //RDB file path
appendonly yes //enable AOF persistence
cluster-enabled yes //enable cluster mode
cluster-config-file nodes-6379.conf //cluster state file (maintained by Redis itself)
cluster-node-timeout 15000 //node timeout in milliseconds; default 15000 (15 seconds)
Slave configuration file:
daemonize yes
pidfile /var/run/redis_6380.pid
port 6380
bind 192.168.2.100
logfile "/usr/local/redis-3.0.6-6380/redis_6380.log"
dir /usr/local/redis-3.0.6-6380/
appendonly yes
cluster-enabled yes
cluster-config-file nodes-6380.conf
cluster-node-timeout 15000
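On each host the two configuration files differ only in the port number and the paths that embed it, so the 6380 config can be derived from the 6379 one; a quick sketch (assuming both files are named redis.conf, as in the start command below):
# sed 's/6379/6380/g' /usr/local/redis-3.0.6-6379/redis.conf > /usr/local/redis-3.0.6-6380/redis.conf    //replace every 6379 with 6380; the bind address is untouched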
Start Redis:
# redis-server redis.conf //start all 6 nodes this way, each from its own installation directory
# ps -ef |grep redis
root 22584 1 0 17:41 ? 00:00:00 redis-server 192.168.2.100:6379 [cluster]
root 22599 1 0 17:41 ? 00:00:00 redis-server 192.168.2.100:6380 [cluster]
root 22606 6650 0 17:41 pts/0 00:00:00 grep --color=auto redis
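Before creating the cluster, it is worth confirming that each instance really came up in cluster mode; a quick check against each node (output may differ slightly by version):
# redis-cli -h 192.168.2.100 -p 6379 info cluster
# Cluster
cluster_enabled:1    //1 means cluster mode is on; CLUSTER INFO will still report cluster_state:fail until slots are assigned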
Install the Ruby environment (the redis-trib.rb script used to create the cluster runs under Ruby):
# yum -y install ruby ruby-devel rubygems rpm-build
# gem install redis
Successfully installed redis-3.2.1
Parsing documentation for redis-3.2.1
1 gem installed
A possible error you may encounter:
ERROR: Could not find a valid gem 'redis' (>= 0), here is why:
Unable to download data from https://rubygems.org/ - no such name (https://rubygems.org/latest_specs.4.8.gz)
Download the gem manually: https://rubygems.global.ssl.fastly.net/gems/redis-3.2.1.gem
Then install it from the local file: # gem install -l ./redis-3.2.1.gem
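To confirm the gem is now visible to Ruby, listing it should show the installed version (3.2.1 here):
# gem list redis
*** LOCAL GEMS ***
redis (3.2.1)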
Create the cluster:
Copy redis-trib.rb to /usr/local/bin so it can be run from anywhere:
# cp /usr/local/redis-3.0.6-6379/src/redis-trib.rb /usr/local/bin/
# redis-trib.rb create --replicas 1 192.168.2.100:6379 192.168.2.100:6380 192.168.2.200:6379 192.168.2.200:6380 192.168.2.201:6379 192.168.2.201:6380 //create the cluster with 1 replica per master
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.2.100:6379 //the 3 master nodes
192.168.2.200:6379
192.168.2.201:6379
Adding replica 192.168.2.200:6380 to 192.168.2.100:6379 //the 3 slave nodes and their masters
Adding replica 192.168.2.100:6380 to 192.168.2.200:6379
Adding replica 192.168.2.201:6380 to 192.168.2.201:6379
M: 098e7eb756b6047fde988ab3c0b7189e1724ecf5 192.168.2.100:6379
slots:0-5460 (5461 slots) master //hash slots [0-5460]
S: 7119dec91b086ca8fe69f7878fa42b1accd75f0f 192.168.2.100:6380
replicates 5844b4272c39456b0fdf73e384ff8c479547de47
M: 5844b4272c39456b0fdf73e384ff8c479547de47 192.168.2.200:6379
slots:5461-10922 (5462 slots) master //hash slots [5461-10922]
S: 227f51028bbe827f27b4e40ed7a08fcc7d8df969 192.168.2.200:6380
replicates 098e7eb756b6047fde988ab3c0b7189e1724ecf5
M: 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69 192.168.2.201:6379
slots:10923-16383 (5461 slots) master //hash slots [10923-16383]
S: 2faf68564a70372cfc06c1afff197019cc6a39f3 192.168.2.201:6380
replicates 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 192.168.2.100:6379)
M: 098e7eb756b6047fde988ab3c0b7189e1724ecf5 192.168.2.100:6379
slots:0-5460 (5461 slots) master
M: 7119dec91b086ca8fe69f7878fa42b1accd75f0f 192.168.2.100:6380
slots: (0 slots) master
replicates 5844b4272c39456b0fdf73e384ff8c479547de47
M: 5844b4272c39456b0fdf73e384ff8c479547de47 192.168.2.200:6379
slots:5461-10922 (5462 slots) master
M: 227f51028bbe827f27b4e40ed7a08fcc7d8df969 192.168.2.200:6380
slots: (0 slots) master
replicates 098e7eb756b6047fde988ab3c0b7189e1724ecf5
M: 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69 192.168.2.201:6379
slots:10923-16383 (5461 slots) master
M: 2faf68564a70372cfc06c1afff197019cc6a39f3 192.168.2.201:6380
slots: (0 slots) master
replicates 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69
[OK] All nodes agree about slots configuration. //all nodes agree on the slot configuration
>>> Check for open slots... //check for open (migrating) slots
>>> Check slots coverage... //check slot coverage
[OK] All 16384 slots covered. //all 16384 hash slots are covered (assigned to a master)
At this point, the Redis cluster deployment is complete.
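As a final sanity check, the cluster can be re-verified with redis-trib.rb and exercised through redis-cli in cluster mode (the -c option follows MOVED redirections); the key name and the redirect target below are only illustrative:
# redis-trib.rb check 192.168.2.100:6379    //re-runs the cluster check shown above
# redis-cli -c -h 192.168.2.100 -p 6379
192.168.2.100:6379> set foo bar
-> Redirected to slot [12182] located at 192.168.2.201:6379    //the slot for "foo" lives on another master
OK
192.168.2.100:6379> cluster nodes    //lists all 6 nodes with their roles and slot ranges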