3. Deploying the etcd Cluster
etcd is a distributed key-value store built on the Raft consensus algorithm, originally developed by CoreOS. It is commonly used for service discovery, shared configuration, and coordination (e.g. leader election and distributed locks). Kubernetes uses an etcd cluster to persist all API objects and runtime state.
- The etcd cluster node names and IPs used in this document are:
- k8s-master1: 192.168.66.62
- k8s-master2: 192.168.66.63
- k8s-master3: 192.168.66.64
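The loops and the systemd unit template later in this section reference shell variables (ETCD_NAMES, ETCD_IPS, ETCD_NODES, ETCD_DATA_DIR, ETCD_WAL_DIR, ETCD_ENDPOINTS) that are not defined here. A minimal sketch of such an environment file, assuming the node layout above and hypothetical data/WAL paths, might look like:

```shell
#!/bin/bash
# Hypothetical environment file (e.g. sourced from /opt/k8s/bin/environment.sh)
# defining the variables used by the loops and the unit template below.
# The directory paths are assumptions; adjust them to your own layout.

ETCD_NAMES=("k8s-master1" "k8s-master2" "k8s-master3")
ETCD_IPS=("192.168.66.62" "192.168.66.63" "192.168.66.64")

# etcd data and WAL directories (putting the WAL on a separate disk helps write latency)
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"

# Build --initial-cluster (name=peer-url pairs) and the client endpoint list
ETCD_NODES=""
ETCD_ENDPOINTS=""
for (( i = 0; i < ${#ETCD_IPS[@]}; i++ )); do
  ETCD_NODES+="${ETCD_NAMES[i]}=https://${ETCD_IPS[i]}:2380,"
  ETCD_ENDPOINTS+="https://${ETCD_IPS[i]}:2379,"
done
ETCD_NODES="${ETCD_NODES%,}"           # strip trailing comma
ETCD_ENDPOINTS="${ETCD_ENDPOINTS%,}"

echo "${ETCD_NODES}"
echo "${ETCD_ENDPOINTS}"
```

Source this file on the node you run the distribution loops from before executing the steps below.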
1. Download and distribute the etcd binaries
- etcd repository: https://github.com/etcd-io
- If network access is a problem, download the release tarball locally and upload it to the server.
- See https://etcd.io/docs/v3.3.12/upgrades/upgrade_3_4/ — starting with etcd 3.4, registering Pod network information through the etcd v2 API fails, and flannel only supports the v2 API. This document therefore uses Calico for networking, which also supports larger cluster scales. If you want to use flannel, install etcd v3.3 instead.
1.1: Download and unpack
[root@k8s-master1 ~]# cd /opt/k8s/work/
[root@k8s-master1 work]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.5/etcd-v3.5.5-linux-amd64.tar.gz
[root@k8s-master1 work]# tar -xvf etcd-v3.5.5-linux-amd64.tar.gz
1.2: Distribute to each etcd node
[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
do
echo ">>> ${node_ip}"
scp etcd-v3.5.5-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
2. Create the etcd certificate and private key
2.1: Create the certificate signing request
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > etcd-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.66.62",
"192.168.66.63",
"192.168.66.64"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "k8s",
"OU": "dqz"
}
]
}
EOF
2.2: Generate the certificate and private key
[root@k8s-master1 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@k8s-master1 work]# ls etcd*pem
etcd-key.pem etcd.pem
2.3: Distribute the certificate and key to each etcd node
[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
done
3. Create the etcd systemd unit template file
[root@k8s-master1 work]# cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
--data-dir=${ETCD_DATA_DIR} \\
--wal-dir=${ETCD_WAL_DIR} \\
--name=##ETCD_NAME## \\
--cert-file=/etc/etcd/cert/etcd.pem \\
--key-file=/etc/etcd/cert/etcd-key.pem \\
--trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
--peer-cert-file=/etc/etcd/cert/etcd.pem \\
--peer-key-file=/etc/etcd/cert/etcd-key.pem \\
--peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--listen-peer-urls=https://##ETCD_IP##:2380 \\
--initial-advertise-peer-urls=https://##ETCD_IP##:2380 \\
--listen-client-urls=https://##ETCD_IP##:2379,http://127.0.0.1:2379 \\
--advertise-client-urls=https://##ETCD_IP##:2379 \\
--initial-cluster-token=etcd-cluster-0 \\
--initial-cluster=${ETCD_NODES} \\
--initial-cluster-state=new \\
--auto-compaction-mode=periodic \\
--auto-compaction-retention=1 \\
--max-request-bytes=33554432 \\
--quota-backend-bytes=6442450944 \\
--heartbeat-interval=250 \\
--election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4. Create and distribute an etcd systemd unit file for each node
4.1: Substitute the variables in the template
[root@k8s-master1 work]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##ETCD_NAME##/${ETCD_NAMES[i]}/" -e "s/##ETCD_IP##/${ETCD_IPS[i]}/" etcd.service.template > etcd-${ETCD_IPS[i]}.service
done
[root@k8s-master1 work]# ls *.service
etcd-192.168.66.62.service etcd-192.168.66.63.service etcd-192.168.66.64.service
4.2: Distribute the generated systemd unit files
[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
do
echo ">>> ${node_ip}"
scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done
5. Start the etcd service
- Note: etcd on node 1 may fail to start while the other two nodes start successfully. This is expected, since the first member blocks until enough peers have joined to form a quorum; simply restart etcd on node 1.
[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR} && chmod 0700 ${ETCD_DATA_DIR}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd"
done
6. Check the startup result
[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status etcd|grep Active"
done
7. Verify the service status
7.1: Run the following on any one etcd node
[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
do
echo ">>> ${node_ip}"
/opt/k8s/bin/etcdctl \
--endpoints=https://${node_ip}:2379 \
--cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem endpoint health
done
- If every node reports healthy, the etcd cluster is working correctly.
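As an alternative to looping over each endpoint, etcdctl 3.4+ can discover and probe every member from a single endpoint via the `--cluster` flag. A sketch, reusing the certificate paths above (it requires the cluster to be running, so it is shown here only as a command fragment):

```shell
# Probe all cluster members, discovered from one endpoint
/opt/k8s/bin/etcdctl \
  --endpoints=https://192.168.66.62:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  endpoint health --cluster
```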

7.2: Check the current leader
[root@k8s-master1 work]# /opt/k8s/bin/etcdctl \
-w table --cacert=/opt/k8s/work/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem \
--endpoints=${ETCD_ENDPOINTS} endpoint status

