5. Highly Available Access to kube-apiserver from Cluster Nodes
1. Deployment Environment Overview
The kubernetes master nodes run the following components: kube-apiserver, kube-scheduler, and kube-controller-manager, all in multi-instance mode:
- kube-scheduler and kube-controller-manager automatically elect one leader instance; the other instances stay blocked in standby. When the leader fails, a new leader is elected, keeping the service available.
- kube-apiserver is stateless, so it can be fronted by a kube-nginx proxy to keep the service available.
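Once the cluster is up, the leader election described above can be observed directly. The following is an illustrative sketch, assuming a working kubectl and the default coordination leases used by recent Kubernetes versions:

```shell
# Show which instance currently holds the leader lock for the
# scheduler and the controller-manager.
kubectl -n kube-system get lease kube-scheduler \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'
kubectl -n kube-system get lease kube-controller-manager \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'
```

Stopping the current leader instance and re-running the commands should show a different holderIdentity after re-election.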
Note: if the three master nodes serve only as cluster management nodes, there is no need to deploy the containerd, kubelet, and kube-proxy components on them. However, if you later deploy metrics-server or istio, those services may fail to run; in that case you will need to deploy containerd, kubelet, and kube-proxy on the master nodes as well.
1.1: Download the packages and extract them
# Upload the kubernetes-server tarball to /opt/k8s/work on the server, then extract it
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 work]# tar -zxvf kubernetes/kubernetes-src.tar.gz
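Before distributing anything, it may be worth confirming the extracted binaries are intact and match the expected release (illustrative commands, run from /opt/k8s/work):

```shell
# Print the version embedded in the extracted server binaries.
kubernetes/server/bin/kube-apiserver --version
kubernetes/server/bin/kubelet --version
```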
1.2: Distribute the binaries
- Copy the extracted binaries to every node of the K8S master cluster
- Distribute kubelet and kube-proxy to all worker nodes; the target directory is /opt/k8s/bin
[root@k8s-master1 ~]# cd /opt/k8s/work
# Copy all binaries under kubernetes/server/bin to the master nodes
[root@k8s-master1 work]# for node_ip in ${MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
scp kubernetes/server/bin/{apiextensions-apiserver,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${node_ip}:/opt/k8s/bin/
ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
# Copy kubelet and kube-proxy to all worker nodes
[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
do
echo ">>> ${node_ip}"
scp kubernetes/server/bin/{kube-proxy,kubelet} root@${node_ip}:/opt/k8s/bin/
ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
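After both loops finish, a quick verification pass (a sketch, reusing the same MASTER_IPS array) confirms each master received the binaries and that they execute:

```shell
# Confirm the binaries landed on each master node and run correctly.
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "ls -l /opt/k8s/bin/kube-apiserver && /opt/k8s/bin/kubectl version --client"
done
```

The same loop over WORK_IPS, checking kubelet and kube-proxy, verifies the worker nodes.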
2. Highly Available Access to kube-apiserver from Cluster Nodes
2.1: An nginx-proxy-based kube-apiserver high-availability scheme
- The control-plane kube-controller-manager and kube-scheduler are deployed as multiple instances, each connecting to the kube-apiserver on its own machine, so the service stays highly available as long as one instance is healthy.
- Pods inside the cluster access kube-apiserver through the K8S service domain name kubernetes; kube-dns automatically resolves it to the IPs of the multiple kube-apiserver nodes, so this path is highly available as well.
- Each node runs an nginx process whose backends are the apiserver instances; nginx health-checks and load-balances across them. kubelet, kube-proxy, kube-controller-manager, and kube-scheduler reach kube-apiserver through the local nginx (listening on 127.0.0.1), which makes kube-apiserver highly available.
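Concretely, "accessing kube-apiserver through the local nginx" means the server field of each component's kubeconfig points at the local proxy address rather than a single apiserver. A hedged illustration (the address matches the listen directive configured in section 2.4; the kubeconfig filename here is hypothetical):

```shell
# The components talk to the local proxy, which fans out to all
# apiserver instances behind it.
KUBE_APISERVER="https://127.0.0.1:8443"
kubectl config set-cluster kubernetes \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet.kubeconfig
```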
2.2: Download and compile nginx
- Official nginx download page: https://nginx.org/download/ (the wget below pulls the same tarball from the Huawei Cloud mirror)
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# wget https://repo.huaweicloud.com/nginx/nginx-1.21.4.tar.gz
[root@k8s-master1 work]# tar -zxvf nginx-1.21.4.tar.gz
[root@k8s-master1 work]# cd nginx-1.21.4/
[root@k8s-master1 nginx-1.21.4]# mkdir nginx-prefix
[root@k8s-master1 nginx-1.21.4]# ./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
# Compile and install
[root@k8s-master1 nginx-1.21.4]# make && make install
2.3: Verify the compiled nginx
[root@k8s-master1 nginx-1.21.4]# /opt/k8s/work/nginx-1.21.4/nginx-prefix/sbin/nginx -v
nginx version: nginx/1.21.4
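Besides the version, you can confirm that the stream module was actually compiled in, since the TCP proxy configuration in section 2.4 depends on it (nginx prints its configure arguments to stderr, hence the redirect):

```shell
# -V lists the configure arguments; grep checks --with-stream is present.
/opt/k8s/work/nginx-1.21.4/nginx-prefix/sbin/nginx -V 2>&1 | grep -- --with-stream
```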
2.4: Deploy nginx to the cluster nodes
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
scp /opt/k8s/work/nginx-1.21.4/nginx-prefix/sbin/nginx root@${node_ip}:/opt/k8s/kube-nginx/sbin/kube-nginx
ssh root@${node_ip} "chmod a+x /opt/k8s/kube-nginx/sbin/*"
done
# Note: quote the heredoc delimiter ('EOF') so the shell does not expand $remote_addr
[root@k8s-master1 work]# cat > kube-nginx.conf <<'EOF'
worker_processes auto;
worker_cpu_affinity auto;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 65535;
    accept_mutex on;
    multi_accept on;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 192.168.66.62:6443 max_fails=3 fail_timeout=30s;
        server 192.168.66.63:6443 max_fails=3 fail_timeout=30s;
        server 192.168.66.64:6443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp kube-nginx.conf root@${node_ip}:/opt/k8s/kube-nginx/conf/kube-nginx.conf
done
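Before wiring up systemd, the distributed configuration can be syntax-checked on every node with nginx's built-in config test (a sketch reusing the same NODE_IPS array; this is the same check the systemd unit's ExecStartPre performs at startup):

```shell
# Run the nginx config test against the distributed file on each node.
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t"
done
```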
2.5: Configure the systemd unit file and start the service
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
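If systemd-analyze is available on the build host, the unit file can also be statically checked for typos or unknown directives before distribution (optional):

```shell
# Static sanity check of the unit file.
systemd-analyze verify ./kube-nginx.service
```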
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp kube-nginx.service root@${node_ip}:/etc/systemd/system/
done
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-nginx && systemctl restart kube-nginx"
done
2.6: Check the kube-nginx service status
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status kube-nginx |grep 'Active:'"
done
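Finally, an end-to-end check: a request to the local proxy port should be forwarded to one of the apiserver backends. Depending on the cluster's anonymous-auth settings, an unauthenticated request typically gets a TLS response of ok, 401, or 403; any of these proves the TCP path through nginx works (illustrative):

```shell
# A TLS response from this port (even 401 Unauthorized) shows nginx
# is proxying to a live kube-apiserver instance.
curl -k https://127.0.0.1:8443/healthz
```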