7. Deploy the Worker Nodes
The Kubernetes worker nodes run the following components:
- docker
- kubelet
- kube-proxy
- calico
- kube-nginx
Note:
- k8s-master1 must have passwordless SSH authentication to the other worker nodes (see the sketch below).
- In particular: the components in this chapter are deployed on all nodes, masters as well as workers.
- Unless otherwise noted, all operations are performed on k8s-master1, which remotely operates on the cluster's worker nodes.
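- If passwordless login has not been set up yet, a minimal sketch is shown below. It assumes the NODE_IPS array from the environment file used in the earlier chapters and that ssh-copy-id is available; adjust it to your own environment.
[root@k8s-master1 ~]# ssh-keygen -t rsa        # accept the defaults if no key pair exists yet
[root@k8s-master1 ~]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh-copy-id root@${node_ip}        # enter each node's root password once
done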
1. Install dependency packages
- Install the required dependency packages on all nodes; all commands are run from k8s-master1:
[root@k8s-master1 ~]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "yum install -y epel-release"
ssh root@${node_ip} "yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git"
done
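- Optional sanity check (not in the original steps): confirm that the key tools are present on every node.
[root@k8s-master1 ~]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "command -v ipvsadm ipset jq socat"
done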
2. Deploy Docker
2.1. Download and distribute the Docker binaries
2.1.1: Download the package
[root@k8s-master1 ~]# cd /opt/k8s/work/
[root@k8s-master1 work]# wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
[root@k8s-master1 work]# tar -xzvf docker-19.03.9.tgz
2.1.2: Distribute the package
- Distribute the binaries to all nodes:
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp docker/* root@${node_ip}:/opt/k8s/bin/
ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
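- Optionally verify that the binaries landed on each node and are executable (a hedged check, not part of the original procedure):
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "/opt/k8s/bin/dockerd --version"
done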
2.2. Create and distribute the systemd unit file
2.2.1: Create the systemd unit file
[root@k8s-master1 work]# cat > docker.service <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
WorkingDirectory=##DOCKER_DIR##
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStart=/opt/k8s/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
- Change the iptables firewall FORWARD policy to ACCEPT:
[root@k8s-master1 work]# for i in 192.168.66.{65..67}; do echo ">>> $i";ssh root@$i "iptables -P FORWARD ACCEPT"; done
[root@k8s-master1 work]# for i in 192.168.66.{65..67}; do echo ">>> $i";ssh root@$i "echo '/sbin/iptables -P FORWARD ACCEPT' >> /etc/rc.local"; done
2.2.2: Distribute the systemd unit file to all worker machines
[root@k8s-master1 work]# sed -i -e "s|##DOCKER_DIR##|${DOCKER_DIR}|" docker.service
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp docker.service root@${node_ip}:/etc/systemd/system/
done
2.3: Configure and distribute the Docker configuration file
- Use domestic (China-based) registry mirrors to speed up image pulls and increase the download concurrency (dockerd must be restarted for the changes to take effect).
2.3.1: Configure the Docker registry mirror
- Because of local network conditions, pulls from the official Docker Hub often fail, so Aliyun's Docker registry mirror is used instead.
- Log in to your own Aliyun account to generate the mirror address used in daemon.json.
[root@k8s-master1 work]# cat > docker-daemon.json <<EOF
{
  "registry-mirrors": ["https://a7ye1cuu.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
  "insecure-registries": ["docker02:35000"],
  "max-concurrent-downloads": 20,
  "live-restore": true,
  "max-concurrent-uploads": 10,
  "debug": true,
  "data-root": "${DOCKER_DIR}/data",
  "exec-root": "${DOCKER_DIR}/exec",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
EOF
2.3.2: Distribute to the nodes
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /etc/docker/ ${DOCKER_DIR}/{data,exec}"
scp docker-daemon.json root@${node_ip}:/etc/docker/daemon.json
done
2.4: Start the Docker service
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
done
2.5: Check the service status
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status docker|grep Active"
done
2.6: Check the docker0 bridge
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "/usr/sbin/ip addr show docker0"
done
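- The kubelet configuration later in this chapter sets cgroupDriver: cgroupfs, so it is worth confirming that dockerd reports the same cgroup driver (a hedged check; the static binary defaults to cgroupfs):
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "docker info 2>/dev/null | grep -i 'cgroup driver'"
done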
4. Deploy the kubelet component
4.1. Create the kubelet bootstrap kubeconfig files
4.1.1: Create the files
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
# create the token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:${node_name} \
--kubeconfig ~/.kube/config)
# set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
done
4.1.2: View the tokens kubeadm created for each node
[root@k8s-master1 work]# kubeadm token list --kubeconfig ~/.kube/config

4.1.3: View the Secrets associated with each token
[root@k8s-master1 work]# kubectl get secrets -n kube-system|grep bootstrap-token

4.2. Distribute the bootstrap kubeconfig files to all worker nodes
[root@k8s-master1 work]# for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done
4.3. Create and distribute the kubelet configuration file
4.3.1: Create the kubelet configuration template
- Note: this must be run as root.
[root@k8s-master1 work]# cat <<EOF | tee kubelet-config.yaml.template
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
podCIDR: "${CLUSTER_CIDR}"
maxPods: 220
serializeImagePulls: false
hairpinMode: promiscuous-bridge
cgroupDriver: cgroupfs
runtimeRequestTimeout: "15m"
rotateCertificates: true
serverTLSBootstrap: true
readOnlyPort: 0
port: 10250
address: "##NODE_IP##"
EOF
4.3.2: Create and distribute a kubelet configuration file for each node
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
done
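- Optionally confirm that each node received a configuration rendered with its own IP (a hedged check):
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "grep address /etc/kubernetes/kubelet-config.yaml"
done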
4.4. Create and distribute the kubelet systemd unit file
4.4.1: Create the kubelet systemd unit file template
[root@k8s-master1 work]# cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
--cert-dir=/etc/kubernetes/cert \\
--network-plugin=cni \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/k8s/bin \\
--root-dir=${K8S_DIR}/kubelet \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-config.yaml \\
--hostname-override=##NODE_NAME## \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/hu279318344/pause-amd64:3.1 \\
--image-pull-progress-deadline=15m \\
--volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
--logtostderr=true \\
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF
4.4.2: Create and distribute a kubelet systemd unit file for each node
[root@k8s-master1 work]# for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done
4.5. Grant kube-apiserver access to the kubelet API
[root@k8s-master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-master
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created
4.6. Bootstrap Token Auth and permission grants
[root@k8s-master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
4.7. Automatically approve CSR requests to generate kubelet client certificates
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
[root@k8s-master1 work]# kubectl apply -f csr-crb.yaml
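- Optionally confirm that the ClusterRole and ClusterRoleBindings were created (a hedged check):
[root@k8s-master1 work]# kubectl get clusterrole approve-node-server-renewal-csr
[root@k8s-master1 work]# kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal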
4.8. Start the kubelet service
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet"
ssh root@${node_ip} "/usr/sbin/swapoff -a"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
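- Before checking the CSRs, it can help to confirm that the kubelet service actually started on every node (a hedged check mirroring the Docker status check above):
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status kubelet | grep Active"
done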
4.9. Check kubelet status
- Wait a moment; the CSRs of the three worker nodes are automatically approved. The CSRs that remain Pending are the ones used to create the kubelet server certificates.
[root@k8s-master1 work]# kubectl get csr

- All nodes have registered. Ready is the expected final state; seeing NotReady at this point is normal, because the network plugin has not been deployed yet. The nodes will become Ready after the network plugin is installed later.
[root@k8s-master1 work]# kubectl get node
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>   18h   v1.18.6
k8s-node2   NotReady   <none>   18h   v1.18.6
k8s-node3   NotReady   <none>   18h   v1.18.6
- kube-controller-manager has generated a kubeconfig file and a key pair for each node.
- Note: run the following commands on the worker nodes to check:
ls -l /etc/kubernetes/kubelet.kubeconfig
ls -l /etc/kubernetes/cert/kubelet-client-*
- Note that the kubelet server certificate file has not been generated automatically yet.

4.10. Manually approve the server cert CSRs
- For security reasons, the CSR-approving controllers do not automatically approve kubelet server certificate signing requests; they must be approved manually.
[root@k8s-master1 ~]# kubectl get csr

- Approve them manually:
[root@k8s-master1 ~]# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve

- Check on the worker nodes; the server certificates have now been generated automatically:
ls -l /etc/kubernetes/cert/kubelet-*

4.11. kubelet API authentication and authorization
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.66.65:10250/metrics
Unauthorized
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.66.66:10250/metrics
Unauthorized
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.66.67:10250/metrics
Unauthorized
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.66.65:10250/metrics
Unauthorized
4.12. Certificate authentication and authorization
- A certificate with insufficient permissions:
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.66.65:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
- Using the admin certificate with full permissions, created when the kubectl command-line tool was deployed:
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.66.65:10250/metrics|head
4.13. Bearer token authentication and authorization
- Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it has permission to call the kubelet API:
[root@k8s-master1 ~]# kubectl create sa kubelet-api-test
serviceaccount/kubelet-api-test created
[root@k8s-master1 ~]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
clusterrolebinding.rbac.authorization.k8s.io/kubelet-api-test created
[root@k8s-master1 ~]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8s-master1 ~]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8s-master1 ~]# echo ${TOKEN}
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.66.65:10250/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
4.14: cAdvisor and metrics
- cAdvisor is embedded in the kubelet binary; it collects resource usage (CPU, memory, disk, network) for the containers running on its node.
- Accessing https://192.168.66.65:10250/metrics and https://192.168.66.65:10250/metrics/cadvisor in a browser returns the kubelet and cAdvisor metrics respectively.
Note:
- kubelet-config.yaml sets authentication.anonymous.enabled to false, so anonymous access to the HTTPS service on port 10250 is not allowed;
- Create and import the relevant certificates, then access port 10250 as described above.
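- If you prefer the command line to a browser, the same endpoints can be queried with the admin certificate used in section 4.12 (a hedged example; the certificate paths assume the files created earlier):
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.66.65:10250/metrics/cadvisor | head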

4.14.1: Import the root certificate
- Export the root certificate ca.pem from the server (/opt/k8s/work) to your PC, import it into the operating system, and mark it as permanently trusted.
[root@k8s-master1 ~]# cd /opt/k8s/work/
[root@k8s-master1 work]# ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
[root@k8s-master1 work]# sz ca.pem
- The certificate is now trusted, but the connection still returns a 401 Unauthorized error.

4.14.2: Create a browser certificate
- Use the admin certificate and private key created for the kubectl command-line tool, together with the CA certificate above, to create a certificate in PKCS#12/PFX format that a browser can use:
[root@k8s-master1 ~]# cd /opt/k8s/work/
[root@k8s-master1 work]# openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem
Enter Export Password: # set a password
Verifying - Enter Export Password:
- Import the generated admin.pfx into the system certificate store.
- Restart the browser and access the URL again, this time selecting the admin.pfx imported above.
5. Deploy the kube-proxy component
- kube-proxy runs on all worker nodes. It watches the apiserver for changes to Services and Endpoints and creates routing rules to provide service IPs and load balancing.
- Unless otherwise noted, all of the following operations are performed from k8s-master1 via remote calls to the worker nodes.
5.1: Create the kube-proxy certificate
5.1.1: Create the certificate signing request
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "dqz"
    }
  ]
}
EOF
5.1.2: Generate the certificate and private key
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@k8s-master1 work]# ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
5.2: Create and distribute the kubeconfig file
5.2.1: Create the kubeconfig file
[root@k8s-master1 work]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/work/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 work]# kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 work]# kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
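- Optionally inspect the generated kubeconfig before distributing it (a hedged check; embedded certificate data is shown redacted):
[root@k8s-master1 work]# kubectl config view --kubeconfig=kube-proxy.kubeconfig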
5.2.2: Distribute the kubeconfig file
- Distribute it to all worker node machines:
[root@k8s-master1 work]# for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
done
5.3: Create the kube-proxy configuration file
5.3.1: Create the kube-proxy config template
[root@k8s-master1 work]# cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF
5.3.2: Create and distribute a kube-proxy configuration file for each node
[root@k8s-master1 work]# for (( i=0; i < 6; i++ ))
do
echo ">>> ${NODE_NAMES[i]}"
sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
done
5.4: Create and distribute the kube-proxy systemd unit file
5.4.1: Create the file
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy-config.yaml \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5.4.2: Distribute the file
[root@k8s-master1 work]# for node_name in ${NODE_NAMES[@]}
do
echo ">>> ${node_name}"
scp kube-proxy.service root@${node_name}:/etc/systemd/system/
done
5.5: Start the kube-proxy service
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
ssh root@${node_ip} "modprobe ip_vs_rr"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done
5.6: Check the startup result
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
done
5.7: Check the listening ports
- Log in to each worker node and run the following command to confirm that kube-proxy is listening:
[root@k8s-node1 ~]# netstat -lnpt|grep kube-prox
tcp 0 0 192.168.66.65:10249 0.0.0.0:* LISTEN 35911/kube-proxy
tcp 0 0 192.168.66.65:10256 0.0.0.0:* LISTEN 35911/kube-proxy
5.8: Check the IPVS routing rules
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
done
>>> 192.168.66.62
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 192.168.66.62:6443 Masq 1 0 0
-> 192.168.66.63:6443 Masq 1 0 0
-> 192.168.66.64:6443 Masq 1 0 0
>>> 192.168.66.63
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 192.168.66.62:6443 Masq 1 0 0
-> 192.168.66.63:6443 Masq 1 0 0
-> 192.168.66.64:6443 Masq 1 0 0
>>> 192.168.66.64
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 192.168.66.62:6443 Masq 1 0 0
-> 192.168.66.63:6443 Masq 1 0 0
-> 192.168.66.64:6443 Masq 1 0 0
>>> 192.168.66.65
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 192.168.66.62:6443 Masq 1 0 0
-> 192.168.66.63:6443 Masq 1 0 0
-> 192.168.66.64:6443 Masq 1 0 0
>>> 192.168.66.66
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 192.168.66.62:6443 Masq 1 0 0
-> 192.168.66.63:6443 Masq 1 0 0
-> 192.168.66.64:6443 Masq 1 0 0
>>> 192.168.66.67
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 192.168.66.62:6443 Masq 1 0 0
-> 192.168.66.63:6443 Masq 1 0 0
-> 192.168.66.64:6443 Masq 1 0 0
- As shown, all HTTPS requests to the kubernetes Service VIP are forwarded to port 6443 on the kube-apiserver nodes.
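- A quick way to confirm the rule actually forwards traffic is to hit the Service VIP from any node (a hedged check; 10.254.0.1 is the kubernetes Service IP shown above). Either version information or a 401/403 response from kube-apiserver confirms the IPVS path works:
[root@k8s-master1 work]# curl -sk https://10.254.0.1:443/version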
6. Deploy the Calico network
- Kubernetes requires that all nodes in the cluster (including the master nodes) can reach one another over the Pod network.
- Calico uses IPIP or BGP (IPIP by default) to build an interconnected Pod network across the nodes.
6.1. Install the Calico network plugin
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
6.1.1: Modify the configuration
- Change the Pod network CIDR to 10.68.0.0/16 (adjust this to your actual environment);
- Calico auto-detects the interface used for node-to-node traffic. If a host has multiple NICs, configure a regular expression for the interface to use, such as en.* below (adjust it to match your servers' interface names);
- Modify typha_service_name as required;
- Change CALICO_IPV4POOL_IPIP to Never to use BGP mode. Calico is installed as a DaemonSet on every node; each host runs a bird (BGP client) process that announces the IP ranges allocated to the Calico nodes to the rest of the cluster and forwards traffic through the host NIC (eth0 or ens33). Special note: for a k8s cluster built on OpenStack cloud hosts, keep CALICO_IPV4POOL_IPIP at its default value, because BGP mode there can leave Pods unable to ping each other!
- Also set IP_AUTODETECTION_METHOD to interface matching mode; the default is first-found, which can pick the wrong interface in complex network environments. A sketch of these edits follows the snippet below.
# back up the default file
[root@k8s-master1 work]# cp calico.yaml calico.yaml.bak
[root@k8s-master1 work]# vim calico.yaml
# IP automatic detection
- name: IP_AUTODETECTION_METHOD
  value: "interface=en.*"
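- For reference, a hedged sketch of the other edits described above (env entries of the calico-node container in calico.yaml; the exact positions depend on the manifest version you downloaded, so treat this as illustrative rather than exact):
# Pod network CIDR: uncomment and set it to the cluster Pod CIDR
- name: CALICO_IPV4POOL_CIDR
  value: "10.68.0.0/16"
# Use BGP instead of IPIP (keep the default "Always" on OpenStack cloud hosts)
- name: CALICO_IPV4POOL_IPIP
  value: "Never"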
6.1.2: Apply the Calico manifest
[root@k8s-master1 work]# kubectl apply -f calico.yaml

6.2. Check the Calico running status
[root@k8s-master1 work]# kubectl get pods -n kube-system -o wide

- Use the docker command on a worker node to view the images Calico uses:
[root@k8s-node1 ~]# docker images

6.3. Install calicoctl
[root@k8s-master1 ~]# for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "cd /opt/k8s/work && wget https://github.com/projectcalico/calicoctl/releases/download/v3.18.3/calicoctl && chmod +x calicoctl && mv calicoctl /usr/local/bin"
done
[root@k8s-master1 ~]# calicoctl version
Client Version: v3.18.3
Git commit: 8871aca3
6.4. Check Calico node status
[root@k8s-master1 ~]# calicoctl node status
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-------------------+-------+----------+-------------+
| 192.168.66.64 | node-to-node mesh | up | 06:32:38 | Established |
| 192.168.66.66 | node-to-node mesh | up | 06:32:37 | Established |
| 192.168.66.67 | node-to-node mesh | up | 06:32:38 | Established |
| 192.168.66.63 | node-to-node mesh | up | 06:32:54 | Established |
| 192.168.66.65 | node-to-node mesh | up | 06:43:40 | Established |
+---------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
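- As a final sanity check (not part of the original steps), one way to verify that the Pod network works end to end is to start two test Pods and ping between them; the busybox image and the Pod names here are arbitrary examples:
[root@k8s-master1 ~]# kubectl run net-test1 --image=busybox --restart=Never -- sleep 3600
[root@k8s-master1 ~]# kubectl run net-test2 --image=busybox --restart=Never -- sleep 3600
[root@k8s-master1 ~]# kubectl get pods -o wide        # note the Pod IP of net-test2
[root@k8s-master1 ~]# kubectl exec net-test1 -- ping -c 3 <net-test2-pod-ip>
[root@k8s-master1 ~]# kubectl delete pod net-test1 net-test2        # clean up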