
Binary Deployment of a Multi-Master Highly Available K8S Cluster - v1.18 (Part 3)

浅时光 · July 26, 2020

7. Deploying the Worker Nodes


The Kubernetes worker nodes run the following components:

  • docker
  • kubelet
  • kube-proxy
  • calico
  • kube-nginx

The worker node addresses used in this document are:

  • 192.168.66.65
  • 192.168.66.66
  • 192.168.66.67

Note:

  1. k8s-master1 has passwordless SSH authentication set up to the other worker nodes.
  2. Unless otherwise noted, all operations are performed on the k8s-master1 node, which remotely operates on the worker nodes in the cluster (the shell variables used in the loops below come from the environment script created in the earlier parts of this series; see the sketch that follows).
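
The variables used throughout this section (WORK_IPS, WORK_NAMES, KUBE_APISERVER, K8S_DIR, DOCKER_DIR, CLUSTER_CIDR, and so on) are defined in the environment script from the earlier parts of this series. A hypothetical sketch of the relevant entries, with the worker IPs and hostnames taken from this document and the remaining values assumed; adjust to your own environment:

# Hypothetical excerpt of the environment script sourced on k8s-master1 (values partly assumed)
WORK_IPS=(192.168.66.65 192.168.66.66 192.168.66.67)    # worker node IPs (from this document)
WORK_NAMES=(k8s-node1 k8s-node2 k8s-node3)              # worker node hostnames (from section 4.9)
KUBE_APISERVER="https://127.0.0.1:8443"                 # apiserver address as seen by the nodes: the local kube-nginx proxy from section 2 (assumed)
K8S_DIR="/data/k8s/k8s"                                 # working directory for k8s components (assumed path)
DOCKER_DIR="/data/k8s/docker"                           # docker data/exec root (assumed path)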

1. Install dependency packages

  • Install the required dependency packages on all worker nodes; all commands are run from the k8s-master1 node.
[root@k8s-master1 ~]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "yum install -y epel-release"
    ssh root@${node_ip} "yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git"
  done

2. kube-apiserver high availability

  • This section describes how to use the nginx layer-4 transparent proxy feature so that the K8S worker nodes have highly available access to the kube-apiserver cluster.
  • Note: unless otherwise specified, all operations in this document are executed on the k8s-master1 node, which then distributes files and runs commands remotely.

2.1: nginx-proxy-based kube-apiserver high availability scheme

  • The control-plane components kube-controller-manager and kube-scheduler run as multiple instances, each connecting to its local kube-apiserver, so as long as one instance is healthy they remain highly available;
  • Pods inside the cluster access kube-apiserver through the K8S service domain name kubernetes; kube-dns automatically resolves it to the IPs of the multiple kube-apiserver nodes, so this path is highly available as well;
  • An nginx process runs on each node with the apiserver instances as its backends; nginx performs health checking and load balancing across them;
  • kubelet, kube-proxy, controller-manager and scheduler access kube-apiserver through the local nginx (listening on 127.0.0.1), which is what makes kube-apiserver highly available;

2.2: Download and compile nginx

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# wget https://mirrors.huaweicloud.com/nginx/nginx-1.16.1.tar.gz

[root@k8s-master1 work]# yum -y install gcc-c++
[root@k8s-master1 work]# tar -zxvf nginx-1.16.1.tar.gz

[root@k8s-master1 work]# cd nginx-1.16.1/
[root@k8s-master1 nginx-1.16.1]# mkdir nginx-prefix
[root@k8s-master1 nginx-1.16.1]# ./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module

[root@k8s-master1 nginx-1.16.1]# make && make install
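
Before distributing the binary it is worth confirming that the stream module really was compiled in. nginx -V prints the build's configure arguments, which should include --with-stream and --without-http (a quick sanity check):

[root@k8s-master1 nginx-1.16.1]# ./nginx-prefix/sbin/nginx -V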

2.4: Deploy nginx on the worker nodes

2.4.1: Create directories on the worker nodes

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
  done

2.4.2: Copy the binary

  • Note: fill in the paths below according to the prefix you configured when compiling nginx.
  • The binary is renamed to kube-nginx.
[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
    scp /opt/k8s/work/nginx-1.16.1/nginx-prefix/sbin/nginx  root@${node_ip}:/opt/k8s/kube-nginx/sbin/kube-nginx
    ssh root@${node_ip} "chmod a+x /opt/k8s/kube-nginx/sbin/*"
  done

2.4.3: Configure nginx and enable layer-4 transparent forwarding

  • Note: the server list in upstream backend contains the IPs of the kube-apiserver nodes in the cluster; adjust it to match your environment.
[root@k8s-master1 work]# cat > kube-nginx.conf <<"EOF"
worker_processes 1;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 192.168.66.62:6443        max_fails=3 fail_timeout=30s;
        server 192.168.66.63:6443        max_fails=3 fail_timeout=30s;
        server 192.168.66.64:6443        max_fails=3 fail_timeout=30s;
    }

    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF

2.4.4: Distribute the configuration file

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-nginx.conf  root@${node_ip}:/opt/k8s/kube-nginx/conf/kube-nginx.conf
  done

2.5: Configure the systemd unit file and start the service

2.5.1: Create the kube-nginx systemd unit file

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

2.5.2: Distribute the systemd unit file

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kube-nginx.service  root@${node_ip}:/etc/systemd/system/
  done

2.5.3: Start the kube-nginx service

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-nginx && systemctl restart kube-nginx"
  done

2.6: Check the kube-nginx service status

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-nginx |grep 'Active:'"
  done
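
To further confirm that the local proxy actually forwards to the apiservers, a request can be sent through 127.0.0.1:8443 from any worker node (a sketch; depending on whether anonymous access is enabled on kube-apiserver this returns health data or an Unauthorized/Forbidden message, and either response shows the layer-4 forwarding path works):

[root@k8s-node1 ~]# curl -sk https://127.0.0.1:8443/healthz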

3. Deploy Docker

3.1. Download and distribute the Docker binaries

3.1.1: Download the package

[root@k8s-master1 ~]# cd /opt/k8s/work/
[root@k8s-master1 work]# wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
[root@k8s-master1 work]# tar -xzvf docker-19.03.9.tgz

3.1.2: Distribute the package

  • Distribute the binaries to every worker node.
[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp docker/*  root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

3.2. Create and distribute the systemd unit file

3.2.1: Create the systemd unit file

[root@k8s-master1 work]# cat > docker.service <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
 
[Service]
WorkingDirectory=##DOCKER_DIR##
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStart=/opt/k8s/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
 
[Install]
WantedBy=multi-user.target
EOF
  • Change the iptables firewall policy to allow forwarding:
[root@k8s-master1 work]# for i in 192.168.66.{65..67}; do echo ">>> $i";ssh root@$i "iptables -P FORWARD ACCEPT"; done

[root@k8s-master1 work]# for i in 192.168.66.{65..67}; do echo ">>> $i";ssh root@$i "echo '/sbin/iptables -P FORWARD ACCEPT' >> /etc/rc.local"; done

3.2.2: Distribute the systemd unit file to all worker machines

[root@k8s-master1 work]# sed -i -e "s|##DOCKER_DIR##|${DOCKER_DIR}|" docker.service

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp docker.service root@${node_ip}:/etc/systemd/system/
  done

3.3: Create and distribute the Docker configuration file

  • Use domestic registry mirrors to speed up image pulls and increase the download concurrency (dockerd must be restarted for this to take effect).

3.3.1: Configure a Docker registry mirror

  • Because of local network conditions, pulls from the official Docker Hub frequently fail, so Alibaba Cloud's Docker registry mirror is used instead.
  • Log in to your own Alibaba Cloud account to obtain your personal mirror address for the daemon configuration below.
[root@k8s-master1 work]# cat > docker-daemon.json <<EOF
{
    "registry-mirrors": ["https://a7ye1cuu.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
    "insecure-registries": ["docker02:35000"],
    "max-concurrent-downloads": 20,
    "live-restore": true,
    "max-concurrent-uploads": 10,
    "debug": true,
    "data-root": "${DOCKER_DIR}/data",
    "exec-root": "${DOCKER_DIR}/exec",
    "log-opts": {
      "max-size": "100m",
      "max-file": "5"
    }
}
EOF
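
Since the heredoc above is unquoted, ${DOCKER_DIR} is expanded by the shell when the file is written. Before distributing it, the generated JSON can be validated with the jq installed in step 1 (a quick sanity check):

[root@k8s-master1 work]# jq . docker-daemon.json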

3.3.2: Distribute to the worker nodes

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p  /etc/docker/ ${DOCKER_DIR}/{data,exec}"
    scp docker-daemon.json root@${node_ip}:/etc/docker/daemon.json
  done

3.4: Start the Docker service

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
  done

3.5: Check the service status

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status docker|grep Active"
  done

3.6: Check the docker0 bridge

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ip addr show docker0"
  done

4. Deploy the kubelet component

4.1. Create the kubelet bootstrap kubeconfig files

4.1.1: Create the files

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# for node_name in ${WORK_NAMES[@]}
  do
    echo ">>> ${node_name}"

    # Create a token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)

    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done
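
The loop writes one bootstrap kubeconfig per worker; a quick check that a kubelet-bootstrap-<node_name>.kubeconfig file now exists for each node (a sketch):

[root@k8s-master1 work]# ls kubelet-bootstrap-*.kubeconfig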

4.1.2: View the tokens kubeadm created for each node

[root@k8s-master1 work]# kubeadm token list --kubeconfig ~/.kube/config

4.1.3: View the Secret associated with each token

[root@k8s-master1 work]# kubectl get secrets  -n kube-system|grep bootstrap-token

4.2. Distribute the bootstrap kubeconfig files to all worker nodes

[root@k8s-master1 work]# for node_name in ${WORK_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done

4.3. Create and distribute the kubelet configuration file

4.3.1: Create the kubelet configuration template

  • Note: this must be run as the root account.
[root@k8s-master1 work]# cat <<EOF | tee kubelet-config.yaml.template
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
podCIDR: "${CLUSTER_CIDR}"
maxPods: 220
serializeImagePulls: false
hairpinMode: promiscuous-bridge
cgroupDriver: cgroupfs
runtimeRequestTimeout: "15m"
rotateCertificates: true
serverTLSBootstrap: true
readOnlyPort: 0
port: 10250
address: "##NODE_IP##"
EOF

4.3.2: Create and distribute a kubelet configuration file for each node

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
    scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
  done

4.4. Create and distribute the kubelet systemd unit file

4.4.1: Create the kubelet systemd unit file template

[root@k8s-master1 work]# cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
 
[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --network-plugin=cni \\
  --cni-conf-dir=/etc/cni/net.d \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF

4.4.2: Create and distribute a kubelet systemd unit file for each node

[root@k8s-master1 work]# for node_name in ${WORK_NAMES[@]}
  do
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
    scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
  done

4.5. Grant kube-apiserver access to the kubelet API

[root@k8s-master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-master

clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created

4.6. Bootstrap token authentication and authorization

[root@k8s-master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

4.7. Automatically approve CSRs and generate kubelet client certificates

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF

[root@k8s-master1 work]# kubectl apply -f  csr-crb.yaml
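
To confirm that the three ClusterRoleBindings and the custom ClusterRole defined above were created (a sketch):

[root@k8s-master1 work]# kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
[root@k8s-master1 work]# kubectl get clusterrole approve-node-server-renewal-csr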

4.8. Start the kubelet service

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done
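
Mirroring the other service checks in this document, the kubelet status on each worker can be verified with (a sketch):

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kubelet | grep Active"
  done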

4.9. Check the kubelet state

  • Wait a moment; the CSRs from all three nodes are automatically approved.
  • The Pending CSRs are the ones for creating the kubelet server certificates.
[root@k8s-master1 work]# kubectl get csr
  • All nodes are registered. (Seeing NotReady here is normal and expected at this point: no network plugin has been deployed yet. The nodes become Ready once the network plugin is installed later.)
[root@k8s-master1 work]# kubectl get node
NAME          STATUS         ROLES    AGE   VERSION
k8s-node1   NotReady    <none>   18h   v1.18.6
k8s-node2   NotReady    <none>   18h   v1.18.6
k8s-node3   NotReady    <none>   18h   v1.18.6
  • kube-controller-manager has generated a kubeconfig file and a key pair for each node.
  • Note: run the following commands on the worker nodes to check.
ls -l /etc/kubernetes/kubelet.kubeconfig
ls -l /etc/kubernetes/cert/kubelet-client-*
  • Notice that no kubelet server certificate files have been generated automatically yet.

4.10. Manually approve the server cert CSRs

  • For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests; they must be approved manually.
[root@k8s-master1 ~]# kubectl get csr
  • Approve them manually:
[root@k8s-master1 ~]# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
  • Check on the worker nodes: the server certificates have now been generated automatically.
ls -l /etc/kubernetes/cert/kubelet-*

4.11. kubelet API authentication and authorization

[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.66.65:10250/metrics
Unauthorized

[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.66.66:10250/metrics
Unauthorized

[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.66.67:10250/metrics
Unauthorized

[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.66.65:10250/metrics
Unauthorized

4.12. Certificate authentication and authorization

  • Using a certificate with insufficient permissions:
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.66.65:10250/metrics

Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
  • Using the admin certificate with the highest privileges, created when the kubectl command-line tool was set up:
[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.66.65:10250/metrics|head

4.13. Bearer token authentication and authorization

  • Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it has permission to call the kubelet API.
[root@k8s-master1 ~]# kubectl create sa kubelet-api-test

serviceaccount/kubelet-api-test created

[root@k8s-master1 ~]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test

clusterrolebinding.rbac.authorization.k8s.io/kubelet-api-test created

[root@k8s-master1 ~]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8s-master1 ~]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8s-master1 ~]# echo ${TOKEN}

[root@k8s-master1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.66.65:10250/metrics |head

# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

4.14: cadvisor and metrics

Note:

  • The kubelet configuration file (kubelet-config.yaml) sets authentication.anonymous.enabled to false, so anonymous access to the HTTPS service on port 10250 is not allowed;
  • Create and import the relevant certificates, then access port 10250 as described above;

4.14.1: Import the root certificate

  • Export the root certificate ca.pem from the server (/opt/k8s/work) to your own PC, then import it into the operating system and mark it as permanently trusted.
[root@k8s-master1 ~]# cd /opt/k8s/work/
[root@k8s-master1 work]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
[root@k8s-master1 work]# sz ca.pem
  • The certificate is now trusted, but the connection still returns a 401 Unauthorized error.

4.14.2: Create a browser certificate

  • Use the admin certificate and private key created when the kubectl command-line tool was set up, together with the ca certificate above, to create a certificate in PKCS#12/PFX format that the browser can use:
[root@k8s-master1 ~]# cd /opt/k8s/work/
[root@k8s-master1 work]# openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem
Enter Export Password:	# set a password
Verifying - Enter Export Password:
  • Import the generated admin.pfx into the system certificate store.
  • Restart the browser and access the URL again, this time selecting the admin.pfx imported above.

5. Deploy the kube-proxy component

  • kube-proxy runs on all worker nodes. It watches the apiserver for changes to service and endpoint objects and creates routing rules that provide service IPs and load balancing.
  • Unless otherwise noted, all the following operations are performed from the k8s-master1 node via remote calls to the worker nodes.

5.1: Create the kube-proxy certificate

5.1.2: Create the certificate signing request

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

5.1.3: Generate the certificate and private key

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

[root@k8s-master1 work]# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

5.2: Create and distribute the kubeconfig file

5.2.1: Create the kubeconfig file

[root@k8s-master1 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 work]# kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 work]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

5.2.2: Distribute the kubeconfig file

  • Distribute it to all worker node machines.
[root@k8s-master1 work]# for node_name in ${WORK_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
  done

5.3: Create the kube-proxy configuration file

5.3.1: Create the kube-proxy config template

[root@k8s-master1 work]# cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF

5.3.2: Create and distribute a kube-proxy configuration file for each worker node

[root@k8s-master1 work]# for (( i=0; i < 3; i++ ))
  do
    echo ">>> ${WORK_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${WORK_NAMES[i]}/" -e "s/##NODE_IP##/${WORK_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${WORK_NAMES[i]}.yaml.template
    scp kube-proxy-config-${WORK_NAMES[i]}.yaml.template root@${WORK_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
  done

5.4: Create and distribute the kube-proxy systemd unit file

5.4.1: Create the file

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5.4.2: Distribute the file

[root@k8s-master1 work]# for node_name in ${WORK_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kube-proxy.service root@${node_name}:/etc/systemd/system/
  done

5.5: Start the kube-proxy service

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_ip} "modprobe ip_vs_rr"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done

5.6: Check the startup result

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
  done

5.7: Check the listening ports

  • Log in to each worker node and run the following command to check that kube-proxy is listening:
[root@k8s-node1 ~]# netstat -lnpt|grep kube-prox
tcp        0      0 192.168.66.65:10249     0.0.0.0:*               LISTEN      35911/kube-proxy    
tcp        0      0 192.168.66.65:10256     0.0.0.0:*               LISTEN      35911/kube-proxy

5.8: Check the ipvs routing rules

[root@k8s-master1 work]# for node_ip in ${WORK_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
  done

>>> 192.168.66.65
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.66.62:6443           Masq    1      0          0         
  -> 192.168.66.63:6443           Masq    1      0          0         
  -> 192.168.66.64:6443           Masq    1      0          0         
>>> 192.168.66.66
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.66.62:6443           Masq    1      0          0         
  -> 192.168.66.63:6443           Masq    1      0          0         
  -> 192.168.66.64:6443           Masq    1      0          0         
>>> 192.168.66.67
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.66.62:6443           Masq    1      0          0         
  -> 192.168.66.63:6443           Masq    1      0          0         
  -> 192.168.66.64:6443           Masq    1      0          0   
  • As shown, all HTTPS requests to the kubernetes service VIP are forwarded to port 6443 on the kube-apiserver nodes.

6. Deploy the calico network

  • Kubernetes requires that all nodes in the cluster (including the master nodes) can reach one another over the Pod network.
  • Calico uses IPIP or BGP (IPIP by default) to create an interconnected Pod network across the nodes.

6.1. Install the calico network plugin

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# curl https://docs.projectcalico.org/manifests/calico.yaml -O

6.1.2: Modify the configuration

  • Change the Pod CIDR to 10.68.0.0/16; fill this in according to your actual environment.
  • Calico auto-detects the interface used for inter-node traffic; if a host has multiple NICs, configure a regular expression that matches the interconnect interface, such as en.* below (adjust it to your servers' interface names).
  • Modify typha_service_name.
  • Change CALICO_IPV4POOL_IPIP to Never to use BGP mode. Calico is installed as a DaemonSet on every node; each host runs a bird (BGP client) instance that advertises the IP ranges assigned to the nodes in the calico network to the other hosts in the cluster and forwards traffic over the local NIC, e.g. eth0 or ens33.
  • Also set IP_AUTODETECTION_METHOD to interface matching mode; the default is first-found, which can pick the wrong interface in complex network environments. (The edited fragments are sketched after the snippet below.)
# Back up the default file
[root@k8s-master1 work]# cp calico.yaml calico.yaml.bak
[root@k8s-master1 work]# vim calico.yaml

# IP automatic detection
- name: IP_AUTODETECTION_METHOD
  value: "interface=en.*"
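
Putting the bullets above together, the edited env entries of the calico-node container in calico.yaml look roughly like this (a sketch against the upstream manifest; the Pod CIDR and interface pattern are the values assumed in this document, and indentation must match the surrounding manifest):

# Sketch of the relevant calico-node env entries after editing
- name: CALICO_IPV4POOL_CIDR          # Pod CIDR; uncomment and set in the upstream manifest
  value: "10.68.0.0/16"
- name: CALICO_IPV4POOL_IPIP          # disable IPIP encapsulation and use plain BGP routing
  value: "Never"
- name: IP_AUTODETECTION_METHOD       # pick the interconnect interface explicitly instead of first-found
  value: "interface=en.*"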

6.1.3: Run the calico plugin

[root@k8s-master1 work]# kubectl apply -f  calico.yaml

6.2. Check the calico running state

[root@k8s-master1 work]# kubectl get pods -n kube-system -o wide
  • Use the docker command on a worker node to view the images used by calico:
[root@k8s-node1 ~]# docker images