
Binary Deployment of a Multi-Master Highly Available K8S Cluster - v1.18 (Part 1)

浅时光 · July 26, 2020

Preliminary Notes


The components used in this article, and which components each node runs, are not described in detail again here. If you are unfamiliar with what each component does, read the earlier post on this blog, 二进制搭建高可用K8S集群-v1.17 (binary deployment of an HA K8S cluster, v1.17), and above all consult the official documentation, which is the most accurate reference.

I. Architecture Planning


1. Platform Planning

(figure: platform planning diagram)

2. Configuration Planning

(figure: hardware configuration planning table)

Note: the configuration above is for reference only. If the cluster you maintain is large, the Master machines must have correspondingly strong specifications and performance.

3. Role Assignment

| Role | Server IP | Components | Install Method | OS Version |
|------|-----------|------------|----------------|------------|
| K8s-master1 | 192.168.66.62 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd | Binary | CentOS 7.8 |
| K8s-master2 | 192.168.66.63 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd | Binary | CentOS 7.8 |
| K8s-master3 | 192.168.66.64 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd | Binary | CentOS 7.8 |
| K8s-node1 | 192.168.66.65 | kubelet, kube-proxy, docker | Binary | CentOS 7.8 |
| K8s-node2 | 192.168.66.66 | kubelet, kube-proxy, docker | Binary | CentOS 7.8 |
| K8s-node3 | 192.168.66.67 | kubelet, kube-proxy, docker | Binary | CentOS 7.8 |

II. System Initialization


1. Set the Hostnames

  • Run on each node (every node sets its own hostname):
[root@k8s-master1 ~]# hostnamectl set-hostname k8s-master1
[root@k8s-master2 ~]# hostnamectl set-hostname k8s-master2
[root@k8s-master3 ~]# hostnamectl set-hostname k8s-master3

[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1
[root@k8s-node2 ~]# hostnamectl set-hostname k8s-node2
[root@k8s-node3 ~]# hostnamectl set-hostname k8s-node3

2. Configure Passwordless SSH Login

  • First configure hostname resolution:
[root@k8s-master1 ~]# vi /etc/hosts
192.168.66.62 k8s-master1
192.168.66.63 k8s-master2
192.168.66.64 k8s-master3
192.168.66.65 k8s-node1
192.168.66.66 k8s-node2
192.168.66.67 k8s-node3
  • Enable passwordless SSH from k8s-master1 to the other nodes (by IP):
[root@k8s-master1 ~]# ssh-keygen -t rsa
[root@k8s-master1 ~]# ssh-copy-id root@192.168.66.62
[root@k8s-master1 ~]# ssh-copy-id root@192.168.66.63
[root@k8s-master1 ~]# ssh-copy-id root@192.168.66.64
[root@k8s-master1 ~]# ssh-copy-id root@192.168.66.65
[root@k8s-master1 ~]# ssh-copy-id root@192.168.66.66
[root@k8s-master1 ~]# ssh-copy-id root@192.168.66.67
  • Copy the file to every node so hostname resolution works everywhere:
[root@k8s-master1 ~]# for i in 192.168.66.{62..67}; do echo ">>> $i";scp /etc/hosts root@$i:/etc/; done
  • Also set up passwordless login from k8s-master1 by hostname:
[root@k8s-master1 ~]# ssh-copy-id root@k8s-master1
[root@k8s-master1 ~]# ssh-copy-id root@k8s-master2
[root@k8s-master1 ~]# ssh-copy-id root@k8s-master3
[root@k8s-master1 ~]# ssh-copy-id root@k8s-node1
[root@k8s-master1 ~]# ssh-copy-id root@k8s-node2
[root@k8s-master1 ~]# ssh-copy-id root@k8s-node3
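A quick loop (not in the original post) to confirm that both hostname resolution and key-based login work from k8s-master1:

[root@k8s-master1 ~]# for h in k8s-master{1..3} k8s-node{1..3}; do echo ">>> $h"; ssh root@$h hostname; done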

3. Disable the Firewall

  • Applies to all nodes (run from k8s-master1):
[root@k8s-master1 ~]# for i in 192.168.66.{62..67}; do echo ">>> $i";ssh root@$i "systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld"; done

4. Disable SELinux

  • Applies to all nodes (run from k8s-master1):
[root@k8s-master1 ~]# for i in 192.168.66.{62..67}; do echo ">>> $i";ssh root@$i "sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config"; done

[root@k8s-master1 ~]# for i in 192.168.66.{62..67}; do echo ">>> $i";ssh root@$i "setenforce 0 && getenforce"; done

5. Install Dependency Packages

  • Applies to all nodes (run from k8s-master1):
[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "yum -y install gcc gcc-c++ libaio make cmake zlib-devel openssl-devel pcre pcre-devel wget git curl lynx lftp mailx mutt rsync ntp net-tools vim lrzsz screen sysstat yum-plugin-security yum-utils createrepo bash-completion zip unzip bzip2 tree tmpwatch pinfo man-pages lshw pciutils gdisk system-storage-manager git gdbm-devel sqlite-devel";done

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "yum install -y epel-release";done

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git";done

6. Configure Time Synchronization

  • k8s-master1 synchronizes time from the Internet; the other nodes synchronize from k8s-master1:
[root@k8s-master1 ~]# vim /etc/chrony.conf
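The screenshot of this edit is not reproduced here. A minimal sketch of the intended change, assuming the stock CentOS 7 chrony.conf (the upstream server is a placeholder; any reachable NTP source works):

# /etc/chrony.conf on k8s-master1
server 0.centos.pool.ntp.org iburst   # keep one or more Internet NTP sources
allow 192.168.66.0/24                 # allow the other cluster nodes to sync from this host
local stratum 10                      # keep serving time even if upstream is unreachable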
  • Start the service:
[root@k8s-master1 ~]# systemctl start chronyd.service
[root@k8s-master1 ~]# systemctl enable chronyd.service
[root@k8s-master1 ~]# systemctl status chronyd.service
  • Stop the ntpd service on the other nodes; chronyd is used here instead:
[root@k8s-master1 ~]# for i in 192.168.66.{63..67};do echo ">>> $i";ssh root@$i "systemctl stop ntpd && systemctl disable ntpd && systemctl status ntpd";done
  • Log in to each of the other nodes and edit the file manually:
vim /etc/chrony.conf
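Again the screenshot is omitted; the intended edit on each of the other nodes, under the same assumption about the stock config:

# /etc/chrony.conf on every other node: comment out the default
# "server X.centos.pool.ntp.org iburst" lines and point at k8s-master1
server 192.168.66.62 iburst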
  • Restart the service on the remaining nodes (run from k8s-master1):
[root@k8s-master1 ~]# for i in 192.168.66.{63..67};do echo ">>> $i";ssh root@$i "systemctl restart chronyd.service && systemctl enable chronyd.service && systemctl status chronyd.service";done
  • Check synchronization status; a ^* prefix means the source is synced (run from k8s-master1):
[root@k8s-master1 ~]# for i in 192.168.66.{63..67};do echo ">>> $i";ssh root@$i "chronyc sources";done
  • Set the system TimeZone (run from k8s-master1):
[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "timedatectl set-timezone Asia/Shanghai";done
  • Keep the hardware clock in UTC (run from k8s-master1):
[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "timedatectl set-local-rtc 0";done
  • Restart services that depend on the system time (run from k8s-master1):
[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "systemctl restart rsyslog && systemctl restart crond";done

7. Disable the Swap Partition

If the swap partition is not disabled, the K8S services will fail to start.

  • Applies to all nodes
  • Turn off swap:
[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "swapoff -a && free -h|grep Swap";done
  • Disable it permanently:
[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "sed -ri 's@^/dev/mapper/centos-swap@#/dev/mapper/centos-swap@' /etc/fstab && grep /dev/mapper/centos-swap /etc/fstab";done

8. Optimize Kernel Parameters

  • tcp_tw_recycle must be disabled, otherwise it conflicts with NAT and services become unreachable; IPv6 is disabled to avoid triggering a docker bug:
[root@k8s-master1 ~]# cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh3=4096
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

[root@k8s-master1 ~]# cat > /etc/modules-load.d/netfilter.conf <<EOF
# Load nf_conntrack.ko at boot
nf_conntrack
EOF

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";scp /etc/modules-load.d/netfilter.conf root@$i:/etc/modules-load.d/;done

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";scp /etc/sysctl.d/kubernetes.conf root@$i:/etc/sysctl.d/;done
  • After rebooting, apply the settings with sysctl -p:
[root@k8s-master1 ~]# for i in 192.168.66.{63..67};do echo ">>> $i";ssh root@$i "reboot";done
[root@k8s-master1 ~]# reboot

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "modprobe br_netfilter;sysctl -p /etc/sysctl.d/kubernetes.conf";done
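A spot-check (my addition) that the key settings took effect on every node; each should report bridge-nf-call-iptables=1, ip_forward=1, and tcp_tw_recycle=0:

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.ipv4.tcp_tw_recycle";done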

9. Configure the PATH Environment Variable

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc && source /root/.bashrc";done

10. Create the Required Directories

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "mkdir -p /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert";done

11. Disable Unneeded Services

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "systemctl stop postfix && systemctl disable postfix";done

12. Upgrade the System Kernel

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm";done

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "yum --enablerepo=elrepo-kernel install -y kernel-lt";done
  • Boot from the new kernel by default:
[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "grub2-set-default 0";done
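grub2-set-default writes the saved_entry GRUB variable; an optional check (not in the original post) before the later reboot, after which uname -r should show the new kernel-lt version:

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "grub2-editenv list";done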

13. Add a Docker User

  • Add a docker user on every server:
[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo ">>> $i";ssh root@$i "useradd -m docker";done

[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo $i;ssh root@$i "id docker";done

14. Configure Global Environment Variables

  • Append the following to the end of /etc/profile on every node; be sure to change the cluster IPs and hostnames to your own servers' addresses and names:
[root@k8s-master1 ~]# vim /etc/profile
#----------------------------K8S-----------------------------#
# Encryption key used to generate the EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Array of all machine IPs, masters and workers
export NODE_IPS=(192.168.66.62 192.168.66.63 192.168.66.64 192.168.66.65 192.168.66.66 192.168.66.67)

# Hostnames corresponding to the IPs above, masters and workers
export NODE_NAMES=(k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3)

# Master node IPs
export MASTER_IPS=(192.168.66.62 192.168.66.63 192.168.66.64)

# Worker node IP array
export WORK_IPS=(192.168.66.65 192.168.66.66 192.168.66.67)

# Hostnames corresponding to the worker IPs
export WORK_NAMES=(k8s-node1 k8s-node2 k8s-node3)

# etcd cluster IP array
export ETCD_IPS=(192.168.66.62 192.168.66.63 192.168.66.64)

# Hostnames corresponding to the etcd node IPs
export ETCD_NAMES=(k8s-master1 k8s-master2 k8s-master3)

# etcd cluster service address list; fill in the IPs of your own etcd servers
export ETCD_ENDPOINTS="https://192.168.66.62:2379,https://192.168.66.63:2379,https://192.168.66.64:2379"

# IPs and ports for inter-etcd communication; change the hostnames to those of your actual etcd servers
export ETCD_NODES="k8s-master1=https://192.168.66.62:2380,k8s-master2=https://192.168.66.63:2380,k8s-master3=https://192.168.66.64:2380"

# Address and port of the kube-apiserver reverse proxy (kube-nginx)
export KUBE_APISERVER="https://127.0.0.1:8443"

# Network interface used for inter-node communication; change to match your actual NIC name
export IFACE="ens33"

# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"

# etcd WAL directory; ideally an SSD partition, or at least a different partition from ETCD_DATA_DIR
export ETCD_WAL_DIR="/data/k8s/etcd/wal"

# Data directory for the K8S components
export K8S_DIR="/data/k8s/k8s"

## Use either DOCKER_DIR or CONTAINERD_DIR, not both
# docker data directory
export DOCKER_DIR="/data/k8s/docker"

# containerd data directory
export CONTAINERD_DIR="/data/k8s/containerd"

## The parameters below normally do not need to be changed
# Token used for TLS Bootstrapping; can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"

# Preferably use currently unused ranges for the service and Pod networks
# Service network: unroutable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy)
SERVICE_CIDR="10.254.0.0/16"

# Pod network, preferably a /16; unroutable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"

# Service port range (NodePort Range)
export NODE_PORT_RANGE="30000-32767"

# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"

# Cluster DNS domain (no trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"

# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH
  • Push the configuration to the Master nodes, the etcd nodes, and the worker nodes:
[root@k8s-master1 ~]# for i in 192.168.66.{62..67};do echo $i;scp /etc/profile root@$i:/etc/;done

# Finally, log in to each node and run the following command to make it take effect
source /etc/profile
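A simple sanity check (my addition) that the variables are loaded after sourcing the file; both commands should echo the values configured above:

[root@k8s-master1 ~]# echo "${NODE_IPS[@]}"
[root@k8s-master1 ~]# echo "${ETCD_NODES}"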

15. Reboot the Servers

[root@k8s-master1 ~]# for i in 192.168.66.{63..67};do echo $i;ssh root@$i "sync && reboot";done
  • Finally, reboot the k8s-master1 node itself:
[root@k8s-master1 ~]# sync && reboot

III. Create the CA Root Certificate and Key


  • The certificate will be distributed to every node, masters and workers alike.

1. Install the cfssl Toolset

Note: all commands are executed and all files are generated on k8s-master1, then distributed to the other nodes.

[root@k8s-master1 ~]# mkdir -p /opt/k8s/cert && cd /opt/k8s

[root@k8s-master1 k8s]# wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64
[root@k8s-master1 k8s]# mv cfssl_1.4.1_linux_amd64 /opt/k8s/bin/cfssl

[root@k8s-master1 k8s]# wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64
[root@k8s-master1 k8s]# mv cfssljson_1.4.1_linux_amd64 /opt/k8s/bin/cfssljson

[root@k8s-master1 k8s]# wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl-certinfo_1.4.1_linux_amd64
[root@k8s-master1 k8s]# mv cfssl-certinfo_1.4.1_linux_amd64 /opt/k8s/bin/cfssl-certinfo

[root@k8s-master1 k8s]# chmod +x /opt/k8s/bin/*
[root@k8s-master1 k8s]# export PATH=/opt/k8s/bin:$PATH
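A quick way (my addition) to confirm the binaries are on PATH and executable:

[root@k8s-master1 k8s]# cfssl version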

2. Create the Root Certificate (CA)

2.1: Create the Config File

[root@k8s-master1 k8s]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

2.2: Create the Certificate Signing Request File

[root@k8s-master1 work]# cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ],
  "ca": {
    "expiry": "876000h"
 }
}
EOF

2.3: Generate the CA Certificate and Private Key

[root@k8s-master1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@k8s-master1 work]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
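Optionally inspect the new CA's subject and validity period with openssl (not part of the original post):

[root@k8s-master1 work]# openssl x509 -in ca.pem -noout -subject -dates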

3. Distribute the Certificate Files

  • Copy the generated CA certificate, key, and config file to the /etc/kubernetes/cert directory on every node:
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
    scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert
  done

IV. Deploy the etcd Cluster


1. Download and Distribute the etcd Binaries

  • Note: in this deployment, etcd runs on the same machines as the K8S master nodes. The etcd tarball is assumed to already be in /opt/k8s/work; see the sketch below.
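A sketch of fetching the tarball from the official etcd GitHub releases (v3.4.10, matching the version used below; verify the URL for your environment):

[root@k8s-master1 work]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz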
[root@k8s-master1 ~]# cd /opt/k8s/work/
[root@k8s-master1 work]# tar -zxvf etcd-v3.4.10-linux-amd64.tar.gz

[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp etcd-v3.4.10-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

2. Create the etcd Certificate and Private Key

2.1: Create the Certificate Signing Request

  • Note: the IP addresses here must be filled in according to your actual etcd cluster IPs:
[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.66.62",
    "192.168.66.63",
    "192.168.66.64"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "opsnull"
    }
  ]
}
EOF

2.2: Generate the Certificate and Private Key

[root@k8s-master1 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
    -ca-key=/opt/k8s/work/ca-key.pem \
    -config=/opt/k8s/work/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

[root@k8s-master1 work]# ls etcd*pem
etcd-key.pem  etcd.pem

2.3: Distribute the Certificate and Key to the etcd Nodes

[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
    scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
  done

3. Create the etcd systemd Unit Template

[root@k8s-master1 work]# cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=##ETCD_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##ETCD_IP##:2380 \\
  --initial-advertise-peer-urls=https://##ETCD_IP##:2380 \\
  --listen-client-urls=https://##ETCD_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##ETCD_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4. Create and Distribute Per-Node etcd systemd Unit Files

4.1: Substitute the Template Variables

[root@k8s-master1 work]# for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##ETCD_NAME##/${ETCD_NAMES[i]}/" -e "s/##ETCD_IP##/${ETCD_IPS[i]}/" etcd.service.template > etcd-${ETCD_IPS[i]}.service 
  done

[root@k8s-master1 work]# ls *.service
etcd-192.168.66.62.service  etcd-192.168.66.63.service  etcd-192.168.66.64.service

4.2: Distribute the Generated systemd Unit Files

  • Each file is renamed to etcd.service on its target node:
[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
  done

4.3: Check the Configuration File

[root@k8s-master1 work]# ls /etc/systemd/system/etcd.service
/etc/systemd/system/etcd.service
[root@k8s-master1 work]# vim /etc/systemd/system/etcd.service
  • Confirm that the IP addresses and the data directory paths in the unit file are correct.

5. Start the etcd Service

[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR} && chmod 0700 /data/k8s/etcd/data"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd"
  done

6. Check the Startup Result

[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd|grep Active"
  done
  • Make sure startup produced no errors. Note: a running state does not mean the etcd nodes can communicate with one another:
[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd"
  done

7. Verify the Service Status

7.1: Run the Following Command on Any etcd Node

[root@k8s-master1 work]# for node_ip in ${ETCD_IPS[@]}
  do
    echo ">>> ${node_ip}"
    /opt/k8s/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
  done
  • If every node reports healthy, the etcd cluster is in a normal state.
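etcdctl can also print the member table, showing each node's ID, name, and peer URLs (an extra check, not in the original post):

[root@k8s-master1 work]# /opt/k8s/bin/etcdctl -w table \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} member list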

8. Check the Current Leader

[root@k8s-master1 work]# /opt/k8s/bin/etcdctl \
  -w table --cacert=/opt/k8s/work/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status
  • Here the output showed that the current etcd cluster leader is the 192.168.66.62 server.

V. Deploy the kubectl Command-Line Tool


1. Download and Distribute the kubectl Binary
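The client tarball is assumed to already be in /opt/k8s/work. A sketch of fetching it from the official release bucket (v1.18.6 is an example; substitute the exact v1.18 patch release you deploy):

[root@k8s-master1 work]# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.6/kubernetes-client-linux-amd64.tar.gz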

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# tar -zxvf kubernetes-client-linux-amd64.tar.gz
kubernetes/
kubernetes/client/
kubernetes/client/bin/
kubernetes/client/bin/kubectl
  • Distribute it to the other master and worker nodes:
[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp kubernetes/client/bin/kubectl root@${node_ip}:/opt/k8s/bin/
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done

2. Create the admin Certificate and Private Key

2.1: Create the Certificate Signing Request

[root@k8s-master1 ~]# cd /opt/k8s/work
[root@k8s-master1 work]# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "opsnull"
    }
  ]
}
EOF

2.2: Generate the Certificate and Private Key

[root@k8s-master1 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

[root@k8s-master1 work]# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

3. Create the kubeconfig File

  • Set cluster parameters:
[root@k8s-master1 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=https://${NODE_IPS[0]}:6443 \
  --kubeconfig=kubectl.kubeconfig
  • Set client credentials:
[root@k8s-master1 work]# kubectl config set-credentials admin \
  --client-certificate=/opt/k8s/work/admin.pem \
  --client-key=/opt/k8s/work/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig 
  • Set the context:
[root@k8s-master1 work]# kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig 
  • Switch to the default context:
[root@k8s-master1 work]# kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
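Optionally review the generated file; kubectl config view redacts the embedded certificate data by default (my addition):

[root@k8s-master1 work]# kubectl config view --kubeconfig=kubectl.kubeconfig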

4. Distribute the kubeconfig File

[root@k8s-master1 work]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ~/.kube"
    scp kubectl.kubeconfig root@${node_ip}:~/.kube/config
  done 

5. Confirm That kubectl Works

  • Make sure the kubectl command is usable on both the master and worker nodes:
[root@k8s-master1 work]# kubectl --help
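The kube-apiserver is not deployed at this point, so only client-side commands will succeed; for example (my addition):

[root@k8s-master1 work]# kubectl version --client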