k8s HA master node recovery

If etcd is deployed on the master nodes themselves (a stacked etcd topology), the failed node's etcd member must be recovered along with the control plane.
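
Before anything else, it helps to confirm the topology. A minimal check, assuming a standard kubeadm setup where the etcd static pods carry the component=etcd label:

# In a stacked topology, one etcd pod runs on each master node
kubectl -n kube-system get pods -l component=etcd -o wide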

On a healthy master node, list the etcd cluster members:

# List the etcd cluster members
ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://192.168.2.151:2379 \
  --endpoints=https://192.168.2.152:2379 \
  --endpoints=https://192.168.2.153:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key
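
The endpoint and certificate flags recur in every etcdctl call below. A small convenience sketch that factors them into shell variables (the names ETCD_ENDPOINTS and ETCD_FLAGS are my own; --endpoints also accepts a comma-separated list):

# Factor the shared flags into variables so later commands stay short
ETCD_ENDPOINTS="https://192.168.2.151:2379,https://192.168.2.152:2379,https://192.168.2.153:2379"
ETCD_FLAGS="--cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key"

# Equivalent member list call
ETCDCTL_API=3 etcdctl member list --endpoints=$ETCD_ENDPOINTS $ETCD_FLAGS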

On a healthy master node, check etcd endpoint health:

# Check the health of every etcd member and identify the unhealthy one
ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://192.168.2.151:2379 \
  --endpoints=https://192.168.2.152:2379 \
  --endpoints=https://192.168.2.153:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key

https://192.168.2.153:2379 is healthy: successfully committed proposal: took = 10.685441ms
https://192.168.2.151:2379 is healthy: successfully committed proposal: took = 11.972997ms
https://192.168.2.152:2379 is unhealthy: failed to commit proposal: context deadline exceeded
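
When scripting this check, the failed endpoint can be filtered out of the output. A sketch reusing the ETCD_ENDPOINTS/ETCD_FLAGS variables from above (newer etcdctl versions log health failures to stderr, hence the 2>&1):

# Print only the endpoints that failed the health check
ETCDCTL_API=3 etcdctl endpoint health --endpoints=$ETCD_ENDPOINTS $ETCD_FLAGS 2>&1 | grep unhealthy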

On a healthy master node, remove the unhealthy etcd member:

# a64cd8ddf3d578de is the ID of the unhealthy member, taken from the member list output above
ETCDCTL_API=3 etcdctl \
  --key /etc/kubernetes/pki/apiserver-etcd-client.key \
  --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  member remove a64cd8ddf3d578de
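
If the member ID is not at hand, it can be pulled from the member list by matching the failed node's address. A sketch assuming the default simple output format (hex ID in the first comma-separated column) and 192.168.2.152 as the failed node, as in the health check above:

# Extract the member ID of the failed node from the member list
ETCDCTL_API=3 etcdctl member list --endpoints=$ETCD_ENDPOINTS $ETCD_FLAGS | grep 192.168.2.152 | awk -F', ' '{print $1}'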

On the failed master node, clean up the stale data:

sudo mv /etc/kubernetes/ /etc/kubernetes-backup/
sudo mkdir /etc/kubernetes/

# Remove /var/lib/etcd on this node (keeping a backup copy)
sudo mv /var/lib/etcd /var/lib/etcd.bak
sudo mkdir /var/lib/etcd
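
A quick sanity check that both backups landed where expected before moving on (paths match the mv commands above):

# Confirm the old state is preserved
ls -ld /etc/kubernetes-backup /var/lib/etcd.bak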

Rejoin the cluster:

# Run on a healthy master node
# Encrypt the control-plane certificates and upload them to the cluster
kubeadm init phase upload-certs --upload-certs
...
[upload-certs] Using certificate key:
ee7cdf97abe2993b9c66cbcfa175b468f4ce5a23e11d477c5b902775a7a36e77
# Create a new bootstrap token, used to authenticate the node joining the cluster
kubeadm token create --print-join-command
kubeadm join api-server:8443 --token ab2u13.b20gyt91bdz5eqxy --discovery-token-ca-cert-hash sha256:efb37c407e4eaef751d402eed838e44b2defeb4c45e03b6d2151e62ca915e0f7

# Run on the node that is rejoining the cluster
# Rejoin as a control-plane node, passing the certificate key from upload-certs above
kubeadm join api-server:8443 --token ab2u13.b20gyt91bdz5eqxy --discovery-token-ca-cert-hash sha256:efb37c407e4eaef751d402eed838e44b2defeb4c45e03b6d2151e62ca915e0f7 --control-plane --certificate-key ee7cdf97abe2993b9c66cbcfa175b468f4ce5a23e11d477c5b902775a7a36e77
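
Once the join completes, verify that the recovered master is back and that all three etcd members report healthy again (reusing the helper variables from the first etcdctl sketch):

# The recovered master should show up as Ready after a short while
kubectl get nodes

# All three endpoints should now report healthy
ETCDCTL_API=3 etcdctl endpoint health --endpoints=$ETCD_ENDPOINTS $ETCD_FLAGS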

If the join fails, do a full cleanup and start over:

kubeadm reset -f
cd /tmp # switch directories first: a file in the current directory sharing a name with a package being removed can make the removal fail
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd

rm -rf /run/flannel
rm -rf /run/xtables.lock

systemctl stop kubelet
yum remove kube* -y

# Unmount everything kubelet mounted first, otherwise the next command cannot delete the directory
for i in $(df | grep kubelet | awk '{print $NF}'); do umount -l $i; done
rm -rf /var/lib/kubelet
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*

iptables -F

reboot # reboot and start from scratch

yum install -y kubelet-1.30* kubeadm-1.30* kubectl-1.30*
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
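
After the reinstall, check the component versions before rerunning the kubeadm join command printed earlier, to avoid version-skew surprises:

# All three components should be at the expected 1.30.x version
kubeadm version
kubelet --version
kubectl version --client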