CKA Exam Practice Questions (Part 2)
14. Upgrade the Cluster
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context mk8s
Task
The existing Kubernetes cluster is running version 1.24.2. Upgrade all of the Kubernetes control plane and node components on the master node only to version 1.24.3. Be sure to drain the master node before upgrading it and to uncordon it after the upgrade.
You can connect to the master node via SSH with the following command:
ssh master01
You can gain elevated privileges on that master node with the following command: sudo -i
Also upgrade kubelet and kubectl on the master node.
Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service, or any other addons.
Solution
# cordon stops scheduling: it marks the node SchedulingDisabled, so new pods will not be scheduled onto it, while pods already running on the node are unaffected.
# drain evicts the node: it first evicts the pods on the node so they are recreated on other nodes, then marks the node SchedulingDisabled.
# So the kubectl cordon master01 step below is optional, since drain cordons the node anyway. But I always work rigorously: never settle for simple when thorough is possible. So I include it.
Practice:
root@master01:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane 216d v1.24.2
node01 Ready <none> 216d v1.24.2
node02 Ready <none> 216d v1.24.2
root@master01:~# kubectl cordon master01
node/master01 cordoned
root@master01:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready,SchedulingDisabled control-plane 216d v1.24.2
node01 Ready <none> 216d v1.24.2
node02 Ready <none> 216d v1.24.2
root@master01:~# kubectl drain master01 --ignore-daemonsets
node/master01 already cordoned
WARNING: ignoring DaemonSet-managed Pods: calico-system/calico-node-48784, kube-system/kube-proxy-g7ct6
evicting pod tigera-operator/tigera-operator-5dc8b759d9-5w6q9
evicting pod kube-system/coredns-74586cf9b6-82vwb
evicting pod calico-system/calico-typha-68d6c564f-4vlvm
evicting pod kube-system/coredns-74586cf9b6-ppqhp
pod/calico-typha-68d6c564f-4vlvm evicted
pod/tigera-operator-5dc8b759d9-5w6q9 evicted
pod/coredns-74586cf9b6-82vwb evicted
pod/coredns-74586cf9b6-ppqhp evicted
node/master01 drained
root@master01:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready,SchedulingDisabled control-plane 216d v1.24.2
node01 Ready <none> 216d v1.24.2
node02 Ready <none> 216d v1.24.2
root@master01:~#
Upgrade output:
root@master01:~# apt-cache show kubeadm | grep 1.24.3
Version: 1.24.3-00
Filename: pool/kubeadm_1.24.3-00_amd64_a185bea971069e698ed5104545741e561a7c629ebb5587aabc0b2abc9bf79af7.deb
root@master01:~# apt-get install kubeadm=1.24.3-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
kubeadm
1 upgraded, 0 newly installed, 0 to remove and 153 not upgraded.
Need to get 9,002 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.24.3-00 [9,002 kB]
Fetched 9,002 kB in 37s (242 kB/s)
(Reading database ... 110770 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.24.3-00_amd64.deb ...
Unpacking kubeadm (1.24.3-00) over (1.24.2-00) ...
Setting up kubeadm (1.24.3-00) ...
root@master01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:29:09Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
root@master01:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.24.2
[upgrade/versions] kubeadm version: v1.24.3
I0216 17:39:46.339483 109707 version.go:255] remote version is much newer: v1.26.1; falling back to: stable-1.24
[upgrade/versions] Target version: v1.24.10
[upgrade/versions] Latest version in the v1.24 series: v1.24.10
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 3 x v1.24.2 v1.24.10
Upgrade to the latest version in the v1.24 series:
COMPONENT CURRENT TARGET
kube-apiserver v1.24.2 v1.24.10
kube-controller-manager v1.24.2 v1.24.10
kube-scheduler v1.24.2 v1.24.10
kube-proxy v1.24.2 v1.24.10
CoreDNS v1.8.6 v1.8.6
etcd 3.5.3-0 3.5.3-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.24.10
Note: Before you can perform this upgrade, you have to update kubeadm to v1.24.10.
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
root@master01:~#
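Note that kubeadm upgrade plan suggests v1.24.10 because the repository already carries newer 1.24 patch releases, but the task requires exactly v1.24.3, so that version has to be passed explicitly to kubeadm upgrade apply. Below is a minimal sketch of the remaining steps (output omitted); the --etcd-upgrade=false flag keeps etcd untouched as the task demands, and the 1.24.3-00 package version matches the apt-cache output shown above:
# Apply the control-plane upgrade to exactly v1.24.3, leaving etcd alone.
kubeadm upgrade apply v1.24.3 --etcd-upgrade=false
# Upgrade kubelet and kubectl on the master node, then restart kubelet.
apt-get install kubelet=1.24.3-00 kubectl=1.24.3-00
systemctl daemon-reload
systemctl restart kubelet
# Allow scheduling on master01 again and verify the version.
kubectl uncordon master01
kubectl get nodes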
15. Back Up and Restore etcd
Question
Set the configuration context
No change to the configuration context is needed for this task. However, before performing it, make sure you have returned to the initial node.
[candidate@master01] $ exit # Note: the previous task was done on master01, so exit back to node01. If you are already on node01, do not exit again.
Task
First, create a snapshot of the existing etcd instance running at https://11.0.1.111:2379 and save the snapshot to /var/lib/backup/etcd-snapshot.db. (Note: in the real exam this address is https://127.0.0.1:2379.)
Creating a snapshot of the given instance is expected to complete within a few seconds. If the operation appears to hang, something may be wrong with the command; press CTRL + C to cancel it and retry. Then restore the existing previous snapshot located at /data/backup/etcd-snapshot-previous.db.
The following TLS certificates and key are provided to connect to the server with etcdctl:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key
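A minimal sketch of the two etcdctl commands, using the endpoint and certificates listed above; the restore --data-dir path is a hypothetical example, since where the restored data must land depends on how etcd is deployed on the exam cluster:
# Save a snapshot of the running etcd instance to the required path.
ETCDCTL_API=3 etcdctl --endpoints=https://11.0.1.111:2379 \
  --cacert=/opt/KUIN00601/ca.crt \
  --cert=/opt/KUIN00601/etcd-client.crt \
  --key=/opt/KUIN00601/etcd-client.key \
  snapshot save /var/lib/backup/etcd-snapshot.db
# Restore the previous snapshot into a fresh data directory
# (/var/lib/etcd-restore is only an illustrative path).
ETCDCTL_API=3 etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db \
  --data-dir=/var/lib/etcd-restore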
Related articles:
CKA Exam Practice Questions (Part 1)
Upgrading kubeadm clusters
cordon, uncordon, and drain in K8s