
This post describes how to upgrade a kubeadm-based Kubernetes cluster from v1.22.3 to v1.22.17. Per the Kubernetes upgrade requirements, kubeadm itself must be upgraded to v1.22.17 first; only then can the individual components be upgraded to that version.
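As a rough sketch, the sequence repeated on each node looks like this (node names are placeholders; the apply step runs only on the first control-plane node, the other nodes use 'kubeadm upgrade node'):
# kubectl cordon <node>
# kubectl drain <node> --ignore-daemonsets --force
# yum install kubeadm-1.22.17-0.x86_64 -y --disableexcludes=kubernetes
# kubeadm upgrade apply v1.22.17   # first control-plane node only; other nodes: kubeadm upgrade node
# yum install kubelet-1.22.17-0.x86_64 -y
# systemctl daemon-reload && systemctl restart kubelet
# kubectl uncordon <node>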
1. Environment
CentOS Linux release 7.7.1908 (Core) 3.10.0-1062.el7.x86_64
kubeadm-1.22.3-0.x86_64
kubelet-1.22.3-0.x86_64
kubectl-1.22.3-0.x86_64
# kubectl get nodes
NAME           STATUS   ROLES           AGE    VERSION
k8s-master01   Ready    control-plane   105d   v1.22.3
k8s-master02   Ready    control-plane   105d   v1.22.3
k8s-master03   Ready    control-plane   105d   v1.22.3
k8s-node01     Ready    <none>          24d    v1.22.3
k8s-node02     Ready    <none>          103d   v1.22.3
k8s-node03     Ready    <none>          105d   v1.22.3
2. Mark the node to be upgraded as unschedulable
# kubectl cordon k8s-master02
node/k8s-master02 cordoned
# kubectl get nodes
NAME           STATUS                     ROLES           AGE    VERSION
k8s-master01   Ready                      control-plane   105d   v1.22.17
k8s-master02   Ready,SchedulingDisabled   control-plane   105d   v1.22.17
k8s-master03   Ready                      control-plane   105d   v1.22.17
k8s-node01     Ready                      <none>          24d    v1.22.17
k8s-node02     Ready                      <none>          103d   v1.22.17
k8s-node03     Ready                      <none>          105d   v1.22.17
3. Drain the workloads on the node being upgraded onto other nodes
# kubectl drain k8s-master02 --ignore-daemonsets --force
node/k8s-master02 already cordoned
WARNING: ignoring DaemonSet-managed Pods: devops/node-exporter-2bjln, kube-flannel/kube-flannel-ds-dsjvk, kube-system/kube-proxy-kq56m
evicting pod ingress-nginx/ingress-nginx-controller-55888bbc94-5h7mp
evicted
evicted
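One caveat worth noting: if any pod on the node uses an emptyDir volume, drain will refuse to evict it unless --delete-emptydir-data is also passed. That flag was not needed here, but a variant of the command would look like:
# kubectl drain k8s-master02 --ignore-daemonsets --force --delete-emptydir-data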
4. List the available kubeadm versions
1. Use the Aliyun mirror repository
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
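After writing the repo file, it is usually worth refreshing the yum metadata so the mirror is actually picked up:
# yum clean all && yum makecache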
2. List all available versions
# This lists every available version; only part of the output is shown here
# yum list kubeadm.x86_64 --showduplicates
kubeadm.x86_64 1.21.12-0 kubernetes
kubeadm.x86_64 1.21.13-0 kubernetes
kubeadm.x86_64 1.21.14-0 kubernetes
kubeadm.x86_64 1.22.0-0 kubernetes
kubeadm.x86_64 1.22.1-0 kubernetes
kubeadm.x86_64 1.22.2-0 kubernetes
kubeadm.x86_64 1.22.3-0 kubernetes
kubeadm.x86_64 1.22.4-0 kubernetes
kubeadm.x86_64 1.22.5-0 kubernetes
kubeadm.x86_64 1.22.6-0 kubernetes
kubeadm.x86_64 1.22.7-0 kubernetes
kubeadm.x86_64 1.22.8-0 kubernetes
kubeadm.x86_64 1.22.9-0 kubernetes
kubeadm.x86_64 1.22.10-0 kubernetes
kubeadm.x86_64 1.22.11-0 kubernetes
kubeadm.x86_64 1.22.12-0 kubernetes
kubeadm.x86_64 1.22.13-0 kubernetes
kubeadm.x86_64 1.22.14-0 kubernetes
kubeadm.x86_64 1.22.15-0 kubernetes
kubeadm.x86_64 1.22.16-0 kubernetes
kubeadm.x86_64 1.22.17-0 kubernetes
3. Check the upgrade plan
# The upgrade plan suggests which version you should upgrade to
# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.22.0
[upgrade/versions] kubeadm version: v1.22.17
I0222 11:29:39.31327 version.go:255] remote version is much newer: v1.26.1; falling back to: stable-1.22
[upgrade/versions] Target version: v1.22.17
[upgrade/versions] Latest version in the v1.22 series: v1.22.17
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     6 x v1.22.3   v1.22.17
Upgrade to the latest version in the v1.22 series:
COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.22.0   v1.22.17
kube-controller-manager   v1.22.0   v1.22.17
kube-scheduler            v1.22.0   v1.22.17
kube-proxy                v1.22.0   v1.22.17
CoreDNS                   v1.8.4    v1.8.4
etcd                      3.5.0-0   3.5.6-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.22.17
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
5. Upgrade kubeadm
# yum install kubeadm-1.22.17-0.x86_64 -y --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.tuna.tsinghua.edu.cn
* elrepo: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.bupt.edu.cn
* updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.22.3-0 will be updated
---> Package kubeadm.x86_64 0:1.22.17-0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=====================================================================================================================================================
 Package                  Arch              Version                Repository             Size
=====================================================================================================================================================
Updating:
 kubeadm                  x86_64            1.22.17-0              kubernetes             9.3 M
Transaction Summary
=====================================================================================================================================================
Upgrade 1 Package
Total download size: 9.3 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
8c09ebbc5153ff29e5a01be398b984c73f63c519f8ac2cbc30e76-kubeadm-1.22.17-0.x86_64.rpm | 9.3 MB 00:00:32
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : kubeadm-1.22.17-0.x86_64 1/2
Cleanup : kubeadm-1.22.3-0.x86_64 2/2
Verifying : kubeadm-1.22.17-0.x86_64 1/2
Verifying : kubeadm-1.22.3-0.x86_64 2/2
Updated:
kubeadm.x86_64 0:1.22.17-0
Complete!
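A quick sanity check that the new kubeadm binary is in place before proceeding:
# kubeadm version -o short
v1.22.17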
6. Upgrade the other components
# "--etcd-upgrade=false" is added here to avoid upgrading etcd
# kubeadm upgrade apply v1.22.17 --etcd-upgrade=false
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.22.17"
[upgrade/versions] Cluster version: v1.22.0
[upgrade/versions] kubeadm version: v1.22.17
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.22.17"...
Static pod: kube-apiserver-k8s-master03 hash: 4be77bb42044da85627a3bc22
Static pod: kube-controller-manager-k8s-master03 hash: 930fc0ab2d9cc85badf9b56f69e5405f
Static pod: kube-scheduler-k8s-master03 hash: 1f2c35a1ca637b7ed926d
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-02-22-11-32-09/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master03 hash: 4be77bb42044da85627a3bc22
Static pod: kube-apiserver-k8s-master03 hash: 0d8f4226a7b07db07c
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-02-22-11-32-09/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master03 hash: 930fc0ab2d9cc85badf9b56f69e5405f
Static pod: kube-controller-manager-k8s-master03 hash: dd7bdee28fe84c1fea6778d41
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-02-22-11-32-09/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master03 hash: 1f2c35a1ca637b7ed926d
Static pod: kube-scheduler-k8s-master03 hash: e5494ea4c4fa231c20be0bf6ed
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.17". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
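Note that 'kubeadm upgrade apply' only needs to run on the first control-plane node. On the remaining control-plane nodes (and on the worker nodes), the documented command is 'kubeadm upgrade node', which reuses the configuration already stored in the cluster:
# kubeadm upgrade node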
7. Upgrade the kubelet
# yum install kubelet-1.22.17-0.x86_64 -y
# Restart the kubelet service
# systemctl daemon-reload
# systemctl restart kubelet
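kubectl can be upgraded the same way if desired, and afterwards the node should report the new kubelet version in the VERSION column:
# yum install kubectl-1.22.17-0.x86_64 -y
# kubectl get nodes k8s-master02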
8. Mark the upgraded node as schedulable again
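To undo the cordon from step 2 and allow workloads to be scheduled onto the node again:
# kubectl uncordon k8s-master02
node/k8s-master02 uncordoned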

