This document describes how to upgrade a kubeadm cluster from version 1.8.x to 1.9.x, as well as from 1.8.x to 1.8.y and from 1.9.x to 1.9.y (where y > x).
If your cluster is currently at version 1.7, see Upgrading kubeadm clusters from 1.7 to 1.8.
Before you begin: you need a Kubernetes cluster created with kubeadm, and swap must be disabled on the nodes. kubeadm upgrade can now upgrade etcd: by default, when upgrading from Kubernetes 1.8 to 1.9, kubeadm upgrade also upgrades etcd to 3.1.10, since that is the officially validated etcd version for Kubernetes 1.9. kubeadm automates the upgrade process for you, and the kubeadm upgrade command does not touch any of your workloads, only Kubernetes-internal components. Even so, backups are important as a best practice: any application-level state, such as a database an app might depend on (like MySQL or MongoDB), must be backed up before the upgrade.
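Backing up the cluster configuration can be sketched with a small helper like the one below. `backup_dir` is a hypothetical name, not a kubeadm command, and the path in the usage comment is simply the kubeadm default location; this covers only control-plane configuration, not application-level state such as databases.

```shell
#!/bin/sh
# Hypothetical helper: archive a directory into a dated tarball under $2.
backup_dir() {
    src="$1"; dest="$2"
    mkdir -p "$dest"
    # Archive the directory by name, relative to its parent, so the
    # tarball extracts cleanly anywhere.
    tar -czf "$dest/$(basename "$src")-$(date +%Y%m%d).tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}

# usage (kubeadm default paths: static pod manifests, pki, kubeconfigs):
#   backup_dir /etc/kubernetes /var/backups/k8s
```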
Caution: All containers get restarted after the upgrade, because the container spec hash value changes.
Also note that only one minor version upgrade is supported. For example, you can upgrade from 1.8 to 1.9, but not from 1.7 to 1.9.
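This skew policy can be expressed as a small check. `minor_skew_ok` is a hypothetical helper for illustration, not part of kubeadm:

```shell
#!/bin/sh
# Hypothetical helper: succeed only if upgrading from $1 to $2 moves
# forward by at most one minor version (e.g. 1.8.x -> 1.9.x is fine,
# 1.7.x -> 1.9.x is not).
minor_skew_ok() {
    from_minor=$(echo "$1" | cut -d. -f2)
    to_minor=$(echo "$2" | cut -d. -f2)
    skew=$((to_minor - from_minor))
    [ "$skew" -ge 0 ] && [ "$skew" -le 1 ]
}

minor_skew_ok "1.8.1" "1.9.0" && echo "supported"     # one minor hop
minor_skew_ok "1.7.5" "1.9.0" || echo "unsupported"   # two minor hops
```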
Execute these commands on your master node.
Install the most recent version of kubeadm using curl, like so:
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
chmod a+rx /usr/bin/kubeadm
Caution: Upgrading the kubeadm package on your system before upgrading the control plane causes the upgrade to fail. Even though kubeadm ships in the Kubernetes repositories, it's important to install it manually. The kubeadm team is working on fixing this limitation.
Verify that the download works and that it has the expected version:
kubeadm version
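If you script this check, the version string can be extracted from the output. The sample string below is an assumed example of the `&version.Info{...}` format kubeadm prints; in practice you would capture `sample="$(kubeadm version)"` and verify the format against your own output:

```shell
#!/bin/sh
# Assumed sample of `kubeadm version` output; replace with the real
# command output when running against a live host.
sample='kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitTreeState:"clean"}'

# Pull out the GitVersion field.
got=$(echo "$sample" | sed -n 's/.*GitVersion:"\([^"]*\)".*/\1/p')
echo "$got"   # v1.9.0
```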
On the master node, run:
kubeadm upgrade plan
You should see output similar to this:
[preflight] Running pre-flight checks
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
[upgrade/health] Checking Node health: All Nodes are healthy
[upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Fetching available versions to upgrade to:
[upgrade/versions] Cluster version: v1.8.1
[upgrade/versions] kubeadm version: v1.9.0
[upgrade/versions] Latest stable version: v1.9.0
[upgrade/versions] Latest version in the v1.8 series: v1.8.6
Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.8.1 v1.8.6
Upgrade to the latest version in the v1.8 series:
COMPONENT CURRENT AVAILABLE
API Server v1.8.1 v1.8.6
Controller Manager v1.8.1 v1.8.6
Scheduler v1.8.1 v1.8.6
Kube Proxy v1.8.1 v1.8.6
Kube DNS 1.14.4 1.14.5
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.8.6
_____________________________________________________________________
Components that must be upgraded manually after you've upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.8.1 v1.9.0
Upgrade to the latest stable version:
COMPONENT CURRENT AVAILABLE
API Server v1.8.1 v1.9.0
Controller Manager v1.8.1 v1.9.0
Scheduler v1.8.1 v1.9.0
Kube Proxy v1.8.1 v1.9.0
Kube DNS 1.14.5 1.14.7
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.9.0
Note: Before you can perform this upgrade, you have to update kubeadm to v1.9.0
_____________________________________________________________________
The kubeadm upgrade plan command checks that your cluster is in an upgradeable state and fetches the versions you can upgrade to in a user-friendly way.
To check CoreDNS versions, include the --feature-gates=CoreDNS=true flag to verify the CoreDNS version that will be installed in place of kube-dns.
Pick a version to upgrade to and run it, for example:
kubeadm upgrade apply v1.9.0
You should see output similar to this:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to upgrade to version "v1.9.0"
[upgrade/versions] Cluster version: v1.8.1
[upgrade/versions] kubeadm version: v1.9.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.9.0"...
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804/etcd.yaml"
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests502223003/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/staticpods] Writing upgraded Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804/kube-scheduler.yaml"
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests502223003/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests502223003/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests502223003/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.9.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.
To upgrade the cluster with CoreDNS as the default internal DNS, invoke kubeadm upgrade apply with the --feature-gates=CoreDNS=true flag.

kubeadm upgrade apply does the following:

- Checks that your cluster is in an upgradeable state: the nodes are in the Ready state and the control plane is healthy.
- Applies the new kube-dns and kube-proxy manifests and enforces the creation of all necessary RBAC rules.

Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see if additional upgrade steps are necessary.
For each host (referred to as $HOST below) in your cluster, upgrade kubelet by executing the following commands.
Prepare the host for maintenance, marking it unschedulable and evicting its workload:
kubectl drain $HOST --ignore-daemonsets
When running this command against the master host, the following error is expected and can safely be ignored (there are static pods running on the master):
node "master" already cordoned
error: pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): etcd-kubeadm, kube-apiserver-kubeadm, kube-controller-manager-kubeadm, kube-scheduler-kubeadm
Upgrade the Kubernetes package version on the $HOST node by using your distribution-specific package manager. If the host is running a Debian-based distribution such as Ubuntu, run:
apt-get update
apt-get upgrade
If the host is running CentOS or similar, run:
yum update
The new version of the kubelet should now be running on the host. Verify this on $HOST with:
systemctl status kubelet
Bring the host back online by marking it schedulable:
kubectl uncordon $HOST
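The per-host sequence above can be sketched as a dry run that only prints the commands, so you can review them before executing anything. `dry_run`, `worker1`, and `worker2` are hypothetical names, and running the package upgrade over ssh is just one possible setup:

```shell
#!/bin/sh
# Hypothetical dry-run generator: print, for each host, the drain /
# upgrade / verify / uncordon sequence without executing it.
dry_run() {
    for HOST in "$@"; do
        echo "kubectl drain $HOST --ignore-daemonsets"
        echo "ssh $HOST 'apt-get update && apt-get upgrade'"   # or: yum update
        echo "ssh $HOST 'systemctl status kubelet'"
        echo "kubectl uncordon $HOST"
    done
}

dry_run worker1 worker2
```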
After upgrading kubelet on each host in your cluster, verify that all nodes are available again by executing the following command (from anywhere, for example, from outside the cluster):
kubectl get nodes
If the STATUS column of the output shows Ready for all of your hosts, you are done.
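As a convenience, the STATUS check can be scripted. `all_ready` is a hypothetical helper and the sample output below is an assumed example of the `kubectl get nodes` layout; adjust the column position if your kubectl prints a different format:

```shell
#!/bin/sh
# Hypothetical check: read `kubectl get nodes` output on stdin and
# succeed only if every node's STATUS column (field 2) is "Ready".
all_ready() {
    awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

# Assumed sample output; in practice: kubectl get nodes | all_ready
sample='NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    5d        v1.9.0
worker1   Ready     <none>    5d        v1.9.0'

echo "$sample" | all_ready && echo "all nodes Ready"
```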
If kubeadm upgrade somehow fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again: it is idempotent and should eventually make sure the actual state matches the state you declared.
You can also use kubeadm upgrade to change a running cluster with x.x.x --> x.x.x with the --force argument, which can be used to recover from a bad state.