
Upgrading kubeadm clusters from v1.13 to v1.14

This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.13.x to version 1.14.x, and from version 1.14.x to 1.14.y, where y > x.

The upgrade workflow at a high level is the following:

  1. Upgrade the primary control plane node.
  2. Upgrade additional control plane nodes.
  3. Upgrade worker nodes.

Note:

With the release of Kubernetes v1.14, the kubeadm instructions for upgrading both HA and single control plane clusters have been merged into a single document.

Before you begin

Additional information

Determine which version to upgrade to

  1. Find the latest stable 1.14 version:

# Ubuntu, Debian or HypriotOS
apt update
apt-cache policy kubeadm
# find the latest 1.14 version in the list
# it should look like 1.14.x-00, where x is the latest patch

# CentOS, RHEL or Fedora
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# find the latest 1.14 version in the list
# it should look like 1.14.x-0, where x is the latest patch

Upgrade the first control plane node

  1. On your first control plane node, upgrade kubeadm:

# Ubuntu, Debian or HypriotOS
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.14.x-00 && \
apt-mark hold kubeadm

# CentOS, RHEL or Fedora
# replace x in 1.14.x-0 with the latest patch version
yum install -y kubeadm-1.14.x-0 --disableexcludes=kubernetes

  2. Verify that the download works and has the expected version:

    kubeadm version
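    You should see output similar to the following; this is only a sketch, and the exact build fields depend on the package you installed:

    kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.x", ...}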
  3. On the control plane node, run:

    sudo kubeadm upgrade plan

    You should see output similar to this:

    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.13.3
    [upgrade/versions] kubeadm version: v1.14.0
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     2 x v1.13.3   v1.14.0
    
    Upgrade to the latest version in the v1.13 series:
    
    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.13.3   v1.14.0
    Controller Manager   v1.13.3   v1.14.0
    Scheduler            v1.13.3   v1.14.0
    Kube Proxy           v1.13.3   v1.14.0
    CoreDNS              1.2.6     1.3.1
    Etcd                 3.2.24    3.3.10
    
    You can now apply the upgrade by executing the following command:
    
            kubeadm upgrade apply v1.14.0
    
    _____________________________________________________________________

    This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.

  4. Choose a version to upgrade to, and run the appropriate command. For example:

    sudo kubeadm upgrade apply v1.14.x

    Replace x with the patch version you picked for this upgrade.

    You should see output similar to this:

    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade/version] You have chosen to change the cluster version to "v1.14.0"
    [upgrade/versions] Cluster version: v1.13.3
    [upgrade/versions] kubeadm version: v1.14.0
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/prepull] Prepulling image for component etcd.
    [upgrade/prepull] Prepulling image for component kube-scheduler.
    [upgrade/prepull] Prepulling image for component kube-apiserver.
    [upgrade/prepull] Prepulling image for component kube-controller-manager.
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [upgrade/prepull] Prepulled image for component etcd.
    [upgrade/prepull] Prepulled image for component kube-apiserver.
    [upgrade/prepull] Prepulled image for component kube-scheduler.
    [upgrade/prepull] Prepulled image for component kube-controller-manager.
    [upgrade/prepull] Successfully prepulled the images for all the control plane components
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.0"...
    Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400
    Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2
    Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239
    Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239
    Static pod: etcd-myhost hash: 64a28f011070816f4beb07a9c96d73b6
    [apiclient] Found 1 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests043818770"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400
    Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400
    Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400
    Static pod: kube-apiserver-myhost hash: b8a6533e241a8c6dab84d32bb708b8a1
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2
    Static pod: kube-controller-manager-myhost hash: 6f77d441d2488efd9fc2d9a9987ad30b
    [apiclient] Found 1 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4
    Static pod: kube-scheduler-myhost hash: a24773c92bb69c3748fcce5e540b7574
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.0". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
  5. Manually upgrade your CNI provider plugin.

    Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see whether additional upgrade steps are required. A quick way to check which provider is running is shown below.
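    If you are not sure which CNI provider your cluster runs, one optional sanity check (not part of the official procedure) is to list the DaemonSets in the kube-system namespace, where CNI providers are typically deployed; the provider name and image version usually show up there:

    kubectl get daemonsets -n kube-system -o wide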

  6. Upgrade the kubelet and kubectl on the control plane node:

# Ubuntu, Debian or HypriotOS
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 && \
apt-mark hold kubelet kubectl

# CentOS, RHEL or Fedora
# replace x in 1.14.x-0 with the latest patch version
yum install -y kubelet-1.14.x-0 kubectl-1.14.x-0 --disableexcludes=kubernetes

  7. Restart the kubelet:

    sudo systemctl restart kubelet
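    Optionally, you can confirm that the kubelet came back up and now reports the new version; this is a quick sanity check, not part of the official procedure:

    systemctl status kubelet
    kubelet --version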

Upgrade additional control plane nodes

  1. Same as the first control plane node, but use:

    sudo kubeadm upgrade node experimental-control-plane

instead of:

    sudo kubeadm upgrade apply

Also, sudo kubeadm upgrade plan is not needed.
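Putting the steps together, the sequence on each additional control plane node looks roughly like the sketch below for a Debian-based node; on yum-based systems substitute the equivalent yum commands, and replace x with the patch version as before:

    # 1. upgrade the kubeadm package
    apt-mark unhold kubeadm && \
    apt-get update && apt-get install -y kubeadm=1.14.x-00 && \
    apt-mark hold kubeadm

    # 2. upgrade the control plane static Pod manifests on this node
    sudo kubeadm upgrade node experimental-control-plane

    # 3. upgrade kubelet and kubectl, then restart the kubelet
    apt-mark unhold kubelet kubectl && \
    apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 && \
    apt-mark hold kubelet kubectl
    sudo systemctl restart kubelet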

Upgrade worker nodes

The upgrade procedure on worker nodes should be executed one node at a time, or a few nodes at a time, without compromising the minimum required capacity for running your workloads. The commands that follow use $NODE for the name of the node being upgraded; see the example below for one way to set it.
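One way to list the nodes and record the name of the node you are about to upgrade; the node name used here is only a placeholder:

    kubectl get nodes
    # export the name of the node being upgraded for the commands below
    export NODE=ip-172-31-85-18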

Upgrade kubeadm

  1. Upgrade kubeadm on all worker nodes:

# Ubuntu, Debian or HypriotOS
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.14.x-00 && \
apt-mark hold kubeadm

# CentOS, RHEL or Fedora
# replace x in 1.14.x-0 with the latest patch version
yum install -y kubeadm-1.14.x-0 --disableexcludes=kubernetes

Drain the node

  1. Prepare the node for maintenance by marking it unschedulable and evicting the workloads. Run:

    kubectl drain $NODE --ignore-daemonsets

    You should see output similar to this:

    node/ip-172-31-85-18 cordoned
    WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx
    node/ip-172-31-85-18 drained

Upgrade the kubelet config

  1. Upgrade the kubelet config:

    sudo kubeadm upgrade node config --kubelet-version v1.14.x

    Replace x with the latest patch version.

Upgrade kubelet and kubectl

  1. Upgrade the Kubernetes package versions by running the Linux package manager for your distribution:

# Ubuntu, Debian or HypriotOS
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 && \
apt-mark hold kubelet kubectl

# CentOS, RHEL or Fedora
# replace x in 1.14.x-0 with the latest patch version
yum install -y kubelet-1.14.x-0 kubectl-1.14.x-0 --disableexcludes=kubernetes

  2. Restart the kubelet:

    sudo systemctl restart kubelet

Uncordon the node

  1. Bring the node back online by marking it schedulable:

    kubectl uncordon $NODE

Verify the status of the cluster

After the kubelet is upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:

kubectl get nodes

The STATUS column should show Ready for all your nodes, and the version number should be updated.
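A healthy result looks roughly like the sample below; the node names, roles and ages are placeholders:

    NAME              STATUS   ROLES    AGE   VERSION
    ip-172-31-22-98   Ready    master   98d   v1.14.0
    ip-172-31-85-18   Ready    <none>   98d   v1.14.0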

Recovering from a failure state

If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare. To recover from a bad state, you can also run kubeadm upgrade --force without changing the version that your cluster is running.
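For example, one way to retry an interrupted upgrade, or to re-apply the version the cluster is already running, is a forced apply; replace x with the patch version you chose earlier (--force also implies non-interactive mode):

    sudo kubeadm upgrade apply v1.14.x --force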

How it works

kubeadm upgrade apply does the following:

- Checks that your cluster is in an upgradeable state:
  - The API server is reachable
  - All nodes are in the Ready state
  - The control plane is healthy
- Enforces the version skew policies.
- Makes sure the control plane images are available or available to pull to the machine.
- Upgrades the control plane components, or rolls back if any of them fails to come up.
- Applies the new kube-dns and kube-proxy manifests and makes sure that all necessary RBAC rules are created.
- Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days.

kubeadm upgrade node experimental-control-plane does the following on additional control plane nodes:

- Fetches the kubeadm ClusterConfiguration from the cluster.
- Optionally backs up the kube-apiserver certificate.
- Upgrades the static Pod manifests for the control plane components.
