This command initializes a Kubernetes control-plane node.
Run this command in order to set up the Kubernetes control plane.
The “init” command executes the following phases:
preflight Run pre-flight checks
kubelet-start Writes kubelet settings and (re)starts the kubelet
certs Certificate generation
/ca Generates the self-signed Kubernetes CA to provision identities for other Kubernetes components
/apiserver Generates the certificate for serving the Kubernetes API
/apiserver-kubelet-client Generates the Client certificate for the API server to connect to kubelet
/front-proxy-ca Generates the self-signed CA to provision identities for front proxy
/front-proxy-client Generates the client for the front proxy
/etcd-ca Generates the self-signed CA to provision identities for etcd
/etcd-server Generates the certificate for serving etcd
/apiserver-etcd-client Generates the client apiserver uses to access etcd
/etcd-peer Generates the credentials for etcd nodes to communicate with each other
/etcd-healthcheck-client Generates the client certificate for liveness probes to healthcheck etcd
/sa Generates a private key for signing service account tokens along with its public key
kubeconfig Generates all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
/admin Generates a kubeconfig file for the admin to use and for kubeadm itself
/kubelet Generates a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
/controller-manager Generates a kubeconfig file for the controller manager to use
/scheduler Generates a kubeconfig file for the scheduler to use
control-plane Generates all static Pod manifest files necessary to establish the control plane
/apiserver Generates the kube-apiserver static Pod manifest
/controller-manager Generates the kube-controller-manager static Pod manifest
/scheduler Generates the kube-scheduler static Pod manifest
etcd Generates static Pod manifest file for local etcd.
/local Generates the static Pod manifest file for a local, single-node local etcd instance.
upload-config Uploads the kubeadm and kubelet configuration to a ConfigMap
/kubeadm Uploads the kubeadm ClusterConfiguration to a ConfigMap
/kubelet Uploads the kubelet component config to a ConfigMap
upload-certs Upload certificates to kubeadm-certs
mark-control-plane Mark a node as a control-plane
bootstrap-token Generates bootstrap tokens used to join a node to a cluster
addon Installs required addons for passing Conformance tests
/coredns Installs the CoreDNS addon to a Kubernetes cluster
/kube-proxy Installs the kube-proxy addon to a Kubernetes cluster
kubeadm init [flags]
--apiserver-advertise-address string
    The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
    Port for the API Server to bind to.
--apiserver-cert-extra-sans stringSlice
    Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
--cert-dir string     Default: "/etc/kubernetes/pki"
    The path where to save and store the certificates.
--certificate-key string
    Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
--config string
    Path to a kubeadm configuration file.
--cri-socket string
    Path to the CRI socket to connect to. If empty, kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have a non-standard CRI socket.
--dry-run
    Don't apply any changes; just output what would be done.
--experimental-upload-certs
    Upload control-plane certificates to the kubeadm-certs Secret.
--feature-gates string
    A set of key=value pairs that describe feature gates for various features.
-h, --help
    Help for init.
--ignore-preflight-errors stringSlice
    A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--image-repository string     Default: "k8s.gcr.io"
    Choose a container registry to pull control plane images from.
--kubernetes-version string     Default: "stable-1"
    Choose a specific Kubernetes version for the control plane.
--node-name string
    Specify the node name.
--pod-network-cidr string
    Specify the range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
--service-cidr string     Default: "10.96.0.0/12"
    Use an alternative range of IP addresses for service VIPs.
--service-dns-domain string     Default: "cluster.local"
    Use an alternative domain for services, e.g. "myorg.internal".
--skip-certificate-key-print
    Don't print the key used to encrypt the control-plane certificates.
--skip-phases stringSlice
    List of phases to be skipped.
--skip-token-print
    Skip printing of the default bootstrap token generated by 'kubeadm init'.
--token string
    The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
--token-ttl duration     Default: 24h0m0s
    The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire.
--rootfs string
    [EXPERIMENTAL] The path to the 'real' host root filesystem.
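For example, a typical invocation combining a few of these flags might look like the following (the Pod network CIDR and advertise address are illustrative values; pick ones that match your environment and Pod network add-on):

# Illustrative example only: adjust the CIDR and address for your cluster.
sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 192.0.2.10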
kubeadm init
bootstraps a Kubernetes control-plane node by executing the
following steps:
1. Runs a series of pre-flight checks to validate the system state before making changes. Some checks only trigger warnings, others are considered errors and will exit kubeadm until the problem is corrected or the user specifies --ignore-preflight-errors=<list-of-errors>.
2. Generates a self-signed CA (or uses an existing one if provided) to set up identities for each component in the cluster. If the user has provided their own CA cert and/or key by dropping it in the cert directory configured via --cert-dir (/etc/kubernetes/pki by default), this step is skipped, as described in the Using custom certificates document. The API server certs will have additional SAN entries for any --apiserver-cert-extra-sans arguments, lowercased if necessary.
3. Writes kubeconfig files in /etc/kubernetes/ for the kubelet, the controller-manager and the scheduler to use to connect to the API server, each with its own identity, as well as an additional kubeconfig file for administration named admin.conf.
4. Generates static Pod manifests for the API server, controller manager and scheduler. In case an external etcd is not provided, an additional static Pod manifest is generated for etcd. Static Pod manifests are written to /etc/kubernetes/manifests; the kubelet watches this directory for Pods to create on startup. Once the control plane Pods are up and running, the kubeadm init sequence can continue.
5. Applies labels and taints to the control-plane node so that no additional workloads will run there.
6. Generates the token that additional nodes can use to register themselves with a control-plane in the future. Optionally, the user can provide a token via --token, as described in the kubeadm token docs.
7. Makes all the necessary configurations for allowing node joining with the Bootstrap Tokens and TLS Bootstrap mechanism:
   - Write a ConfigMap for making available all the information required for joining, and set up related RBAC access rules.
   - Let Bootstrap Tokens access the CSR signing API.
   - Configure auto-approval for new CSR requests.
   See kubeadm join for additional info.
8. Installs a DNS server (CoreDNS) and the kube-proxy addon components via the API server. The DNS addon to deploy can be configured in the kubeadm ClusterConfiguration. For more information about the configuration see the section Using kubeadm init with a configuration file below. Please note that although the DNS server is deployed, it will not be scheduled until CNI is installed.

kubeadm allows you to create a control-plane node in phases. In 1.13 the kubeadm init phase command has graduated to GA from its previous alpha state under kubeadm alpha phase.
To view the ordered list of phases and sub-phases you can call kubeadm init --help. The list will be located at the top of the help screen and each phase will have a description next to it.
Note that by calling kubeadm init
all of the phases and sub-phases will be executed in this exact order.
Some phases have unique flags, so if you want to have a look at the list of available options add --help, for example:
sudo kubeadm init phase control-plane controller-manager --help
You can also use --help
to see the list of sub-phases for a certain parent phase:
sudo kubeadm init phase control-plane --help
kubeadm init also exposes a flag called --skip-phases that can be used to skip certain phases. The flag accepts a list of phase names, which can be taken from the ordered list above.
An example:
sudo kubeadm init phase control-plane all --config=configfile.yaml
sudo kubeadm init phase etcd local --config=configfile.yaml
# you can now modify the control plane and etcd manifest files
sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml
This example writes the manifest files for the control plane and etcd in /etc/kubernetes/manifests based on the configuration in configfile.yaml. You can then modify the files and skip those phases using --skip-phases: the last command creates a control-plane node from the custom manifest files.
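You can also skip individual sub-phases. As an illustrative sketch, assuming you deploy kube-proxy yourself, you could skip only its addon sub-phase (the phase name is taken from the ordered list above):

# Skip only the kube-proxy addon sub-phase during init.
sudo kubeadm init --skip-phases=addon/kube-proxy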
Caution: The config file is still considered beta and may change in future versions.
It’s possible to configure kubeadm init
with a configuration file instead of command
line flags, and some more advanced features may only be available as
configuration file options. This file is passed in the --config
option.
In Kubernetes 1.11 and later, the default configuration can be printed out using the kubeadm config print command.
It is recommended that you migrate your old v1alpha3
configuration to v1beta1
using
the kubeadm config migrate command,
because v1alpha3
will be removed in Kubernetes 1.15.
For more details on each field in the v1beta1
configuration you can navigate to our
API reference pages.
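As an illustrative sketch (the file name kubeadm-config.yaml and the podSubnet value are placeholders, not defaults), a minimal v1beta1 configuration and its use could look like this:

# Write a minimal ClusterConfiguration; file name and network values are illustrative.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable-1
networking:
  podSubnet: "10.244.0.0/16"
EOF
# Pass the file to kubeadm init instead of individual command line flags.
sudo kubeadm init --config kubeadm-config.yaml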
For information about kube-proxy parameters in the kubeadm configuration see: kube-proxy
For information about enabling IPVS mode with kubeadm see: IPVS
For information about passing flags to control plane components see: control-plane-flags
By default, kubeadm pulls images from k8s.gcr.io
, unless
the requested Kubernetes version is a CI version. In this case,
gcr.io/kubernetes-ci-images
is used.
You can override this behavior by using kubeadm with a configuration file. Allowed customizations are:
- an alternative imageRepository to be used instead of k8s.gcr.io;
- useHyperKubeImage set to true to use the HyperKube image;
- a specific imageRepository and imageTag for etcd or the DNS add-on.
Please note that the configuration field kubernetesVersion or the command line flag --kubernetes-version affects the version of the images.
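A minimal sketch of such a configuration, assuming images are mirrored to a private registry (the registry name, paths and tag below are placeholders):

# Illustrative ClusterConfiguration with custom image locations; values are placeholders.
cat <<EOF > custom-images.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: registry.example.com/k8s
etcd:
  local:
    imageRepository: registry.example.com/etcd
    imageTag: "3.3.10"
EOF
sudo kubeadm init --config custom-images.yaml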
By default, kubeadm generates all the certificates needed for a cluster to run. You can override this behavior by providing your own certificates.
To do so, you must place them in whatever directory is specified by the
--cert-dir
flag or CertificatesDir
configuration file key. By default this
is /etc/kubernetes/pki
.
If a given certificate and private key pair exists, kubeadm skips the
generation step and existing files are used for the prescribed
use case. This means you can, for example, copy an existing CA into /etc/kubernetes/pki/ca.crt
and /etc/kubernetes/pki/ca.key
, and kubeadm will use this CA for signing the rest
of the certs.
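For example, a minimal sketch of reusing an existing CA (the source file names are illustrative):

# Place an existing CA where kubeadm expects it; CA generation is then skipped.
sudo mkdir -p /etc/kubernetes/pki
sudo cp my-ca.crt /etc/kubernetes/pki/ca.crt
sudo cp my-ca.key /etc/kubernetes/pki/ca.key
sudo kubeadm init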
It is also possible to provide just the ca.crt
file and not the
ca.key
file (this is only available for the root CA file, not other cert pairs).
If all other certificates and kubeconfig files are in place, kubeadm recognizes
this condition and activates the “External CA” mode. kubeadm will proceed without the
CA key on disk.
Instead, run the controller-manager standalone with --controllers=csrsigner
and
point to the CA certificate and key.
The kubeadm package ships with configuration for how the kubelet should
be run. Note that the kubeadm
CLI command never touches this drop-in file.
This drop-in file belongs to the kubeadm deb/rpm package.
To find out more about how kubeadm manages the kubelet have a look at this page.
Since v1.6.0, Kubernetes has enabled the use of CRI, Container Runtime Interface, by default.
The container runtime used by default is Docker, which is enabled through the built-in
dockershim
CRI implementation inside of the kubelet
.
Other CRI-based runtimes are also supported. Refer to the CRI installation instructions for more information.
After you have successfully installed kubeadm
and kubelet
, execute
these two additional steps:
1. Install the runtime shim on every node, following the installation document in the runtime shim project listing above.
2. Configure kubelet to use the remote CRI runtime. Please remember to change RUNTIME_ENDPOINT to your own value like /var/run/{your_runtime}.sock:
cat > /etc/systemd/system/kubelet.service.d/20-cri.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=$RUNTIME_ENDPOINT"
EOF
systemctl daemon-reload
Now kubelet
is ready to use the specified CRI runtime, and you can continue
with the kubeadm init
and kubeadm join
workflow to deploy Kubernetes cluster.
You may also want to pass the --cri-socket flag to kubeadm init and kubeadm reset when using an external CRI implementation.
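For example (the socket path is a placeholder, as above):

# Point kubeadm at the runtime's CRI socket; replace the path with your runtime's socket.
sudo kubeadm init --cri-socket /var/run/{your_runtime}.sock
sudo kubeadm reset --cri-socket /var/run/{your_runtime}.sock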
By default, kubeadm
assigns a node name based on a machine’s host address. You can override this setting with the --node-name
flag.
The flag passes the appropriate --hostname-override
to the kubelet.
Be aware that overriding the hostname can interfere with cloud providers.
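A minimal sketch (the node name is illustrative):

# Register the node under a custom name; kubeadm passes --hostname-override to the kubelet.
sudo kubeadm init --node-name my-control-plane-1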
As of 1.8, you can experimentally create a self-hosted Kubernetes control plane. This means that key components such as the API server, controller manager, and scheduler run as DaemonSet pods configured via the Kubernetes API instead of static pods configured in the kubelet via static files.
To create a self-hosted cluster see the kubeadm alpha selfhosting
command.
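A sketch of converting an existing static-Pod control plane, assuming the pivot sub-command available in this kubeadm version (check kubeadm alpha selfhosting --help on your installation):

# Convert the static-Pod control plane of an existing cluster into a self-hosted one.
sudo kubeadm alpha selfhosting pivot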
Self-hosting in 1.8 and later has some important limitations. In particular, a self-hosted cluster cannot recover from a reboot of the control-plane node without manual intervention.
A self-hosted cluster is not upgradeable using kubeadm upgrade
.
By default, self-hosted control plane Pods rely on credentials loaded from
hostPath
volumes. Except for initial creation, these credentials are not managed by
kubeadm.
The self-hosted portion of the control plane does not include etcd, which still runs as a static Pod.
The self-hosting bootstrap process is documented in the kubeadm design document.
In summary, kubeadm alpha selfhosting
works as follows:
1. Waits for this bootstrap static control plane to be running and healthy. This is identical to the kubeadm init process without self-hosting.
2. Uses the static control plane Pod manifests to construct a set of DaemonSet manifests that will run the self-hosted control plane. It also modifies these manifests where necessary, for example adding new volumes for secrets.
3. Creates DaemonSets in the kube-system namespace and waits for the resulting Pods to be running.
4. Once self-hosted Pods are operational, their associated static Pods are deleted and kubeadm moves on to install the next component. This triggers kubelet to stop those static Pods.
5. When the original static control plane stops, the new self-hosted control plane is able to bind to listening ports and become active.
For running kubeadm without an internet connection you have to pre-pull the required control-plane images.
In Kubernetes 1.11 and later, you can list and pull the images using the kubeadm config images
sub-command:
kubeadm config images list
kubeadm config images pull
In Kubernetes 1.12 and later, the k8s.gcr.io/kube-*
, k8s.gcr.io/etcd
and k8s.gcr.io/pause
images
don’t require an -${ARCH}
suffix.
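For example, to pre-pull images for a pinned release on a machine with internet access before transferring them to the offline hosts (the version shown is illustrative):

# List and pre-pull the control-plane images for a specific Kubernetes release.
kubeadm config images list --kubernetes-version v1.14.0
kubeadm config images pull --kubernetes-version v1.14.0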
Rather than copying the token you obtained from kubeadm init
to each node, as
in the basic kubeadm tutorial, you can parallelize the
token distribution for easier automation. To implement this automation, you must
know the IP address that the control-plane node will have after it is started.
Generate a token. This token must have the form <6 character string>.<16 character string>. More formally, it must match the regex [a-z0-9]{6}\.[a-z0-9]{16}.
kubeadm can generate a token for you:
kubeadm token generate
Start both the control-plane node and the worker nodes concurrently with this token.
As they come up they should find each other and form the cluster. The same --token argument can be used on both kubeadm init and kubeadm join.
Once the cluster is up, you can grab the admin credentials from the control-plane node
at /etc/kubernetes/admin.conf
and use that to talk to the cluster.
Note that this style of bootstrap has some relaxed security guarantees because it does not allow the root CA hash to be validated with --discovery-token-ca-cert-hash (since it's not generated when the nodes are provisioned). For details, see the kubeadm join documentation.
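Putting it together, a sketch of the parallel bootstrap (the control-plane address is illustrative, and the use of --discovery-token-unsafe-skip-ca-verification to accept the relaxed CA validation described above is an assumption about your security requirements):

# Generate a token once and distribute it to all machines.
TOKEN=$(kubeadm token generate)
# On the control-plane node:
sudo kubeadm init --token "${TOKEN}"
# On each worker node, started concurrently:
sudo kubeadm join 192.0.2.10:6443 --token "${TOKEN}" --discovery-token-unsafe-skip-ca-verification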