Kubernetes - Init Cluster on CentOS

🎖 Preparation

🔅 Configure the firewall (firewalld)

# Stop the firewall for the current session
$ systemctl stop firewalld
# Permanently disable the firewall from starting at boot
$ systemctl disable firewalld

🔅 Disable Swap

Starting with Kubernetes 1.8, system swap must be disabled; under the default configuration the kubelet will refuse to start otherwise. The restriction can be lifted with the kubelet startup flag --fail-swap-on=false, but here we simply turn swap off:

$ swapoff -a

Before disabling:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:            992         113         636           6         242         716
Swap:          2047           0        2047

After disabling:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:            992         111         638           6         242         718
Swap:             0           0           0
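Note that `swapoff -a` only lasts until the next reboot. To keep swap off permanently, comment out the swap entry in /etc/fstab. A minimal sketch of that edit, demonstrated on a temporary copy (the sample entry and temp path are illustrative, not from the original post):

```shell
# Sample fstab entry (illustrative); on a real node edit /etc/fstab as root.
printf '/dev/mapper/cl-swap swap swap defaults 0 0\n' > /tmp/fstab.demo

# Comment out any uncommented line that mentions swap.
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /tmp/fstab.demo

cat /tmp/fstab.demo   # the swap line is now prefixed with '#'
```

After reviewing the result, apply the same sed to /etc/fstab itself.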

🔅 Disable SELinux

$ setenforce 0

And make it permanent in the config file:

$ vi /etc/selinux/config
SELINUX=disabled
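The same edit can also be done non-interactively. A sketch demonstrated on a copy (the sample content and temp path are illustrative; the real file is /etc/selinux/config):

```shell
# Sample SELinux config (illustrative).
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config.demo

# Flip the SELINUX mode to disabled.
sed -ri 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux-config.demo

cat /tmp/selinux-config.demo
```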

🔅 Install Docker

Installing Docker is fairly involved; its installation and configuration are covered in a separate article.

The version installed here is docker-ce-17.10.

Because Kubernetes here uses systemd as its cgroup driver, Docker must be configured to use systemd as well. Confirm with the info command:

# docker info
Server Version: 17.09.0-ce
Storage Driver: overlay
Backing Filesystem: xfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: systemd
...
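If `docker info` reports `Cgroup Driver: cgroupfs` instead, one common way to switch it (an assumption; details vary by Docker version) is to set `exec-opts` in /etc/docker/daemon.json and restart Docker. Sketched here against a demo path:

```shell
# Demo path; on a real node write to /etc/docker/daemon.json as root.
mkdir -p /tmp/docker-demo
cat > /tmp/docker-demo/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat /tmp/docker-demo/daemon.json
# Then: sudo systemctl daemon-reload && sudo systemctl restart docker
# Re-check with: docker info | grep -i 'cgroup driver'
```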


🎖 Installation

🔅 Add the yum repository

# Switch to root first if needed
$ su
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

🔅 Update the cache

$ sudo yum makecache fast

🔅 Install the main Kubernetes packages

$ sudo yum install -y kubelet kubeadm kubectl
...
Dependencies Resolved
========================================================================
 Package         Arch    Version        Repository  Size
========================================================================
Installing:
 kubeadm         x86_64  1.8.1-0        kubernetes   15 M
 kubectl         x86_64  1.8.1-0        kubernetes  7.3 M
 kubelet         x86_64  1.8.1-0        kubernetes   16 M
Installing for dependencies:
 kubernetes-cni  x86_64  0.5.1-1        kubernetes  7.4 M
 socat           x86_64  1.7.3.2-2.el7  base        290 k
Transaction Summary
========================================================================

🔅 Confirm the version

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}


🎖 Configure the Master & Troubleshooting

$ sudo kubeadm init --kubernetes-version=v1.8.1 --pod-network-cidr=10.244.0.0/16

Normally this single command would bring up a Master. In practice, though, newer Kubernetes versions start third-party components as containers, and those images live on gcr.io, which is unreachable without a proxy. So we preload the images first.

Startup requires the following images in total:

gcr.io/google_containers/kube-apiserver-amd64:v1.8.1
gcr.io/google_containers/kube-controller-manager-amd64:v1.8.1
gcr.io/google_containers/kube-scheduler-amd64:v1.8.1
gcr.io/google_containers/pause-amd64:3.0
gcr.io/google_containers/etcd-amd64:3.0.17
gcr.io/google_containers/kube-proxy-amd64:v1.8.1
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
quay.io/coreos/flannel-amd64:v0.9.0
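One common workaround (an assumption, not spelled out in the original post) is to pull the images from a reachable mirror and retag them to the gcr.io names kubeadm expects. The `mirrorgooglecontainers` namespace below is illustrative; substitute any mirror you can reach, extend the list with the remaining k8s-dns images, and note that flannel comes from quay.io and can usually be pulled directly. The sketch only generates the commands so they can be reviewed first:

```shell
# Generate pull/retag commands against a mirror (namespace is an assumption).
# Review the output, then run: sh /tmp/preload.sh
MIRROR=mirrorgooglecontainers
for img in \
  kube-apiserver-amd64:v1.8.1 \
  kube-controller-manager-amd64:v1.8.1 \
  kube-scheduler-amd64:v1.8.1 \
  kube-proxy-amd64:v1.8.1 \
  etcd-amd64:3.0.17 \
  pause-amd64:3.0; do
  echo "docker pull ${MIRROR}/${img}"
  echo "docker tag ${MIRROR}/${img} gcr.io/google_containers/${img}"
done > /tmp/preload.sh
cat /tmp/preload.sh
```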

🔅 Start over

$ kubeadm reset
$ sudo kubeadm init --kubernetes-version=v1.8.1 --pod-network-cidr=10.244.0.0/16

🔅 Done

$ kubeadm init --kubernetes-version=v1.8.1 --pod-network-cidr=10.244.0.0/16
# Output follows
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 17.03
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.8.78]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 29.502147 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node localhost.localdomain as master by adding a label and a taint
[markmaster] Master localhost.localdomain tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 405cea.cb350fd16023e08d
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 405cea.cb350fd16023e08d 192.168.8.78:6443 --discovery-token-ca-cert-hash sha256:87a0229f7837cf1502789e430c57ca381642902f5665005640b917b6fcbf5a3b

As instructed at the end of the output, run these commands:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

🔅 Result

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
656dbe0d38be gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-dns-545bc4bfd4-2wrrg_kube-system_de9494c2-b6f7-11e7-af7d-0800278919d4_0
e0dfea9f00a5 gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-proxy-fnj6t_kube-system_de803e34-b6f7-11e7-af7d-0800278919d4_0
da7a7ec9d578 mirrorgooglecontainers/kube-scheduler-amd64 "kube-scheduler --..." 2 minutes ago Up 2 minutes k8s_kube-scheduler_kube-scheduler-localhost.localdomain_kube-system_5fbf3e68ff1f2f57797628887e9c1bec_0
2126fd5a0d57 mirrorgooglecontainers/kube-controller-manager-amd64 "kube-controller-m..." 2 minutes ago Up 2 minutes k8s_kube-controller-manager_kube-controller-manager-localhost.localdomain_kube-system_2f2bd9f6dddf513ac6c21a43335777dd_0
21edd6e71eeb mirrorgooglecontainers/kube-apiserver-amd64 "kube-apiserver --..." 2 minutes ago Up 2 minutes k8s_kube-apiserver_kube-apiserver-localhost.localdomain_kube-system_dc78e47f78457da950a93b36ff6ac4ba_0
8712a0e9833f mirrorgooglecontainers/etcd-amd64 "etcd --listen-cli..." 2 minutes ago Up 2 minutes k8s_etcd_etcd-localhost.localdomain_kube-system_d76e26fba3bf2bfd215eb29011d55250_0
47bd69d72e16 gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-controller-manager-localhost.localdomain_kube-system_2f2bd9f6dddf513ac6c21a43335777dd_0
292d54bd936f gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-apiserver-localhost.localdomain_kube-system_dc78e47f78457da950a93b36ff6ac4ba_0
dae95c5d2a8f gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-scheduler-localhost.localdomain_kube-system_5fbf3e68ff1f2f57797628887e9c1bec_0
5d41a042874c gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 minutes ago Up 2 minutes k8s_POD_etcd-localhost.localdomain_kube-system_d76e26fba3bf2bfd215eb29011d55250_0

🔅 Debugging

If you hit problems during initialization, the component logs will usually show the cause.
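The original post omits the actual log commands here. Some standard places to look on a kubeadm/systemd setup (an assumption, not the author's exact list), collected as a reference:

```shell
# Standard debugging entry points for a kubeadm init failure (assumption:
# systemd-managed kubelet, Docker runtime). Written out as a reference list:
cat > /tmp/k8s-debug-cmds.txt <<'EOF'
journalctl -xeu kubelet
docker ps -a
docker logs <container-id>
EOF
cat /tmp/k8s-debug-cmds.txt
```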

🎖 Configure the Pod network (Flannel)

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

After this command finishes, the following file is generated:

/etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}

docker ps now shows one additional container:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aaeef81d5b18 gcr.io/google_containers/pause-amd64:3.0 "/pause" 14 minutes ago Up 14 minutes k8s_POD_kube-flannel-ds-kwhl9_kube-system_5137e57d-b6fd-11e7-af7d-0800278919d4_0


🎖 Join a Worker Node

🔅 Switch Docker's cgroup driver to systemd

🔅 Install Docker

🔅 Prepare the images

Same as on the Master, the node needs:

gcr.io/google_containers/kube-proxy-amd64:v1.8.1
gcr.io/google_containers/pause-amd64:3.0
quay.io/coreos/flannel-amd64:v0.9.0

🔅 Install kubelet, kubeadm, and the CNI plugins

$ sudo yum install -y kubelet kubeadm

🔅 Configure the CNI plugin

/etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}

🔅 Finally, join the cluster

kubeadm join --token cbc509.fcc110dc2d9c47d8 10.10.81.150:6443 --discovery-token-ca-cert-hash sha256:b7b0d7d737376b59417f2760e4c61bbb0948abee283677f83afc6d80fe67fff9

This is really just running the last line of the Master's init output.

🎖 Verification

Run on the master:

$ kubectl get nodes