Building a Simple K8s Cluster with kubeadm

Environment

  • Host: macOS Catalina 10.15
  • u1: Ubuntu 18.04 (Parallels VM, 2 GiB RAM)
  • u2: Ubuntu 18.04 (Parallels VM, 2 GiB RAM, master node)

Prerequisites

  • Container runtime (Docker/containerd/CRI-O)

    Docker is used here; installation guides for Docker are widely available online.

  • kubeadm, kubelet, and kubectl. From mainland China, use a proxy; otherwise the Google-hosted package repository cannot be reached.

```bash
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
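If the Google-hosted sources above are unreachable, one approach is to route apt and curl through a local proxy. A minimal sketch; http://192.168.1.100:7890 is a placeholder for your own proxy address:

```bash
# Proxy for curl (the apt-key download above); the address is a placeholder
export https_proxy=http://192.168.1.100:7890

# Proxy for apt itself
cat <<EOF | sudo tee /etc/apt/apt.conf.d/95proxy.conf
Acquire::http::Proxy "http://192.168.1.100:7890/";
Acquire::https::Proxy "http://192.168.1.100:7890/";
EOF
```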

Steps

  1. Pull the required images (on VMs u1 and u2)
```
➜ ~ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
```

Save the image names above to a file named images.txt, then run the script below to pull them. (This is needed because k8s.gcr.io is unreachable from mainland China, so the images are pulled from a domestic mirror and retagged.)

```bash
images=$(cat images.txt)

for image in $images; do
  # Rewrite the registry prefix to mirrors reachable from mainland China
  tmp=$(echo "$image" | sed "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" | sed "s#calico#quay.azk8s.cn/calico#g")
  docker pull "$tmp"
  # Retag the mirrored image back to its original name, which kubeadm expects
  docker tag "$tmp" "$image"
  docker rmi "$tmp"
done
```
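Alternatively, kubeadm itself can pull from a custom registry, which skips the retagging step entirely. A sketch, assuming the Aliyun mirror carries all the required tags:

```bash
# Pre-pull the control-plane images straight from the mirror
kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

# Or pass the same flag to init so it pulls from the mirror on its own
sudo kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
```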

  2. Run kubeadm init (on VM u2)
```
➜ ~ kubeadm init
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-cp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.501735 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-X.Y" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-cp" as an annotation
[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: <token>
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```

Record the kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> line from the output above; the token and discovery-token-ca-cert-hash are what new nodes use to join the cluster.
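If this line is lost, or the token expires (bootstrap tokens are valid for 24 hours by default), a fresh join command can be generated on the master at any time:

```bash
kubeadm token create --print-join-command
```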

  3. As the output instructs, create a .kube directory under your home directory and copy the admin kubeconfig into it (on VM u2)
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
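If you are working as root, the official docs offer exporting KUBECONFIG as an alternative:

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```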
  4. Verify that kubectl works (on VM u2)
```
➜  ~ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
u2     Ready    master   39h   v1.17.4
```
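It is also worth glancing at the system pods; until a pod network is installed in the next step, the CoreDNS pods are expected to stay Pending:

```bash
kubectl get pods -n kube-system
```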
  5. Install a network plugin (versions may have moved on; see the official docs: https://docs.projectcalico.org/getting-started/kubernetes/quickstart) (on VM u2)
```
# images_calico.txt
calico/pod2daemon-flexvol:v3.13.1
calico/node:v3.13.1
calico/cni:v3.13.1
calico/kube-controllers:v3.13.1
```

Pull these through a domestic mirror in the same way:

```bash
images=$(cat images_calico.txt)

for image in $images; do
  # Rewrite calico/* names to the quay mirror
  tmp=$(echo "$image" | sed "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" | sed "s#calico#quay.azk8s.cn/calico#g")
  docker pull "$tmp"
  # Retag back to the original calico/* name referenced by the manifest
  docker tag "$tmp" "$image"
  docker rmi "$tmp"
done
```
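Since this loop is identical to the one in step 1, it could be factored into a small reusable script; a sketch, where pull-images.sh is a hypothetical file name:

```bash
#!/usr/bin/env bash
# pull-images.sh <list-file> -- hypothetical helper combining the two loops above
# Usage: ./pull-images.sh images.txt  (or images_calico.txt)
for image in $(cat "$1"); do
  tmp=$(echo "$image" | sed "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" | sed "s#calico#quay.azk8s.cn/calico#g")
  docker pull "$tmp" && docker tag "$tmp" "$image" && docker rmi "$tmp"
done
```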

Then apply the Calico manifest to the cluster:

```bash
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
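Once the Calico pods come up, the master node should report Ready; you can watch the rollout with:

```bash
kubectl get pods -n kube-system --watch
```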
  6. Join the new node u1 (switch to VM u1)

    Run the kubeadm join command recorded from the init output above. ⚠️ Make sure to fill in u2's actual address and port (usually 6443).

```bash
sudo kubeadm join <u2_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
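Back on u2, the new node should appear shortly (it may report NotReady for a moment while Calico starts up on it):

```bash
kubectl get nodes
```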