
In environments without a cloud provider, such as self-hosted data centers, Kubernetes has to be deployed manually. Deploying from raw binaries is quite tedious, so we can use the kubeasz tool to stand up Kubernetes quickly instead.

Environment

Node list:

Node                         IP             Components
hw-sg-k8s-master-test-0001   192.168.1.56   etcd/master
hw-sg-k8s-master-test-0002   192.168.1.45   etcd/master
hw-sg-k8s-master-test-0003   192.168.1.117  etcd/master
hw-sg-k8s-node-test-0001     192.168.1.72   node
hw-sg-k8s-node-test-0002     192.168.1.7    node

Operating system: CentOS 7.9

Installing kubeasz

Install the cluster with the kubeasz automated deployment tool.

hw-sg-k8s-master-test-0001 serves as the management host. Generate an SSH key on it and establish passwordless trust with the other machines.

ssh-keygen
ssh-copy-id 192.168.1.56
ssh-copy-id 192.168.1.45
ssh-copy-id 192.168.1.117
ssh-copy-id 192.168.1.72
ssh-copy-id 192.168.1.7
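
To confirm the trust relationship works, a quick loop (a minimal check; this assumes root is the login user) should print each hostname without prompting for a password:

for ip in 192.168.1.56 192.168.1.45 192.168.1.117 192.168.1.72 192.168.1.7; do
  # BatchMode=yes fails immediately instead of falling back to a password prompt
  ssh -o BatchMode=yes "root@${ip}" hostname
done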

Pick a kubeasz version; this is closely tied to the Kubernetes version you end up with. The Kubernetes version I need to deploy here is 1.24, so I chose kubeasz 3.6.2.

cd /usr/local/src/
export release=3.6.2
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod a+x ezdown
mv ezdown /usr/local/bin/

Download the kubeasz code, binaries, and default container images. This step can take quite a while, so please be patient.

# Inside mainland China
ezdown -D
# Overseas
ezdown -D -m standard

Once the script completes successfully, everything (kubeasz code, binaries, offline images) is organized under the /etc/kubeasz directory:

/etc/kubeasz contains the kubeasz release code for version ${release}.

/etc/kubeasz/bin contains the k8s/etcd/docker/cni binaries.

/etc/kubeasz/down contains the offline container images needed for cluster installation.
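
You can confirm the layout on the host:

ls /etc/kubeasz /etc/kubeasz/bin /etc/kubeasz/down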

Download any extra container images you need (cilium, flannel, prometheus, etc.).

ezdown -X flannel

Run kubeasz in a container

ezdown -S

Verify that the containers were created successfully.

docker ps

CONTAINER ID   IMAGE                   COMMAND                   CREATED          STATUS          PORTS     NAMES
68233de430bf   easzlab/kubeasz:3.6.2   "tail -f /dev/null"       14 minutes ago   Up 14 minutes             kubeasz
d40a0bd15b11   registry:2              "/entrypoint.sh /etc…"   37 minutes ago   Up 37 minutes             local_registry
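
ezdown -S also brings up the local_registry container seen above. As a sanity check you can query the standard registry catalog API (assuming easzlab.io.local resolves to this host, which ezdown typically arranges via /etc/hosts; if not, try 127.0.0.1:5000):

curl -s http://easzlab.io.local:5000/v2/_catalog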

Creating a cluster and editing the configuration

Create a new cluster named k8s-test.

docker exec -it kubeasz ezctl new k8s-test

2023-12-05 16:04:59 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-test
2023-12-05 16:04:59 DEBUG set versions
2023-12-05 16:04:59 DEBUG disable registry mirrors
2023-12-05 16:04:59 DEBUG cluster k8s-test: files successfully created.
2023-12-05 16:04:59 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-test/hosts'
2023-12-05 16:04:59 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-test/config.yml'

As prompted, configure the /etc/kubeasz/clusters/k8s-test/hosts and /etc/kubeasz/clusters/k8s-test/config.yml files. Because the host's /etc/kubeasz directory is mounted into the container at /etc/kubeasz, you can simply edit the files directly on the host. The container's mounts are as follows:

docker inspect 68233de430bf | grep Mounts -A 33

        "Mounts": [
            {
                "Type": "bind",
                "Source": "/root/.ssh",
                "Destination": "/root/.ssh",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/docker",
                "Destination": "/etc/docker",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/kubeasz",
                "Destination": "/etc/kubeasz",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/root/.kube",
                "Destination": "/root/.kube",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],

Edit the /etc/kubeasz/clusters/k8s-test/hosts file.

egrep -v "^$|^#" /etc/kubeasz/clusters/k8s-test/hosts
# Per the plan, etcd and the masters share nodes
[etcd]
192.168.1.56
192.168.1.45
192.168.1.117
[kube_master]
192.168.1.56
192.168.1.45
192.168.1.117
# Worker nodes
[kube_node]
192.168.1.72
192.168.1.7
[harbor]
[ex_lb]
[chrony]
[all:vars]
SECURE_PORT="6443"
CONTAINER_RUNTIME="containerd"
# Use flannel: simple, lightweight, and easy to maintain.
# Its images are best pulled in advance with ezdown -X.
CLUSTER_NETWORK="flannel"
PROXY_MODE="ipvs"
# Service CIDR
SERVICE_CIDR="10.68.0.0/16"
# Pod CIDR
CLUSTER_CIDR="172.20.0.0/16"
NODE_PORT_RANGE="30000-32767"
CLUSTER_DNS_DOMAIN="cluster.local"
bin_dir="/opt/kube/bin"
base_dir="/etc/kubeasz"
cluster_dir="{{ base_dir }}/clusters/k8s-test"
ca_dir="/etc/kubernetes/ssl"
k8s_nodename=''
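
SERVICE_CIDR, CLUSTER_CIDR, and the node network must not overlap. A quick sanity check with python3 (a minimal sketch; the 192.168.1.0/24 node network is inferred from the node list above):

python3 - <<'EOF'
import ipaddress
# The three ranges that must stay disjoint
svc  = ipaddress.ip_network("10.68.0.0/16")    # SERVICE_CIDR
pod  = ipaddress.ip_network("172.20.0.0/16")   # CLUSTER_CIDR
node = ipaddress.ip_network("192.168.1.0/24")  # node network (inferred)
for a, b in [(svc, pod), (svc, node), (pod, node)]:
    assert not a.overlaps(b), f"{a} overlaps {b}"
print("CIDRs are disjoint")
EOF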

Edit the /etc/kubeasz/clusters/k8s-test/config.yml file.

[root@hw-sg-k8s-master-test-0001 ~]# egrep -v "^$|^#" /etc/kubeasz/clusters/k8s-test/config.yml
INSTALL_SOURCE: "online"
OS_HARDEN: false
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
CHANGE_CA: false
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
K8S_VER: "1.24.2"
K8S_NODENAME: "{%- if k8s_nodename != '' -%} \
                    {{ k8s_nodename|replace('_', '-')|lower }} \
               {%- else -%} \
                    {{ inventory_hostname }} \
               {%- endif -%}"
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
ENABLE_MIRROR_REGISTRY: false
INSECURE_REG:
  - "http://easzlab.io.local:5000"
  - "https://{{ HARBOR_REGISTRY }}"
SANDBOX_IMAGE: "easzlab.io.local:5000/easzlab/pause:3.9"
# /var/lib can take up a lot of space; consider mounting a data disk here.
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
DOCKER_STORAGE_DIR: "/var/lib/docker"
DOCKER_ENABLE_REMOTE_API: false
# TLS cert config for the k8s master nodes; extra IPs and domains can be added (e.g. a public IP and domain)
# List every master address, domain, LB address, public address, and 127.0.0.1 in full; do not miss any!
MASTER_CERT_HOSTS:
  - "192.168.1.56"
  - "192.168.1.45"
  - "192.168.1.117"
  - "127.0.0.1"
NODE_CIDR_LEN: 24
KUBELET_ROOT_DIR: "/var/lib/kubelet"
MAX_PODS: 110
KUBE_RESERVED_ENABLED: "no"
SYS_RESERVED_ENABLED: "no"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
flannel_ver: "v0.22.2"
CALICO_IPV4POOL_IPIP: "Always"
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
CALICO_NETWORKING_BACKEND: "bird"
CALICO_RR_ENABLED: false
CALICO_RR_NODES: []
calico_ver: "v3.24.6"
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
cilium_ver: "1.13.6"
cilium_connectivity_check: true
cilium_hubble_enabled: false
cilium_hubble_ui_enabled: false
kube_ovn_ver: "v1.11.5"
OVERLAY_TYPE: "full"
FIREWALL_ENABLE: true
kube_router_ver: "v1.5.4"
dns_install: "yes"
corednsVer: "1.11.1"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.22.23"
LOCAL_DNS_CACHE: "169.254.20.10"
# metrics-server is not needed for now, so skip installing it.
metricsserver_install: "no"
metricsVer: "v0.6.4"
# The dashboard is not needed for now, so skip installing it.
dashboard_install: "no"
dashboardVer: "v2.7.0"
dashboardMetricsScraperVer: "v1.0.8"
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "45.23.0"
kubeapps_install: "no"
kubeapps_install_namespace: "kubeapps"
kubeapps_working_namespace: "default"
kubeapps_storage_class: "local-path"
kubeapps_chart_ver: "12.4.3"
local_path_provisioner_install: "no"
local_path_provisioner_ver: "v0.0.24"
local_path_provisioner_dir: "/opt/local-path-provisioner"
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
network_check_enabled: false
network_check_schedule: "*/5 * * * *"
HARBOR_VER: "v2.6.4"
HARBOR_DOMAIN: "harbor.easzlab.io.local"
HARBOR_PATH: /var/data
HARBOR_TLS_PORT: 8443
HARBOR_REGISTRY: "{{ HARBOR_DOMAIN }}:{{ HARBOR_TLS_PORT }}"
HARBOR_SELF_SIGNED_CERT: true
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CHARTMUSEUM: true
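
After installation you can verify which SANs actually landed in the apiserver certificate. The exact filename below is an assumption based on the ca_dir setting above (kubeasz generates its certs under /etc/kubernetes/ssl); adjust it to whatever you find on your masters:

openssl x509 -in /etc/kubernetes/ssl/kubernetes.pem -noout -text \
  | grep -A1 "Subject Alternative Name"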

Depending on your situation, decide whether to attach a new data disk and mount it at /var/lib.

Edit the kernel parameter template /etc/kubeasz/roles/prepare/templates/95-k8s-sysctl.conf.j2.

Add the following content on the first line; otherwise pods may get stuck in the Terminating state and never be reclaimed.

fs.may_detach_mounts = 1
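
Once the nodes have been provisioned, you can confirm the setting took effect on each node:

sysctl fs.may_detach_mounts
# Expected output: fs.may_detach_mounts = 1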

Installing the cluster

Install the cluster with a single command.

docker exec -it kubeasz ezctl setup k8s-test all
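
If you prefer more control, ezctl can also run the installation in stages; per the kubeasz documentation the steps are roughly:

docker exec -it kubeasz ezctl setup k8s-test 01   # system preparation
docker exec -it kubeasz ezctl setup k8s-test 02   # etcd cluster
docker exec -it kubeasz ezctl setup k8s-test 03   # container runtime
docker exec -it kubeasz ezctl setup k8s-test 04   # kube_master nodes
docker exec -it kubeasz ezctl setup k8s-test 05   # kube_node nodes
docker exec -it kubeasz ezctl setup k8s-test 06   # network plugin
docker exec -it kubeasz ezctl setup k8s-test 07   # cluster add-ons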

Verifying the cluster

After logging out and back in, you can run kubectl commands.

[root@hw-sg-k8s-master-test-0001 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.56:6443
CoreDNS is running at https://192.168.1.56:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
KubeDNSUpstream is running at https://192.168.1.56:6443/api/v1/namespaces/kube-system/services/kube-dns-upstream:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check the pod status.

kubectl get pod -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-7bc88ddb8b-n4f5s   1/1     Running   0          39s
kube-flannel-ds-czz8g      1/1     Running   0          52s
kube-flannel-ds-j6b2c      1/1     Running   0          52s
kube-flannel-ds-jc4m4      1/1     Running   0          52s
kube-flannel-ds-kvxb5      1/1     Running   0          52s
kube-flannel-ds-phx4v      1/1     Running   0          52s
node-local-dns-47vjj       1/1     Running   0          38s
node-local-dns-g2dcf       1/1     Running   0          38s
node-local-dns-hlxpt       1/1     Running   0          38s
node-local-dns-wlvgb       1/1     Running   0          38s
node-local-dns-xmzc9       1/1     Running   0          38s

Adding a node

Add the newly provisioned server 192.168.1.93 to the cluster. Make sure the container directory /var/lib/ has enough free space.

ssh-copy-id 192.168.1.93
docker exec -it kubeasz ezctl add-node k8s-test 192.168.1.93

Verify.

kubectl get node

NAME            STATUS                     ROLES    AGE   VERSION
192.168.1.117   Ready,SchedulingDisabled   master   23m   v1.28.1
192.168.1.45    Ready,SchedulingDisabled   master   23m   v1.28.1
192.168.1.56    Ready,SchedulingDisabled   master   23m   v1.28.1
192.168.1.7     Ready                      node     22m   v1.28.1
192.168.1.72    Ready                      node     22m   v1.28.1
192.168.1.93    Ready                      node     30s   v1.28.1

Deleting a node

Remove 192.168.1.93 from the cluster.
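
If you want to watch workloads move off first, you can drain the node yourself before deleting it (a precautionary step; the flags below are standard kubectl options):

kubectl drain 192.168.1.93 --ignore-daemonsets --delete-emptydir-data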

docker exec -it kubeasz ezctl del-node k8s-test 192.168.1.93

kubectl get nodes
NAME            STATUS                     ROLES    AGE   VERSION
192.168.1.117   Ready,SchedulingDisabled   master   26m   v1.28.1
192.168.1.45    Ready,SchedulingDisabled   master   26m   v1.28.1
192.168.1.56    Ready,SchedulingDisabled   master   26m   v1.28.1
192.168.1.7     Ready                      node     25m   v1.28.1
192.168.1.72    Ready                      node     25m   v1.28.1
