
Deploying a K8S Cluster

I. Environment Preparation

Minimum server specs: 2 CPU cores, 2 GB RAM, 20 GB disk

Software environment:

  1. OS: CentOS 7.4
  2. Docker CE: 20.10.16
  3. K8S: 1.23

Servers (local VMs):

  1. k8s-master: 192.168.30.95
  2. k8s-node1: 192.168.30.94
  3. k8s-node2: 192.168.30.132

II. Initialize the Environment

Run these steps on all three servers.

1. Disable the Firewall and SELinux

# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent (takes effect after reboot)
setenforce 0  # temporary (current session)
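
As a quick sanity check, you can confirm both are off:

systemctl is-active firewalld   # should print "inactive"
getenforce                      # "Permissive" now; "Disabled" after a reboot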

2. Disable Swap

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent (comments out the swap entry)
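
To confirm swap is off:

free -h    # the Swap line should show 0B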

3. Set Hostnames per the Server Plan

# Set the hostname according to the plan
hostnamectl set-hostname <hostname>

# Alternatively, edit the file and reboot
vim /etc/hostname
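
For example, following the server plan above:

hostnamectl set-hostname k8s-master   # on 192.168.30.95
hostnamectl set-hostname k8s-node1    # on 192.168.30.94
hostnamectl set-hostname k8s-node2    # on 192.168.30.132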

4. Add Hosts Entries

cat >> /etc/hosts << EOF
192.168.30.95 k8s-master
192.168.30.94 k8s-node1
192.168.30.132 k8s-node2
EOF
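
A quick check that the names resolve:

ping -c 1 k8s-node1
ping -c 1 k8s-node2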

5. Configure Kernel Network Parameters

vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1    # let netfilter inspect bridged IPv6 traffic
net.bridge.bridge-nf-call-iptables = 1     # let netfilter inspect bridged IPv4 traffic
net.ipv4.ip_forward = 1                    # enable IP forwarding

modprobe br_netfilter     # load the kernel module that exposes bridge traffic to netfilter
sysctl --system           # apply the k8s.conf settings above
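
To confirm the module is loaded and the settings took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward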

III. Install Docker, kubeadm, and kubelet

Install these on all nodes.

1. Install Docker, version 20.10.16:

See the separate Docker installation guide.

# Docker must be running
systemctl start docker
systemctl enable docker

2. Configure the Docker Daemon (registry mirror and cgroup driver)

vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Restart docker
systemctl restart docker
# Verify the configuration took effect
docker info
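
In particular, the cgroup driver should now be systemd, which is what kubelet expects:

docker info | grep -i "cgroup driver"   # Cgroup Driver: systemd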

3. Install the K8S Components

Configure the yum repo

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Refresh the repo cache

yum makecache fast

Install kubelet-1.23.0, kubeadm-1.23.0, and kubectl-1.23.0

yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0

Enable kubelet at boot (do not start it yet)

systemctl enable kubelet
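
To confirm the installed versions match the plan:

kubeadm version -o short   # v1.23.0
kubelet --version          # Kubernetes v1.23.0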

4. Initialize the Master (master node only)

Run the init command

kubeadm init \
  --apiserver-advertise-address=192.168.30.95 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all

  1. --apiserver-advertise-address=192.168.30.95: the cluster's advertised address (the master node IP)
  2. --image-repository: the registry to pull control-plane images from
  3. --kubernetes-version: the K8S version; must match the packages installed above
  4. --service-cidr: the cluster-internal virtual network through which pods are reached (Service CIDR)
  5. --pod-network-cidr: the Pod network CIDR

Check coredns

[root@k8s-master ~]# kubectl -n kube-system get po -o wide
coredns-6d8c4cb4d-fcjr4              0/1     Pending   0          3m8s    <none>          <none>       <none>           <none>
coredns-6d8c4cb4d-l5rtm              0/1     Pending   0          3m8s    <none>          <none>       <none>           <none>

coredns is stuck in Pending because no pod network exists yet; install kube-flannel:

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# kubectl get ns
[root@k8s-master ~]# kubectl -n kube-flannel get po
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-ssk2n   1/1     Running   0          70s

The init output ends with a join command that must be run on the node machines:

[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.30.95:6443 --token 1ghmmf.ms3r9vw4b41b25f1 \
        --discovery-token-ca-cert-hash sha256:3eaa96faa0018a0d0d4d277a23b4f7012802ed52108619dcf226d77736cb9750 

The token above is valid for 24 hours; once it expires it can no longer be used, but a new one can be generated:

kubeadm token create --print-join-command
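
Existing tokens and their expiry times can be listed with:

kubeadm token list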

Copy the K8S kubeconfig (also shown in the init output)

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the nodes

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   19m   v1.23.0

5. Add Nodes to the Cluster (run on the node machines)

After Docker and the K8S components are installed on the nodes, join them to the cluster.

Run the join command printed by the master's init (remember to run systemctl enable kubelet first; if you skip it, you get the WARNING shown below on k8s-node1):

[root@k8s-node1 ~]# kubeadm join 192.168.30.95:6443 --token 1ghmmf.ms3r9vw4b41b25f1 \
>         --discovery-token-ca-cert-hash sha256:3eaa96faa0018a0d0d4d277a23b4f7012802ed52108619dcf226d77736cb9750
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node2 ~]# kubeadm join 192.168.30.95:6443 --token 1ghmmf.ms3r9vw4b41b25f1 \
>         --discovery-token-ca-cert-hash sha256:3eaa96faa0018a0d0d4d277a23b4f7012802ed52108619dcf226d77736cb9750
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

6. Verify on the Master

All three nodes (one master, two workers) have joined the cluster; k8s-node2 still shows NotReady because it has only just joined and is finishing startup.

[root@k8s-master ~]# kubectl get nodes 
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   Ready      control-plane,master   9m55s   v1.23.0
k8s-node1    Ready      <none>                 2m27s   v1.23.0
k8s-node2    NotReady   <none>                 73s     v1.23.0


IV. Configure the Container Network: Calico (master)

You can also follow the official deployment docs.

Official docs: https://projectcalico.docs.tigera.io/archive/v3.22/getting-started/kubernetes/self-managed-onprem/onpremises

1. Download Calico

curl https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calico.yaml -O

2. Edit the Manifest

# Change the CALICO_IPV4POOL_CIDR variable to match the --pod-network-cidr passed at master init

[root@k8s-master ~]# vim calico.yaml
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

3. Apply Calico

kubectl apply -f calico.yaml

Check the status

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS              RESTARTS   AGE
calico-kube-controllers-6b77fff45-9gk4b    0/1     Terminating         0          20m
calico-kube-controllers-6c75955484-gznpp   0/1     ContainerCreating   0          12s
calico-node-4f8rq                          0/1     Init:0/3            0          12s
calico-node-k8rz9                          0/1     Init:0/3            0          12s
calico-node-vgftq                          0/1     Init:0/3            0          12s
coredns-6d8c4cb4d-492fc                    1/1     Running             0          3h3m
coredns-6d8c4cb4d-gb4g8                    1/1     Running             0          3h3m
etcd-k8s-master                            1/1     Running             0          3h3m
kube-apiserver-k8s-master                  1/1     Running             0          3h3m
kube-controller-manager-k8s-master         1/1     Running             0          3h3m
kube-proxy-7lmzb                           1/1     Running             0          159m
kube-proxy-9cltg                           1/1     Running             0          161m
kube-proxy-hxzs7                           1/1     Running             0          3h3m
kube-scheduler-k8s-master                  1/1     Running             0          3h3m
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6c75955484-gznpp   1/1     Running   0          5m21s
calico-node-4f8rq                          1/1     Running   0          5m21s
calico-node-k8rz9                          1/1     Running   0          5m21s
calico-node-vgftq                          1/1     Running   0          5m21s
coredns-6d8c4cb4d-492fc                    1/1     Running   0          3h8m
coredns-6d8c4cb4d-gb4g8                    1/1     Running   0          3h8m
etcd-k8s-master                            1/1     Running   0          3h8m
kube-apiserver-k8s-master                  1/1     Running   0          3h8m
kube-controller-manager-k8s-master         1/1     Running   0          3h8m
kube-proxy-7lmzb                           1/1     Running   0          164m
kube-proxy-9cltg                           1/1     Running   0          166m
kube-proxy-hxzs7                           1/1     Running   0          3h8m
kube-scheduler-k8s-master                  1/1     Running   0          3h8m

All Calico pods are Running; the installation is complete.
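
With the pod network up, all three nodes should now report Ready:

kubectl get nodes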

V. Deploy Nginx on K8S

1. Create a Namespace

# Create the namespace
[root@k8s-master ~]# kubectl create ns nginx
namespace/nginx created
# List all namespaces; the nginx namespace was created successfully
[root@k8s-master ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   3h16m
kube-node-lease   Active   3h16m
kube-public       Active   3h16m
kube-system       Active   3h16m
nginx             Active   39s
# Show the namespace details
[root@k8s-master ~]# kubectl describe namespace nginx
Name:         nginx
Labels:       kubernetes.io/metadata.name=nginx
              name=nginx
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

2. Create the Nginx Deployment

[root@k8s-master ~]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment1
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        hostPath:
          path: /data/w/kube/nginx/html/
  1. kind: Deployment means this manifest creates a Deployment
  2. metadata holds the Deployment's metadata
  3. metadata.name is the Deployment's name, nginx-deployment1
  4. metadata.labels are the Deployment's labels, i.e. app=nginx
  5. metadata.namespace is the Deployment's namespace, here the nginx namespace created in step 1
  6. spec holds the Deployment's detailed configuration
  7. spec.replicas is how many pod replicas to run
  8. spec.template.spec is the detailed spec of the pod template
  9. spec.template.spec.containers defines the containers to run
  10. spec.template.spec.volumes defines the host path mounted into the containers (here, the html directory)

# The mounted host directory must be created in advance (on each node that may run the pod)
mkdir -p /data/w/kube/nginx/html/
[root@k8s-master ~]# kubectl create -f nginx-deployment.yaml 
deployment.apps/nginx-deployment1 created
# This can take a little while
[root@k8s-master ~]# kubectl get pod -n nginx
NAME                                 READY   STATUS    RESTARTS   AGE
nginx-deployment1-6fd9d6cdbc-k9n6s   1/1     Running   0          12s
nginx-deployment1-6fd9d6cdbc-pwnzc   1/1     Running   0          12s

Creation can take a while; if a pod stays in ContainerCreating, check its events for errors:

[root@k8s-master ~]# kubectl describe pod -n nginx

On the nodes, you can see in Docker that the containers were created:

[root@k8s-node2 ~]# docker ps
CONTAINER ID   IMAGE                                                COMMAND                  CREATED          STATUS          PORTS     NAMES
eb14fe8b726f   7ab27dbbfbdf                                         "/docker-entrypoint.…"   5 minutes ago    Up 5 minutes              k8s_nginx_nginx-deployment1-6fd9d6cdbc-pwnzc_nginx_314b019b-ea04-4631-a099-83e6526040a9_0
b33da7f6ab07   registry.aliyuncs.com/google_containers/pause:3.6    "/pause"                 5 minutes ago    Up 5 minutes              k8s_POD_nginx-deployment1-6fd9d6cdbc-pwnzc_nginx_314b019b-ea04-4631-a099-83e6526040a9_0

Delete one pod; after deletion, a replacement is created automatically:

[root@k8s-master ~]# kubectl get pod -n nginx
NAME                                 READY   STATUS    RESTARTS   AGE
nginx-deployment1-6fd9d6cdbc-k9n6s   1/1     Running   0          9m11s
nginx-deployment1-6fd9d6cdbc-pwnzc   1/1     Running   0          9m11s
[root@k8s-master ~]# kubectl delete pod nginx-deployment1-6fd9d6cdbc-pwnzc -n nginx
pod "nginx-deployment1-6fd9d6cdbc-pwnzc" deleted
[root@k8s-master ~]# kubectl get pod -n nginx
NAME                                 READY   STATUS    RESTARTS   AGE
nginx-deployment1-6fd9d6cdbc-k9n6s   1/1     Running   0          9m46s
nginx-deployment1-6fd9d6cdbc-qd9dl   1/1     Running   0          13s
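
This is the Deployment (via its ReplicaSet) holding the pod count at the desired replicas: 2, which you can confirm with:

kubectl get deployment -n nginx   # READY should show 2/2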

3. Create the Nginx Service

[root@k8s-master ~]# vim nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-deployment1
  namespace: nginx
spec:
  ports:
  - port: 9000
    name: nginx-service80
    protocol: TCP
    targetPort: 80
    nodePort: 31091
  selector:
    app: nginx
  type: NodePort
  1. kind: Service means this manifest creates a Service
  2. metadata holds the Service's metadata
  3. metadata.name is the Service's name, nginx-deployment1
  4. metadata.labels are the Service's labels, i.e. app=nginx
  5. metadata.namespace is the Service's namespace, here the nginx namespace created in step 1
  6. spec.type set to NodePort makes this Service expose a port on every node
  7. spec.selector defines which labeled pods this Service groups and serves
  8. spec.ports.port (9000) is the port the Service itself listens on
  9. spec.ports.name (nginx-service80) is the name of this Service port
  10. spec.ports.protocol (TCP) is the protocol used to forward requests to the containers; nginx serves HTTP, which runs over TCP
  11. spec.ports.targetPort (80) is the container port requests are forwarded to, i.e. the pod's containerPort 80
  12. spec.ports.nodePort (31091) is the port the Service exposes on each node


Create it

[root@k8s-master ~]# kubectl create -f nginx-service.yaml 
service/nginx-deployment1 created

View the Service

[root@k8s-master ~]# kubectl get services -n nginx
NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
nginx-deployment1   NodePort   10.103.9.181   <none>        9000:31091/TCP   42s

View the Service details (the 10.244.*.* endpoint addresses are cluster-internal pod IPs)

[root@k8s-master ~]# kubectl describe service nginx-deployment1 -n nginx
Name:                     nginx-deployment1
Namespace:                nginx
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.9.181
IPs:                      10.103.9.181
Port:                     nginx-service80  9000/TCP
TargetPort:               80/TCP
NodePort:                 nginx-service80  31091/TCP
Endpoints:                10.244.169.131:80,10.244.36.67:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

To access Nginx, use a node's real IP plus the exposed NodePort, 31091.

Create an index.html in /data/w/kube/nginx/html/ on the node (any content will do), then open:

http://192.168.30.132:31091
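
For example (note the NodePort may forward to a pod on either node, so create the file on both nodes for a consistent response):

echo 'hello from nginx on k8s' > /data/w/kube/nginx/html/index.html
curl http://192.168.30.132:31091   # should return the index.html content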



