// 1. Add the Docker yum repository
yum install -y wget && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
// 2. Install Docker
yum -y install docker-ce-18.06.1.ce-3.el7
// 3. Enable the service at boot and start it
systemctl enable docker && systemctl start docker
// 4. Check the version
docker --version
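Before moving on, it is worth confirming that the daemon is healthy and noting which cgroup driver it uses, because that driver will come back to bite us during kubeadm init below. A minimal check (the --format templates are standard docker info fields):

# Confirm the daemon is active and note the cgroup driver.
systemctl is-active docker
docker info --format 'version: {{.ServerVersion}}, cgroup driver: {{.CgroupDriver}}'
# A yum-installed Docker typically reports "cgroupfs" here; keep that in mind for later.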
// Find the latest version
[root@master ~]# curl -sSL https://dl.k8s.io/release/stable.txt
v1.23.5
// Download and unpack the server binaries
[root@master tmp]# wget -q https://dl.k8s.io/v1.23.5/kubernetes-server-linux-amd64.tar.gz
[root@master tmp]# tar -zxf kubernetes-server-linux-amd64.tar.gz
[root@master tmp]# ls kubernetes
addons  kubernetes-src.tar.gz  LICENSES  server
[root@master tmp]# ls kubernetes/server/bin/ | grep -E 'kubeadm|kubelet|kubectl'
kubeadm
kubectl
kubelet
// Everything we need is under server/bin/; move kubeadm, kubectl and kubelet to /usr/bin.
[root@master tmp]# mv kubernetes/server/bin/kube{adm,ctl,let} /usr/bin/
[root@master tmp]# ls /usr/bin/kube*
/usr/bin/kubeadm  /usr/bin/kubectl  /usr/bin/kubelet
[root@master tmp]# kubeadm version
[root@master tmp]# kubectl version --client
[root@master tmp]# kubelet --version
// To keep the components running reliably in production, and to make them easier to manage, add a systemd unit for kubelet so that systemd supervises the service.
[root@master tmp]# cat <<'EOF' > /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
[root@master tmp]# mkdir -p /etc/systemd/system/kubelet.service.d
[root@master tmp]# cat <<'EOF' > /etc/systemd/system/kubelet.service.d/kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF
// Enable the service at boot
[root@master tmp]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
// At this point the preparation is mostly done and we could create the cluster with kubeadm. But not so fast: we still need two more tools, crictl and socat.
// Kubernetes v1.23.5 pairs with crictl v1.23.0
[root@master ~]# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz
[root@master ~]# tar zxvf crictl-v1.23.0-linux-amd64.tar.gz
[root@master ~]# mv crictl /usr/bin/
[root@master ~]# yum install -y socat
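Out of the box, crictl does not know which container runtime to talk to and will print warnings about a missing endpoint. A minimal /etc/crictl.yaml sketch, assuming Docker is used through the dockershim that Kubernetes v1.23 still ships (the socket path below is the standard dockershim socket, which only exists once kubelet is running):

cat <<'EOF' > /etc/crictl.yaml
runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
timeout: 10
EOF
crictl --version   # quick sanity check that the binary works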
// Bootstrap the master
[root@master ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileExisting-conntrack]: conntrack not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
// Preflight failed: conntrack-tools is missing, so install it
yum -y install socat conntrack-tools
// Ran kubeadm init again, and it failed differently this time:
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
// Docker was installed with yum, so its cgroup driver defaults to cgroupfs, while the kubelet configuration generated by kubeadm defaults to the systemd driver. The two must match, so switch Docker's cgroup driver to systemd:
# add the following
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# restart docker
systemctl restart docker
# re-initialize with kubeadm
kubeadm reset   # reset first
kubeadm init \
  --apiserver-advertise-address=192.168.42.122 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.5 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all
kubeadm reset
// A simpler init also works:
kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
Your Kubernetes control-plane has initialized successfully!
/var/lib/kubelet/config.yaml   # kubeadm-generated kubelet configuration
/etc/kubernetes/pki            # certificate directory
[root@master ~]# kubeadm config images list --kubernetes-version v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
[root@master ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.5
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
// Configure kubectl access. Note that the "export KUBECONFIG" variant only lives in the current shell, which is why it seems to need reconfiguring after every reboot; copying admin.conf into $HOME/.kube/config as below persists it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
// (the join command printed here can be regenerated at any time; see the sketch at the end of this section)
// Install a network add-on: flannel or calico
mkdir ~/kubernetes-flannel && cd ~/kubernetes-flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
kubectl get nodes
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-7jfb8          0/1     Pending   0          11m
coredns-6d8c4cb4d-m8hfd          0/1     Pending   0          11m
etcd-master                      1/1     Running   4          11m
kube-apiserver-master            1/1     Running   3          11m
kube-controller-manager-master   1/1     Running   4          11m
kube-flannel-ds-m65q6            1/1     Running   0          17s
kube-proxy-qlrmp                 1/1     Running   0          11m
kube-scheduler-master            1/1     Running   4          11m
// coredns stayed Pending and I could not find the cause.
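When a pod sits in Pending, the scheduler usually records the reason in the pod's events, so it is worth looking there before tearing the CNI out. A quick sketch, using one of the coredns pod names from the listing above:

# Show the most recent events for the stuck pod (name taken from the listing above).
kubectl -n kube-system describe pod coredns-6d8c4cb4d-7jfb8 | tail -n 20
# Also check whether the nodes are Ready and whether control-plane taints block scheduling.
kubectl get nodes
kubectl describe node master | grep -i taint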
// So I decided to try calico instead. First remove kube-flannel:
[root@master ~]# kubectl delete -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds" deleted
// Clean up the flannel interfaces and CNI state (the cni0 errors are harmless: the device was never created):
[root@master ~]# ifconfig cni0 down
cni0: ERROR while getting interface flags: No such device
[root@master ~]# ip link delete cni0
Cannot find device "cni0"
[root@master ~]# rm -rf /var/lib/cni/
[root@master ~]# ifconfig flannel.1 down
[root@master ~]# ip link delete flannel.1
[root@master ~]# rm -f /etc/cni/net.d/*
[root@master ~]# restart kubelet
-bash: restart: command not found
[root@master ~]# systemctl restart kubelet
// Install calico
[root@master ~]# curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  212k  100  212k    0     0  68018      0  0:00:03  0:00:03 --:--:-- 68039
[root@master ~]# ls
calico.yaml  kube-flannel.yml  kubernetes-flannel
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   16h   v1.23.5
node1    NotReady   <none>                 12h   v1.23.5
[root@master ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
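Instead of repeatedly polling the pod list, kubectl can block until the calico workloads finish rolling out. A small sketch using the standard rollout status subcommand (resource names come from the manifest just applied):

# Wait until every calico-node pod is up and the controller deployment is available.
kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system rollout status deployment/calico-kube-controllers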
// Check the pods
[root@master ~]# kubectl get -w pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56fcbf9d6b-28w9g   1/1     Running   0          21m
kube-system   calico-node-btgnl                          1/1     Running   0          21m
kube-system   calico-node-z64mb                          1/1     Running   0          21m
kube-system   coredns-6d8c4cb4d-8pnxx                    1/1     Running   0          12h
kube-system   coredns-6d8c4cb4d-jdbj2                    1/1     Running   0          12h
kube-system   etcd-master                                1/1     Running   4          17h
kube-system   kube-apiserver-master                      1/1     Running   3          17h
kube-system   kube-controller-manager-master             1/1     Running   4          17h
kube-system   kube-proxy-68qrn                           1/1     Running   0          12h
kube-system   kube-proxy-qlrmp                           1/1     Running   0          17h
kube-system   kube-scheduler-master                      1/1     Running   4          17h
// Everything is running normally now.
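The worker join command, with its token and CA-certificate hash, was trimmed from the init output above. If it has been lost, a fresh one can be generated on the master at any time with a standard kubeadm subcommand:

# Prints a ready-to-run "kubeadm join <apiserver>:6443 --token ... --discovery-token-ca-cert-hash sha256:..." line.
kubeadm token create --print-join-command
# Run the printed line as root on each worker node to join it to the cluster.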
Original article: https://blog.csdn.net/qq_36002737/article/details/123678418