Building an enterprise-grade highly available Kubernetes (k8s) cluster (pitfall-free edition; if you do hit a pitfall, get in touch)
Contents
Part 1. Server planning
Part 2. Preparing resources (system settings, kernel tuning, yum repos) (all nodes)
Part 3. Configuring keepalived (master nodes)
Part 4. Configuring haproxy (master nodes)
Part 5. Installing Docker (all nodes)
Part 6. Installing kubeadm, kubelet and kubectl (all nodes)
Part 7. Installing the Kubernetes cluster (on the master holding the VIP)
Part 8. Installing the cluster network (master node)
Part 9. Joining the remaining nodes to the cluster
Part 10. Deploying the dashboard (k8s-master-01)
Part 11. Deploying the ingress controller
Part 12. Deploying metrics-server
Part 13. Deploying the Kubernetes DNS cache
Part 14. Scaling and maintaining the cluster
Part 15. Common Kubernetes operations
Part 16. See the next post for a real-world application deployment example
Part 1. Server planning (based on Tencent Cloud servers)
k8s-master-01 10.206.16.14 master
k8s-master-02 10.206.16.15 master
k8s-master-03 10.206.16.16 master
k8s-node-01 10.206.16.8 node
k8s-node-02 10.206.16.9 node
k8s-harbor 10.206.16.4 harbor
vip 10.206.16.18 (how to request a VIP on Tencent Cloud: https://cloud.tencent.com/document/product/215/36694)
OS: CentOS 8; Kubernetes version: 1.16; machine spec: 4 cores / 16 GB RAM
Part 2. Preparing resources
1. Set the hostnames
hostnamectl set-hostname k8s-master-01 (set the name on each host according to the plan above)
2. Edit the hosts file (on all hosts)
cat <<EOF >>/etc/hosts
10.206.16.18 master.k8s.io k8s-vip
10.206.16.14 master01.k8s.io k8s-master-01
10.206.16.15 master02.k8s.io k8s-master-02
10.206.16.16 master03.k8s.io k8s-master-03
10.206.16.8 node01.k8s.io k8s-node-01
10.206.16.9 node02.k8s.io k8s-node-02
10.206.16.4 k8s-harbor
EOF
3. Disable the firewall, SELinux and swap (on all hosts)
systemctl stop firewalld && systemctl disable firewalld && setenforce 0 && swapoff -a
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
4. Configure kernel parameters (on all hosts)
cat >/etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
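A quick optional check that the module and sysctls took effect; the modules-load.d file below is only a suggested way to make br_netfilter load again after a reboot, not something from the original steps:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# optional: persist the module across reboots
echo br_netfilter > /etc/modules-load.d/k8s.conf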
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536">> /etc/security/limits.conf
echo "* hard nproc 65536">> /etc/security/limits.conf
echo "* softmemlockunlimited">> /etc/security/limits.conf
echo "* hard memlockunlimited">> /etc/security/limits.conf 6.配置yum源(所有主机操作)
yum install -y wget
mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos8_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
yum clean all && yum makecache
7. Configure the Kubernetes repo (on all hosts)
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
8. Configure the Docker repo (on all hosts)
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
9. Install required packages (on all hosts)
yum install -y conntrack-tools libseccomp libtool-ltdl
Part 3. Deploying keepalived (on the three master machines)
1. Install keepalived (on the three masters)
yum install -y keepalived
2. Configure (the other two masters are configured almost identically; only state is changed to BACKUP and the priority value differs. The remaining fields are not explained here.)
k8s-master-01 configuration:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id k8s
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 250
nopreempt # non-preemptive mode (note: keepalived only honors nopreempt when the initial state is BACKUP)
preempt_delay 10 # preemption delay, in seconds
advert_int 1 # advertisement interval, default 1s
authentication {
auth_type PASS
auth_pass ceb1b3ec013d66163d6ab11
}
unicast_src_ip 10.206.16.14 # this node's private IP
unicast_peer { # the other two masters' IPs
10.206.16.15
10.206.16.16
}
virtual_ipaddress {
10.206.16.18
}
track_script {
check_haproxy
}
}
EOF
k8s-master-02 configuration:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id k8s
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
nopreempt
preempt_delay 10
priority 200
advert_int 1
authentication {
auth_type PASS
auth_pass ceb1b3ec013d66163d6ab11
}
unicast_src_ip 10.206.16.15 # this node's private IP
unicast_peer { # the other two masters' IPs
10.206.16.14
10.206.16.16
}
virtual_ipaddress {
10.206.16.18
}
track_script {
check_haproxy
}
}
EOF
k8s-master-03 configuration:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
router_id k8s
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 150
nopreempt
preempt_delay 10
advert_int 1
authentication {
auth_type PASS
auth_pass ceb1b3ec013d66163d6ab11
}
unicast_src_ip 10.206.16.16
unicast_peer { # the other two masters' IPs
10.206.16.14
10.206.16.15
}
virtual_ipaddress {
10.206.16.18
}
track_script {
check_haproxy
}
}
EOF
3. Start and verify
Enable at boot:
systemctl enable keepalived.service
Start:
systemctl start keepalived.service
Check the status:
systemctl status keepalived.service
After starting, check the NIC info on k8s-master-01:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 52:54:00:f2:57:46 brd ff:ff:ff:ff:ff:ff
inet 10.206.16.14/20 brd 10.206.31.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 10.206.16.18/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fef2:5746/64 scope link
valid_lft forever preferred_lft forever
Try stopping the keepalived service on k8s-master-01 and check that the VIP fails over to one of the other masters; then start keepalived on k8s-master-01 again and check that the VIP comes back as expected. If both work, the configuration is fine.
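A minimal sketch of that failover test, using the hostnames and VIP from the plan above:
# on k8s-master-01
systemctl stop keepalived.service
# on k8s-master-02 / k8s-master-03: the VIP should now show up on one of them
ip addr show eth0 | grep 10.206.16.18
# back on k8s-master-01
systemctl start keepalived.service
ip addr show eth0 | grep 10.206.16.18   # check whether the VIP has returned, as described above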
Part 4. Setting up haproxy (on the three masters)
1. Install haproxy (on the three masters)
yum install -y haproxy
2. Configure (on the three masters)
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
mode tcp
bind *:16443
option tcplog
default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
mode tcp
balance roundrobin
server master01.k8s.io 10.206.16.14:6443 check
server master02.k8s.io 10.206.16.15:6443 check
server master03.k8s.io 10.206.16.16:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats
EOF
3. Start and check (on the three masters)
# Enable at boot
systemctl enable haproxy
# Start haproxy
systemctl start haproxy
# Check the status
systemctl status haproxy
4. Check that the service ports are listening
# netstat -lntup|grep haproxy
tcp 0 0 0.0.0.0:1080 0.0.0.0:* LISTEN 37567/haproxy
tcp 0 0 0.0.0.0:16443 0.0.0.0:* LISTEN 37567/haproxy
udp 0 0 0.0.0.0:48413 0.0.0.0:* 37565/haproxy
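As an optional sanity check, the haproxy stats page can be queried with the credentials defined in haproxy.cfg above (admin / awesomePassword). Port 16443 itself will only proxy successfully once the apiservers exist in Part 7:
curl -s -o /dev/null -w "%{http_code}\n" -u admin:awesomePassword "http://127.0.0.1:1080/admin?stats"
# 200 means the stats page is up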
Part 5. Installing Docker (on all nodes)
1. Install
# Step 1: install some required system tools
$ yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repo
$ sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: list the available Docker-CE versions:
$ yum list docker-ce.x86_64 --showduplicates | sort -r
# Step 4: install Docker-CE (this installs the latest version; kubeadm will warn that it is newer than the last validated release, as the init output below shows)
$ yum makecache
$ yum install -y docker-ce
2. Configure
Edit Docker's daemon config; Kubernetes currently recommends the systemd cgroup driver:
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"insecure-registries" : ["10.206.16.4"]
}
EOF
Edit Docker's systemd service file to put Docker's data directory on the mounted data disk (--graph /data/docker):
mkdir /data/docker
sed -i "s#containerd.sock#containerd.sock --graph /data/docker#g" /lib/systemd/system/docker.service 添加如下执行语句:(如果pod之间无法通信的问题)
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/10-docker.conf <<EOF
[Service]
ExecStartPost=/sbin/iptables --wait -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecStopPost=/bin/bash -c '/sbin/iptables --wait -D FORWARD -s 0.0.0.0/0 -j ACCEPT &> /dev/null || :'
ExecStartPost=/sbin/iptables --wait -I INPUT -i cni0 -j ACCEPT
ExecStopPost=/bin/bash -c '/sbin/iptables --wait -D INPUT -i cni0 -j ACCEPT &> /dev/null || :'
EOF
3. Start Docker
$ systemctl daemon-reload
$ systemctl start docker.service
$ systemctl enable docker.service
$ systemctl status docker.service
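A quick optional check that the cgroup driver and data directory changes took effect:
docker info 2>/dev/null | grep -Ei 'cgroup driver|docker root dir'
# expected: Cgroup Driver: systemd and Docker Root Dir: /data/docker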
Part 6. Installing kubeadm, kubelet and kubectl (on all nodes)
1. Install
yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
systemctl enable kubelet
2. Configure kubectl auto-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc 七.安装k8s集群(在具有vip的k8s-master-01上操作)
1.创建配置文件:
mkdir /usr/local/kubernetes/manifests -p
cd /usr/local/kubernetes/manifests/
cat > kubeadm-config.yaml <<EOF
apiServer:
  certSANs:
    - k8s-master-01
    - k8s-master-02
    - k8s-master-03
    - master.k8s.io
    - 10.206.16.14
    - 10.206.16.15
    - 10.206.16.16
    - 10.206.16.18
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
EOF
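Optionally, pre-pull the control-plane images before running init (the init log below also mentions this option); it uses the same config file, so the aliyun image repository configured above is honored:
kubeadm config images pull --config kubeadm-config.yaml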
2. Initialize the first master node
# kubeadm init --config kubeadm-config.yaml
Using Kubernetes version: v1.16.3
Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
Pulling images required for setting up a Kubernetes cluster
This might take a minute or two, depending on the speed of your internet connection
You can also perform this action in beforehand using 'kubeadm config images pull'
Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
Activating the kubelet service
Using certificateDir folder "/etc/kubernetes/pki"
Generating "ca" certificate and key
Generating "apiserver" certificate and key
apiserver serving cert is signed for DNS names and IPs
Generating "apiserver-kubelet-client" certificate and key
Generating "front-proxy-ca" certificate and key
Generating "front-proxy-client" certificate and key
Generating "etcd/ca" certificate and key
Generating "etcd/server" certificate and key
etcd/server serving cert is signed for DNS names and IPs
Generating "etcd/peer" certificate and key
etcd/peer serving cert is signed for DNS names and IPs
Generating "etcd/healthcheck-client" certificate and key
Generating "apiserver-etcd-client" certificate and key
Generating "sa" key and public key
Using kubeconfig folder "/etc/kubernetes"
WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
Writing "admin.conf" kubeconfig file
WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
Writing "kubelet.conf" kubeconfig file
WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
Writing "controller-manager.conf" kubeconfig file
WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
Writing "scheduler.conf" kubeconfig file
Using manifest folder "/etc/kubernetes/manifests"
Creating static Pod manifest for "kube-apiserver"
Creating static Pod manifest for "kube-controller-manager"
Creating static Pod manifest for "kube-scheduler"
Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
All control plane components are healthy after 36.002615 seconds
Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
Skipping phase. Please see --upload-certs
Marking the node k8s-master-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
Marking the node k8s-master-01 as control-plane by adding the taints
Using token: ee3zom.l4xeahsfqcj9uvvz
Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
Creating the "cluster-info" ConfigMap in the "kube-public" namespace
Applied essential addon: CoreDNS
WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f .yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join master.k8s.io:16443 --token ee3zom.l4xeahsfqcj9uvvz \
--discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join master.k8s.io:16443 --token ee3zom.l4xeahsfqcj9uvvz \
--discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b
3. Set up the kubectl environment as prompted
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
4. Check the cluster status
# kubectl get cs
NAME AGE
scheduler <unknown>
controller-manager <unknown>
etcd-0 <unknown>
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-ch8pn 0/1 Pending 0 2m45s
coredns-58cc8c89f4-qdz7t 0/1 Pending 0 2m45s
etcd-k8s-master-01 1/1 Running 0 113s
kube-apiserver-k8s-master-01 1/1 Running 0 99s
kube-controller-manager-k8s-master-01 1/1 Running 0 98s
kube-proxy-wvp9b 1/1 Running 0 2m45s
kube-scheduler-k8s-master-01 1/1 Running 0 2m6s
The coredns pods are Pending here because the cluster network add-on has not been installed yet.
Part 8. Installing the cluster network (on the master)
1. Install the flannel plugin
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
2. Check
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-ch8pn 1/1 Running 0 32m
coredns-58cc8c89f4-qdz7t 1/1 Running 0 32m
etcd-k8s-master-01 1/1 Running 0 31m
kube-apiserver-k8s-master-01 1/1 Running 0 31m
kube-controller-manager-k8s-master-01 1/1 Running 0 31m
kube-flannel-ds-qljzc 1/1 Running 0 63s
kube-proxy-wvp9b 1/1 Running 0 32m
kube-scheduler-k8s-master-01 1/1 Running 0 32m
Part 9. Joining the remaining nodes to the cluster (both the other masters and the worker nodes)
1. Join the other masters
1.1 Copy the keys and certificates (run on k8s-master-01)
Set up passwordless SSH:
ssh-keygen -t rsa
ssh-copy-id root@10.206.16.15
ssh-copy-id root@10.206.16.16
Copy the files to k8s-master-02:
ssh root@10.206.16.15 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@10.206.16.15:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@10.206.16.15:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@10.206.16.15:/etc/kubernetes/pki/etcd
Copy the files to k8s-master-03:
ssh root@10.206.16.16 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@10.206.16.16:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@10.206.16.16:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@10.206.16.16:/etc/kubernetes/pki/etcd
1.2 Join the masters to the cluster
On the other two masters, k8s-master-02 and k8s-master-03, run the join command printed by kubeadm init on k8s-master-01. If you no longer have it, run kubeadm token create --print-join-command on k8s-master-01 to generate a new one.
On k8s-master-02, run the join with the extra --control-plane flag, which adds the node as a control-plane (master) node:
kubeadm join master.k8s.io:16443 --token 13dqfw.8vteayxksdn03mve --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b --control-plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run the join command on k8s-master-03:
kubeadm join master.k8s.io:16443 --token 13dqfw.8vteayxksdn03mve --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b --control-plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
1.3 Check that the masters joined successfully
[root@VM-16-14-centos flannel]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-01 Ready master 74m v1.16.3
k8s-master-02 Ready master 18m v1.16.3
k8s-master-03 Ready master 87s v1.16.3
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-58cc8c89f4-ch8pn 1/1 Running 0 74m
kube-system coredns-58cc8c89f4-qdz7t 1/1 Running 0 74m
kube-system etcd-k8s-master-01 1/1 Running 0 73m
kube-system etcd-k8s-master-02 1/1 Running 0 19m
kube-system etcd-k8s-master-03 1/1 Running 0 93s
kube-system kube-apiserver-k8s-master-01 1/1 Running 0 73m
kube-system kube-apiserver-k8s-master-02 1/1 Running 0 19m
kube-system kube-apiserver-k8s-master-03 1/1 Running 0 94s
kube-system kube-controller-manager-k8s-master-01 1/1 Running 1 73m
kube-system kube-controller-manager-k8s-master-02 1/1 Running 0 19m
kube-system kube-controller-manager-k8s-master-03 1/1 Running 0 94s
kube-system kube-flannel-ds-965w9 1/1 Running 0 94s
kube-system kube-flannel-ds-qljzc 1/1 Running 0 42m
kube-system kube-flannel-ds-vjn8d 1/1 Running 1 19m
kube-system kube-proxy-6w9ch 1/1 Running 0 19m
kube-system kube-proxy-p4mt8 1/1 Running 0 94s
kube-system kube-proxy-wvp9b 1/1 Running 0 74m
kube-system kube-scheduler-k8s-master-01 1/1 Running 1 73m
kube-system kube-scheduler-k8s-master-02 1/1 Running 0 19m
kube-system kube-scheduler-k8s-master-03 1/1 Running 0 94s
2. Join the worker nodes to the cluster (run on both nodes)
2.1 Join:
kubeadm join master.k8s.io:16443 --token hx67nu.7nlxcsvcsa8uy46o --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b
2.2 Check:
#kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-01 Ready master 80m v1.16.3
k8s-master-02 Ready master 24m v1.16.3
k8s-master-03 Ready master 7m11s v1.16.3
k8s-node-01 Ready <none> 2m58s v1.16.3
k8s-node-02 Ready <none> 101s v1.16.3
#kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-ch8pn 1/1 Running 0 80m
coredns-58cc8c89f4-qdz7t 1/1 Running 0 80m
etcd-k8s-master-01 1/1 Running 0 79m
etcd-k8s-master-02 1/1 Running 0 24m
etcd-k8s-master-03 1/1 Running 0 7m20s
kube-apiserver-k8s-master-01 1/1 Running 0 79m
kube-apiserver-k8s-master-02 1/1 Running 0 24m
kube-apiserver-k8s-master-03 1/1 Running 0 7m21s
kube-controller-manager-k8s-master-01 1/1 Running 1 79m
kube-controller-manager-k8s-master-02 1/1 Running 0 24m
kube-controller-manager-k8s-master-03 1/1 Running 0 7m21s
kube-flannel-ds-965w9 1/1 Running 0 7m21s
kube-flannel-ds-nvdhl 1/1 Running 0 111s
kube-flannel-ds-qljzc 1/1 Running 0 48m
kube-flannel-ds-vjn8d 1/1 Running 1 24m
kube-flannel-ds-z9zc2 1/1 Running 0 3m8s
kube-proxy-6w9ch 1/1 Running 0 24m
kube-proxy-fswvz 1/1 Running 0 111s
kube-proxy-p4mt8 1/1 Running 0 7m21s
kube-proxy-wvp9b 1/1 Running 0 80m
kube-proxy-z27lw 1/1 Running 0 3m8s
kube-scheduler-k8s-master-01 1/1 Running 1 79m
kube-scheduler-k8s-master-02 1/1 Running 0 24m
kube-scheduler-k8s-master-03 1/1 Running 0 7m21s
Part 10. Deploying the dashboard (run on k8s-master-01)
1. Deploy the latest version, v2.0.0-beta6; download the yaml:
cd /usr/local/kubernetes/manifests/
mkdir dashboard && cd dashboard
wget -c https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
# change the Service type to NodePort
vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
...
# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-76585494d8-62vp9 1/1 Running 0 6m47s
kubernetes-dashboard-b65488c4-5t57x 1/1 Running 0 6m48s
# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.1.207.27 <none> 8000/TCP 7m6s
kubernetes-dashboard NodePort 10.1.207.168 <none> 443:30001/TCP 7m7s
# From a browser, check that https://<nodeIP>:30001 is reachable. Note: use Firefox and proceed past the self-signed certificate warning; some other browsers refuse the connection.
2. Create a service account and bind it to the default cluster-admin role
vim dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
Get the token; this is what you use to log in to the dashboard:
# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-p7wgc
Namespace: kubernetes-dashboard
Labels: <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 0e9f5406-3c26-4141-a233-ff4eaa841401
Type:  kubernetes.io/service-account-token
Data
====
namespace:  20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InBDdkJ1VWZFZENYbEd3ZGVrc3FldlhXWG94QU0ySjN1M1Y4ZVRJOUZPd1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXA3d2djIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwZTlmNTQwNi0zYzI2LTQxNDEtYTIzMy1mZjRlYWE4NDE0MDEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.jcCnl8hHWrZcFtd5H17gJdoaDiFsPUos_4oNYQexiXSjFDgy972Bk1qgYV-zHZhu7o_UZyESMRLTlzRFl3W5Eqbhq9fouD0j0DH_qnTGewNTEuByQj5n6uPLloPG5VNCOs1y3TINVj8LdG5q_n6DWfozfn76eNhU9eAnSJZVZ97dGKy_LDykpM9QtJQQkpaF9jSnDPCeoSnSd_1ud1FoQlNS3PAenB54khOmL5gbD6Pf4uJOVUjzxoHk_--gKDW7juVAsaDPbbGftuiM1mIfQ3K02VoNMiG1VB2hlzJ5kWeUn7wpqZpmngzrqBtVj5DJWSpnHAZZef_FFCakKMp5TA
ca.crt: 1025 bytes
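If you prefer to print just the token instead of reading it out of the describe output, a one-liner along these lines works on 1.16, assuming the admin-user ServiceAccount created above:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d && echo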
Part 11. Deploying the ingress controller
mandatory.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
data: # so that pods can see the real client IP
compute-full-forwarded-for: 'true'
use-forwarded-headers: 'true'
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
serviceAccountName: nginx-ingress-serviceaccount
hostNetwork: true
containers:
- name: nginx-ingress-controller
image: lizhenliang/nginx-ingress-controller:0.20.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
---
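Apply the manifest and, once the controller pods are running, point an Ingress at one of your Services. The Ingress below is only an illustrative sketch: myapp-svc, port 80 and app.example.com are placeholders, not anything defined earlier in this post.
kubectl apply -f mandatory.yaml
kubectl get pods -n ingress-nginx -o wide
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: default
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-svc
          servicePort: 80
EOF
# with hostNetwork: true, the controller listens on ports 80/443 of every node it runs on,
# so app.example.com just needs to resolve to a node IP (or to a load balancer in front of the nodes)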
Part 12. Deploying metrics-server (v0.3.6)
mandatory.yaml:
## ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
## ClusterRole aggregated-metrics-reader
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:aggregated-metrics-reader
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
resources: ["pods","nodes"]
verbs: ["get","list","watch"]
---
## ClusterRole metrics-server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-server
rules:
- apiGroups: [""]
resources: ["pods","nodes","nodes/stats","namespaces","configmaps"]
verbs: ["get","list","watch"]
---
## ClusterRoleBinding auth-delegator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
## RoleBinding metrics-server-auth-reader
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
## ClusterRoleBinding system:metrics-server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
## APIService
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.metrics.k8s.io
spec:
service:
name: metrics-server
namespace: kube-system
group: metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
## Service
---
apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
targetPort: 4443
---
## Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
hostNetwork: true
serviceAccountName: metrics-server
containers:
- name: metrics-server
## image pulled from the aliyun mirror
image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
imagePullPolicy: IfNotPresent
args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-insecure-tls ## added
- --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname ## added
ports:
- name: main-port
containerPort: 4443
protocol: TCP
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
memory: 1Gi
cpu: 1000m
requests:
memory: 1Gi
cpu: 1000m
volumeMounts:
- name: tmp-dir
mountPath: /tmp
- name: localtime
readOnly: true
mountPath: /etc/localtime
volumes:
- name: tmp-dir
emptyDir: {}
- name: localtime
hostPath:
type: File
path: /etc/localtime
nodeSelector:
kubernetes.io/os: linux
kubernetes.io/arch: "amd64"
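Apply the manifest and give metrics-server a minute to complete its first scrape; then a couple of quick checks (kubectl top only returns data after the first scrape):
kubectl apply -f mandatory.yaml
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes
kubectl top pods -n kube-system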
Part 13. Installing the Kubernetes DNS cache (NodeLocal DNSCache), to avoid DNS latency issues
kubectl apply -f https://github.com/feiskyer/kubernetes-handbook/raw/master/examples/nodelocaldns/nodelocaldns-kubenet.yaml
(Reference: https://mp.weixin.qq.com/s/t7nt87JPJnWEVCNBS-sBpw)
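A quick way to confirm the cache pods came up, assuming the DaemonSet in that manifest is named node-local-dns with the k8s-app=node-local-dns label, as in the upstream add-on:
kubectl -n kube-system get ds node-local-dns
kubectl -n kube-system get pods -l k8s-app=node-local-dns -o wide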
Part 14. Scaling the cluster up and down
1. Scaling up
By default the join token expires after 24 hours. After that, a new node can only join with a freshly generated token:
# list existing tokens
$ kubeadm token list
# create a new token
$ kubeadm token create
Besides the token, the join command also needs the sha256 hash of the CA certificate, which can be computed as follows:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Assemble the join command from the token and sha256 value printed above, or simply run kubeadm token create --print-join-command to get a ready-made one.
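For example (token and hash are placeholders here; substitute the values printed by the commands above):
kubeadm join master.k8s.io:16443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>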
2. Scaling down
kubectl cordon <node name>  # mark the node unschedulable
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets  # evict the pods running on the node
kubectl delete node <node name>
3. Resetting a node so it can rejoin
The old configuration must be wiped first:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
Source: 51CTO tech blog, https://blog.51cto.com/luoguoling/2990837