@pandeMacBook-Pro ~ % multipass list
Name State IPv4 Image
master1 Running 192.168.64.8 Ubuntu 18.04 LTS
worker1 Running 192.168.64.11 Ubuntu 18.04 LTS
worker2 Running 192.168.64.12 Ubuntu 18.04 LTS
Enter a virtual machine with the following command:
# Enter the master1 host
multipass shell master1
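For reference, a set of VMs like the one listed above can be created with multipass launch. The CPU, memory, and disk values below are assumptions, not taken from the original setup; older Multipass releases use --mem instead of --memory:
# Assumed launch commands; adjust sizes to your hardware
multipass launch 18.04 --name master1 --cpus 2 --memory 2G --disk 10G
multipass launch 18.04 --name worker1 --cpus 2 --memory 2G --disk 10G
multipass launch 18.04 --name worker2 --cpus 2 --memory 2G --disk 10G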
2.1.2. Creating VMs with Vagrant
Define a Vagrantfile with the following content:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.disksize.size = '10GB'

  # master1
  config.vm.define "master1" do |master1|
    master1.vm.network "public_network", ip: "192.168.33.10"
    master1.vm.hostname = "master1"
    # Mount the host's ../data directory at /vagrant_data in the VM
    master1.vm.synced_folder "../data", "/vagrant_data"
    # Set CPU count and memory
    master1.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end

  # worker1
  config.vm.define "worker1" do |worker1|
    worker1.vm.network "public_network", ip: "192.168.33.11"
    worker1.vm.hostname = "worker1"
    # Mount the host's ../data directory at /vagrant_data in the VM
    worker1.vm.synced_folder "../data", "/vagrant_data"
    # Set CPU count and memory
    worker1.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end

  # worker2
  config.vm.define "worker2" do |worker2|
    worker2.vm.network "public_network", ip: "192.168.33.12"
    worker2.vm.hostname = "worker2"
    # Mount the host's ../data directory at /vagrant_data in the VM
    worker2.vm.synced_folder "../data", "/vagrant_data"
    # Set CPU count and memory
    worker2.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end
end
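With the Vagrantfile in place, bring up all three machines from the same directory. The first boot downloads the ubuntu/focal64 box, which can take a while:
# Boot master1, worker1 and worker2 in one go
vagrant up
# Or boot a single machine
vagrant up master1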
If your Vagrant VMs do not run properly, install the following two plugins:
vagrant plugin install vagrant-vbguest vagrant-disksize
Use the vagrant status command to check the current state of the VMs; you should see something like this:
@pan-PC ~/Work/vagrant/kubernetes$ vagrant status
Current machine states:
master1 running (virtualbox)
worker1 running (virtualbox)
worker2 running (virtualbox)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
Enter a virtual machine with the following command:
# Enter the master1 host
vagrant ssh master1
2.2. Modifying the root user
To make it more convenient to work as the root user, we change the account login settings on each VM:
# Change the root account password; here I set them all to 123456
sudo passwd root
# After changing it, switch to the root user with su
su
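To confirm the switch worked, a quick sanity check (not part of the original steps):
# Should print root and 0 after su succeeds
whoami
id -u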
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.20 k8s.gcr.io/kube-apiserver:v1.18.20
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.20 k8s.gcr.io/kube-controller-manager:v1.18.20
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.20 k8s.gcr.io/kube-scheduler:v1.18.20
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.18.20 k8s.gcr.io/kube-proxy:v1.18.20
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
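These tag commands assume the images have already been pulled from the Aliyun mirror. If not, a loop like the following (image list copied from the tags above) pulls them first:
# Pull every required image from the mirror before retagging
for img in kube-apiserver:v1.18.20 kube-controller-manager:v1.18.20 \
           kube-scheduler:v1.18.20 kube-proxy:v1.18.20 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  docker pull "registry.aliyuncs.com/google_containers/${img}"
done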
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.64.8:6443 --token mzolyd.fgbta1hw9s9yml55 \
--discovery-token-ca-cert-hash sha256:21ffa3a184bb6ed36306b483723c37169753f9913e645dc4f88bb12afcebc9dd
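After copying admin.conf as shown in the output above, you can verify the control plane from master1. The node will report NotReady until a pod network add-on is installed:
# Expect master1 in NotReady state before the network add-on is applied
kubectl get nodes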
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
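The output above comes from applying the flannel manifest; the exact URL below is an assumption based on the coreos/flannel repository layout of that time, so verify it against the flannel project before use:
# Apply flannel (URL assumed; check the flannel project for the current path)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Watch the flannel pods come up; the nodes should then turn Ready
kubectl get pods -n kube-system
kubectl get nodes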