This is part five of a five-part process.
Before you begin
Create the control plane, region, and zone VMs, and the locations.csv file.
Procedure
-
Ensure that the installations on the control plane, region, and zone VMs (nodes) are complete.
-
On the control plane VM, verify that etcd and haproxy are running.
# systemctl status haproxy
# systemctl status etcd3
-
On each region VM, verify that the patroni service is running.
# systemctl status patroni
-
On any region VM, determine whether leader election is complete.
# patronictl -c /opt/app/patroni/etc/postgresql.yml list postgres
The output of the command identifies the IP address of the host with the Leader role.
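The leader's address can also be extracted programmatically. The following sketch parses an illustrative (not captured) sample of the `patronictl list` table; the real column layout varies by Patroni version, so adjust the field numbers accordingly:

```shell
# Illustrative sample of the patronictl table output (not captured from a
# real system; column layout varies by Patroni version).
sample='| Member | Host         | Role    | State   |
| pg1    | 10.37.138.10 | Leader  | running |
| pg2    | 10.37.138.11 | Replica | running |'

# Print the Host column of the row whose Role column is Leader.
printf '%s\n' "$sample" |
  awk -F'|' '$4 ~ /Leader/ { gsub(/ /, "", $3); print $3 }'
```

In a live deployment, pipe the real command output into the same `awk` filter instead of the sample variable.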
-
On the control plane VM, create the join token.
# kubeadm token create --print-join-command
kubeadm join 10.37.138.217:6443 --token cmtjhj.fah33qgst7gl0z7w
--discovery-token-ca-cert-hash
sha256:9de0fcb3e7c5a5aa6a3ecbea0b9e9c3b3c187b6777b5c2746b2ce031240875be
#
-
On each region and zone VM, run the kubeadm join command from the output of step 5 (the line that begins with kubeadm join).
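When there are many nodes, the join can be scripted. A sketch, assuming SSH access as root; the node addresses are hypothetical and the token and hash are placeholders for the values printed in step 5:

```shell
# Hypothetical node addresses; substitute your region and zone VMs.
NODES="10.37.138.218 10.37.138.219"
# Paste the join command printed in step 5 (placeholders shown here).
JOIN_CMD='kubeadm join 10.37.138.217:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>'

for node in $NODES; do
  # Dry run: print what would be executed. Replace echo with
  # ssh root@"$node" "$JOIN_CMD" to actually join each node.
  echo "$node: $JOIN_CMD"
done
```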
-
On the control plane VM, verify that the nodes are joined.
# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
control    Ready    control-plane   6m50s   v1.20.5
regions1   Ready    <none>          58s     v1.20.5
zones1     Ready    <none>          53s     v1.20.5
#
VMs typically take 1 to 2 minutes to join. In the output of the command, the region and zone nodes should be in Ready state.
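Instead of rerunning the command manually, kubectl can block until every node reports Ready; a sketch that requires a live cluster, so run it on the control plane VM:

```shell
# Block for up to 5 minutes until every node reports the Ready condition.
kubectl wait --for=condition=Ready node --all --timeout=300s
```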
-
On each region and zone VM, verify that the Docker images are loaded.
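The procedure does not show a command for this check. Assuming Docker is the container runtime on these VMs, a minimal check is:

```shell
# List the locally loaded images; the expected image names depend on your build.
docker images
```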
-
On all VMs, add default DNS.
-
On the control plane VM, determine the cluster IP address.
# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.x.x.x     <none>        53/UDP,53/TCP,9153/TCP   13m
-
Add the cluster IP address to /etc/resolv.conf.
# echo "nameserver 10.x.x.x" > /etc/resolv.conf
# cat /etc/resolv.conf
nameserver 10.x.x.x
-
Verify that all system pods are in Running state. The Status column in the output shows the state.
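A sketch of the verification, assuming the system pods run in the standard kube-system namespace:

```shell
# Every pod in the list should show Running in the STATUS column.
kubectl get pods -n kube-system
```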
-
If the output of step 10 shows that the coredns pods are in CrashLoopBackOff state, take the following steps. Otherwise, skip to step 12.
-
Run the kubectl edit cm coredns -n kube-system command.
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
kind: ConfigMap
metadata:
  creationTimestamp: "2021-03-20T06:04:50Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "274"
  uid: 89a2b293-c0a3-b4b8-f262834020b
-
Delete the line that contains loop, save the file, and then exit the editor.
-
Delete both coredns pods.
kubectl delete pod coredns-<full-pod-name> -n kube-system
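As an alternative to deleting each pod by name, both pods can be removed in one command; this assumes the coredns pods carry the k8s-app=kube-dns label that upstream Kubernetes deployments use by default:

```shell
# Delete all coredns pods at once; the Deployment recreates them immediately.
kubectl delete pod -n kube-system -l k8s-app=kube-dns
```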
-
Repeat step 10 to ensure that the coredns pods are in Running state.
-
Set node labels for each region and zone by using the kubectl label nodes <hostname> <label> syntax, where <label> is region=reg1 for regions 1, 2, and 3, and zone=reg1-zone1, zone=reg1-zone2, and so on, for the different zones in a region.
kubectl label nodes cat-region1-evm region=reg1
kubectl label nodes cat-region2-evm region=reg1
kubectl label nodes cat-region3-evm region=reg1
kubectl label nodes cat-zone1-evm zone=reg1-zone1
The final command differs from the others in two ways: it uses the zone label instead of the region label, and it assigns a zone-specific value.
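To confirm that the labels took effect, a quick check on the control plane VM:

```shell
# Show every node with all of its labels.
kubectl get nodes --show-labels

# Or list only the nodes that carry a given label.
kubectl get nodes -l region=reg1
```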
-
Copy the locations.csv file from the qcow build directory to one of the region VMs.
-
Edit the locations.csv file to reflect the zone VM configuration.
Do not insert spaces after commas or in any fields other than the geographical location fields. Ensure that the IP addresses are valid. Zone names and host names can consist of numeric characters and lowercase alphabetic characters.
usa,region-1,10.37.138.187,east-zone,10.37.138.187,zone-187,Duff,36.4467° N,84.0674° W
usa,region-1,10.37.138.188,west-zone,10.37.138.188,zone-188,Las Vegas,36.1699° N,115.1398° W
This example shows two zones in one region, with IP addresses 10.37.138.187 and 10.37.138.188 and host names zone-187 and zone-188. Edit the IP address, zone name, host name, and location fields to match your deployment.
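The no-spaces-after-commas rule is easy to violate when editing by hand. A minimal sketch of a format check (this is not the product's validator, and the file path is illustrative):

```shell
# Flag any line that contains a space immediately after a comma.
check_csv() {
  if grep -n ', ' "$1"; then
    echo "FAIL: space after comma"
  else
    echo "OK"
  fi
}

# Illustrative file matching the example record above.
cat > /tmp/locations.csv <<'EOF'
usa,region-1,10.37.138.187,east-zone,10.37.138.187,zone-187,Duff,36.4467° N,84.0674° W
EOF
check_csv /tmp/locations.csv   # prints OK for a well-formed file
```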
-
On each region VM, create the following directory and copy the locations.csv file into it.
mkdir -p /opt/crms
cp locations.csv /opt/crms
-
On the control plane VM, run the following commands.
cd /etc/xvm/controlplane_node_binaries
./loadPodsInControlPlaneNode.sh
The script typically takes 8 to 10 minutes to finish.
-
Verify that the pods are in Running state.
# kubectl get pods -n xvm