This is part five of a five-part process.
Before you begin
Create the control plane, region, and zone VMs, and the locations.csv file.
Procedure
- Ensure that the installations on the control plane, region, and zone VMs (nodes) are complete.
- On the control plane VM, verify that etcd and haproxy are running.
# systemctl status haproxy
# systemctl status etcd3
- On each region VM, verify that the patroni service is running.
# systemctl status patroni
- On any region VM, determine whether leader election is complete.
# patronictl -c /opt/app/patroni/etc/postgresql.yml list postgres
The output of the command identifies the IP address of the host with the Leader role.
- On the control plane VM, create the join token.
# kubeadm token create --print-join-command
kubeadm join 10.37.138.217:6443 --token cmtjhj.fah33qgst7gl0z7w --discovery-token-ca-cert-hash sha256:9de0fcb3e7c5a5aa6a3ecbea0b9e9c3b3c187b6777b5c2746b2ce031240875be
#
- On each region and zone VM, run the kubeadm join command from the output of step 5.
- On the control plane VM, verify that the nodes are joined.
# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
control    Ready    control-plane   6m50s   v1.20.5
regions1   Ready    <none>          58s     v1.20.5
zones1     Ready    <none>          53s     v1.20.5
#
The wait time for VMs to join is typically 1 to 2 minutes. In the output of the command, the regions and zones should be in the Ready state.
- On each region and zone VM, verify that the Docker images are loaded.
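The guide does not show a command for this check. A minimal sketch, assuming the nodes run Docker: list the images with `docker images` and confirm the expected rows appear. The scripted version below counts image rows; captured sample output (with placeholder image names) stands in for a live node so the sketch can run anywhere.

```shell
# On a region or zone VM you would typically run `docker images` and
# confirm the expected images appear. The image names below are
# placeholders, not the product's actual image names.
sample_output='REPOSITORY       TAG   IMAGE ID       CREATED        SIZE
xvm/crms         1.0   0123456789ab   2 weeks ago    120MB
xvm/controller   1.0   89abcdef0123   2 weeks ago    95MB'

# Count image rows, excluding the header line.
image_count=$(printf '%s\n' "$sample_output" | tail -n +2 | wc -l)
echo "loaded images: $image_count"
```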
- Verify that all system pods are in the Running state.
The Status column in the output shows the state.
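No command is shown for this step either; on a Kubernetes control plane the usual check is `kubectl get pods -n kube-system` (the namespace is an assumption, not stated in this guide). A scripted version of the check, run here against captured sample output so it does not require a live cluster:

```shell
# Flag any pod whose STATUS column is not "Running". The sample text
# stands in for the live output of `kubectl get pods -n kube-system`.
sample='NAME                       READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-7xk2p   1/1     Running   0          6m
etcd-control               1/1     Running   0          6m
kube-proxy-b4q9z           1/1     Running   0          6m'

not_running=$(printf '%s\n' "$sample" | tail -n +2 | awk '$3 != "Running"' | wc -l)
echo "pods not in Running state: $not_running"
```

A result of 0 means every listed pod is Running.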
- Set node labels for each region and zone using the kubectl label nodes <hostname> <label> syntax, where <label> is region=reg1 for the VMs in regions 1, 2, and 3, and zone=reg1-zone1, zone=reg1-zone2, and so on for the different zones in a region.
kubectl label nodes cat-region1-evm region=reg1
kubectl label nodes cat-region2-evm region=reg1
kubectl label nodes cat-region3-evm region=reg1
kubectl label nodes cat-zone1-evm zone=reg1-zone1
The final command differs from the others in two ways: it uses the zone label rather than the region label, and its value identifies a specific zone.
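The labels can be confirmed afterwards with `kubectl get nodes --show-labels`. A sketch of a scripted check, using abridged sample output (with the hypothetical hostnames from the example above) in place of the live cluster:

```shell
# Each node line should carry either a region= or a zone= label.
sample='cat-region1-evm   Ready   <none>   10m   v1.20.5   region=reg1
cat-region2-evm   Ready   <none>   10m   v1.20.5   region=reg1
cat-region3-evm   Ready   <none>   10m   v1.20.5   region=reg1
cat-zone1-evm     Ready   <none>   10m   v1.20.5   zone=reg1-zone1'

# Count lines that carry neither label; 0 means every node is labeled.
unlabeled=$(printf '%s\n' "$sample" | grep -c -v -e 'region=' -e 'zone=' || true)
echo "nodes missing a label: $unlabeled"
```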
- Copy the locations.csv file from the qcow build directory to one of the region VMs.
- Edit the locations.csv file to reflect the zone VM configuration.
Do not insert spaces after commas or in any fields other than the geographical location fields. Ensure that the IP addresses are valid. Zone names and host names can consist of numeric characters and lowercase alphabetic characters.
usa,region-1,10.37.138.187,east-zone,10.37.138.187,zone-187,Duff,36.4467° N,84.0674° W
usa,region-1,10.37.138.188,west-zone,10.37.138.188,zone-188,Las Vegas,36.1699° N,115.1398° W
This example shows two zones in one region, with IP addresses of 10.37.138.187 and 10.37.138.188 and host names of zone-187 and zone-188. You must edit the address, name, and location fields to match your deployment. For an example of a locations.csv file with IPv6 addresses, see Create a Location Definition File.
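The formatting rules above can be checked mechanically before copying the file to the region VMs. A minimal sketch, assuming the nine-field layout of the example, with the rule that only the last three (geographical) fields may contain spaces:

```shell
# Validate one locations.csv record: nine comma-separated fields, with
# spaces permitted only in the geographical location fields (7-9).
line='usa,region-1,10.37.138.188,west-zone,10.37.138.188,zone-188,Las Vegas,36.1699° N,115.1398° W'

result=$(printf '%s\n' "$line" | awk -F',' '
  NF != 9           { print "bad field count: " NF; next }
  { for (i = 1; i <= 6; i++)
      if ($i ~ / /) { print "space in field " i; next }
    print "ok" }')
echo "$result"
```

Run over the whole file (one record per line), any output other than `ok` points at the offending record.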
- On each region VM, create the following directory and copy the locations.csv file into it.
mkdir -p /opt/crms
cp locations.csv /opt/crms
- On the control plane VM, perform the following.
cd /etc/xvm/controlplane_node_binaries
./loadPodsInControlPlaneNode.sh
The wait time for the script to finish running is typically 8 to 10 minutes.
- Verify that the pods are in the Running state.
# kubectl get pods -n xvm
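Pods can take a little while to reach the Running state after the script finishes. A hedged sketch of a polling loop for this final check; `get_pods` is a stand-in for `kubectl get pods -n xvm` (with made-up sample output) so the sketch can run anywhere:

```shell
# Poll until every pod in the listing is Running, with a short timeout.
get_pods() {
  # Stand-in for `kubectl get pods -n xvm`; replace with the real call.
  printf 'NAME     READY   STATUS    RESTARTS   AGE\n'
  printf 'crms-0   1/1     Running   0          1m\n'
}

deadline=$(( $(date +%s) + 10 ))   # short timeout for this sketch
while :; do
  pending=$(get_pods | tail -n +2 | awk '$3 != "Running"' | wc -l)
  if [ "$pending" -eq 0 ]; then echo "all pods Running"; break; fi
  if [ "$(date +%s)" -ge "$deadline" ]; then echo "timed out"; break; fi
  sleep 2
done
```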