Verify the Running System and Services

After any of the following scenarios, wait 10 minutes for the EFA microservices to become operational before you run EFA commands. (To poll for readiness instead of waiting a fixed interval, see the sketch after this list.)
  • Powering on the OVA
  • Rebooting the OVA
  • Rebooting the TPVM
  • Rebooting the SLX (which also reboots the TPVM)
  • Rebooting the server on which EFA is installed
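
If you prefer to poll for readiness rather than wait a fixed interval, the following minimal sketch keeps checking the EFA pods until they are all Running. It assumes shell access to a node where the k3s kubectl command shown later in this procedure is available; adjust the namespace, timeout, and interval for your deployment.

  #!/usr/bin/env bash
  # Poll the EFA pods until every pod reports a Running status, instead of
  # waiting a fixed 10 minutes. Sketch only; assumes the k3s kubectl client
  # used later in this procedure is on the PATH.
  TIMEOUT=900     # give up after 15 minutes
  INTERVAL=30     # re-check every 30 seconds
  elapsed=0
  while [ "$elapsed" -lt "$TIMEOUT" ]; do
      pods=$(k3s kubectl get pods -n efa --no-headers 2>/dev/null)
      # Succeed only when pods exist and none has a status other than Running.
      if [ -n "$pods" ] && echo "$pods" | awk '$3 != "Running" {bad=1} END {exit bad}'; then
          echo "All EFA pods are Running."
          exit 0
      fi
      sleep "$INTERVAL"
      elapsed=$((elapsed + INTERVAL))
  done
  echo "Timed out waiting for EFA pods to reach the Running state." >&2
  exit 1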

You can use various commands and scripts to verify the status of the EFA system, to help troubleshoot, and to view details of EFA nodes, PODs, and services.

  1. Verify the K3s installation in a TPVM.
    1. Run the show efa status command from the SLX command prompt.
      device# show efa status
      NAME   STATUS   ROLES    AGE     VERSION
      TPVM   Ready    master   6m59s   v1.14.5-k3s.1
      admin@10.24.51.226's password:
      NAME                            READY   STATUS    RESTARTS   AGE
      pod/godb-service-wk57h          1/1     Running   0          6m11s
      pod/gofabric-service-8v8b2      1/1     Running   3          6m12s
      pod/goinventory-service-4kggf   1/1     Running   3          6m12s
      pod/gotenant-service-xcqf6      1/1     Running   3          6m12s
      pod/rabbitmq-0                  1/1     Running   0          6m12s
      pod/rabbitmq-1                  1/1     Running   0          4m51s
    Output varies by type of deployment and the services that are installed.
  2. View details of EFA nodes, PODs, and services.
    1. Run the efactl status script.
      root@node1:/home/ubuntu/efa# efactl status
      NAME    STATUS   ROLES    AGE   VERSION
      node1   Ready    master   22m   v1.17.3+k3s1
      node2   Ready    master   22m   v1.17.3+k3s1
      NAME                                READY   STATUS    RESTARTS   AGE
      pod/efa-api-docs-55c97cbdf-qnqg4    1/1     Running   0          16m
      pod/rabbitmq-6qkmp                  1/1     Running   0          16m
      pod/godb-service-6c7f7d865b-q2rhv   1/1     Running   0          16m
      pod/rabbitmq-4671j                  1/1     Running   0          16m
      This example shows only a few of the possible rows of output.
  3. Verify that all PODs are in a running state.
    1. Run the k3s kubectl get pods -n efa command.
      # k3s kubectl get pods -n efa 
      
      NAME                                     READY   STATUS    RESTARTS   AGE
      goswitch-service-958fcfb4f-qddnw         1/1     Running   4          72d
      godb-service-57bd99747-f4cxb             1/1     Running   4          83d
      efa-api-docs-6bb5dbcc74-br485            1/1     Running   4          72d
      filebeat-service-86ddd654b6-z9zhr        1/1     Running   4          72d
      goopenstack-service-554c57548f-bjwtb     1/1     Running   8          72d
      rabbitmq-0                               1/1     Running   7          72d
      govcenter-service-f6b49d9b9-s24wk        1/1     Running   19         72d
      gohyperv-service-854654f6b9-m9mv8        1/1     Running   20         72d
      goinventory-service-59d9b798d8-s9wn6     1/1     Running   20         72d
      gotenant-service-55fd8889d8-g8rgb        1/1     Running   19         72d
      gofabric-service-69d8995fc6-swnqw        1/1     Running   19         72d
      metricbeat-service-76c4874887-mbm7h      1/1     Running   32         72d
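    If the listing is long, you can surface only the pods that are not in the Running phase; no output means every pod is Running. This is a convenience check rather than part of the documented procedure, and it assumes the same k3s kubectl client shown above.
      # List only EFA pods whose phase is not Running (empty output is good).
      k3s kubectl get pods -n efa --field-selector=status.phase!=Running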
  4. Verify the status of the Authentication service.
    1. Run the systemctl status hostauth.service command.
      $ systemctl status hostauth.service
      hostauth.service - OS Auth Service
      Loaded: loaded (/lib/systemd/system/hostauth.service; enabled; vendor preset: enabled)
      Active: active (running) since Thu 2020-04-23 07:56:20 UTC; 23 h ago
      Main PID: 23839 (hostauth)
      Tasks: 5
      CGroup: /system.slice/hostauth.service
              23839 /apps/bin/hostauth
      
      Apr 23 07:56:20 tpvm2 systemd[1]: Started OS Auth Service
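    For scripted checks, systemctl can report just the one-word state of the service. This is a small convenience sketch rather than part of the documented procedure.
      # Prints active/inactive/failed and exits non-zero when the service is
      # not active, which makes it easy to use in scripts.
      systemctl is-active hostauth.service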
  5. Restart a service using the efactl restart-service SERVICE command.
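    The following sketch shows a restart followed by a quick recheck. The service name gotenant-service is only an illustration taken from the pod listings above; whether the argument to efactl restart-service matches those names is an assumption here, so substitute the service name that applies to your deployment.
      # Restart one EFA service, then confirm its pod returns to Running.
      # "gotenant-service" is an example name only (assumption); replace it
      # with the service you need to restart.
      efactl restart-service gotenant-service
      k3s kubectl get pods -n efa | grep gotenant-service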
  6. Identify the active node that serves as the database for Kubernetes clusters.
    1. Run the ip addr show command from all nodes.
    2. Verify that on one of the Ethernet interfaces, the virtual IP address appears as a secondary IP address. The node on which the virtual IP address appears is the active node.
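    A minimal sketch of this check is shown below; 203.0.113.10 is a placeholder for the virtual IP address configured for your EFA deployment. Run it on each node; the node that prints a match is the active node.
      # Show the interface lines around the virtual IP, if it is configured
      # on this node. 203.0.113.10 is a placeholder; substitute your VIP.
      ip addr show | grep -B 2 "inet 203.0.113.10/"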