Parent Defect ID: | EFA-7341 | Issue ID: | EFA-7341 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | "efactl status" or any "k3s kubectl" commands respond with the following error message: The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port? [Ref: EMAX-105] | ||
Condition: |
The k3s datastore cluster may fail to restore when the standby node's management interface is shut down for some time and then brought back up. efa-monitor incorrectly recovers the standby node as a new standalone cluster. Once the management interface is restored, the active node tries to join this new cluster and fails because its transactions are ahead of the standby node's. Instead, the standby node should have remained down in a fault state and rejoined the current active node once the management interface was restored. |
||
Workaround: | The issue has been fixed in 2.3.1 | ||
Recovery: | The issue has been fixed in 2.3.1 |
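As a general diagnostic for this symptom (illustrative only, not the documented fix), the k3s service and its API server on 127.0.0.1:6443 can be checked directly on the affected node:

    # Check whether the k3s systemd service is running
    sudo systemctl status k3s
    # Verify that the k3s API server answers; this fails with the same
    # "connection refused" error while the datastore cluster is down
    k3s kubectl get nodes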
Parent Defect ID: | EFA-7346 | Issue ID: | EFA-7346 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | Configurations other than local-ip are not removed from the "Admin-UP" device after an EPG delete operation on a partial-success topology [Ref: EMAX-106] | ||
Condition: | This issue can be observed if the deletion of an EPG containing local-ips fails because one of the devices is admin down. | ||
Workaround: | The issue has been fixed in 2.3.1 | ||
Recovery: | The issue has been fixed in 2.3.1 |
Parent Defect ID: | EFA-7351 | Issue ID: | EFA-7351 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | Deletion of the EPG fails with the following error: Error: Operation "epg delete" not allowed on an "epg in uninitialised state". | ||
Condition: |
Below is the scenario in which the issue can happen:
1. EPG creation is in progress.
2. The Tenant service restarts while (1) is in progress.
3. Deletion of the EPG (created in step 1) is attempted after the Tenant service restart. |
||
Workaround: | The issue has been fixed in 2.3.1 | ||
Recovery: | The issue has been fixed in 2.3.1 |
Parent Defect ID: | EFA-7355 | Issue ID: | EFA-7355 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: |
Even though the configuration is in sync between EFA and SLX, the switch does not have:
1. The "cluster-track" configuration under the CEP port-channel.
2. The "graceful-restart" configuration under the router bgp address-family. |
||
Condition: |
The issue can occur in the below scenario:
1. EFA 2.1.0 is installed.
2. CEP port-channels are configured.
3. L3 EPGs are configured with the CEP port-channel and anycast-ipv4.
4. EFA 2.1.0 is upgraded to EFA 2.2.0.
5. efa-db-upgrade-from-2-1-0.sh is executed.
6. EFA 2.2.0 is upgraded to EFA 2.3.0.
7. Manual drift and reconcile is executed using the "efa inventory drift-reconcile execute --ip <switch-ip> --reconcile" CLI. |
||
Recovery: | A temporary EPG can be created using the CEP port-channel (for which the "cluster-track" configuration was missing) and the VRF (for which the "graceful-restart" configuration was missing), so that the missing configurations are pushed to the switch during the EPG create. |
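For reference, the two missing pieces of configuration look roughly like the following SLX-OS excerpt (a sketch with placeholder interface and VRF names; the exact syntax can vary by SLX-OS release):

    interface Port-channel 10
     cluster-track
    !
    router bgp
     address-family ipv4 unicast vrf <vrf-name>
      graceful-restart
    !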
Parent Defect ID: | EFA-7357 | Issue ID: | EFA-7357 |
Severity: | S3 – Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | The Golang HTTPS server on TCP/8078 offers 3DES ciphers | ||
Workaround: | As part of the security hardening process, this issue has been fixed in EFA 2.3.1 | ||
Recovery: | As part of the security hardening process, this issue has been fixed in EFA 2.3.1 |
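Whether a server still offers 3DES cipher suites can be verified from any host with OpenSSL (an illustrative check; <efa-host> is a placeholder for the EFA management address). A handshake failure indicates the ciphers are disabled:

    # Attempt a handshake restricted to 3DES cipher suites
    openssl s_client -connect <efa-host>:8078 -cipher '3DES' < /dev/null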
Parent Defect ID: | EFA-7358 | Issue ID: | EFA-7358 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | The raslog service on TCP/6514 in EFA 2.3 offers both TLS 1.0 and TLS 1.1 | ||
Workaround: | This issue is fixed in EFA 2.3.1 by stricter enforcement of TLS version >= 1.2 | ||
Recovery: | This issue is fixed in EFA 2.3.1 |
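The protocol versions a server accepts can be probed the same way (illustrative; <efa-host> is a placeholder). After the fix, both of these handshakes should be rejected:

    # Attempt handshakes pinned to the legacy protocol versions
    openssl s_client -connect <efa-host>:6514 -tls1 < /dev/null
    openssl s_client -connect <efa-host>:6514 -tls1_1 < /dev/null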
Parent Defect ID: | EFA-7360 | Issue ID: | EFA-7360 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | The system set-mtu and set-admin-state commands fail with a 'Service Unavailable' error [Ref: EMAX-107] | ||
Condition: | When the keystore values of the device are not available in the Asset service. | ||
Workaround: | This issue is fixed in EFA 2.3.1 | ||
Recovery: | This issue is fixed in EFA 2.3.1 |
Parent Defect ID: | EFA-7362 | Issue ID: | EFA-7362 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | Creating 10 MS networks with subnets in parallel using a HOT template fails [Ref: EMAX-108] | ||
Condition: | Creating ports across 10 networks with a higher number of segments takes more time, leading to the failure. | ||
Workaround: | The issue is fixed in 2.3.1 |
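The failing scenario can be approximated without a HOT template using the OpenStack CLI (a sketch; the network names and subnet ranges are placeholders):

    # Create 10 networks, each with a subnet, in parallel
    for i in $(seq 1 10); do
      ( openstack network create "net$i" && \
        openstack subnet create --network "net$i" \
            --subnet-range "10.0.$i.0/24" "subnet$i" ) &
    done
    wait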
Parent Defect ID: | EFA-7364 | Issue ID: | EFA-7364 |
Severity: | S3 – Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | The pod status that "efactl status" shows for a node coming up after an EFA HA failover makes it difficult to determine whether the node is in the active or standby state and ready to service EFA requests [Ref: EMAX-109] | ||
Condition: | This is seen right after an EFA HA failover, while the nodes are transitioning between the active and standby states. | ||
Workaround: |
This issue is fixed in 2.3.1 by adding an API to correctly determine each node's status. An example of using the API: curl --location --request GET '<monitor-endpoint>/v1/monitor/status/efa'
||
Recovery: |
This issue is fixed in 2.3.1 by adding an API to correctly determine each node's status. An example of using the API: curl --location --request GET '<monitor-endpoint>/v1/monitor/status/efa'
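For example, the endpoint can be polled after a failover until the monitor answers (a sketch; <monitor-endpoint> is the same placeholder used above, and the loop simply waits for a successful HTTP response):

    # Poll the monitor API until it responds
    until curl -s --location --request GET '<monitor-endpoint>/v1/monitor/status/efa'; do
        sleep 5
    done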
Parent Defect ID: | EFA-7365 | Issue ID: | EFA-7365 |
Severity: | S3 – Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: |
The upgrade process cannot be run until the failing prerequisite check is fixed. efa login, and therefore all other efa commands, fail to execute; efa login shows this message: `CLI is not registered as a client. Please run 'source /etc/profile' to update your environment.` [Ref: EMAX-110] |
||
Condition: | The issue is seen when a prerequisite check fails at the start of the upgrade process. After the failure is reported and the upgrade is cancelled without undeploying the failed upgrade, efa login, and thus all other efa commands, fail to run. | ||
Workaround: | This issue is fixed in EFA 2.3.1 | ||
Recovery: | This issue is fixed in EFA 2.3.1 |
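If the 'CLI is not registered' message appears, the environment can be refreshed as the message itself suggests before retrying (the login retry is illustrative):

    # Re-source the profile so the CLI client registration is picked up
    source /etc/profile
    # Retry the failing command
    efa login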
Parent Defect ID: | EFA-7372 | Issue ID: | EFA-7372 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | Unable to reach VMs (ping) through the new DHCP agent after a compute node poweroff [Ref: EMAX-111] | ||
Condition: | When a DHCP agent is relocated to another compute host, the change in switch details causes the port configuration to fail. | ||
Workaround: | This issue is fixed in EFA 2.3.1 | ||
Recovery: | This issue is fixed in EFA 2.3.1 |
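Which hosts are running DHCP agents, and which agent is serving the affected network, can be checked with the OpenStack CLI (illustrative; <network-name> is a placeholder):

    # List DHCP agents and their hosts
    openstack network agent list --agent-type dhcp
    # Show the agents hosting the affected network
    openstack network agent list --network <network-name>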
Parent Defect ID: | EFA-7386 | Issue ID: | EFA-7386 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.1 |
Symptom: |
In a multinode EFA HA deployment, the following symptoms are noticed after an HA failover:
1. efa login fails, and efactl status shows one or both nodes in a not-ready state.
2. The k3s systemd service fails to start on the not-ready node. |
||
Condition: | The issue is found to occur when the system is under stress due to prolonged failover testing, causing the maximum inotify watcher limit to be hit for the k3s process. This causes the k3s service to fail and hence makes EFA non-functional. | ||
Workaround: | This issue is fixed in EFA 2.3.1 | ||
Recovery: | This issue is fixed in EFA 2.3.1 |
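The inotify limits involved are standard kernel tunables and can be inspected on the affected node (a general diagnostic, not the documented fix; the raised value is an example only):

    # Inspect the current inotify limits
    sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
    # Temporarily raise the watcher limit (example value)
    sudo sysctl -w fs.inotify.max_user_watches=524288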
Parent Defect ID: | EFA-7388 | Issue ID: | EFA-7388 |
Severity: | S2 – High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | Duplicate networks with the same VLAN are observed in EFA [Ref: EMAX-115] | ||
Condition: | An "openstack network create" command that should have failed succeeds instead, creating the duplicate network. | ||
Workaround: | The issue is fixed in 2.3.1 |
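The VLAN segment that each suspect network landed on can be compared with the OpenStack CLI (illustrative; <network-name> is a placeholder):

    # Show the provider VLAN ID of a network
    openstack network show <network-name> -c 'provider:segmentation_id'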
Parent Defect ID: | EFA-7392 | Issue ID: | EFA-7392 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.1 |
Symptom: | On an HA setup, the monitoring service is intermittently not accessible over HTTPS | ||
Condition: | After an HA failover | ||
Workaround: | This issue is fixed in EFA 2.3.1 | ||
Recovery: | This issue is fixed in EFA 2.3.1 |