Resolved Defects

Parent Defect ID: EFA-6994 Issue ID: EFA-6994
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.3.0
Symptom: BGP peer listen ranges are not deleted from the VRF on SLX switches when the BGP peer is deleted on EFA.
Condition:

Perform the following steps in EFA:

1. Create an L3 EPG.

2. Create BGP listen ranges on the same VRF used in step 1.

3. Delete any port from the EPG created in step 1.

4. Delete the listen ranges created in step 2.

5. The listen ranges are deleted from EFA but remain unchanged on the device.

Workaround: This issue is fixed in EFA 2.3.2.
Recovery: Fixed in EFA 2.3.2.
Parent Defect ID: EFA-7398 Issue ID: EFA-7398
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.3.1
Symptom: The tenant drift reconcile operation fails with the error message "Drift generation failed for the device".
Condition: This can happen when EPGs are created on different devices using the same (common) ctag/network, and one or more of those common ctags/networks has drifted on one of the devices.
Recovery:

1) Delete the EndpointGroup(s) that use the same/common ctag/network(s) on the device where drift generation failed.

2) Let the Drift-Reconcile operation complete for the Tenant.

3) Recreate the deleted EndpointGroup(s).

Parent Defect ID: EFA-7403 Issue ID: EFA-7403
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.3.1
Symptom: Devices fail to be added to the fabric.
Condition: A device is deleted from the fabric and then immediately re-added.
Workaround:

After deleting the device, wait 4-5 minutes before re-adding it to the fabric using the fabric device add-bulk command:

"efa fabric device add-bulk --name small-fabric --border-leaf <device-ip> --username <name> --password <pass>"

Recovery: This works as designed
Parent Defect ID: EFA-7434 Issue ID: EFA-7434
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.3.1
Symptom: The EFA high availability cluster may take up to 20 minutes to recover from a double fault scenario, after which EFA commands become serviceable again.
Condition: A double fault caused by rebooting both SLX switches or TPVMs simultaneously.
Recovery: This issue is fixed in EFA 2.3.2
Parent Defect ID: EFA-7446 Issue ID: EFA-7446
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.3.1
Symptom: EFA services are not running after installing efa-2.3.1.ova.
Condition: This is an inherent issue of the OVA.
Workaround:

1) Log in to the OVA.

2) Confirm that the k3s service is running:

"sudo service k3s status"

[Output Example]

k3s.service - Lightweight Kubernetes

...

Active: active (running) since Sat 2020-10-31 00:08:24 UTC; 2 days ago

3) Run the script manually:

"/opt/godcapp/efa/adjust_single_node_ip_change.sh"

4) Reboot the EFA server.

5) Log in and verify that services are running using "edactl status".
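The workaround steps above can be collected into a short shell session on the OVA. This is a sketch using only the commands and the script path named in the steps; invoking the script with sudo is an assumption, and the final status check must wait until the server is back up after the reboot.

```shell
# Workaround sketch for EFA-7446, run on the EFA OVA after logging in.

# Step 2: confirm the k3s service is running
sudo service k3s status

# Step 3: run the single-node IP adjustment script manually
# (running it with sudo is an assumption; adjust to your privileges)
sudo /opt/godcapp/efa/adjust_single_node_ip_change.sh

# Step 4: reboot the EFA server
sudo reboot

# Step 5: after the server is back up, log in again and verify services:
#   edactl status
```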

Recovery: Same as Workaround
Parent Defect ID: EFA-7547 Issue ID: EFA-7547
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.3.1
Symptom:

All Inventory CLI commands are stuck.

Condition:

The switch fails to respond to EFA's HTTP and HTTPS requests.

Recovery: This issue is resolved in EFA 2.3.2
Parent Defect ID: EFA-7552 Issue ID: EFA-7552
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.3.1
Symptom:

1. The user is unable to read any service logs on the system without using sudo. For example, <Logs-directory>/inventory/inventory-server.log requires sudo access to read.

2. The service log files added in Supportsave have no content.

Condition:
Recovery: This issue is resolved in EFA 2.3.2
Parent Defect ID: EFA-7589 Issue ID: EFA-7589
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.3.2
Symptom: In a multi-node environment, 'efa system restore' command does not succeed.
Condition: In a multi-node environment, during execution of 'efa system restore' command, if one of the nodes goes down, then the restore procedure fails.
Recovery: Once the node is started, rerun the restore procedure.