Defects Closed with Code Changes

The following defects, which were previously disclosed as open, were resolved in Extreme Fabric Automation 2.6.0.

Parent Defect ID: EFA-5592 Issue ID: EFA-5592
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.2.0
Symptom: Certificates must be manually imported on replaced equipment in order to perform an RMA.
Condition: Replaced (RMA) equipment does not have the SSH key and auth certificate. To replay the configuration on the new switch, the user must import the certificates manually.
Workaround:

Import the certificates manually:

efa certificates device install --ips x,y --certType

Parent Defect ID: EFA-8297 Issue ID: EFA-8297
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.0
Symptom:

The EPG update anycast-ip-delete operation succeeds when deleting a provisioned anycast-ip on an admin-down device.

This issue is observed only when an anycast-ip-add update is performed after the device is put into the admin-down state (so the new configuration is non-provisioned), followed by an anycast-ip-delete operation for an already configured anycast-ip.

Condition:

Steps to reproduce the issue:

1) Configure EPG with anycast-ip (ipv4/ipv6)

2) Make one device admin-down

3) Update the EPG with anycast-ip-add for a new anycast-ip (ipv6/ipv4)

4) Update the EPG with anycast-ip-delete for the provisioned anycast-ip configured in step 1 (ipv4/ipv6)

Step 4 should fail because the IP is already configured on the device; the delete should be rejected as part of APS.

Workaround: No workaround.
Recovery: Reconfigure the EPG with the required configuration using EFA, or clean up the anycast-ip configuration on the switch.
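
The APS check described above can be sketched as a validation step. This is an illustrative Python sketch only; the function name and inputs are assumptions, not EFA internals:

```python
# Illustrative sketch only (names are assumptions, not EFA internals):
# deleting an anycast-ip that is already provisioned on a device should
# be rejected while the device is admin-down, because the deletion
# cannot be replayed to the switch.
def validate_anycast_ip_delete(ip, provisioned_ips, device_admin_down):
    """Raise ValueError if the delete would strand provisioned config."""
    if device_admin_down and ip in provisioned_ips:
        raise ValueError(
            "anycast-ip %s is provisioned on an admin-down device; "
            "delete is not allowed until the device is admin-up" % ip)

# IPs added while the device was admin-down are non-provisioned and may
# be deleted freely.
validate_anycast_ip_delete("10.0.0.1", {"10.20.30.1"}, True)
```
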
Parent Defect ID: EFA-8448 Issue ID: EFA-8448
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.0
Symptom:

When the ports provided by the user in the "tenant update port-delete" operation include all the ports owned by a port-channel, the PO goes into the delete-pending state, but the ports are not deleted from the PO.

They are, however, deleted from the tenant.

Condition: This issue is seen when the ports provided by the user in the "tenant update port-delete" operation include all the ports owned by a port-channel, resulting in an empty PO.
Workaround: For the "tenant update port-delete" operation, provide ports that do not result in an empty PO; the PO must retain at least one member port.
Recovery: Add the ports back using the "tenant port-add" operation so that the port-channel has at least one member port. Then use "efa configure tenant port-channel" to bring the PO back to a stable state.
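
The workaround amounts to checking, before the port-delete, that no port-channel would be left empty. A minimal Python sketch of such a pre-check (names and data shapes are illustrative, not an EFA API):

```python
# Illustrative pre-check, not an EFA API: given the ports a user wants
# to remove and a map of port-channel -> member ports, report which POs
# the delete would leave empty.
def pos_emptied_by_delete(ports_to_delete, port_channels):
    doomed = set(ports_to_delete)
    return [po for po, members in port_channels.items()
            if members and members <= doomed]

port_channels = {"po10": {"0/1", "0/2"}, "po20": {"0/3", "0/4"}}
print(pos_emptied_by_delete({"0/1", "0/2", "0/3"}, port_channels))  # ['po10']
print(pos_emptied_by_delete({"0/1"}, port_channels))                # []
```

If the returned list is non-empty, trim the port list so that each listed PO keeps at least one member.
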
Parent Defect ID: EFA-9645 Issue ID: EFA-9645
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When the fabric setting is updated with the password "password$\n", the MD5 password is not configured on the backup routing neighbors that were already created.
Condition:

1. Configure fabric

2. Create a tenant, PO, VRF, and EPG.

3. Update the fabric setting with "password$\n" and configure the fabric.

4. The MD5 password is not configured on the backup routing neighbors under the BGP address-family ipv4/ipv6 vrf.

Workaround: Update the fabric setting with any other password that does not include the "$\n" sequence.
Recovery: Update the fabric setting with any other password that does not include the "$\n" sequence.
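
A pre-check of candidate passwords can avoid hitting this defect. A minimal Python sketch, assuming the problematic sequence is the literal characters "$\n" as typed at the CLI:

```python
# Minimal sketch, assuming the problematic sequence is the literal
# characters $ \ n as typed at the CLI (adjust if your shell expands
# escapes into a real newline).
def md5_password_ok(password):
    return "$\\n" not in password

print(md5_password_ok("password$\\n"))  # False: contains the "$\n" sequence
print(md5_password_ok("Secur3-Pass"))   # True
```
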
Parent Defect ID: EFA-9813 Issue ID: EFA-9813
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.3
Symptom: When performing an RMA of a device, the port connections of the new device must be identical to those of the old device.
Condition: The new device's port connections were not identical to those of the device being replaced.
Workaround: Ensure that the port connections of the new device are identical to those of the device being replaced.
Parent Defect ID: EFA-9906 Issue ID: EFA-9906
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When concurrent EFA tenant EPG create or update operations are requested and the commands involve a large number of VLANs and/or ports, one of them can fail with the error "EPG: <epg-name> Save for Vlan Records save Failed".
Condition: The failure is reported when concurrent DB write operations are performed by the EFA Tenant service as part of the command execution.
Workaround: This is a transient error; there is no workaround.
Recovery: Rerun the failing command separately; it will succeed.
Parent Defect ID: EFA-9952 Issue ID: EFA-9952
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When concurrent EFA tenant EPG delete operations are requested and the commands involve a large number of VLANs and/or ports, one of them can fail with the error "EPG network-property delete failed".
Condition: The failure is reported when concurrent DB write operations are performed by the EFA Tenant service as part of the command execution.
Workaround: This is a transient error; there is no workaround.
Recovery: Rerun the failing command separately; it will succeed.
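
Since both this failure and EFA-9906 are transient, the recovery is simply a retry. A generic retry sketch in Python (run_cmd is a placeholder for whatever executes the EFA command; the RuntimeError type stands in for however that wrapper surfaces the failure):

```python
import time

# Generic retry sketch; run_cmd is a placeholder for whatever executes
# the EFA command (CLI wrapper, REST call, ...). Retries the transient
# DB-write failure a few times with a delay, then re-raises.
def retry_transient(run_cmd, attempts=3, delay_s=5.0):
    last_err = None
    for _ in range(attempts):
        try:
            return run_cmd()
        except RuntimeError as err:   # transient DB-write failure
            last_err = err
            time.sleep(delay_s)
    raise last_err
```
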
Parent Defect ID: EFA-9990 Issue ID: EFA-9990
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: An EPG update ctag-range-add operation with the existing ctag range (ctag1, ctag2) and a modified native VLAN (ctag2) succeeds but has no effect.
Condition:

Below are the steps to reproduce the issue:

1. Create an endpoint group with ctag1 and ctag2, with ctag1 as the native VLAN.

2. Update the endpoint group (created in step 1) using the ctag-range-add operation with the same set of ctags (ctag1, ctag2) and a different native VLAN, ctag2.

Workaround: To change the native VLAN of an EPG from ctag1 to ctag2, remove ctag1 from the EPG (using ctag-range-delete) and then add ctag2 to the EPG as the native VLAN (using ctag-range-add).
Parent Defect ID: EFA-10371 Issue ID: EFA-10371
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: An additional route target may be configured under the VRF "address-family ipv6 unicast" device configuration when an EPG update with the vrf-add operation is performed concurrently on EPGs that use the same VRF.
Workaround: No workaround
Recovery:

Below are the steps to recover from the issue:

1. Delete the VRF from all endpoint groups by performing the EPG update <vrf-delete> operation.

2. Add the VRF back to the endpoint groups by performing the EPG update <vrf-add> operation.

Parent Defect ID: EFA-10377 Issue ID: EFA-10377
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: Manual or auto DRC can time out on a scaled setup with a large number of backup-routing-enabled VRFs when attempted soon after a successful fabric configure, because EFA starts a mini DRC in the background (as soon as the fabric configure succeeds) to provision the updated MD5 password on the backup routing neighbors.
Condition:

1. Configure a non-CLOS fabric with backup routing enabled.

2. Configure a tenant, PO, a large number of VRFs (for example, 50), EPGs, a BGP peer-group, and BGP static peers.

3. Configure maintenance-mode enable-on-reboot on the SLX.

4. Update fabric setting to configure MD5 password.

5. Configure the fabric created in step 1 to provision the MD5 password on the backup routing neighbors for all the tenant VRFs.

6. As soon as the fabric configure in step 5 completes, reload the SLX to trigger a maintenance-mode DRC, or trigger a manual DRC.

7. The DRC times out because the provisioning of the MD5 password on the backup routing neighbors (started after step 5) was not allowed to complete.

Workaround:

1. Update the MD5 password setting in the fabric settings and configure the fabric.

2. Allow the MD5 password to be provisioned on all backup routing neighbors of all the tenant VRFs on the SLX.

3. Perform a manual/auto DRC once the MD5 password is provisioned.

Recovery: Manual or auto DRC can be reattempted once the fabric MD5 password is provisioned on all backup routing neighbors for all tenant VRFs.
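
The workaround can be automated by polling until provisioning completes before starting the DRC. A Python sketch; md5_provisioned is a placeholder predicate the operator would implement (for example, by inspecting the BGP neighbor configuration on the SLX):

```python
import time

# Polling sketch; md5_provisioned is a placeholder predicate supplied
# by the operator. Returns True once provisioning is observed, False on
# timeout; only then should the manual/auto DRC be started.
def wait_for_md5(md5_provisioned, timeout_s=1800, poll_s=30):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if md5_provisioned():
            return True
        time.sleep(poll_s)
    return False
```
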
Parent Defect ID: EFA-10397 Issue ID: EFA-10397
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: The native VLAN is added as a trunk VLAN to ports/port-channels after DRC is executed.
Condition:

1. Create EPG1 with PO1 and switchport mode trunk with native VLAN V1.

2. Create EPG2 with the same PO used in step 1 (PO1) and a new port-channel PO2, with switchport mode trunk and no native VLAN.

3. Execute a manual/auto DRC.

4. Native VLAN V1 is added as a trunk VLAN to PO2 on the SLX.

Recovery:

1. Introduce a manual drift on the SLX by executing "no switchport" on the port-channels that are not intended to have the native VLAN as a trunk VLAN.

2. Perform a manual/auto DRC.

3. After the DRC execution, only the required VLAN members are added to the port-channel; the native VLAN is removed from the unintended port-channels.

Parent Defect ID: EFA-10560 Issue ID: EFA-10560
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: The VRF goes to the "vrf-device-srbfd-delete-pending" state after a vrf update static-route-bfd-add operation with an invalid source IP or an invalid destination IP.
Condition:

1. Create VRF VRF1 and update VRF1 with the static-route-bfd-add operation.

2. Provide the following payload, containing an IP address with leading zeroes, during the static-route-bfd-add operation:

--ipv4-static-route-bfd 10.20.61.92,214.5.94.13,214.5.94.02,300,300,3

3. The static-route-bfd-add operation fails due to the invalid IP, and the rollback (delete) operation also fails due to the invalid IP, leaving the VRF in the "vrf-device-srbfd-delete-pending" state.

Workaround:

Do not provide IP addresses with leading zeroes (for example, 214.5.94.02) as user input.

Use 214.5.94.2 instead of 214.5.94.02.
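
The workaround can be enforced with a small input validator that rejects dotted-quad octets with leading zeroes before building the --ipv4-static-route-bfd payload. A minimal Python sketch:

```python
# Minimal validator: rejects dotted-quad IPv4 strings whose octets have
# leading zeroes (e.g. "214.5.94.02") or fall outside 0-255.
def valid_ipv4(addr):
    parts = addr.split(".")
    if len(parts) != 4:
        return False
    for p in parts:
        if not p.isdigit() or int(p) > 255:
            return False
        if len(p) > 1 and p[0] == "0":   # leading zero, e.g. "02"
            return False
    return True

print(valid_ipv4("214.5.94.02"))  # False
print(valid_ipv4("214.5.94.2"))   # True
```
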

Parent Defect ID: EFA-10718 Issue ID: EFA-10718
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: Non-existent connections between super-spines are shown in EFA.
Condition: If one endpoint of a connection between two devices is moved to a different device using the same port, the old connection remains as a stale entry in the EFA database. This condition occurs when the move happens after the last device update by EFA but before any RASlog events are handled, so the non-existent connection is still shown.
Workaround: No Workaround
Parent Defect ID: EFA-10982 Issue ID: EFA-10982
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.3
Symptom: The efa inventory drift-reconcile history reports a failure after reloading L01/L02.
Parent Defect ID: EFA-11036 Issue ID: EFA-11036
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: The error "Failed to fetch device information for device" appears in ts-server.log.
Parent Defect ID: EFA-11058 Issue ID: EFA-11058
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.3
Symptom: The EFA API documentation lists incorrect strings in REST responses; they may not match the actual response fields.
Condition: The API documentation has not been updated.
Recovery: Fetch the correct values from the actual EFA REST response.
Parent Defect ID: EFA-11248 Issue ID: EFA-11248
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom:

Observation 1: Long delays; a few nodes move to the cfg-refresh/cfg-refresh-error state:

After 30 minutes, an automatic device update moves the border-leaf states to "cfg-in-sync".

After another 30 minutes, an automatic device update moves the leaf states to "cfg-in-sync".

After another 30 minutes, an automatic device update moves the spine states to "cfg-in-sync".

Observation 2: With no change to the spine configuration, the spine is shown as cfg-refresh.

The spine node validates against its LLDP peer (leaf/border-leaf) nodes; if the MCT link fails, the spine node does not get a chance to move to the fourth stage (as part of the firmware download/LLDP case).

Observation 3: B2, a border-leaf node in the non-selection group, moves to cfg-refresh.

If the LLDP update is missed on the peer node Border Leaf 1 and the fabric receives LLDP on B2, the fabric operation fails.

The B2 node never receives an update event from inventory, so there is no chance to recompute the fabric app/state update.

Workaround:

Step 1: Run "efa fabric error show --name stage3".

Step 2: Execute a drift-only operation on the error node (border MCT leaf).

Step 3: Execute a drift-only operation on the leaf node.

Step 4: Execute a drift-only operation on the spine node.

[or]

If the state does not move from cfg-refresh, force a DRC on the node.

Parent Defect ID: EFA-11520 Issue ID: EFA-11520
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.2
Symptom: The dynamic BGP listen-limit is not configured on the border-leaf nodes.
Parent Defect ID: EFA-11739 Issue ID: EFA-11739
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.5
Symptom: EFA returns success when configuring 1 Gbps AN speed on a 100G switch port.
Parent Defect ID: EFA-11770 Issue ID: EFA-11770
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.5
Symptom: EFA returns an 'invalid value' error for the "Auto" set-speed command.
Parent Defect ID: EFA-11867 Issue ID: EFA-11867
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom: DRC times out during SLX upgrade.
Condition: The logic that qualifies the start of a DRC during the firmware download workflow was incorrectly reset on an inventory service restart. This caused a "Drift Reconcile Start Timeout Failure" for device 21.144.144.201 (b144_BL1).