Defects Closed with Code Changes

The following defects, which were previously disclosed as open, were resolved in Extreme Fabric Automation 2.7.0.

Parent Defect ID: EFA-8448 Issue ID: EFA-8448
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.0
Symptom:

When the ports provided by the user in the "tenant update port-delete operation" contain all the ports owned by the port-channel, the PO goes into the delete-pending state. However, the ports are not deleted from the PO.

They are, however, deleted from the tenant.

Condition: This issue is seen when the ports provided by the user in the "tenant update port-delete operation" contain all the ports owned by the port-channel, resulting in an empty PO.
Workaround: Provide ports for the "tenant update port-delete operation" that do not result in an empty PO, i.e., the PO must retain at least one member port.
Recovery: Add the ports back using the "tenant port-add operation" so that the port-channel has at least one member port. Then use "efa configure tenant port-channel" to bring the PO back to the stable state.
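For illustration, a hedged sketch of this recovery, assuming the EFA 2.x tenant CLI syntax for PO updates (the tenant, PO, device IP, and port values are placeholders, and the flags shown on the configure command are an assumption):

efa tenant po update --name po1 --tenant tenant1 --operation port-add --port 10.20.30.40[0/1]

efa configure tenant port-channel --name po1 --tenant tenant1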
Parent Defect ID: EFA-9065 Issue ID: EFA-9065
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.3
Symptom: An EFA port-channel remains in the cfg-refreshed state when the port-channel is created and is immediately followed by an EPG create using that port-channel.
Condition:

Below are the steps to reproduce the issue:

1. Create port-channel po1 under the ownership of tenant1

2. Create endpoint group with po1 under the ownership of tenant1

3. After step 2 begins and before step 2 completes, the RASLog event for step 1 (the port-channel creation) is received. This RASLog event is processed after step 2 is completed.

Recovery:

1. Introduce a switchport or switchport-mode drift on the SLX for the port-channel that is in the cfg-refreshed state

2. Perform a manual DRC to bring the cfg-refreshed port-channel back to cfg-in-sync
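A hedged example of step 2, assuming the drift-reconcile syntax of the EFA 2.x inventory CLI (the device IP is a placeholder):

efa inventory drift-reconcile execute --ip 10.20.30.40 --reconcile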

Parent Defect ID: EFA-9576 Issue ID: EFA-9576
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: Deletion of a tenant by force, followed by recreation of the tenant and POs, can result in the error "Po number <id> not available on the devices".
Condition:

Below are the steps to reproduce the issue:

1. Create tenant and PO.

2. Delete the tenant using the "force" option.

3. Recreate the tenant and the PO within a short time window.

Workaround: Avoid performing a tenant/PO create, followed by a tenant delete, followed by a tenant and PO recreate within a short time window.
Recovery: Execute an inventory device update prior to the PO creation.
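A hedged example of the recovery, reusing the inventory device update command shown elsewhere in these notes (the device IP is a placeholder):

efa inventory device update --ip 10.20.30.40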
Parent Defect ID: EFA-9758 Issue ID: EFA-9758
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: EFA does not reconcile the remote-asn of a BGP peer configuration after the user modifies the remote-asn of the BGP peer out of band.
Workaround: None
Recovery: Revert the remote ASN of the BGP peer on the device through the SLX CLI to what EFA had configured previously.
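A hedged sketch of the revert on the SLX CLI, assuming standard SLX-OS BGP syntax (the peer address and ASN are placeholders; use the values EFA originally configured):

configure terminal

router bgp

neighbor 10.10.10.2 remote-as 65001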
Parent Defect ID: EFA-9874 Issue ID: EFA-9874
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When an EPG is in the "anycast-ip-delete-pending" state and the user performs "epg configure", the operation succeeds without actually removing the anycast-ip from the SLX.
Condition:

Below are the steps to reproduce the issue:

1) Configure an EPG with VRF, VLAN, and anycast-ip (IPv4/IPv6) on a single-rack non-CLOS fabric.

2) Bring one of the devices to admin-down.

3) Perform an EPG update anycast-ip-delete for the IPv4 or IPv6 anycast-ip. This puts the EPG in the "anycast-ip-delete-pending" state.

4) Bring the admin-down device to admin-up.

5) In this state, the only allowed operations on EPG are "epg configure" and EPG update "anycast-ip-delete".

6) Perform "epg configure --name <epg-name> --tenant <tenant-name>".

Workaround: No workaround.
Recovery: Perform the same anycast-ip-delete operation when both devices are admin-up.
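A hedged example of the recovery operation, assuming the EFA 2.x EPG update syntax (the EPG and tenant names are placeholders, and the exact value format of the --anycast-ip flag is an assumption):

efa tenant epg update --name epg1 --tenant tenant1 --operation anycast-ip-delete --anycast-ip 10.10.10.1/24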
Parent Defect ID: EFA-9907 Issue ID: EFA-9907
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When concurrent EFA tenant EPG update port-add or port-delete operations are requested and the commands involve a large number of VLANs and/or ports, one of them can fail with the error "vni in use error".
Condition: The failure is reported when the Tenant service gets stale information about a network that existed earlier but no longer does. This happens only when the port-add and port-delete are done in quick succession.
Workaround: Avoid executing port-add and port-delete of the same ports concurrently or in quick succession.
Recovery: None
Parent Defect ID: EFA-10048 Issue ID: EFA-10048
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom:

EPG: epgev10 Save for devices failed

When concurrent EFA tenant EPG create or update operations are requested and the commands involve a large number of VLANs and/or ports, one of them can fail with the error "EPG: <epg-name> Save for devices Failed".

Condition: The failure is reported when concurrent DB write operations are performed by the EFA Tenant service as part of the command execution.
Workaround: This is a transient error and there is no workaround.
Recovery: The failing command can be rerun separately and it will succeed.
Parent Defect ID: EFA-10252 Issue ID: EFA-10252
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: When concurrent EFA tenant EPG update port-group-add operations are requested and the tenant is bridge-domain enabled, one of them may fail with the error "EPG network-property delete failed".
Condition: The failure is reported when concurrent resource allocations are performed by the EFA Tenant service as part of the command execution.
Workaround: This is a transient error and there is no workaround.
Recovery: The failing command can be rerun separately and it will succeed.
Parent Defect ID: EFA-10268 Issue ID: EFA-10268
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: When concurrent EPG deletes on a bd-enabled tenant are requested and the EPGs involve a large number of VLANs, local-ip, and anycast-ip addresses, one of them may fail with the error "EPG: <epg-name> Save for Vlan Records save Failed".
Condition: The failure is reported when concurrent DB write operations are performed by the EFA Tenant service as part of the command execution.
Workaround: This is a transient error and there is no workaround.
Recovery: The failing command can be rerun separately and it will succeed.
Parent Defect ID: EFA-10288 Issue ID: EFA-10288
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: When a BGP peer is created and update operations are performed while one of the devices is in the admin-down state, then after "efa tenant service bgp peer configure --name <name> --tenant <tenant>" is performed, the configuration for the admin-up device is deleted from the SLX switch but remains in EFA.
Condition:

The BGP peer gets deleted from the SLX but not from EFA. This issue is seen when the following sequence is performed:

1. Create a static BGP peer

2. Admin-down one of the devices

3. Update the existing static BGP peer by adding a new peer

4. Update the existing static BGP peer by deleting the peers that were created in step 1. Delete them from both devices.

5. Admin-up the device

6. efa tenant service bgp peer configure --name "bgp-name" --tenant "tenant-name"

Once the BGP peer is configured, the configuration is deleted from the switch for the device that is in the admin-up state, whereas EFA still has this information and displays it in the BGP peer show output.

Workaround: Delete the peer for the admin-up device first, and then delete the peer from the admin-down device with a separate CLI command.
Recovery: Perform a drift and reconcile operation for the admin-up device so that the configuration gets reconciled on the switch.
Parent Defect ID: EFA-10445 Issue ID: EFA-10445
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: The Tenant service may occasionally reject a subsequent local-ip-add command incorrectly.
Condition: When continuous EPG updates with repeated local-ip-add and local-ip-delete operations are performed on the same EPG without much gap in between, the Tenant service may occasionally retain stale information about the previously created IP configuration and reject a subsequent local-ip-add command incorrectly.
Workaround: There is no workaround to avoid this. Once the issue is hit, the user may use a new local IP address from another subnet.
Recovery:

Follow the steps below to remove the stale IP address from Tenant's knowledge base:

1. Find the management IP for the impacted devices. This is displayed in the EFA error message.

2. Find the interface VE number. This is the same as the CTAG number that the user was trying to associate the local-ip with.

3. Telnet/SSH to the device management IP and log in with admin privilege.

4. Set the local IP address on the device:

configure t

interface ve <number>

ip address <local-ip>

5. Perform an EFA device update by executing 'efa inventory device update --ip <IP>' and wait for a minute for the information to be synchronized with the Tenant service database.

6. Reset the local IP address on the device:

configure t

interface ve <number>

no ip address

7. Perform an EFA device update again and wait for a minute for the information to be synchronized with the Tenant service database.

These steps will remove the stale entries and allow future local-ip-add operations to be successful.

Parent Defect ID: EFA-10455 Issue ID: EFA-10455
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: "efa status" takes several minutes longer than expected to report a healthy EFA status.
Condition: This problem happens when Kubernetes is slow to update the standby node's Ready status. This is a potential issue in the shipped version of Kubernetes.
Recovery: EFA will recover after a period of several minutes.
Parent Defect ID: EFA-10548 Issue ID: EFA-10548
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.2
Symptom: When EPG delete operations are performed concurrently for EPGs on a bridge-domain based tenant where the EPGs were created with a large number of bridge-domains, one of the commands may fail with the error "EPG: <epg name> Update for pw-profile Record save Failed".
Condition: The failure is reported when concurrent DB write operations are performed by the EFA Tenant service as part of the command execution, causing the underlying database to report an error for one of the operations.
Workaround: This is a transient error that can rarely happen, and there is no workaround.
Recovery: The failing command can be rerun separately and it will succeed.
Parent Defect ID: EFA-10606 Issue ID: EFA-10606
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.2
Symptom: "efa status" takes several minutes longer than expected to report a healthy EFA status.
Condition: This problem happens when Kubernetes is slow to update the standby node's Ready status. This is a potential issue in the shipped version of Kubernetes.
Recovery: EFA will recover after a period of several minutes.
Parent Defect ID: EFA-10754 Issue ID: EFA-10754
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.2
Symptom: EFA backup create fails (timeout).
Condition:

The device is stuck with the service lock taken, as shown in the example inventory log message below. This happens when an EFA backup is performed near the expiration time of the authentication token.

{"@time":"2021-10-13T16:19:53.132404 CEST","App":"inventory","level":"info","msg":"executeCBCR: device '21.150.150.201' is already Locked with reason : configbackup ","rqId":"4f144a0c-7be6-4056-8371-f1dc39eb28b3"}

Recovery: Running "efa inventory debug devices-unlock --ip 21.150.150.201" resolves the issue, and the backup can be performed after efa login.
Parent Defect ID: EFA-10759 Issue ID: EFA-10759
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.2
Symptom: A fabric-wide firmware download fails on timeout if the number of devices in the prepare group is greater than 5.
Workaround: Keep the number of devices in the fabric-wide firmware download prepare group less than or equal to 5.
Parent Defect ID: EFA-11002 Issue ID: EFA-11002
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.2
Symptom: SNMP hosts with the characters ?#%&*+( are not supported.
Condition: The SNMP host name contains one or more of the characters listed above.
Workaround: Create SNMP host names without these characters.
Recovery: Create SNMP host names without these characters.
Parent Defect ID: EFA-11063 Issue ID: EFA-11063
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom: The standby status of the EFA node shows as down when the node is actually ready for failover.
Condition: The issue happens when the rabbitmq pod is in CrashLoopBackOff instead of init mode. This is not a functional issue; it is only a status issue.
Workaround: Reboot the standby node, which does not cause any downtime. Alternatively, restart k3s using the "systemctl restart k3s" command.
Recovery: Rebooting the node or restarting k3s fixes the issue.
Parent Defect ID: EFA-11177 Issue ID: EFA-11177
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom: When a tenant with EPGs having 4000+ VLANs across 10+ devices is deleted with the 'force' option, the delete operation may fail.
Condition: This failure happens because the Tenant service executes a large database query which may fail to execute on EFA's database backend.
Workaround: Delete the EPGs belonging to the tenant first and then delete the tenant, as sketched below. This ensures that the database queries are split across multiple requests.
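A hedged sketch of the workaround, assuming the EFA 2.x tenant CLI delete commands (the EPG and tenant names are placeholders; repeat the EPG delete for each EPG in the tenant):

efa tenant epg delete --name epg1 --tenant tenant1

efa tenant delete --name tenant1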
Recovery: No recovery is required. This failure does not lead to inconsistency in EFA's database or the SLX devices' configurations.
Parent Defect ID: EFA-11768 Issue ID: EFA-11768
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.0
Symptom: This issue is seen when the user tries to delete devices from the fabric. The BGP peer-groups associated with the devices are not removed from the switch.
Condition:

Initiating a device cleanup using the following command does not clean up the associated BGP peer-groups from the device.

efa fabric device remove --ip 10.20.48.161-162,10.20.48.128-129,10.20.54.83,10.20.61.92-93,10.20.48.135-136 --name fabric2 --no-device-cleanup

Workaround: Delete the BGP peer-group before issuing a device cleanup for the fabric.
Recovery: Manually delete the peer-groups from the switch.
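A hedged sketch of the manual cleanup on the SLX CLI, assuming standard SLX-OS BGP peer-group syntax (the peer-group name is a placeholder):

configure terminal

router bgp

no neighbor pg-leaf peer-group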
Parent Defect ID: EFA-11813 Issue ID: EFA-11813
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.0
Symptom:

This issue can be seen for a BGP peer or peer-group when update-peer-delete or delete operations are performed with one device of the MCT pair in the admin-down state.

The bgp peer gets deleted from the SLX but not from EFA.

Condition:

Steps to reproduce:

1. Create a static BGP peer

2. Admin-down one of the devices

3. Update the existing static BGP peer by deleting the peers that were created in step 1. Delete them from both devices.

4. Admin-up the device

Once the device is brought up, auto DRC kicks in, and the configuration that was deleted from the switch due to the admin-down state has an incorrect provisioning-state and app-state.

Workaround: Bring the admin-down device up and then delete the required BGP peers.
Recovery: No recovery
Parent Defect ID: EFA-11980 Issue ID: EFA-11980
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom: An EFA TPVM upgrade workflow may fail for a given device, along with the automatic recovery that restores the TPVM to the original version and rejoins the EFA node into the HA cluster.
Condition:

During the "EFA Deploy Peer and Rejoin" step, the EFA image import into the k3s container runtime fails.

During the "TPVM Revert" step, the k3s on the active EFA node would not allow the standby EFA node to join the cluster due to a stale node-password in k3s.

Workaround: None
Recovery:

Manually recover the TPVM and EFA deployment by following the procedure described in the article linked below:

EFA 2.5.2 Re-deploy post a TPVM Rollback failed on first attempt.

https://extremeportal.force.com/ExtrArticleDetail?an=000099582

Parent Defect ID: EFA-11983 Issue ID: EFA-11983
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: Error : Ports Failed to allocate ClientIDs: [13] as its already consumed by other clients
Parent Defect ID: EFA-11992 Issue ID: EFA-11992
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.0
Symptom: When a device is deleted from inventory, route-maps that have active BGP peer bindings are not removed from the device.
Condition: The issue is seen when the user removes the device from inventory and the device has route-map configurations with active bindings.
Workaround: The user must remove the route-maps from the device manually prior to device deletion.
Recovery: After the device is removed from inventory, the user can remove the route-map configuration on that device manually.
Parent Defect ID: EFA-12033 Issue ID: EFA-12033
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.0
Symptom: Using the EFA CLI, the user is able to delete non-EFA-managed/OOB (out-of-band) route-map entries and add rules to a non-EFA-managed/OOB prefix-list.
Condition: The user configures an OOB route-map or prefix-list entry directly on the device using the SLX CLI or other management means, and then tries to delete this route-map entry or add rules under this prefix-list entry using EFA. This should not be allowed from EFA, as these are not EFA-managed entities.
Workaround: No workaround
Recovery: If the user deletes the OOB entry or adds rules under the OOB prefix-list by mistake, it can be restored or removed manually on the device through the SLX CLI or other management means.
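A hedged sketch of restoring an OOB entry on the SLX CLI, assuming standard SLX-OS syntax (the names, sequence numbers, and prefix are placeholders):

configure terminal

ip prefix-list oob-list seq 5 permit 10.0.0.0/8

route-map oob-map permit 10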
Parent Defect ID: EFA-12114 Issue ID: EFA-12114
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom: In rare circumstances, Kubernetes' EndpointSliceController can fall out of sync, leading to incorrect iptables rules being instantiated. This can cause EFA APIs to fail because they are redirected to non-existent services.
Recovery:

EFA's monitor process will detect and attempt to remediate this situation automatically. If it fails to do so, the following can help:

On both TPVMs, as the super-user:

$ systemctl restart k3s

If the problem recurs, these further steps, run as super-user, may help:

$ sed -i -E 's/EndpointSlice=true/EndpointSlice=false/' /lib/systemd/system/k3s.service

$ systemctl daemon-reload

$ systemctl restart k3s

Parent Defect ID: EFA-12117 Issue ID: EFA-12117
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: A new spine link cannot be added due to duplicate entries in the device links.
Condition: A duplicate device link is created for the same interface. This issue is seen if the user has changed the link connection to a new device.
Recovery: Remove the duplicate entries either manually or through the script.
Parent Defect ID: EFA-12141 Issue ID: EFA-12141
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.0
Symptom: After EFA backup and restore, drifted route-maps can be shown in the cfg-in-sync state.
Condition: The issue can be seen after EFA backup and restore if prefix-lists and route-maps were removed by EFA after the backup was taken.
Workaround: There is no workaround. It is a display issue.
Recovery: If a drift is present on device, running the 'efa inventory drift-reconcile' command will reconcile the entities on the device.
Parent Defect ID: EFA-12147 Issue ID: EFA-12147
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.2
Symptom: EFA upgrade from CNIS 1.2 to 1.3 fails.
Parent Defect ID: EFA-12182 Issue ID: EFA-12182
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom:

The issue can be replicated by adding an extra link to the existing ICL link. The error can then be seen in "efa fabric show".

The issue is not seen on every attempt.

Condition: Dynamic adding of links to an existing ICL. The speed of the interface is not updated in the LLDP database, causing the devices to go into the error state.
Workaround: Remove and re-add the device, and configure the fabric after adding the new links.
Recovery: Remove and re-add the device and configure the fabric, OR manually update the LLDP DB with the correct speed and update the devices.
Parent Defect ID: EFA-12305 Issue ID: EFA-12305
Severity: S3 – Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.1
Symptom: EFA does not close unsuccessful SSH attempts when the password expires on the SLX.
Recovery: No Recovery
Parent Defect ID: EFA-12344 Issue ID: EFA-12344
Severity: S3 – Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom: After a firmware download (with maintenance mode enabled on reboot), the device takes a long time to finish DRC and thus to come out of maintenance mode.
Condition: This is applicable with large SLX configurations.
Parent Defect ID: EFA-12441 Issue ID: EFA-12441
Severity: S2 – Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.5
Symptom: The RabbitMQ port was exposed on the EFA management interface and all sub-interfaces.
Workaround: For sub-interfaces created manually after EFA installation, the EFA iptables policy must be restarted in order to apply the filtering rules to the new interfaces. The command for this (as root) is: 'systemctl restart efa-iptables.service'.
Recovery: Same as the workaround.
Parent Defect ID: EFA-12454 Issue ID: EFA-12454
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom: If the password on an SLX device is changed manually through the SLX CLI and the password is also modified in EFA using the command "efa inventory device update --ip <IP> --username <user> --password <password>", then subsequent "efa tenant ..." commands that correspond to the device (for which the password was changed) fail with the error "Error : Could not connect to Devices: <device-ip>".
Condition:

Below are the steps to reproduce the issue:

1. The SLX device password is changed manually through the SLX CLI

2. The SLX device password is also modified in EFA using the command "efa inventory device update --ip <IP> --username <user> --password <password>"

3. "efa tenant ..." commands that correspond to the device (for which the password was changed) are executed

Workaround:

1. Change the device password through EFA using the command "efa inventory device update --ip <IP> --username <user> --password <password>"

2. Change the EFA inventory key-value store information for the corresponding device using "efa inventory kvstore create --key switch.<IP addr>.password --value <new-password> --encrypt"

3. Wait for up to 15 minutes for this information to be consumed by the Tenant service

Recovery:

Two recovery options are available:

1. Change the EFA inventory key-value store information for the corresponding device using "efa inventory kvstore create --key switch.<IP addr>.password --value <new-password> --encrypt". Then wait for up to 15 minutes for this information to be available to the Tenant service.

2. If the password information needs to be made available to the Tenant service quickly, or if the first step does not help, restart the Tenant service with sudo or root privilege using 'efactl restart-service gotenant-service'.

Parent Defect ID: EFA-12480 Issue ID: EFA-12480
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.1
Symptom: Scale config: a VRF does not allow more than 4095 SR in a single creation.
Workaround: No workaround
Recovery: No recovery
Parent Defect ID: EFA-12516 Issue ID: EFA-12516
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.0
Symptom: After changing the IP address and running the "efa-change-ip" script, EFA pods are in a boot loop.
Parent Defect ID: EFA-12554 Issue ID: EFA-12554
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.0
Symptom: The REST GET response for LLDP neighbors, when there are none, is null instead of the expected empty response.
Parent Defect ID: EFA-12555 Issue ID: EFA-12555
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.1
Symptom: An EPG update request with the port-group-add operation, or an EPG create request where multiple ctags are mapped to one bridge-domain, may fail with the error "Error 1452: Cannot add or update a child row".
Condition:

The error is observed when one of the following use cases is executed on a bridge-domain enabled tenant.

Use case-1:

1. Create an EPG with multiple ctags mapped to one bridge-domain and with ports across SLX devices that are not part of an MCT pair

Use case-2:

1. Create an EPG with multiple ctags mapped to one bridge-domain and with ports or port-channels on one SLX device

2. Update the EPG with port(s) on a new SLX device that is not an MCT pair of the first device

Recovery: Recreate the L3 EPG with the port-groups; further anycast-ip updates will then work as expected.
Parent Defect ID: EFA-12556 Issue ID: EFA-12556
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.1
Symptom: If all port-groups are deleted from an L3 EPG, the anycast-address details are removed from the EFA database, so subsequent port-group operations fail with a validation error.
Condition:

1. Create an L3 EPG with device ports [0/10,11]

2. Perform an EPG update port-group-delete operation with both ports from the device.

3. Perform an EPG update port-group-add with device port [0/10]

Recovery: Delete and recreate the EPG with the anycast details, and then perform the port-group-add operations.
Parent Defect ID: EFA-12557 Issue ID: EFA-12557
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.1
Symptom: An L3 EPG update with the anycast-ip-delete operation covering all anycast-ips (configured as part of the EPG) is allowed, leading to a Ve without any v4/v6 anycast-ips.
Condition:

1. Create L3 EPGs with both IPv4 and IPv6 anycast-ips

2. Perform an EPG update with anycast-ip-delete, passing all the anycast-ips configured as part of step 1

3. After the EPG update, all anycast-ips are removed from both the DB and the device

Workaround: Pass the anycast-ips one by one to the EPG update CLI. Removal of the last anycast-ip will not be allowed, and a validation error will be thrown.
Parent Defect ID: EFA-12558 Issue ID: EFA-12558
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.1
Symptom: An L3 NP EPG (without ports) update with the anycast-ip-delete operation does not remove the anycast-ip from the EFA DB.
Condition:

1. Create an L3 EPG without ports

2. Update the EPG with the anycast-ip-delete operation

3. The anycast-ip is not removed from the DB

Workaround: Delete and re-create the EPG to remove the anycast-address.
Parent Defect ID: EFA-12692 Issue ID: EFA-12692
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.7.0
Symptom: After the last EPG delete, IP and MAC access-lists are not removed from the device.
Condition:

1. Create EPG1 with a port-property ACL on device1

2. Create EPG2 with a network-property ACL on device1

3. Verify on the device that the ACL configurations are pushed

4. Delete both EPG1 and EPG2

5. Run "show mac access-list" / "show ip access-list"

The MAC and IP access-lists are not removed from the device.

Recovery: Manually delete the MAC and IP access-lists from the device.
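A hedged sketch of the manual cleanup on the SLX CLI, assuming standard SLX-OS ACL syntax (the ACL names are placeholders):

configure terminal

no mac access-list extended efa-mac-acl-1

no ip access-list extended efa-ip-acl-1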
Parent Defect ID: EFA-12722 Issue ID: EFA-12722
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.5
Symptom: FlexiLab: MCT PO64 goes down on device seliinsw00288 when DRC is performed on cluster device seliinsw00279.
Parent Defect ID: EFA-12796 Issue ID: EFA-12796
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.7.0
Symptom: During reconcile, drifted ACL rules are not identified and reconciled on the device.
Condition:

1. Create an EPG with a PP ACL on the device port/PO.

2. Manually remove the rules under the ACL from the SLX device.

3. Trigger the DRC flow. The drift in the ACL rules is not identified; hence, during reconciliation, the rules are not pushed to the device.

Recovery: Manually configure the drifted rules under the ACL.
Parent Defect ID: EFA-12858 Issue ID: EFA-12858
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.7.0
Symptom: An OOB-created monitor session is deleted from the SLX.
Condition:

Below are the steps to reproduce the issue:

1) Manually create a monitor session on the SLX, i.e., create an OOB (out-of-band) monitor session on the SLX

2) Create a tenant and a mirror session from EFA

Workaround: No workaround
Recovery: Manually recreate the deleted OOB monitor session on the SLX.
Parent Defect ID: EFA-12933 Issue ID: EFA-12933
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.7.0
Symptom: Monitor session(s) and port-channels are not deleted from the SLX.
Condition:

Below are the steps to reproduce the issue:

1) Create a fabric

2) Create a tenant and port-channels

3) Create a mirror session using a port-channel as the mirror source

4) Delete the fabric with the force option, or remove the devices from inventory

Workaround: No workaround
Recovery:

1) Manually delete the EFA-created mirror session from the SLX device

2) Manually delete the EFA-created port-channels from the SLX device

Parent Defect ID: EFA-12955 Issue ID: EFA-12955
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.5
Symptom: An error message is seen while trying to add ports to an EPG.
Parent Defect ID: EFA-12967 Issue ID: EFA-12967
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.7.0
Symptom: LLDP remains disabled on the mirror destination port of the SLX
Condition:

Below are the steps to reproduce the issue:

1) Create a tenant and create a mirror session. The mirror session create disables LLDP on the mirror destination port

2) Delete the mirror session created in step 1

Workaround: No workaround
Recovery: "no lldp disable" needs to be performed manually on the SLX port
Parent Defect ID: EFA-12968 Issue ID: EFA-12968
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.7.0
Symptom: In rare cases, the Tenant service restarts when REST-based configuration commands are sent in a tight loop, causing REST clients to get HTTP error 502.
Condition: This can happen on rare occasions when REST-based configuration commands are sent in a tight loop over a period of time.
Workaround: There is no workaround available.
Recovery: Rerun the failed command once the Tenant service is back up, which happens within a couple of minutes.
Parent Defect ID: EFA-12987 Issue ID: EFA-12987
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.4
Symptom: EFA drift-and-reconcile keeps failing on one of the switches in the fabric.
Parent Defect ID: EFA-13028 Issue ID: EFA-13028
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.7.0
Symptom: DRC fails while reconciling ACL configurations if the ACL has a different rule with the same sequence number.
Condition:

1. Create an EPG with a PP/NP ACL on the device ports/networks.

2. On the SLX device, manually delete a rule under the ACL and create another rule with the same sequence number.

3. Trigger DRC.

Recovery: Manually configure the expected rule under the ACL.
Parent Defect ID: EFA-13069 Issue ID: EFA-13069
Severity: S2 - Major
Product: Extreme Fabric Automation Reported in Release: EFA 2.7.0
Symptom: Mirror session create fails with the error "More than one mirror destination ports available on the device <device-ip> for the tenant <tenant-name>".
Condition:

Below are the steps to reproduce the issue:

1) Create a tenant with more than one mirror-destination-port per device

2) Create a mirror session with a global VLAN as the mirror source and an explicit mirror destination (not an auto-derived mirror destination)

Workaround:

1) Update the tenant to delete mirror-destination ports from the devices using "efa tenant update --operation mirror-destination-port-delete --mirror-destination-port <ports>" so that only one mirror-destination port per device remains in the tenant

2) Create the mirror session again with the global VLAN as the mirror source and the explicit mirror destination (not an auto-derived mirror destination)

Recovery: No recovery
Parent Defect ID: EFA-13189 Issue ID: EFA-13189
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.2
Symptom: The HTTP server configuration is shut down on some switches in the fabric after an upgrade of EFA/TPVM.
Parent Defect ID: EFA-13254 Issue ID: EFA-13254
Severity: S3 - Moderate
Product: Extreme Fabric Automation Reported in Release: EFA 2.6.1
Symptom: Three EFA pods fail liveness/readiness checks, causing init containers to stop and resulting in CrashLoopBackOff.