Open Defects

The following defects are open in Extreme Fabric Automation 2.6.1.

Parent Defect ID: EFA-9065
Issue ID: EFA-9065
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.4.3
Symptom: An EFA port channel remains in the cfg-refreshed state when the port-channel create is immediately followed by an EPG create that uses that port-channel.
Condition:

Below are the steps to reproduce the issue:

1. Create port-channel po1 under the ownership of tenant1

2. Create endpoint group with po1 under the ownership of tenant1

3. After step 2 begins and before step 2 completes, the RASLog event for step 1 (the port-channel creation) is received. This RASLog event is processed only after step 2 completes.

Recovery:

1. Introduce a switchport or switchport-mode drift on the SLX device for the port-channel that is in the cfg-refreshed state, as shown in the sketch below

2. Perform a manual DRC to bring the cfg-refreshed port-channel back to cfg-in-sync
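A minimal sketch of this recovery, assuming the affected port-channel is po1 (ID 1) on device 10.20.30.40 (names and addresses are illustrative, and removing "switchport" is just one way to introduce the drift on SLX-OS):

On the SLX device:

configure terminal

interface Port-channel 1

no switchport

From EFA, trigger the manual DRC:

efa inventory drift-reconcile execute --ip 10.20.30.40 --reconcile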

Parent Defect ID: EFA-9439
Issue ID: EFA-9439
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: The dev-state and app-state of EPG networks remain not-provisioned and cfg-ready, respectively.
Condition:

Below are the steps to reproduce the issue:

1) Create VRF with local-asn

2) Create EPG using the VRF created in step 1

3) Set one of the SLX devices to the administratively down state

4) Perform a VRF update "local-asn-add" with a different local-asn than the one configured in step 1

5) Perform a VRF update "local-asn-add" with the same local-asn configured in step 1

6) Admin up the SLX device that was made administratively down in step 3, and wait for DRC to complete

Workaround: None
Recovery:

Following are the steps to recover:

1) Log in to the SLX device that was made admin down and then up

2) Introduce a local-asn configuration drift under "router bgp address-family ipv4 unicast" for the VRF

3) Execute DRC for the device (see the sketch below)
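A minimal sketch of steps 2 and 3, assuming VRF vrf1, a local-asn of 65010, and device IP 10.20.30.40 (all values illustrative):

On the SLX device:

configure terminal

router bgp

address-family ipv4 unicast vrf vrf1

no local-as 65010

From EFA, run DRC for the device:

efa inventory drift-reconcile execute --ip 10.20.30.40 --reconcile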

Parent Defect ID: EFA-9570
Issue ID: EFA-9570
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: Adding a device fails because the ASN used on a border leaf is reported as conflicting.
Condition: If there is more than one pair of leaf/border-leaf devices, the devices that are added first get the first available ASNs in ascending order. If a device added later tries to allocate one of those same ASNs because it already carries that configuration (a brownfield scenario), EFA throws a conflicting-ASN error.
Workaround:

Add the devices to the fabric in the following sequence:

1) First add the devices that have preconfigured configs

2) Then add the remaining devices that do not have any configs stored

Recovery:

Remove the devices and add them back to the fabric in the following sequence:

1) First add the devices that have preconfigured configs

2) Then add the remaining unconfigured devices

Parent Defect ID: EFA-9576
Issue ID: EFA-9576
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: Deleting a tenant with the force option, followed by recreating the tenant and its POs, can result in the error "Po number <id> not available on the devices".
Condition:

Below are the steps to reproduce the issue:

1. Create tenant and PO.

2. Delete the tenant using the "force" option.

3. Recreate the tenant and the PO within a short time window.

Workaround: Avoid performing a tenant/PO create, followed by a tenant delete, followed by a tenant and PO recreate, within a short time window.
Recovery: Execute "efa inventory device update --ip <device-ip>" prior to the PO creation.
Parent Defect ID: EFA-9591
Issue ID: EFA-9591
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: "efa fabric configure" fails with an error after the fabric password was previously changed in the configured fabric.
Condition: This condition was seen when "efa fabric configure --name <fabric name>" was issued after modifying the MD5 password. The issue is observed when certain BGP sessions are not in the ESTABLISHED state after the BGP sessions are cleared as part of fabric configure.
Workaround: Wait for the BGP sessions to be ready by checking their status with "efa fabric topology show underlay --name <fabric name>", then retry.
Recovery: Wait for the BGP sessions to be ready. Check the status of the BGP sessions using "efa fabric topology show underlay --name <fabric name>".
Parent Defect ID: EFA-9758
Issue ID: EFA-9758
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: EFA does not reconcile the remote-asn of a BGP peer configuration after the user modifies the remote-asn of the BGP peer out of band.
Workaround: None
Recovery: Revert the remote ASN of the BGP peer on the device, through the SLX CLI, to what EFA previously configured, as sketched below.
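A minimal sketch of the revert on the SLX CLI, assuming peer 10.20.30.40 and an EFA-configured remote ASN of 65001 (both values illustrative; a peer under a VRF would be reverted under the corresponding address-family instead):

configure terminal

router bgp

neighbor 10.20.30.40 remote-as 65001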
Parent Defect ID: EFA-9799
Issue ID: EFA-9799
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: The 'efa status' response shows the standby node status as 'UP' while the node is still booting up.
Condition: If the SLX device hosting the EFA standby node is reloaded, the 'efa status' command still shows the standby status as UP.
Workaround: Retry the same command after some time.
Parent Defect ID: EFA-9907
Issue ID: EFA-9907
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: When concurrent EFA tenant EPG update port-add and port-delete operations are requested and the commands involve a large number of VLANs and/or ports, one of them can fail with the error "vni in use error".
Condition: The failure is reported when the Tenant service has stale information about a network that existed earlier but no longer exists. This happens only when the port-add and port-delete are done in quick succession.
Workaround: Avoid executing port-add and port-delete of the same ports in quick succession and concurrently.
Recovery: None
Parent Defect ID: EFA-10062
Issue ID: EFA-10062
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: Removing a device from the inventory does not clean up the breakout configuration on interfaces that are part of port-channels.
Condition: This occurs when breakout configuration is present on the device being deleted from the inventory and that configuration is on interfaces that are members of port-channels.
Workaround: Manually remove the breakout configuration, if required.
Recovery: Manually remove the breakout configuration, if required.
Parent Defect ID: EFA-10063
Issue ID: EFA-10063
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: Deleting a device from the EFA inventory does not bring the interface back to the admin state 'up' after the breakout configuration is removed.
Condition: This occurs when a breakout configuration is present on the device being deleted from the EFA inventory.
Workaround: Manually bring the admin-state up on the interface, if required
Recovery: Manually bring the admin-state up on the interface, if required
Parent Defect ID: EFA-10288
Issue ID: EFA-10288
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.1
Symptom: When a BGP peer is created and update operations are performed while one of the devices is in the admin down state, the configuration for the admin-up device is deleted from the SLX switch but remains in EFA after "efa tenant service bgp peer configure --name <name> --tenant <tenant>" is performed.
Condition:

The BGP peer is deleted from the SLX device but not from EFA. This issue is seen when the following sequence is performed.

1. Create a static BGP peer

2. Admin down one of the devices

3. Update the existing static BGP peer by adding a new peer

4. Update the existing static BGP peer by deleting the peers created in step 1; delete them from both devices

5. Admin up the device

6. efa tenant service bgp peer configure --name "bgp-name" --tenant "tenant-name"

Once the BGP peer is configured, the configuration is deleted from the switch for the device that is in the admin up state, whereas EFA still retains this information and displays it in the BGP peer show output.

Workaround: Delete the peer for the admin-up device first, and then delete the peer from the admin-down device as a separate CLI command.
Recovery: Perform a drift and reconcile (DRC) operation for the admin-up device so that the configuration is reconciled on the switch.
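For example, assuming the admin-up device is 10.20.30.40 (address illustrative):

efa inventory drift-reconcile execute --ip 10.20.30.40 --reconcile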
Parent Defect ID: EFA-10445
Issue ID: EFA-10445
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.0
Symptom: The Tenant service may occasionally reject a subsequent local-ip-add command incorrectly.
Condition: When continuous EPG updates with repeated local-ip-add and local-ip-delete operations are performed on the same EPG without much of a gap in between, the Tenant service may occasionally retain stale information about a previously created IP configuration and reject a subsequent local-ip-add command incorrectly.
Workaround: There is no workaround to avoid this. Once the issue is hit, the user may use a new local IP address from another subnet.
Recovery:

Follow the steps below to remove the stale IP address from Tenant's knowledge base:

1. Find the management IP for the impacted devices. This is displayed in the EFA error message.

2. Find the interface VE number. This is the same as the CTAG number that the user was trying to associate the local-ip with.

3. Telnet/SSH to the device management IP and log in with admin privileges.

4. Set the local IP address in the device

configure t

interface ve <number>

ip address <local-ip>

5. Perform an EFA device update by executing 'efa inventory device update --ip <IP>' and wait for a minute for the information to be synchronized with the Tenant service database.

6. Reset the local IP address in the device

configure t

interface ve <number>

no ip address

7. Perform the EFA device update again and wait for a minute for the information to be synchronized with the Tenant service database.

These steps will remove the stale entries and allow future local-ip-add operations to succeed.

Parent Defect ID: EFA-10525
Issue ID: EFA-10525
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: When the EFA OVA is deployed and does not obtain a DHCP IP address, not all EFA services start.
Workaround:

Configure a static IP address, or obtain an IP address from DHCP. Then redeploy EFA:

cd /opt/godcapp/efa

source deployment.sh

When the EFA installer appears, select Upgrade/Re-deploy, then select OK.

Select single node, then select OK.

Select the default of No for additional management networks.

Select Yes when prompted to redeploy EFA.

Once EFA has redeployed, all services should start.

Parent Defect ID: EFA-10754
Issue ID: EFA-10754
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.2
Symptom: EFA backup creation fails with a timeout.
Condition:

The device is stuck with the service lock taken, as shown in the example inventory log message below. This can happen when an EFA backup is performed near the expiration time of the authentication token.

{"@time":"2021-10-13T16:19:53.132404 CEST","App":"inventory","level":"info","msg":"executeCBCR: device '21.150.150.201' is already Locked with reason : configbackup ","rqId":"4f144a0c-7be6-4056-8371-f1dc39eb28b3"}

Recovery: Running "efa inventory debug devices-unlock --ip 21.150.150.201" resolves the issue, and the backup can be performed after efa login.
Parent Defect ID: EFA-11063
Issue ID: EFA-11063
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: The standby status of the EFA node shows as down when the node is actually ready for failover.
Condition: The issue occurs because one of the pods, rabbitmq, is in CrashLoopBackOff instead of init mode. This is not a functional issue, since it is only a status issue.
Workaround: Reboot the standby node, which does not cause any downtime. Alternatively, restart k3s using the "systemctl restart k3s" command.
Recovery: Rebooting the node will fix the pods, or restarting k3s will fix the issue.
Parent Defect ID: EFA-11105
Issue ID: EFA-11105
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: The EFA tenant VRF and EPG show "App State: cfg-refresh-err" after a VRF change is made directly on the SLX device.
Condition:

Following are the steps to reproduce:

Step 1) Introduce a VRF drift on the SLX device by removing "vrf-forwarding" from the VE interfaces associated with the given VRF

Step 2) Perform "efa inventory device update" for the SLX device where the VRF is instantiated

Step 3) Perform any VRF update operation

Step 4) Perform DRC for the same SLX device where the VRF is instantiated

Workaround: None
Recovery:

Step 1) Remove the VRF from the EndpointGroups to which it belongs by using the EPG update "vrf-delete" operation

Step 2) Add the VRF back to all the EndpointGroups by using the EPG update "vrf-add" operation (see the sketch below)
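A minimal sketch of the two recovery steps, assuming EPG epg1, tenant tenant1, and VRF vrf1 (the names are illustrative, and the --operation and --vrf flag shapes are assumptions based on the operations named above):

efa tenant epg update --name epg1 --tenant tenant1 --operation vrf-delete --vrf vrf1

efa tenant epg update --name epg1 --tenant tenant1 --operation vrf-add --vrf vrf1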

Parent Defect ID: EFA-11813
Issue ID: EFA-11813
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.0
Symptom:

This issue can be seen for a BGP peer or peer group when update-peer-delete or delete operations are performed while one device of the MCT pair is in the admin down state.

The BGP peer is deleted from the SLX device but not from EFA.

Condition:

Steps to reproduce:

1. Create a static BGP peer

2. Admin down one of the devices

3. Update the existing static BGP peer by deleting the peers created in step 1; delete them from both devices

4. Admin up the device

Once the device is brought back up, auto DRC kicks in, and the configuration that was deleted from the switch due to the admin down state has an incorrect provisioning-state and app-state.

Workaround: Bring the admin-down device up first, and then delete the required BGP peers.
Recovery: None
Parent Defect ID: EFA-11980
Issue ID: EFA-11980
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: An EFA TPVM upgrade workflow may fail for a given device, along with the automatic recovery that restores the TPVM to the original version and rejoins the EFA node to the HA cluster.
Condition:

During the "EFA Deploy Peer and Rejoin" step, the EFA image import into the k3s container runtime fails.

During the "TPVM Revert" step, the k3s on the active EFA node would not allow the standby EFA node to join the cluster due to a stale node-password in k3s.

Workaround: None
Recovery:

Manually recover the TPVM and EFA deployment by following the procedure described in the link below:

EFA 2.5.2 Re-deploy post a TPVM Rollback failed on first attempt.

https://extremeportal.force.com/ExtrArticleDetail?an=000099582

Parent Defect ID: EFA-12058
Issue ID: EFA-12058
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.0
Symptom: The error 'Error updating traefik with efasecret' is seen during node replacement.
Condition: The EFA node replacement itself completes successfully.
Workaround: Re-add the subinterfaces using the 'efa mgmt subinterfaces' CLI.
Parent Defect ID: EFA-12105
Issue ID: EFA-12105
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.0
Symptom: A "Drift Reconcile Completion Status Failure" may occur during an EFA firmware download of an SLX device in a fabric.
Condition:

A DRC status failure can occur if the SLX device also fails during the firmware download. The DRC failure is observed during the drift-reconcile completion step, either on the spine node that is hosting the active EFA node TPVM or on any device in the same firmware download group that is concurrently running the firmware download workflow at the time of the HA failover. This is likely due to the SLX device rebooting to activate the new firmware.

During the EFA HA failover, the REST endpoint for the go-inventory service is not established properly, which causes the drift-reconcile process to fail.

Workaround: None
Recovery: Run "efa inventory drift-reconcile execute --ip <SLX device IP address> --reconcile" to retry the drift-reconcile process on the failed device.
Parent Defect ID: EFA-12114
Issue ID: EFA-12114
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: In rare circumstances, Kubernetes' EndpointSliceController can fall out of sync, leading to incorrect iptables rules being instantiated. This can cause EFA APIs to fail because they are redirected to non-existent services.
Recovery:

EFA's monitor process will detect and attempt to remediate this situation automatically. If it fails to do so, the following can help:

On both TPVMs, as the super-user:

$ systemctl restart k3s

If the problem recurs, the following further steps, run as the super-user, may help:

$ sed -i -E 's/EndpointSlice=true/EndpointSlice=false/' /lib/systemd/system/k3s.service

$ systemctl daemon-reload

$ systemctl restart k3s

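As a diagnostic, one hedged way to confirm that stale EndpointSlices are the cause (k3s bundles kubectl; run as the super-user) is to compare the slice endpoints against the running pods:

$ k3s kubectl get endpointslices -A

$ k3s kubectl get pods -A -o wide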
Parent Defect ID: EFA-12117
Issue ID: EFA-12117
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: EFA does not close unsuccessful SSH attempts when the password expires on the SLX device.
Recovery: None
Parent Defect ID: EFA-12182
Issue ID: EFA-12182
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom:

The issue can be replicated by adding an extra link to an existing ICL. The error can then be seen in "efa fabric show".

The issue is not seen on every attempt.

Condition: When links are dynamically added to an existing ICL, the interface speed is not updated in the LLDP database, causing the devices to go into the error state.
Workaround: Remove and re-add the device, and configure the fabric after adding the new links.
Recovery: Remove and re-add the device and configure the fabric, or manually update the LLDP DB with the correct speed and update the devices.
Parent Defect ID: EFA-12228
Issue ID: EFA-12228
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: EFA system backup fails.
Recovery: None
Parent Defect ID: EFA-12237
Issue ID: EFA-12237
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: An EPG update port-group-delete operation results in the runtime error "Execution error: service is not available or internal server error has occurred, please try again later".
Condition:

Below are the steps to reproduce the issue:

1. Create a BD-based tenant under a CLOS or non-CLOS fabric.

2. Create a BD-based EPG (under the ownership of the tenant created in step 1) with some ctags and some member port-channels.

3. For unknown reasons, the BD (bridge domain) configuration pertaining to one of the member port-channels gets deleted from the EFA DB, leaving the DB in an inconsistent state.

4. Execute the EPG update "port-group-delete" operation to remove the member port-channel whose BD configuration is inconsistent.

Recovery:

There is no recovery through the EFA CLI.

The inconsistent DB needs to be corrected by creating dummy BD (bridge domain) entries in the database, followed by the EPG update "port-group-delete" operation.

Parent Defect ID: EFA-12305
Issue ID: EFA-12305
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.1
Symptom: EFA does not close unsuccessful SSH attempts when the password expires on the SLX device.
Recovery: None
Parent Defect ID: EFA-12331
Issue ID: EFA-12331
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: DRC takes too long to complete when a switch reload causes a transient Kubernetes error.
Recovery: The system recovers on its own.
Parent Defect ID: EFA-12344
Issue ID: EFA-12344
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.4
Symptom: After a firmware download (with maintenance mode enabled on reboot), the device takes a long time to finish DRC, which delays taking the device out of maintenance mode.
Recovery: This is applicable with large SLX configurations.
Parent Defect ID: EFA-12429
Issue ID: EFA-12429
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.0
Symptom: After a failover, the previously active EFA node is down and the standby node is up.
Parent Defect ID: EFA-12441
Issue ID: EFA-12441
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.5.5
Symptom: The RabbitMQ port is exposed on the EFA management interface and all sub-interfaces.
Workaround: For sub-interfaces created manually after EFA installation, the EFA iptables policy must be restarted in order to apply the filtering rules to the new interfaces. The command for this (as root) is 'systemctl restart efa-iptables.service'.
Recovery: Same as the workaround.
Parent Defect ID: EFA-12480
Issue ID: EFA-12480
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.1
Symptom: Scale config: a VRF does not allow more than 4095 SR in a single creation.
Workaround: None
Recovery: None
Parent Defect ID: EFA-12454
Issue ID: EFA-12454
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.0
Symptom: If the password of an SLX device is changed manually through the SLX command, and the password is also updated in EFA using the command "efa inventory device update --ip <IP> --username <user> --password <password>", then subsequent "efa tenant ..." commands that correspond to the device (for which the password was changed) fail with the error "Error : Could not connect to Devices: <device-ip>".
Condition:

Below are the steps to reproduce the issue:

1. The SLX device password is changed manually through the SLX command

2. The SLX device password is also updated in EFA using the command "efa inventory device update --ip <IP> --username <user> --password <password>"

3. "efa tenant ..." commands that correspond to the device (for which the password was changed) are executed

Workaround:

1. Change the device password through EFA using the command "efa inventory device update --ip <IP> --username <user> --password <password>"

2. Update the EFA inventory key-value store information for the corresponding device using "efa inventory kvstore create --key switch.<IP addr>.password --value <new-password> --encrypt"

3. Wait for up to 15 minutes for this information to be consumed by the tenant service

Recovery:

Two recovery steps are available

Parent Defect ID: EFA-12516
Issue ID: EFA-12516
Severity: S3 - Moderate
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.0
Symptom: After changing the IP address and running the "efa-change-ip" script, the EFA pods are in a boot loop.
Recovery: Rollback is automatically performed so no stale config is left on the switch. No recovery is required
Parent Defect ID: EFA-12539
Issue ID: EFA-12539
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.1
Symptom: An EPG update request with the port-group-add operation, or an EPG create request, where multiple ctags are mapped to one bridge-domain may fail with the error "Error 1452: Cannot add or update a child row".
Condition:

The error is observed when one of the following use cases is executed on a bridge-domain enabled tenant.

Use case-1:

1. Create an EPG with multiple ctags mapped to one bridge-domain and with ports across SLX devices that are not part of an MCT pair

Use case-2:

1. Create an EPG with multiple ctags mapped to one bridge-domain and with ports or port-channels on one SLX device

2. Update the EPG with port(s) on a new SLX device that is not an MCT pair of the first device

Workaround: Create the EPG with all the required ports and with one ctag-to-bridge-domain mapping first. Then perform an EPG update with the ctag-add-range operation to add the additional ctags to the same bridge-domain.
Recovery: Rollback is automatically performed so no stale config is left on the switch. No recovery is required
Parent Defect ID: EFA-12555
Issue ID: EFA-12555
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.1
Symptom: An EPG update request with the port-group-add operation, or an EPG create request, where multiple ctags are mapped to one bridge-domain may fail with the error "Error 1452: Cannot add or update a child row".
Condition:

The error is observed when one of the following use cases is executed on a bridge-domain enabled tenant.

Use case-1:

1. Create an EPG with multiple ctags mapped to one bridge-domain and with ports across SLX devices that are not part of an MCT pair

Use case-2:

1. Create an EPG with multiple ctags mapped to one bridge-domain and with ports or port-channels on one SLX device

2. Update the EPG with port(s) on a new SLX device that is not an MCT pair of the first device

Recovery: Recreate the L3 EPG with the port-groups; further anycast-ip updates will then work as expected.
Parent Defect ID: EFA-12556
Issue ID: EFA-12556
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.1
Symptom: If all port-groups are deleted from an L3 EPG, the anycast-address details are removed from the EFA database, so subsequent port-group operations fail with a validation error.
Condition:

1. Create an L3 EPG with device ports [0/10,11]

2. Perform an EPG update port-group-delete operation with both ports from the device

3. Perform an EPG update port-group-add operation with device port [0/10]

Recovery: Delete and recreate the EPG with the anycast details, and then perform the port-group-add operations, as sketched below.
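A hypothetical recovery sequence, assuming EPG epg1 under tenant tenant1 (names illustrative; the delete/create command shapes follow the other EFA tenant commands in these notes, and the --anycast-ip flag name is an assumption):

efa tenant epg delete --name epg1 --tenant tenant1

efa tenant epg create --name epg1 --tenant tenant1 --anycast-ip <ipv4-address/prefix> ... (plus the original port, switchport, and ctag options)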
Parent Defect ID: EFA-12557
Issue ID: EFA-12557
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.1
Symptom: An L3 EPG update with the anycast-ip-delete operation that passes all the anycast-ips configured as part of the EPG is allowed, leaving the VE without any v4/v6 anycast-ips.
Condition:

1. Create L3 EPGs with both IPv4 and IPv6 anycast-ips

2. Perform an EPG update with anycast-ip-delete, passing all the anycast-ips configured in step 1

3. After the EPG update, all the anycast-ips are removed from both the DB and the device

Workaround: Pass the anycast-ips one by one to the EPG update CLI. The last anycast-ip removal will not be allowed, and a validation error will be thrown.
Parent Defect ID: EFA-12558
Issue ID: EFA-12558
Severity: S2 - Major
Product: Extreme Fabric Automation
Reported in Release: EFA 2.6.1
Symptom: An L3 EPG without ports (NPEPG) update with the anycast-ip-delete operation does not remove the anycast-ip from the EFA DB.
Condition:

1. Create an L3 EPG without ports

2. Update the EPG with the anycast-ip-delete operation

3. The anycast-ip is not removed from the DB

Workaround: Delete and re-create the EPG to remove the anycast-address.