The following defects are open in EFA 2.4.0.
Parent Defect ID: | EFA-5592 | Issue ID: | EFA-5592 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.2.0 |
Symptom: | Certificates must be manually imported on replaced equipment in order to perform an RMA. | ||
Condition: | RMA/replaced equipment will not have the SSH key and auth certificate. To replay the configuration on the new switch, the user must import the certificates manually. | ||
Workaround: |
Import the certificates manually: efa certificates device install --ips x,y --certType |
Parent Defect ID: | EFA-5732 | Issue ID: | EFA-5732 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.2.0 |
Symptom: | When a firmware download is in progress, the fabric delete command is accepted without an error. | ||
Condition: | If the fabric delete command is submitted while a firmware download is in progress, the delete fails. | ||
Workaround: |
Allow the firmware download process to complete. Its status can be checked using the command: efa inventory device firmware-download show --fabric {fabric name} |
||
Recovery: | The fabric can be deleted once the firmware download is complete |
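The wait-and-check workaround above can be scripted as a polling loop. A minimal sketch, assuming the `efa` CLI is on the PATH and that the completed state appears as the word "Completed" in the show output (an assumption; verify the exact status string on your release):

```shell
# Poll firmware-download status until it reports completion.
# Assumption: the completed state shows up as "Completed" in the output.
wait_for_firmware_download() {
  fabric="$1"
  interval="${2:-30}"   # seconds between polls
  while true; do
    status=$(efa inventory device firmware-download show --fabric "$fabric")
    # Stop polling once the completed marker appears in the output.
    if printf '%s\n' "$status" | grep -q "Completed"; then
      break
    fi
    sleep "$interval"
  done
}
```

Once this returns, the fabric delete can be retried safely.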
Parent Defect ID: | EFA-5841 | Issue ID: | EFA-5841 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.2.0 |
Symptom: | When a firmware download is in progress, the tenant create command is accepted without an error. | ||
Condition: | If tenant commands are submitted while a firmware download is in progress, the result is erroneous configuration, and some configurations may be missing. | ||
Workaround: |
Allow the firmware download process to complete. Its status can be checked using the command: efa inventory device firmware-download show --fabric {fabric name} |
||
Recovery: | Tenant commands can be submitted once the firmware download is complete |
Parent Defect ID: | EFA-5874 | Issue ID: | EFA-5874 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.2.0 |
Symptom: | On device registration, the IP of the EFA system is recorded in the logging entry on the device so logs can be forwarded to the EFA system for notification. When the EFA system is backed up and restored on another system with a different IP, the old IP of the EFA system is still present on the devices and the devices will continue to forward logs to the old EFA IP. | ||
Workaround: | Users must manually log in to each device and remove the logging entry for the old EFA IP. |
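The manual cleanup above can be looped over the device list. A sketch only, with two explicit assumptions: that the devices accept configuration commands over an SSH exec session, and that the stale entry was created as `logging syslog-server <old-EFA-ip>` (verify the exact entry with the device's running config before using this). All IPs and the `admin` user below are hypothetical placeholders:

```shell
# Remove the stale EFA syslog target from each device.
# Assumptions (verify on your release): devices accept config commands
# over ssh, and the entry is "logging syslog-server <old-ip>".
remove_old_efa_syslog() {
  old_ip="$1"; shift
  for dev in "$@"; do
    # Send the config removal over an SSH exec session (syntax assumption).
    ssh admin@"$dev" "configure terminal
no logging syslog-server $old_ip
end"
  done
}
```

Example invocation (hypothetical IPs): `remove_old_efa_syslog 10.0.0.5 10.1.1.1 10.1.1.2`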
Parent Defect ID: | EFA-5927 | Issue ID: | EFA-5927 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.2.0 |
Symptom: | Configuration reconciliation fails with the error "drift and reconcile failed waiting for status from tenant" because of a timeout. | ||
Condition: |
When the switch configuration drifts from the intended configuration in EFA due to scenarios such as the following:
1. An L3 EPG is created with a large ctag-range (e.g. 2-2000)
2. EFA-configured VLANs and PO configurations are manually removed from the switch
3. The switch is reloaded in maintenance mode |
||
Recovery: | After the switch is moved out of maintenance mode following the reload, the configuration drift can be viewed and reconciled using the "efa inventory drift-reconcile execute --reconcile --ip <switch-ip>" CLI. |
Parent Defect ID: | EFA-5928 | Issue ID: | EFA-5928 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.2.0 |
Symptom: | Configuring devices to the default startup-config and adding them to a non-CLOS fabric does not enable all MCT ports, resulting in a fabric validation failure for missing links | ||
Condition: | Devices were added immediately after being set to the default startup config | ||
Workaround: |
Remove the devices from the fabric and re-add them:
efa fabric device remove --name <fabric-name> --ip <device-ips>
efa inventory device delete --ip <device-ips>
efa fabric device add-bulk --name <fabric-name> --rack <rack-name> --username <username> --password <password> --ip <device-ips> |
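The three-command remove/re-add sequence above can be wrapped in a small helper so a failure in any step stops the sequence. A sketch, assuming the `efa` CLI is on the PATH; fabric, rack, and credential values are placeholders:

```shell
# Remove the devices from the fabric and inventory, then re-add them.
# Chained with && so a failing step halts the sequence visibly.
readd_devices() {
  fabric="$1"; rack="$2"; user="$3"; pass="$4"; ips="$5"
  efa fabric device remove --name "$fabric" --ip "$ips" &&
  efa inventory device delete --ip "$ips" &&
  efa fabric device add-bulk --name "$fabric" --rack "$rack" \
      --username "$user" --password "$pass" --ip "$ips"
}
```

Example invocation (placeholder values): `readd_devices fab1 rack1 admin password 10.1.1.1,10.1.1.2`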
Parent Defect ID: | EFA-6501 | Issue ID: | EFA-6501 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | Configuration drift for a VRF is still shown as "cfg-in-sync" even though its child configurations have drifted on the SLX switch. | ||
Condition: |
The issue can be observed with the following steps:
1) Create a VRF/EPG having route-target, static-route, and BGP configuration.
2) Introduce drift in the VRF route-target, static-route, or BGP configuration on the SLX switch.
3) Update the device using the EFA command "efa inventory device update --ip <device ip>"
4) Check the device drift using the EFA command "efa inventory drift-reconcile execute --ip <device ip>"
5) The VRF shows as "cfg-in-sync" even though its child configuration has drifted. |
||
Workaround: | None | ||
Recovery: | After drift and reconcile, all EFA and device configurations will be in sync. |
Parent Defect ID: | EFA-7269 | Issue ID: | EFA-7269 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | During a drift and reconcile triggered by the maintenance mode enable-on-reboot configuration, interface configurations are shown as drifted even though no actual drift is present on the SLX switch. | ||
Condition: |
The issue is observed with the following steps:
- Configure fabric/tenant/PO/VRF/EPG/BGP peer/peer-group
- Enable maintenance mode enable-on-reboot on the SLX switch
- Reload the SLX switch
- The drift and reconcile process shows drift for an interface used in an EPG that had not drifted |
||
Workaround: | None | ||
Recovery: | EFA and SLX will be in sync when the drift and reconcile (triggered because of maintenance mode enable-on-reboot) is completed. |
Parent Defect ID: | EFA-7324 | Issue ID: | EFA-7324 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.0 |
Symptom: | Continuous create/delete of BGP peer-groups and peers can eventually cause CLI errors | ||
Condition: | When BGP peer/peer-group create and delete operations are repeated in a loop, the inventory service does not get a chance to update its DB, so the inventory and tenant DBs can go out of sync. When other events occur, such as the periodic collection that sweeps configuration from inventory to tenant, this can cause issues in the tenant DB that make the CLI fail. | ||
Workaround: | Avoid such cycles of operations | ||
Recovery: | Delete the problematic BGP peer/peer-group and recreate it. |
Parent Defect ID: | EFA-7592 | Issue ID: | EFA-7592 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.3.2 |
Symptom: | "dev-state/app-state" moved to not-provisioned/cfg-ready | ||
Condition: |
1) Configure a non-CLOS fabric
2) Create a tenant, VRF, and EPG
3) Admin down a device
4) Create multiple EPGs and delete an existing EPG
5) Manually delete the VRF from the admin-down device
6) Admin up the device
7) After admin up, for the EPG that is in delete-pending, the app-state moves to cfg-ready |
||
Workaround: | Wait a few minutes after the EPG delete before bringing the device admin up. | ||
Recovery: | Force-delete the EPGs in question and recreate them. |
Parent Defect ID: | EFA-8090 | Issue ID: | EFA-8090 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | When a fabric containing more than 15 newly registered devices is deployed using the CLI 'efa fabric configure', an attempt to add ports of any of these devices to a tenant within 5 minutes may fail. The error will indicate that the ports have not yet been registered in the fabric. | ||
Condition: | An attempt to add device ports of a recently configured fabric to a tenant may fail with an error indicating that the ports have not yet been registered in the fabric | ||
Workaround: | Wait for up to 5 minutes after deploying the fabric before adding ports to a tenant | ||
Recovery: | This is a transient error. Rerunning the port-add operation after a maximum wait time of 5 minutes will succeed |
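Because the error is transient and bounded (up to 5 minutes), the port-add can simply be retried with a delay. A generic retry helper sketch; the command being retried is whatever tenant port-add invocation applies in your environment (not shown, since the exact flags depend on your setup):

```shell
# Retry a command until it succeeds or the attempts are exhausted.
# Usage: retry_until <attempts> <delay-seconds> <command...>
retry_until() {
  attempts="$1"; delay="$2"; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0          # success: stop retrying
    sleep "$delay"
    i=$((i + 1))
  done
  return 1                    # all attempts failed
}
```

For this defect, something like `retry_until 10 30 <your efa tenant port-add command>` covers the 5-minute window.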
Parent Defect ID: | EFA-8152 | Issue ID: | EFA-8152 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | If graceful-restart (GR) is being updated to TRUE when an in-flight transition is triggered as part of an EFA rollover, the update continues as part of the in-flight transition. | ||
Condition: | Update GR to TRUE and perform an EFA rollover on an HA setup. |
Parent Defect ID: | EFA-8155 | Issue ID: | EFA-8155 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | "cluster-client auto" is not configured under the port-channel on the first reloaded device. | ||
Condition: |
Execute the following steps to hit this condition:
1) Create a fabric on an MCT-paired device
2) Create Tenant/PO/VRF/EPG
3) Enable MM (maintenance) mode on both devices
4) Perform an EFA backup
5) Delete EPG/VRF/PO/Tenant
6) Delete the fabric
7) Restore the EFA backup
8) Reload the devices one by one
After these steps, check the PO on both devices: "cluster-client auto" will not be configured on the first reloaded device. |
||
Workaround: | Instead of reloading the devices in step 8, perform a manual DRC for each device using the inventory CLI: "efa inventory drift-reconcile execute --ip <device ip> --reconcile". |
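The per-device manual DRC in the workaround above can be looped over the device IPs. A sketch, assuming the `efa` CLI is on the PATH:

```shell
# Run a manual drift-and-reconcile for each device instead of reloading.
drc_all() {
  for ip in "$@"; do
    efa inventory drift-reconcile execute --ip "$ip" --reconcile
  done
}
```

Example invocation (placeholder IPs): `drc_all 10.1.1.1 10.1.1.2`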
Parent Defect ID: | EFA-8257 | Issue ID: | EFA-8257 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | EFA is not able to detect drift for configurations such as VRF/VE/VLAN/EVPN | ||
Condition: |
Follow these steps:
1) Create tenant/VRF/PO/EPG
2) As soon as the EPG creation pushes configuration to the device, remove it from the device
3) Check the drift using the inventory CLI "efa inventory drift-reconcile execute --ip <device ip>" |
||
Workaround: | This is a timing issue; wait at least 1 minute before removing configurations from the device. | ||
Recovery: | Delete the EPG and recreate it. |
Parent Defect ID: | EFA-8269 | Issue ID: | EFA-8269 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | The EPG app-state moves to cfg-refresh-err after EPG delete and admin up | ||
Condition: |
1) Configure a CLOS fabric (medium-scale fabric)
2) Create a tenant
3) Admin down the devices
4) Create port-channels, VRFs, and EPGs
5) Admin up the devices and wait for the DRC to succeed
6) Repeat step 3 and wait for the devices to be put into maintenance mode
7) Create a BGP peer-group and dynamic peers
8) Delete all EPGs
9) Repeat step 5
10) The VRFs get deleted from the admin-up devices
11) The EPG app-state moves to cfg-refresh-err |
||
Recovery: | Delete the EPGs in cfg-refresh-err state and recreate them. |
Parent Defect ID: | EFA-8273 | Issue ID: | EFA-8273 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | The EPG update "vrf-add" operation succeeds when the EPG is in the "vrf-delete-pending" state | ||
Condition: | Perform EPG Update "vrf-add" operation on an EPG in "vrf-delete-pending" state | ||
Workaround: | No workaround | ||
Recovery: | Remove the VRF from the EPG using the EPG update "vrf-delete" operation before attempting the "vrf-add" operation. |
Parent Defect ID: | EFA-8297 | Issue ID: | EFA-8297 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: |
The EPG update anycast-ip-delete operation succeeds in deleting a provisioned anycast-ip on an admin-down device. This issue is observed only if an anycast-ip-add operation is performed after the device is put in the admin-down state (so the new config is in the non-provisioned state), followed by an anycast-ip-delete operation for an already configured anycast-ip. |
||
Condition: |
Steps to reproduce the issue:
1) Configure an EPG with anycast-ip (IPv4/IPv6)
2) Make one device admin-down
3) Update-add a new anycast-ip (IPv6/IPv4)
4) Update-delete the provisioned anycast-ip configured in step 1 (IPv4/IPv6)
Step 4 should fail: the IP is already configured on the device, and trying to delete it should fail as part of APS. |
||
Workaround: | No workaround. | ||
Recovery: | Configure the EPG again with the required configuration using EFA, or clean up the anycast-ip device config on the switch. |
Parent Defect ID: | EFA-8315 | Issue ID: | EFA-8315 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | The user adds ports to an empty EPG and immediately deletes them. A subsequent attempt to add ports to the EPG can fail with a duplicate-entry error | ||
Condition: |
1) Add ports to an empty EPG
2) Delete the ports from the EPG right away
3) Add ports to the EPG again; this can fail |
||
Workaround: | After adding ports to an EPG, wait some time before deleting ports from the EPG. | ||
Recovery: | Delete the EPG and recreate it. |
Parent Defect ID: | EFA-8319 | Issue ID: | EFA-8319 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | PO creation failed with error "Devices are not MCT Pairs". | ||
Condition: |
Follow these steps:
1) Create fabric/tenant/PO/EPG
2) Take an EFA backup
3) Delete EPG/PO/tenant/fabric
4) Restore the EFA backup taken in step 2
5) Delete the tenant that was created before the backup
6) Create the same tenant again
7) Create a PO under the same tenant |
||
Workaround: |
After a restore, the MCT peer details are nil, so a DRC must be performed after restoring the backup. After step 4 above, perform a DRC using the inventory CLI:
efa inventory drift-reconcile execute --ip <device ip 1> --reconcile
efa inventory drift-reconcile execute --ip <device ip 2> --reconcile |
Parent Defect ID: | EFA-8322 | Issue ID: | EFA-8322 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | The EPG update "anycast-ip-delete" operation gives a different output/result when one of the EPG devices is admin down | ||
Condition: |
1) Create an L3 EPG with anycast-ip/anycast-ipv6
2) Take one of the EPG devices administratively down
3) Bring the device taken down in the previous step admin up
4) While the device is coming up administratively, try the EPG update "anycast-ip-delete" operation |
||
Workaround: | No workaround | ||
Recovery: | No recovery as such. Wait for the device to be completely up before trying the EPG update "anycast-ip-delete" operation |
Parent Defect ID: | EFA-8334 | Issue ID: | EFA-8334 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.5.0 |
Symptom: | System backup and restore causes the EPG state to be cfg-refresh-err | ||
Condition: | The tenant DB and inventory DB need time to get in sync. On a busy, scaled system this synchronization can take much longer to finish. Backing up during this unsynced window saves tenant and inventory DBs that are not yet in sync, and the subsequent restore will have issues. | ||
Workaround: | If a system backup is needed, execute it only after the system has made no new configuration for a few minutes; the inventory and tenant databases must be in sync before the backup is taken. On a busy system the DB sync can take longer to finish. | ||
Recovery: | Delete the EPGs which report errors and recreate them. |
Parent Defect ID: | EFA-8335 | Issue ID: | EFA-8335 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | System backup and restore causes a subsequent manual DRC to report errors | ||
Condition: | The tenant DB and inventory DB need time to get in sync. On a busy, scaled system this synchronization can take much longer to finish. Backing up during this window saves tenant and inventory DBs that are not yet in sync, and the subsequent restore will have issues. | ||
Workaround: | If a system backup is needed, execute it only after the system has made no new configuration for a few minutes; the inventory and tenant databases must be in sync before the backup is taken. On a busy system the DB sync can take longer to finish. | ||
Recovery: | Delete the problematic EPG or tenant and recreate it. |
Parent Defect ID: | EFA-8391 | Issue ID: | EFA-8391 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: |
1) NetConf error for the EPG update port-group-add operation, caused by a conflicting anycast-ip added to the EPG using the anycast-ip-add operation.
2) Error for the EPG update port-group-add operation, caused by configuring a different anycast-ip for the same VLAN in the EPG using the anycast-ip-add operation. |
||
Condition: |
1) If the user provides a conflicting anycast-ip in an empty EPG, it should throw an error:
* Create EPG1 with port/PO, VRF, VLAN, and anycast-ip
* Create EPG2 without port/PO
* Add a new VRF to EPG2
* Add a conflicting anycast-ip already used in EPG1 with a different VRF
* Add a port to the EPG
2) Multiple EPGs sharing the same VRF and VLAN with different anycast-ips should throw an error:
* Create EPG1 with port/PO, VRF, VLAN, and anycast-ip
* Create EPG2 without port/PO
* Add the VRF used in EPG2 to EPG1
* Add a new anycast-ip to the EPG
* Add a port to the EPG (this will cause the conflict) |
||
Workaround: | NA | ||
Recovery: |
Delete the conflicting anycast-ip from the EPG, add the correct anycast-ip, and then add the port/PO to the EPG. |
Parent Defect ID: | EFA-8408 | Issue ID: | EFA-8408 |
Severity: | S4 - Low | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | EPG create fails while testing a brownfield scenario by creating an EPG with the same VRF/SR/SRBfd configuration as present on the device. | ||
Condition: |
Creating an EPG with the same VRF configuration fails due to a static-route key mismatch. Steps:
1) Create a Tenant, VRF (with a static route), and EPG
2) Check "sh run vrf" and "sh run router bgp" on the device
3) Delete the EPG
4) Create the VRF on the device directly using the config from step 2
5) Update the inventory service
6) Try to create an EPG with the same VRF (compareVrf fails due to a key mismatch) |
||
Workaround: | No workaround. | ||
Recovery: |
1) Remove the VRF from SLX.
2) Update the inventory using "efa inventory device update ..."
3) Create the EPG using "efa tenant epg create ..." |
Parent Defect ID: | EFA-8443 | Issue ID: | EFA-8443 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | For a Tenant created with an L3 port that has multiple IP addresses associated with it, the "efa tenant show" output will have repeated entries for that L3 port. | ||
Condition: |
Steps to reproduce the issue:
1) Assign multiple IPs to a physical port on SLX.
2) Create a Tenant using that L3 port.
3) Check the "efa tenant show" output: L3 ports having multiple IPs will have repeated entries. |
||
Workaround: | No workaround. | ||
Recovery: | Recovery can be done by removing all but one IP from the L3 port on SLX followed by an inventory device update. |
Parent Defect ID: | EFA-8448 | Issue ID: | EFA-8448 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: |
When the ports provided by the user in the tenant update "port-delete" operation include all the ports owned by a port-channel, the PO goes into the delete-pending state. However, the ports are not deleted from the PO, though they are deleted from the tenant. |
||
Condition: | This issue is seen when the ports provided by the user in the tenant update "port-delete" operation include all the ports owned by the port-channel, resulting in an empty PO. | ||
Workaround: | Provide ports for the tenant update "port-delete" operation that do not result in an empty PO, i.e. the PO must retain at least one member port. | ||
Recovery: | Add the ports back using the tenant "port-add" operation so that the port-channel has at least one member port. Then use "efa configure tenant port-channel" to bring the PO back to a stable state. |
Parent Defect ID: | EFA-8453 | Issue ID: | EFA-8453 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | The SNMP subscriber add command fails | ||
Condition: | EFA is deployed in standard (non-secure) mode | ||
Workaround: | EFA as an SNMP trap proxy is supported only in secure-mode deployments. Deploy in secure mode. |
Parent Defect ID: | EFA-8465 | Issue ID: | EFA-8465 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | The "efa inventory device firmware-download prepare add" command fails with "Please specify 'fullinstall' option in firmware download cmdline as GLIBC versions change". | ||
Condition: | Upgrading the SLX firmware from 20.1.2x to 20.2.x requires a 'fullinstall' firmware download in order to proceed. | ||
Workaround: | There is no workaround from EFA. The firmware download fullinstall must be carried out individually on each SLX device. |
Parent Defect ID: | EFA-8472 | Issue ID: | EFA-8472 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | EFA firmware download fails with the status "Maintenance Mode Enable Failed". | ||
Condition: | EFA firmware download execution is done with default options and the SLX firmware is being downgraded. | ||
Workaround: | Use the --noMaintMode flag when performing an "efa inventory device firmware-download execute" command. | ||
Recovery: | Retry the firmware download execution using the --noMaintMode flag. |
Parent Defect ID: | EFA-8497 | Issue ID: | EFA-8497 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | EPG delete fails with error "Service is not available or internal server error has occurred, please try again later" | ||
Condition: |
1) Create a fabric having a leaf pair and a border-leaf pair.
2) Create a Tenant with ports from the leaf devices.
3) Create a VRF using routing type "centralized" and the border-leaf(s) as centralized routers.
4) Create L3 EPGs using the VRF created in step 3 and ports from the leaf devices.
5) Remove the border-leaf devices from the fabric or inventory.
6) Remove the leaf devices from the fabric or inventory.
7) Delete the EPG(s) created in step 4. |
||
Workaround: | No workaround | ||
Recovery: | Delete the Tenant using the "force" option |
Parent Defect ID: | EFA-8507 | Issue ID: | EFA-8507 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | Certain VLANs are missing from the configuration when stacks are created in quick succession within a script with no delay. | ||
Condition: | Ten stack creations without much delay lead to missing configuration. The trunk subport update is not generated from Neutron. The issue is seen with only one controller and is not seen when more delay is introduced between stack creations. The trunk also remains in the DOWN state. | ||
Workaround: | Add a delay between stack creations. | ||
Recovery: |
Remove the trunk parent port added to the VM and add it back again. For example, with VM name Max-L2-ss3VirtIoVM2_Test1 and parent port Max-L2-ss3VirtIoTrunkPort2_Test1 (the parent port of the subport that is down):
openstack server remove port Max-L2-ss3VirtIoVM2_Test1 Max-L2-ss3VirtIoTrunkPort2_Test1
openstack server add port Max-L2-ss3VirtIoVM2_Test1 Max-L2-ss3VirtIoTrunkPort2_Test1 |
Parent Defect ID: | EFA-8512 | Issue ID: | EFA-8512 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | On SLX there can be a partial neighbor config under router bgp. The "show running-config router bgp" output from SLX shows the invalid command "neighbor pg1" (assuming the BGP peer-group name is pg1). There is no corresponding command to delete this. | ||
Condition: | If a NETCONF RPC with a BGP peer-group delete operation is issued to the SLX device for a peer-group that does not exist, SLX creates the invalid "neighbor pg1" entry. | ||
Workaround: | In some admin-down device scenarios, avoid deleting the same BGP peer more than once. | ||
Recovery: |
On SLX, use the following commands to get rid of the partial BGP peer:
SLX(config)# router bgp
SLX(config-bgp-router)# neighbor pg1 peer-group
SLX(config-bgp-router)# no neighbor pg1 peer-group |
Parent Defect ID: | EFA-8526 | Issue ID: | EFA-8526 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | VRF update "centralized-router-add" fails with the error "[x, y] are MCT pair. Update the VRF with both devices together as centralized routers" | ||
Condition: |
1) In a CLOS fabric setup with an MCT pair of border-leafs, create a VRF with routing-type "centralized" and select the MCT pair of border-leafs as centralized routers.
2) Remove one of the MCT-pair border-leafs from the fabric.
3) Add the same or a different border-leaf to the fabric and run the fabric configure command.
4) Wait for some time, then run the VRF update "centralized-router-add" operation to add the newly added border-leaf as a centralized router. |
||
Workaround: | Run the VRF update "centralized-router-add" operation and specify both nodes of the MCT pair of border-leafs as centralized routers. | ||
Recovery: | No recovery is required. |
Parent Defect ID: | EFA-8527 | Issue ID: | EFA-8527 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | The EPG update "port-group-delete" operation fails with the error "port-group-delete" not allowed for "epg in the port-group-delete-pending state" | ||
Condition: | Perform an EPG update "port-group-delete" when the EPG is in the "port-group-delete-pending" state | ||
Workaround: |
Run EPG configure as follows: "efa tenant epg configure --name <epg-name> --tenant <tenant-name>" |
||
Recovery: | No recovery is required. |
Parent Defect ID: | EFA-8535 | Issue ID: | EFA-8535 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | On a single-node installation of TPVM, after ip-change, EFA is not operational. | ||
Condition: | After an IP change of the host system, if the "efa-change-ip" script is run by a user other than the installation user, EFA is not operational. | ||
Workaround: | Restart k3s service using the command 'sudo systemctl restart k3s' |
Parent Defect ID: | EFA-8564 | Issue ID: | EFA-8564 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | The "device delete" and "device firmware-download prepare add" commands fail | ||
Condition: | This issue surfaces only when a firmware download is triggered from the switch directly. EFA restricts certain operations while a firmware download is in progress | ||
Workaround: | Use EFA to perform the firmware download | ||
Recovery: |
1) Confirm the firmware download is complete on the device using "show firmwaredownloadstatus" on the device.
2) Update the device in EFA using the command "efa inventory device update --ip <device ip list>".
3) After this, other operations can be performed on the device.
4) The flag is also reset automatically via the periodic device update; however, the periodic device update interval defaults to 1 hour. |
Parent Defect ID: | EFA-8567 | Issue ID: | EFA-8567 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | The EPG update "vrf-delete" operation fails with the error "Operation 'vrf-event-dev-delete' not allowed for 'vrf-create'" | ||
Condition: |
1) Create a VRF with routing type "centralized"
2) Create an EPG using the VRF created in the step above and ports from the leaf devices and border-leaf(s)
3) Create another EPG using the same VRF and different ports from the leaf devices and border-leaf(s)
4) Remove the border-leaf devices from the fabric; after some time, both EPGs go to the "vrf-delete-pending" state
5) Perform EPG update "vrf-delete" on one of the EPGs
6) Perform EPG update "vrf-delete" on the other EPG |
||
Workaround: | To remove the VRF from the EPG, delete the EPG and then re-create it. | ||
Recovery: | No recovery is required. |
Parent Defect ID: | EFA-8568 | Issue ID: | EFA-8568 |
Severity: | S2 - High | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | DRC displayed a few VLANs as drifted with state "cfg-ready" after creating multiple networks/routers from OpenStack, even though the device has the config in this scenario. | ||
Condition: |
1) Create multiple stacks from OpenStack.
2) Create multiple networks and routers, both centralized and distributed.
3) Ensure all EPGs/VRFs are created in EFA and on the switch.
4) Run drift for all leaf and border-leaf switches: they show drift for some of the VLANs created. |
||
Recovery: | Delete the corresponding EPG/VRF and recreate them. |
Parent Defect ID: | EFA-8573 | Issue ID: | EFA-8573 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | In a few cases, networks in an EPG remain in the cfg-in-sync state even if they were created with a partial-success topology (an MCT pair with one admin-up device and one admin-down device). | ||
Condition: |
The issue is seen with the following steps:
1) Configure a fabric
2) Create a tenant
3) Create a multi-homed port-channel
4) Bring one of the devices of the MCT pair (having the PO created in step 3) admin-down to create a partial-success topology
5) Create EPGs on the partial-success topology |
||
Recovery: | Bring all the devices to the admin-up state. This pushes all the configs to the devices and everything will be in cfg-in-sync. |
Parent Defect ID: | EFA-8574 | Issue ID: | EFA-8574 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | Networks remain in the ACTIVE state when EFA is not reachable during bulk operations | ||
Condition: | When EFA is not reachable and networks are created in bulk, the network status in Neutron remains ACTIVE and moves to DOWN only when network provisioning is attempted on EFA. This can be delayed based on the number of entries in the journal. | ||
Workaround: | No Workaround | ||
Recovery: | The system recovers on its own: after the network operation fails, the status moves to DOWN. Any pending entries recover when EFA becomes reachable |
Parent Defect ID: | EFA-8584 | Issue ID: | EFA-8584 |
Severity: | S3 - Medium | ||
Product: | Extreme Fabric Automation | Reported in Release: | EFA 2.4.0 |
Symptom: | EPG delete fails with error "Service is not available or internal server error has occurred, please try again later" | ||
Condition: |
1) Create a Tenant with ports.
2) Create EPG(s) on the tenant created in step 1.
3) Remove the device(s) used in the Tenant from the fabric or inventory.
4) Delete the EPG(s) created in step 2. |
||
Workaround: | Add any device to the Tenant, then delete the EPG(s) | ||
Recovery: | No recovery is required. |