Open Defects

The following defects are open in Extreme Fabric Automation 2.5.1.

Parent Defect ID: EFA-5592 Issue ID: EFA-5592
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.2.0
Symptom: Certificates must be manually imported on replaced equipment in order to perform RMA.
Condition: RMA/replaced equipment will not have the SSH key and auth certificate; in order to replay the configuration on the new switch, the user must import the certificates manually.
Workaround:

Import the certificates manually:

efa certificates device install --ips x,y --certType

Parent Defect ID: EFA-5928 Issue ID: EFA-5928
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.2.0
Symptom: Configuring devices to the default startup-config and adding them to a non-CLOS fabric does not enable all MCT ports, resulting in fabric validation failure for a missing link
Condition: Devices were added immediately after being set to the default startup config
Workaround:

Remove the devices from the fabric and re-add them:

efa fabric device remove --name <fabric-name> --ip <device-ips>

efa inventory device delete --ip <device-ips>

efa fabric device add-bulk --name <fabric-name> --rack <rack-name> --username <username> --password <password> --ip <device-ips>

Parent Defect ID: EFA-8297 Issue ID: EFA-8297
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.0
Symptom:

The EPG update anycast-ip-delete operation succeeds in deleting a provisioned anycast-ip on an admin-down device.

This issue is observed only if an update anycast-ip-add operation is performed after the device is put in the admin-down state and the new config is in the non-provisioned state, followed by an anycast-ip-delete operation for an already configured anycast-ip.

Condition:

Steps to reproduce issue:

1) Configure EPG with anycast-ip (ipv4/ipv6)

2) Make one device admin-down

3) Anycast-ip update-add new anycast-ip (ipv6/ipv4)

4) Update-delete provisioned anycast-ip configured in step-1 (ipv4/ipv6)

Step (4) should fail, as the IP is already configured on the device; attempting to delete it should be rejected as part of APS.

Workaround: No workaround.
Recovery: Recover by configuring the EPG again with the required configuration using EFA, or by cleaning the anycast-ip device config on the switch.
Parent Defect ID: EFA-8448 Issue ID: EFA-8448
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.0
Symptom:

When the ports provided by the user in the “tenant update port-delete operation” include all the ports owned by the port-channel, the PO goes into the delete-pending state. However, the ports are not deleted from the PO.

They are deleted from the tenant, though.

Condition: This issue is seen when the ports provided by the user in the “tenant update port-delete operation” include all the ports owned by the port-channel, resulting in an empty PO.
Workaround: The user must provide ports for the “tenant update port-delete operation” that do not result in an empty PO; i.e., the PO must retain at least one member port.
Recovery: Add the ports back using the "tenant port-add operation" so that the port-channel has at least one member port. Then use "efa configure tenant port-channel" to bring the PO back to a stable state.
Parent Defect ID: EFA-8535 Issue ID: EFA-8535
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.0
Symptom: On a single-node installation of TPVM, after ip-change, EFA is not operational.
Condition: After an IP change of the host system, if the 'efa-change-ip' script is run by a user other than the installation user, EFA is not operational.
Workaround: Restart k3s service using the command 'sudo systemctl restart k3s'
Parent Defect ID: EFA-8904 Issue ID: EFA-8904
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.2
Symptom: Single node deployment fails with 'DNS resolution failed.'
Condition: After a multi-node deployment and then un-deployment is done on a server, if single-node deployment is tried on the same server, the installer exits with the error, 'DNS resolution failed.'
Workaround: After un-deployment of the multi-node installation, perform a reboot of the server/TPVM.
Parent Defect ID: EFA-9010 Issue ID: EFA-9010
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.2
Symptom:

Creation of 100 OpenStack VMs/stacks fails at the rate of 10 stacks/min.

One stack has 1 VM, 2 networks, and 3 ports (2 DHCP ports and one Nova port).

Condition:

100 OpenStack stacks created at the rate of 10 stacks/min are sent to EFA.

Processing requests at such a high rate overwhelms the EFA CPU.

Since EFA cannot handle requests at this rate, a backlog of requests builds up. This eventually results in VM reschedules and failure to complete some stacks, with errors.

Workaround: Create the 100 OpenStack stacks at a consistently lower rate of creation, e.g., 3 stacks/min
Recovery: Delete the failed (or all) OpenStack stacks and re-create them at a lower rate, e.g., 3 stacks/min
Parent Defect ID: EFA-9439 Issue ID: EFA-9439
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: Dev-State and App-State of EPG Networks are not-provisioned and cfg-ready
Condition:

Below are the steps to reproduce the issue:

1) Create VRF with local-asn

2) Create EPG using the VRF created in step 1

3) Take one of the SLX devices to administratively down state

4) Perform VRF Update "local-asn-add" to different local-asn than the one configured during step 1

5) Perform VRF Update "local-asn-add" to the same local-asn that is configured during step 1

6) Admin up the SLX device which was made administratively down in step 3 and wait for DRC to complete

Workaround: No workaround as such.
Recovery:

Following are the steps to recover:

1) Log in to SLX device which was made admin down and then up

2) Introduce local-asn configuration drift under "router bgp address-family ipv4 unicast" for the VRF

3) Execute DRC for the device
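As a sketch, the drift in step 2 can be introduced with a BGP configuration fragment along these lines (illustrative only: the VRF name and ASN are placeholders, and the exact SLX-OS keyword for the local ASN may vary by release):

```
router bgp
 address-family ipv4 unicast vrf <vrf-name>
  local-as <local-asn>
```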

Parent Defect ID: EFA-9456 Issue ID: EFA-9456
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.3
Symptom: The issue is seen when devices being added to the fabric have IP addresses already configured on interfaces, conflicting with the addresses EFA assigns.
Condition: The issue will be observed if devices being added to the fabric have IP addresses assigned on interfaces and those IP addresses are already reserved by EFA for other devices.
Workaround:

Delete the IP addresses on the interfaces of devices with conflicting configuration so that new IP addresses can be reserved for these devices. One way to clear the device configuration is using the commands below:

1. Register the device with inventory

efa inventory device register --ip <ip1, ip2> --username admin --password password

2. Issue debug clear "efa fabric debug clear-config --device <ip1, ip2>"

Recovery:

Delete the IP addresses on the interfaces of devices with conflicting configuration so that new IP addresses can be reserved for these devices. One way to clear the device configuration is using the commands below:

1. Register the device with inventory

efa inventory device register --ip <ip1, ip2> --username admin --password password

2. Issue debug clear "efa fabric debug clear-config --device <ip1, ip2>"

3. Add the devices to fabric

Parent Defect ID: EFA-9570 Issue ID: EFA-9570
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: Add Device failed because the ASN used on a border leaf shows a conflict
Condition: If there is more than one pair of leaf/border-leaf devices, the devices added first get the first available ASNs in ascending order. If, during a subsequent addition, one of the devices tries to allocate the same ASN because of a brownfield scenario, EFA throws a conflicting-ASN error
Workaround:

Add the devices to fabric in following sequence

1)First add brownfield devices which have preconfigured configs

2)Add remaining devices which don't have any configs stored

Recovery:

Remove the devices and add them to the fabric again in the following sequence

1)First add brownfield devices which have preconfigured configs

2)Add remaining devices which don't have any configs stored

Parent Defect ID: EFA-9591 Issue ID: EFA-9591
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: This issue is seen when certain BGP sessions are not in the ESTABLISHED state after the BGP sessions are cleared as part of fabric configure.
Condition: This condition was seen when "efa fabric configure --name <fabric name>" was issued after modifying the MD5 password.
Workaround: Wait for BGP sessions to be ready. Check the status of BGP sessions using "efa fabric topology show underlay --name <fabric name>"
Recovery: Wait for BGP sessions to be ready. Check the status of BGP sessions using "efa fabric topology show underlay --name <fabric name>"
Parent Defect ID: EFA-9645 Issue ID: EFA-9645
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When the fabric setting is updated with the particular password "password$\n", the MD5 password does not get configured on the backup routing neighbors that were already created.
Condition:

1. Configure fabric

2. Create tenant, po, vrf and epg

3. Update fabric setting with "password$\n" and configure fabric

4. MD5 password is not configured on backup routing neighbors under BGP address family ipv4/ipv6 vrf

Workaround: Update the fabric setting with any other password combination that does not include "$\n" combination.
Recovery: Update the fabric setting with any other password combination that does not include "$\n" combination.
Parent Defect ID: EFA-9674 Issue ID: EFA-9674
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.2
Symptom: Creation and deletion of stacks can result in failure. Network create fails because the previous network with the same VLAN has not yet been deleted.
Condition: The network is deleted and created in quick succession. Since EFA takes time to delete the network, another call to create a network with the same VLAN ID is also processed. This network create call ends in failure.
Workaround: Add a delay between deletion and creation of stacks to allow more time for EFA processing.
Recovery: Cleanup and recreate the failed network/stack at openstack
Parent Defect ID: EFA-9758 Issue ID: EFA-9758
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When the user modifies the remote ASN of a BGP peer out of band, drift and reconcile does not reconcile the intended remote-ASN BGP peer configuration.
Condition: The issue will be seen if the user modifies the remote ASN of a BGP peer through out-of-band means; DRC does not reconcile the remote ASN.
Workaround: Log in to the device where the remote ASN was modified and revert it to what EFA configured.
Recovery: Revert the remote ASN of the BGP peer on the device through the SLX CLI to what EFA configured previously.
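As a sketch, reverting the remote ASN on the device amounts to re-entering the neighbor statement with the EFA-configured value (the peer IP and ASN below are placeholders; exact SLX-OS syntax may vary):

```
router bgp
 neighbor <peer-ip> remote-as <efa-configured-asn>
```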
Parent Defect ID: EFA-9799 Issue ID: EFA-9799
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: 'efa status' response shows standby node status as 'UP' when node is still booting up
Condition: If SLX device is reloaded where EFA standby node resides, then 'efa status' command will still show the status of standby as UP.
Workaround: Retry the same command after some time.
Parent Defect ID: EFA-9813 Issue ID: EFA-9813
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.3
Symptom: When performing RMA of a device, the port connections for the new device must be identical.
Condition: The new device's port connections were not identical to those of the device being RMAed.
Workaround: When performing RMA of a device, ensure the port connections for the new device are identical.
Parent Defect ID: EFA-9874 Issue ID: EFA-9874
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When the EPG is in the "anycast-ip-delete-pending" state and the user performs "epg configure", it succeeds without actually removing the anycast-ip from SLX.
Condition:

Below are the steps to reproduce the issue:

1) Configure EPG with VRF, VLAN and anycast-ip (ipv4/ipv6) on a single rack Non-CLOS fabric.

2) Bring one of the devices to admin-down.

3) EPG Update anycast-ip-delete for anycast-ip ipv4 or ipv6. This will put EPG in "anycast-ip-delete-pending" state.

4) Bring the admin-down device to admin-up.

5) In this state, the only allowed operations on EPG are "epg configure" and EPG update "anycast-ip-delete".

6) Perform "epg configure --name <epg-name> --tenant <tenant-name>".

Workaround: No workaround.
Recovery: Perform the same anycast-ip-delete operation when both devices are admin-up.
Parent Defect ID: EFA-9906 Issue ID: EFA-9906
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When concurrent EFA tenant EPG create or update operations are requested where the commands involve a large number of VLANs and/or ports, one of them could fail with the error "EPG: <epg-name> Save for Vlan Records save Failed".
Condition: The failure is reported when concurrent DB write operations are done by the EFA Tenant service as part of command execution.
Workaround: This is a transient error and there is no workaround. The failing command can be executed once again and it will succeed.
Recovery: The failing command can be rerun separately and it will succeed.
Parent Defect ID: EFA-9930 Issue ID: EFA-9930
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: Periodic backup happens according to the system timezone.
Condition: If the nodes in HA are not configured in the same timezone, then periodic backup is scheduled according to the timezone of the active node. When a failover happens, the schedule is changed to the timezone of the new active node.
Workaround: Configure the same timezone on both the nodes in a multi-node installation
Parent Defect ID: EFA-9952 Issue ID: EFA-9952
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: When concurrent EFA tenant EPG delete operations are requested where the commands involve a large number of VLANs and/or ports, one of them could fail with the error "EPG network-property delete failed"
Condition: The failure is reported when concurrent DB write operations are done by the EFA Tenant service as part of command execution.
Workaround: This is a transient error and there is no workaround. The failing command can be executed once again and it will succeed.
Recovery: The failing command can be rerun separately and it will succeed
Parent Defect ID: EFA-9990 Issue ID: EFA-9990
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: EPG update ctag-range-add operation with the existing ctag-range (i.e. ctag1, ctag2) and modified native vlan (ctag2) succeeds without any effect
Condition:

Below are the steps to reproduce the issue:

1. Create Endpoint group with ctag1, ctag2 and native vlan as ctag1

2. Update the Endpoint group (created in step 1) using ctag-range-add operation with the same set of ctags (i.e. ctag1, ctag2) and different native VLAN ctag2

Workaround: If the user intends to modify the native VLAN from ctag1 to ctag2 in an EPG, the user must remove ctag1 from the EPG (using ctag-range-delete) and add ctag2 (using ctag-range-add) as the native VLAN
Parent Defect ID: EFA-10026 Issue ID: EFA-10026
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: 'efa inventory device interface unset-fec' command will set the fec mode to 'auto-negotiation' instead of removing fec configuration.
Condition: Once fec mode is set on the interface, the configuration cannot be removed. 'efa inventory device interface unset-fec' command will set the fec mode to 'auto-negotiation' instead of removing fec configuration. This is because 'no fec mode' command is no longer supported on SLX.
Workaround: Default value for fec-mode is 'auto-negotiation' and will show up as-is in the output of 'show running-config'. Users can set a different value using 'efa inventory device interface set-fec', if required.
Recovery: Default value for fec-mode is 'auto-negotiation' and will show up as-is in the output of 'show running-config'. Users can set a different value using 'efa inventory device interface set-fec', if required.
Parent Defect ID: EFA-10048 Issue ID: EFA-10048
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom:

EPG: epgev10 Save for devices failed

When concurrent EFA tenant EPG create or update operations are requested where the commands involve a large number of VLANs and/or ports, one of them could fail with the error "EPG: <epg-name> Save for devices Failed".

Condition: The failure is reported when concurrent DB write operations are done by the EFA Tenant service as part of command execution.
Workaround: This is a transient error and there is no workaround. The failing command can be executed once again and it will succeed.
Recovery: The failing command can be rerun separately and it will succeed.
Parent Defect ID: EFA-10062 Issue ID: EFA-10062
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: Removing a device from Inventory does not clean up breakout configuration on interfaces that are part of port-channels.
Condition: This condition occurs when breakout configuration is present on a device being deleted from EFA Inventory, and that breakout configuration is on interfaces that are part of port-channels
Workaround: Manually remove the breakout configuration, if required.
Recovery: Manually remove the breakout configuration, if required.
Parent Defect ID: EFA-10063 Issue ID: EFA-10063
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: Deleting a device from EFA Inventory does not bring the interface admin state back 'up' after unconfiguring the breakout configuration
Condition: This condition occurs when breakout configuration is present on a device being deleted from EFA Inventory
Workaround: Manually bring the admin-state up on the interface, if required
Recovery: Manually bring the admin-state up on the interface, if required
Parent Defect ID: EFA-10093 Issue ID: EFA-10093
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: Deletion of the VLAN/BD based L3 EPGs in epg-delete-pending state will result in creation and then deletion of the VLAN/BD on the admin up device where the VLAN/BD was already removed
Condition:

Issue occurs with the below steps:

1. Create L3 EPG with VLAN/BD X on an MCT pair

2. Admin down one of the devices of the MCT pair

3. Delete the L3 EPG. This results in L3 configuration removal (corresponding to the L3 EPG being deleted) from the admin-up device; no config changes happen on the admin-down device, and the EPG transitions to the epg-delete-pending state

4. Admin up the device which was made admin down in step 2

5. Delete the L3 EPG which transitioned to the epg-delete-pending state in step 3

Recovery: Not needed
Parent Defect ID: EFA-10110 Issue ID: EFA-10110
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: EFA fabric and tenant operations are not blocked when a (manual) DRC operation is triggered and in progress.
Condition: DRC operations may fail or time out when fabric/tenant operations take a long time to complete.
Workaround: Do not run fabric configure or tenant operations when manual DRC for a device is in progress.
Parent Defect ID: EFA-10252 Issue ID: EFA-10252
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: When concurrent EFA tenant EPG update port-group-add operations are requested where the tenant is bridge-domain enabled, one of them may fail with the error "EPG network-property delete failed"
Condition: The failure is reported when concurrent resource allocations are done by the EFA Tenant service as part of command execution.
Workaround: This is a transient error and there is no workaround. The failing command can be executed once again and it will succeed.
Recovery: The failing command can be rerun separately and it will succeed
Parent Defect ID: EFA-10266 Issue ID: EFA-10266
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: When concurrent EPG updates with the vrf-add operation on a bd-enabled tenant are requested where the commands involve a large number of VLANs, local-ip, and anycast-ip addresses, one of them may fail with the error "EPG: <epg-name> Save for Vlan Records save Failed".
Condition: The failure is reported when concurrent DB write operations are done by the EFA Tenant service as part of command execution.
Workaround: This is a transient error and there is no workaround. The failing command can be executed once again and it will succeed.
Recovery: The failing command can be rerun separately and it will succeed.
Parent Defect ID: EFA-10268 Issue ID: EFA-10268
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: When concurrent EPG deletes on a bd-enabled tenant are requested where the EPGs involve a large number of VLANs, local-ip, and anycast-ip addresses, one of them may fail with the error "EPG: <epg-name> Save for Vlan Records save Failed".
Condition: The failure is reported when concurrent DB write operations are done by the EFA Tenant service as part of command execution.
Workaround: This is a transient error and there is no workaround. The failing command can be executed once again and it will succeed.
Recovery: The failing command can be rerun separately and it will succeed.
Parent Defect ID: EFA-10284 Issue ID: EFA-10284
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom:

When bgp peer-group update operations are performed after a device is admin down, the entire config from the admin-up device gets deleted.

Steps to reproduce this issue:

1. Create bgp peer group

2. Admin down both devices

3. Peer add fails with an appropriate message

4. Admin up one of the devices

5. perform peer-group add

6. update bgp peer group to delete the peer that was created in step 1 for both devices

7. perform bgp configure operation while one of the devices is still in admin down state

After performing the configure operation when one of the devices is still down, the entire config from admin up device gets deleted.

Condition:

Create a bgp peer-group and then put one of the devices into the admin-down state.

Then perform the update operation on both devices.

After that, perform a configure bgp peer-group operation while one of the devices is still down.

The entire config from the device that is still in the admin-up state gets deleted.

The peers which are in sync and configured on the switch for the admin-up device must not be deleted, but as the bgp peer-group goes into the delete-pending state, the entire config gets deleted.

Workaround: The devices need to be in admin up state for the update with peer-group-delete operation to be performed.
Recovery: The bgp peer-group which has been deleted will need to be re-created.
Parent Defect ID: EFA-10288 Issue ID: EFA-10288
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom:

The bgp peer gets deleted from the SLX but not from EFA. This issue is seen when the following sequence is performed.

1. Create static bgp peer

2. Admin down one of the devices

3. Update the existing bgp static peer by adding a new peer

4. Update the existing bgp static peer by deleting the peers which were first created in step1. Delete from both devices

5. Admin up the device

6. efa tenant service bgp peer configure --name "bgp-name" --tenant "tenant-name"

Once the bgp peer is configured, the config is deleted from the switch for the device that is in the admin-up state, whereas EFA still has this information and displays it in bgp peer show

Condition: When a bgp peer is created and update operations are performed while one of the devices is in the admin-down state, the configuration for the admin-up device is deleted from the SLX switch but remains in EFA when "efa tenant service bgp peer configure --name <name> --tenant <tenant>" is performed.
Workaround: Delete the peer for the admin-up device first, and then delete the peer from the admin-down device as a separate CLI command.
Recovery: Perform a drift reconcile operation for the admin up device so that the configuration gets reconciled on the switch.
Parent Defect ID: EFA-10305 Issue ID: EFA-10305
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.4.2
Symptom: EndpointGroup creation fail with error - "Device: <ip-address> has a VRF <vrf-name> configuration with different number of Static Routes"
Workaround: No workaround
Recovery:

Below are the steps to recover from the issue:

1. Delete VRF from all EndpointGroups by performing EPG Update <vrf-delete> operation

2. Add VRF to EndpointGroups by performing EPG Update <vrf-add> operation

Parent Defect ID: EFA-10307 Issue ID: EFA-10307
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: Unable to login to efa after fresh installation.
Condition: During installation, a wrong peer-ip input is given and then changed to the correct IP.
Recovery: Re-install EFA with correct set of inputs.
Parent Defect ID: EFA-10370 Issue ID: EFA-10370
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: A tpvm-upgrade and a firmware-download execution can be launched simultaneously, which could lead to both EFA HA nodes going down at the same time.
Condition: Running a tpvm-upgrade and a firmware-download execution simultaneously.
Workaround: Do not execute a tpvm-upgrade and firmware-download simultaneously for the same fabric.
Parent Defect ID: EFA-10371 Issue ID: EFA-10371
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: Additional Route Target is seen under VRF "address-family ipv6 unicast"
Workaround: No workaround
Recovery:

Below are the steps to recover from the issue:

1. Delete VRF from all EndpointGroups by performing EPG Update <vrf-delete> operation

2. Add VRF to EndpointGroups by performing EPG Update <vrf-add> operation

Parent Defect ID: EFA-10377 Issue ID: EFA-10377
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: Manual or auto DRC can time out on a scaled setup (when attempted soon after fabric configure succeeds) with a large number of Backup Routing enabled VRFs, because EFA starts a mini DRC in the background (as soon as fabric configure succeeds) to provision the updated MD5 password on the Backup Routing neighbors.
Condition:

1. Configure non-clos fabric with backup-routing enabled.

2. Configure tenant, po, lot of vrf (e.g. 50), epgs, bgp peer-group, bgp static peers.

3. Configure maintenance-mode enable-on-reboot on the SLX.

4. Update fabric setting to configure MD5 password.

5. Configure fabric created in step 1 in order to provision MD5 password on Backup Routing neighbors for all the tenant VRFs.

6. Reload SLX to trigger MM triggered DRC or trigger manual DRC, as soon as the fabric configure is complete in step 5.

7. DRC will time out since the provisioning of the MD5 password on the Backup Routing neighbors was not allowed to complete after step 5.

Workaround:

1. Update MD5 password setting in fabric and configure fabric.

2. Allow MD5 password to get provisioned on all BR neighbors of all the tenant VRFs on the SLX.

3. Perform manual/auto DRC once MD5 password is provisioned.

Recovery: Manual or auto DRC can be reattempted once the fabric MD5 password is provisioned on all Backup Routing neighbors for all tenant VRFs.
Parent Defect ID: EFA-10380 Issue ID: EFA-10380
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: When preparing the fabric for a firmware-download where the fabric has more than 10 devices, up to 10 devices will be prepared successfully but the rest of the devices will error out stating "Cannot find firmware. The server is inaccessible or firmware path is invalid. Please make sure the server name or IP address, the user/password and the firmware path are valid."
Condition:

The firmware-host is registered to use the SCP protocol and the firmware-host has an unset or default "MaxStartups" config in the /etc/ssh/sshd_config file. The default "MaxStartups" is typically "10:30:60" when the configuration is not set.

MaxStartups configuration:

1) "Start" - 10: The number of unauthenticated connections allowed.

2) "Rate" - 30: The percent chance that the connection is dropped after the "Start" connections are reached which linearly increases thereafter.

3) "Full" - 60: The maximum number of connections after which all subsequent connections are dropped.

Workaround: Edit the /etc/ssh/sshd_config and specify an appropriate "MaxStartups" configuration on the firmware-host. An appropriate value would be: "Full" is greater than "Start" and "Start" is greater than the number of devices in the fabric. After the /etc/ssh/sshd_config file is edited, restart the sshd service for the changes to take effect. The sshd service restart will not affect currently connected ssh sessions.
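For example, on a fabric of up to 30 devices, a suitable sshd_config fragment on the firmware-host might look like the following (the values are illustrative, not prescriptive):

```
# /etc/ssh/sshd_config on the firmware-host
# Start (40) exceeds the fabric device count (30); Full (100) exceeds Start.
MaxStartups 40:30:100
```

After editing, restart the sshd service so the new limits take effect; existing ssh sessions are unaffected.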
Parent Defect ID: EFA-10387 Issue ID: EFA-10387
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: EFA OVA services not starting if no IP address is obtained on bootup.
Condition: When the EFA OVA is deployed and does not obtain a DHCP IP address, not all EFA services will start
Workaround:

1. Configure a static IP, or obtain an IP address from DHCP.

2. cd /opt/godcapp/efa

3. Type: source deployment.sh

4. When the EFA installer appears, select Upgrade/Re-deploy.

5. Select OK.

6. Select single node, then select OK.

7. Select the default of No for additional management networks.

8. Select Yes when prompted to redeploy EFA.

Once EFA has redeployed, all services should start.

Parent Defect ID: EFA-10389 Issue ID: EFA-10389
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: When the upgrade process is quit at any possible stage, the older EFA stack does not get identified from the same node on which the process was initiated.
Condition:

If the user selects "No" when EFA asks for final confirmation before the upgrade process starts, the process is terminated, but the older stack can no longer be identified on SLX. Checking "show efa status" reports "EFA application is not installed. Exiting..."

However, there is no functional impact on the EFA setup, and EFA continues to work properly on the TPVMs with the existing version.

Workaround: The upgrade process can be initiated again from the peer node
Parent Defect ID: EFA-10397 Issue ID: EFA-10397
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.0
Symptom: Native VLAN gets added as a trunk VLAN to ports/port-channels after DRC is executed
Condition:

1. Create EPG1 with PO1 and switchport mode trunk with native VLAN V1.

2. Create EPG2 with the same PO as used in step 1 (i.e., PO1) and a new port-channel PO2. Switchport mode is configured as trunk without a native VLAN.

3. Execute manual/auto DRC.

4. Native VLAN V1 gets added as a trunk VLAN to PO2 on the SLX.

Recovery:

1. Introduce manual drift on SLX by executing "no switchport" on the port-channels that are not intended to have the native VLAN as a trunk VLAN.

2. Perform manual/auto DRC.

3. After DRC execution, only the required VLAN members will be added to the port-channel. The native VLAN will be removed from the unintended port-channels

Parent Defect ID: EFA-10398 Issue ID: EFA-10398
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: EFA Tenant REST Request fails with an error "service is not available or internal server error has occurred, please try again later"
Condition: Execution of EFA Tenant REST requests that take a long time (more than 15 minutes) to complete
Workaround:

Execute "show" commands to verify if the failed REST request was indeed completed successfully.

Re-execute the failed REST request as applicable.

Recovery: No recovery
Parent Defect ID: EFA-10403 Issue ID: EFA-10403
Severity: S2 - High
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: EFA fabric state shows "cfg-refreshed", and EFA identifies drift in the configuration compared with SLX.
Condition: When EFA is upgraded to 2.5.1 with configured BFD and MCT clusters.
Workaround: None
Recovery: Execute the manual Drift/Reconcile step; EFA fabric state then shows "cfg-sync", and all configurations are in sync between EFA and SLX.
Parent Defect ID: EFA-10412 Issue ID: EFA-10412
Severity: S3 - Medium
Product: Extreme Fabric Automation Reported in Release: EFA 2.5.1
Symptom: While polling the "efa inventory device firmware-download show" command, the user can sometimes observe the firmware download status change to "Firmware Committed" after the device has been reloaded with the new firmware during the workflow, but then change back and continue to the end.
Condition: The device has reloaded and boots up completely with the new firmware. After this time, the firmware may become committed and the status updated before the drift-reconciliation completion status check starts.