Defects Closed with Code Changes

The following defects were closed in ExtremeCloud Orchestrator 3.5.0.

Parent Defect ID: XCO-7899 Issue ID: XCO-7899
Product: XCO Reported in Release: XCO 3.3.0
Symptom: Deleting a BGP peer with MP-BGP additional-path advertise enabled fails with the NETCONF error: '%Error: 'additional-paths advertise' is configured, cannot remove 'additional-paths select' command'.
Condition: If the MP-BGP neighbor is associated with additional path select, deletion of the BGP neighbor fails with the following NETCONF error: '%Error: 'additional-paths advertise' is configured, cannot remove 'additional-paths select' command'.
Workaround: There is no workaround for this issue.
Recovery: Run the peer delete command again; the peer is deleted on the second attempt.
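
A hedged example of the retry, assuming the peer was created as a tenant BGP service (names are placeholders):

efa tenant service bgp peer delete --name <bgp-service-name> --tenant <tenant-name>
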
Parent Defect ID: XCO-8735 Issue ID: XCO-8735
Product: XCO Reported in Release: XCO 3.3.0
Symptom: The inventory and device pages show different firmware versions.
Condition: After a firmware upgrade, the inventory and device pages show different firmware versions.
Workaround: Re-run device discovery; the device page then shows the correct firmware version.
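
A hedged way to trigger a fresh discovery is to re-register the device (values are placeholders):

efa inventory device register --ip <device-ip> --username <user> --password <password>
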
Parent Defect ID: XCO-8829 Issue ID: XCO-8829
Product: XCO Reported in Release: XCO 3.2.1
Symptom: New firmware-host registration fails when a single quote is used in the password.
Condition: A single quote is used in the password.
Workaround: Use a password without single quotes.
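
A hedged example of the registration with a quote-free password (values are placeholders):

efa inventory firmware-host register --ip <host-ip> --protocol scp --username <user> --password <password>
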
Parent Defect ID: XCO-9137 Issue ID: XCO-9137
Product: XCO Reported in Release: XCO 3.3.0
Symptom: EFA upgrade from release 2.7.2 to 3.3.0 fails.
Condition: DNS was removed before upgrade.
Workaround: DNS configuration should not be changed between upgrades.
Recovery:
If the DNS config is removed after upgrading to XCO 3.3.0, use the update-dns.sh script to disallow DNS with the following steps:
  1. Run the script:
     bash update-dns.sh --dns-action disallow
  2. Get the CoreDNS pod name:
     k3s kubectl get pods -n kube-system
  3. Restart the CoreDNS pod:
     k3s kubectl delete pod <coredns pod name> -n kube-system
  4. Wait a few minutes, or restart all XCO pods with the following commands:
     sudo efactl stop
     sudo efactl start

Parent Defect ID: XCO-9216 Issue ID: XCO-9216
Product: XCO Reported in Release: XCO 3.3.0
Symptom: Multiple subscriptions on devices lead to a memory leak.
Condition: A memory leak occurs when one of the devices is in an unhealthy state.
Parent Defect ID: XCO-9284 Issue ID: XCO-9284
Product: XCO Reported in Release: XCO 3.4.0
Symptom: Copy default-config to startup-config with maintenance mode enabled removes all configuration, including QoS policies, on a device. Further, running DRC does not properly re-install all QoS configuration.
Condition: Copy default-config startup-config is run with maintenance mode enabled.
Recovery: Remove the device from inventory and then re-register the device.
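
A hedged sketch of the recovery using the EFA inventory commands (values are placeholders):

efa inventory device delete --ip <device-ip>
efa inventory device register --ip <device-ip> --username <user> --password <password>
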
Parent Defect ID: XCO-9291 Issue ID: XCO-9291
Product: XCO Reported in Release: XCO 3.4.0
Symptom: The fabric internal ports QoS profile is not applied on fabric internal ports when leaf devices are converted from single-homed to multi-homed by adding a new leaf device.
Condition: Leaf devices are converted from single-homed to multi-homed by adding a new leaf device to a fabric that already has a fabric internal ports QoS profile bound.
Workaround: Unbind and then rebind the fabric internal ports QoS profile using the following commands:

Unbind Fabric internal ports QoS profile:

efa policy qos profile unbind --name <profile_name> --fabric <fabric_name> --port fabric-internal

Bind Fabric internal ports QoS profile:

efa policy qos profile bind --name <profile_name> --fabric <fabric_name> --port fabric-internal

Recovery: Same as the workaround: unbind and then rebind the fabric internal ports QoS profile using the commands above.

Parent Defect ID: XCO-9331 Issue ID: XCO-9331
Product: XCO Reported in Release: XCO 3.4.0
Symptom: If a tenant interface level QoS profile binding exists on a port channel and the port channel is removed from the device using OOB (Out Of Band) triggering, DRC will not re-install the tenant level interface binding.
Condition: A port channel is removed from a device using OOB (Out of Band), which triggers DRC.
Workaround: When the port channel is restored by the DRC process on the device, re-apply/rebind the desired QoS profile on the tenant interface (port channel) using:

efa policy qos profile bind --name <profile_name> --tenant <tenant_name> --po <port channel ID>

Recovery: Same as the workaround: once DRC restores the port channel, rebind the desired QoS profile using the command above.

Parent Defect ID: XCO-9336 Issue ID: XCO-9336
Product: XCO Reported in Release: XCO 3.4.0
Symptom: Inventory device delete does not remove the QoS config on the spine device.
Condition: A device that has QoS configuration is deleted from inventory.
Workaround: Unbind the QoS policies from all relevant targets (fabric/tenant/port/po) before running the inventory device delete.
Recovery: Unbind the QoS policies from all relevant targets (fabric/tenant/port/po), then delete the leftover QoS configuration from SLX.
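
Hedged examples of the unbind for fabric- and tenant-level bindings (names are placeholders; the tenant form is assumed to mirror the bind syntax shown elsewhere in these notes):

efa policy qos profile unbind --name <profile_name> --fabric <fabric_name> --port fabric-internal
efa policy qos profile unbind --name <profile_name> --tenant <tenant_name> --po <port channel ID>   # assumed to mirror the bind syntax
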
Parent Defect ID: XCO-9362 Issue ID: XCO-9362
Product: XCO Reported in Release: XCO 3.4.0
Symptom: The fabric internal ports QoS profile is not applied on the intended ports:
  1. When a new device is added to a CLOS fabric and the fabric is configured.
  2. When a new rack is added to a non-CLOS fabric and the fabric is configured.
Condition: The fabric internal ports QoS profile is already applied on the fabric (CLOS or non-CLOS) when the new device or rack is added and the fabric is configured.
Workaround: Unbind and then rebind the fabric internal ports QoS profile using the following commands:

Unbind Fabric internal QoS profile:

efa policy qos profile unbind --name <profile_name> --fabric <fabric_name> --port fabric-internal

Bind Fabric internal QoS profile:

efa policy qos profile bind --name <profile_name> --fabric <fabric_name> --port fabric-internal

Recovery: Same as the workaround: unbind and then rebind the fabric internal ports QoS profile using the commands above.

Parent Defect ID: XCO-9381 Issue ID: XCO-9381
Product: XCO Reported in Release: XCO 2.7.2
Symptom: On 9740 devices with breakout ports configured, DRC fails for even-numbered ports.
Condition: XCO is upgraded from a previous version to release 3.2.0.
Workaround: Perform a fresh install, followed by reconfiguration of the breakout ports and their respective configuration.
Parent Defect ID: XCO-9420 Issue ID: XCO-9420
Product: XCO Reported in Release: XCO 3.3.1
Symptom: When the standby TPVM is down, 'efa health show' shows the status as 'Red'.
Condition: The standby TPVM is down; the expected 'efa health show' status is 'Orange'.
Workaround: N/A
Recovery: N/A
Parent Defect ID: XCO-9659 Issue ID: XCO-9659
Product: XCO Reported in Release: XCO 3.4.0
Symptom: Duplicate qos-profile entries are listed in the "efa policy qos profile list" command output.
Condition:

• Create the QoS profile:

efa policy qos map create --type dscp-tc-map --name qosMapPort2 --rule "dscp[20],tc[2],dp[2]"

efa policy qos service-policy-map create --name servicePolicyPort --rule "strict-priority[5],dwrr[0;0;100],class[default]"

efa policy qos profile create --name profile2 --trust dscp --dscp-tc qosMapPort2 --service-policy "name[servicePolicyPort],dir[out]"

• Create the tenant:

efa tenant create --name "vpod01" --type private --vlan-range 100 --vrf-count 0 --port 10.20.48.110[0/4]

• Attach the QoS profile to the tenant:

efa policy qos profile bind --name profile2 --tenant vpod01

• List the QoS profiles to confirm the binding exists at the tenant level:

efa policy qos profile list --ip 10.20.48.110 --interface "Ethernet 0/4"

• Add the same port to the tenant again:

efa tenant update --operation=port-add --port 10.20.48.110[0/4] --name vpod01

• Check the QoS profile list for a duplicate entry:

efa policy qos profile list --ip 10.20.48.110 --interface "Ethernet 0/4"

Workaround: Avoid adding a port to a tenant when it already exists on that tenant.
Recovery: Detach the QoS profile from the tenant and re-attach it.
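
A hedged example of the detach/re-attach using the names from the steps above (unbind is assumed to mirror the bind syntax):

efa policy qos profile unbind --name profile2 --tenant vpod01   # assumed to mirror the bind syntax
efa policy qos profile bind --name profile2 --tenant vpod01
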
Parent Defect ID: XCO-9664 Issue ID: XCO-9664
Product: XCO Reported in Release: XCO 3.3.1
Symptom: During XCO upgrade on a fabric with MCT nodes, database restore resulted in bringing down port-channel 64, with a sudden impact on the MCT cluster.
Condition:

1. Create a CLOS or non-CLOS fabric with at least one leaf node pair (that is, an MCT pair).

2. Have only two ICL links between the MCT pair, connect these interfaces in criss-cross fashion (0/55 to 0/56 and 0/56 to 0/55), and configure the fabric.

3. Take a database backup of the above scenario, then change the criss-cross links to straight links (0/55 to 0/55 and 0/56 to 0/56), followed by fabric configure.

4. While upgrading XCO, apply the database backup created with the criss-cross links and run DRC before fabric configure.

5. With the above database restore in one of the old releases, stale entries for the criss-cross links persisted in the database even though the cabling had been corrected to direct links.

6. Restore the database with the stale entries for the next XCO upgrade.

Workaround: In the criss-cross connection restore case, execute efa fabric configure without DRC so that the correct entries are populated and the old entries are removed.
Recovery: Remove the stale entries from the database manually and execute fabric configure to bring PO64 back up.
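
A hedged example of the fabric configure step (the fabric name is a placeholder):

efa fabric configure --name <fabric_name>
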
Parent Defect ID: XCO-9753 Issue ID: XCO-9753
Product: XCO Reported in Release: XCO 3.3.1
Symptom: Ping-target configuration is missing in "efa version" output.
Condition: Trigger "efa version" in multi-node setup when ping-target is already configured.
Workaround:

Refer to the /apps/etc/efa/efa.conf file, which holds the configured ping target details:

EFA_DEPLOYMENT_PING_TARGET_ENABLED=yes

EFA_DEPLOYMENT_HA_HEALTH_CHECK_IPS=<List of IP addresses>
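
For a quick check, standard grep can pull these values from the file (path as above):

grep -E "PING_TARGET_ENABLED|HA_HEALTH_CHECK_IPS" /apps/etc/efa/efa.conf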

Parent Defect ID: XCO-9772 Issue ID: XCO-9772
Product: XCO Reported in Release: XCO 3.4.1
Symptom: The EFA policy route-map-match delete option allows non-existing matches for community-list.

It is supposed to reject the command with a "does not exist" error.

Condition: A non-existing community-list name is used while deleting matches with the EFA policy route-map-match delete option.
Workaround: Use the correct community-list name while deleting matches with the EFA policy route-map-match delete option.
Recovery: N/A
Parent Defect ID: XCO-9794 Issue ID: XCO-9794
Product: XCO Reported in Release: XCO 3.3.1
Symptom: JWT certificate renewal fails after XCO upgrade.
Condition: Kustomization YAML files were not copied to the installation directory after upgrade. These files are needed for certificate generation.
Workaround:

Move the "kustomization.yaml" files to the correct locations as shown below:

SERVER:

/opt/efa/certs/cert

/opt/efa/certs/key

TPVM:

/apps/efa/certs/cert

/apps/efa/certs/key
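
A minimal sketch of the copy on the server, assuming the kustomization.yaml files are still available from the upgrade bundle (source paths are placeholders; use the matching file for each destination, and the /apps/efa/certs paths on TPVM):

sudo cp <path-to-cert-kustomization.yaml> /opt/efa/certs/cert/kustomization.yaml   # source path is a placeholder
sudo cp <path-to-key-kustomization.yaml> /opt/efa/certs/key/kustomization.yaml    # source path is a placeholder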

Parent Defect ID: XCO-10000 Issue ID: XCO-10000
Product: XCO Reported in Release: XCO 3.4.1
Symptom: Binding a QoS profile to a port-channel ID applies to all port-channels when more than one port-channel uses the same port-channel ID.
Condition: There is no option to bind a QoS profile to a specific port-channel name when more than one port-channel uses the same port-channel ID.
Workaround: Use a unique Po ID for each port-channel to ensure that QoS profile bindings to Po IDs do not overlap.
Recovery: N/A