PR Number |
Synopsis |
Category: EVO interface software |
1563684 |
ARP may not resolve and traffic may be dropped on Junos Evolved platforms
Product-Group=evo |
In a scaled IFL scenario, when a customer-facing interface with many IFLs on the same port is flapped, ARP may not resolve and traffic may be dropped. |
Category: EVO platform software |
1534996 |
Interfaces may take longer to come up after loading baseline and rollback configuration
Product-Group=evo |
In a scaled setup with a large number of routes and next-hops, after reloading the baseline configuration, interfaces, and AE interfaces in particular, take longer to come up. Traffic loss is expected after loading the baseline and rolling back. The router comes back into service after 5 minutes on Junos Evolved, whereas it takes 1.5 minutes on Junos. This is a router reconfiguration/upgrade scenario. |
Category: Express BT PFE L3 Features |
1551363 |
Traffic drops may be seen after egress FPC restarts and comes back online
Product-Group=evo |
On PTX platforms, the throughput might fall below the advertised line rate if multiple traffic flows use the same egress interface; hence, traffic loss may be observed. |
Category: EVO Class of Services |
1544531 |
The cosd may not come up after FPC restart
Product-Group=evo |
On Junos Evolved platforms, the cosd process may not come up after an FPC restart. As a result, the classifier, rewrite, and scheduler functionalities may stop working. This issue may be seen during the validation check for the CNP (Congestion Network Profile) and drop profile configuration. This issue has a functional impact. |
Category: Issues related to evo operations - libevo infra, typeinfo .. |
1564156 |
It might take a long time to create IFDs after restarting the FPC
Product-Group=evo |
In a Junos Evolved chassis with multiple FPCs, a restarting FPC might sync the configuration from the other FPCs in the system. If any of those FPCs holds a large-scale configuration, the evo-aftmand application might take a long time to reconcile the huge number of objects and remain stuck in the "online" state on this FPC, and the IFDs might not be created for a long time (more than 1 hour). |
Category: Interface PRs defect & enhancement requests |
1566752 |
PTX10001-36MR: Control IFL may not be present for ports et-0/0/11 and et-0/2/11
Product-Group=evo |
PTX10001-36MR: Control IFL may not be present for ports et-0/0/11 and et-0/2/11 |
Category: Express PFE MPLS Features |
1562503 |
Slow memory leak
Product-Group=evo |
For topologies involving high ingress and transit LSP scale, error messages can be seen in journalctl when tearing down the ingress and transit LSPs. This also leads to a slow memory leak. |
Category: Interface PR for Lazurite LC |
1562471 |
Interface loopback might not work if there is no optics connected to the port on PTX10000
Product-Group=evo |
For retimer ports on PTX10000 platforms, loopback configuration might not work if there is no optics connected to the port. |
Category: OSPF routing protocol |
1563350 |
The rpd might crash on Backup RE after the rpd restart is triggered on Master RE
Product-Group=evo |
On all Junos Evolved platforms, if an rpd restart is triggered on the Master RE (using the command "restart routing immediately"), rpd might crash on the Backup RE. |
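The restart that triggers the crash is the standard routing restart issued from the CLI on the Master RE, as described above; for example:

```
user@router> restart routing immediately
```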
Category: UI Infrastructure - mgd, DAX API, DDL/ODL |
1559786 |
The XML output from the command "request vmhost mode test | display xml rpc" fails when used in NETCONF
Product-Group=evo |
On vmhost platforms, if the XML output from the command "request vmhost mode test | display xml rpc" is picked up and used in NETCONF, the request will fail. Related configuration statements:
set vmhost mode custom test layer-3-infrastructure cpu count MIN
set vmhost mode custom test layer-3-infrastructure memory size MIN
set vmhost mode custom test nfv-back-plane cpu count MIN
set vmhost mode custom test nfv-back-plane memory size MIN
set vmhost mode custom test vnf cpu count MIN |
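The failing workflow is extracting the inner RPC element from "| display xml rpc" output and submitting it over NETCONF. A minimal sketch of that extraction step, using only Python's standard library; the sample XML below is illustrative only, and the element names (request-vmhost-mode, mode) are assumptions rather than the actual output of the command:

```python
# Sketch: pull the inner RPC element out of "| display xml rpc" output.
# The sample string is a hypothetical stand-in for the real CLI output.
import xml.etree.ElementTree as ET

sample = """<rpc-reply xmlns:junos="http://xml.juniper.net/junos/20.4R1/junos">
    <rpc>
        <request-vmhost-mode>
            <mode>test</mode>
        </request-vmhost-mode>
    </rpc>
</rpc-reply>"""

root = ET.fromstring(sample)
rpc = root.find("rpc")       # wrapper element printed by "| display xml rpc"
inner = list(rpc)[0]         # the RPC payload a NETCONF client would send
print(inner.tag)
```

Per this PR, sending the extracted payload in a NETCONF session fails on vmhost platforms even though the CLI prints it as a valid RPC.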
Category: Express ZX PFE L3 Features |
1560901 |
[NGMVPN] [IR]: EVO-Brackla :: 20.4R1-EVO: Packets dropped with NORMAL DISCARD after PFE 0 power off/on is done on the receiver PE with multicast receivers for groups 225.2.1.1 and 225.3.1.1
Product-Group=evo |
Once a PFE restart is done, MVPN traffic coming from the core does not get forwarded. Before the restart, the FDB had entries for forwarding the MVPN multicast (class D) traffic; after the restart, these entries are deleted and not created again. As a result, MVPN packets take the wrong path (the unicast forwarding path) and then hit the RESOLVE route, which is seen as a policer drop/trapcode. In short, MVPN functionality does not work after a PFE restart, as the FDB entries that handle MVPN are not added back. To fix the issue, MVPN FDB entries should be added on a per-PFE basis, ensuring that the class D entries are added back when a PFE is restarted. |
1562452 |
Complete ingress multicast traffic loss may be seen on interfaces that are flapped using the PFE offline/online command
Product-Group=evo |
In a scenario where the upstream interface is an AE interface, if a PFE is taken offline, one of the member links of the AE interface may also be removed, and the ingress NH programming for that PFE instance is removed from the hardware. If that PFE is brought online again, there is no NH rebake, and the ingress NH entries may be missing in hardware, causing multicast traffic drops. |