20.1R2-S1-EVO: Software Release Notification for JUNOS Software Version 20.1R2-S1-EVO
Junos Software Service Release version 20.1R2-S1-EVO is now available.
20.1R2-S1-EVO - List of Fixed Issues
PR Number | Synopsis | Category: Express BT PFE L3 Features |
---|---|---|
1503260 | PTX10008: Aggregated Ethernet (AE) interface flap causes next hops to contain wrong encap information on a router with 800K IP routes, 2K ingress LSPs, around 500 L3VPNs, and link protection enabled on the LSPs. Product-Group=evo |
On the PTX10008 running Release 20.1R2, an AE interface flap can cause unexpected behavior: a few outgoing traffic streams carry wrong encap information, which causes them to be dropped at the next-hop router. The issue is sporadic and does not happen on every flap. It is seen on a router with around 800K IP routes, 2K ingress LSPs, and around 500 L3VPNs, with link protection enabled on the LSPs. |
1513306 | [jvision][OCST_Hardening]Scapa: [Error] AftTelemetry : AftTelemetryHeaderGetComponentId: Unable to get FPC number error message seen Product-Group=evo |
An AftTelemetry error is seen the first time a subscription is made. |
PR Number | Synopsis | Category: Issues related to evo operations - libevo infra, typeinfo .. |
1512065 | Firewalld cored after deleting and adding the filter back in a single commit. Product-Group=evo |
The firewalld process crashes when a filter is deleted and added back in a single commit. This creates a firewalld core file. No other impact observed. |
PR Number | Synopsis | Category: EVO linux defects & enhancement requests |
1482363 | EVO: Telnet login related issue with Template (TACACS & Radius) Product-Group=evo |
When a TACACS or RADIUS user logs in through Telnet, the username displayed in the login prompt, the 'show cli authorization' output, the 'show system users' output, and accounting logs is the template username, not the actual username of the logged-in user. This is a display issue only; there is no functional impact. The problem is specific to Telnet: when the user logs in through SSH, the actual logged-in username is displayed. |
PR Number | Synopsis | Category: Express PFE CoS Features |
1475694 | Traffic is silently discarded (without notification) on PTX10008 after FPC physical OIR operations with aggregated Ethernet bundle. Product-Group=evo |
On a PTX10008 with LC1201 running Junos OS Evolved, traffic might not recover after an FPC reboot caused by an ungraceful FPC removal or restart. There is a service impact when the issue occurs; refer to the workaround to avoid this problem. This is a regression issue. |
1494785 | [cos] [scheduler] scapa:scheduler ingress PFE VOQ drop counters doesn't match egress queue drop counters, diff > 100000 Product-Group=evo |
Scheduler ingress Packet Forwarding Engine VOQ drop counters do not match egress queue drop counters (diff > 100,000). |
1514722 | SCAPA: If an FPC offline is performed while the system is coming up, an evo-aftmand crash might be seen. Product-Group=evo |
If an FPC offline is performed while the system is coming up, an evo-aftmand crash might be seen. |
1515785 | [cos] [scheduler] EVO SCAPA: Continuous evo-aftmand-bt error messages "Jexpr: decremented Stats failed:" are observed during the FPC restart Product-Group=evo |
|
PR Number | Synopsis | Category: Multiprotocol Label Switching |
1502993 | CSPF job might get stalled for new/existing LSP in high scale LSP setup Product-Group=evo |
On all Junos platforms in an MPLS-TE scenario with high-scale LSPs (for example, 20K), the CSPF job might get stalled for new or existing LSPs if configuration changes that impact the rpd process are made while the CSPF job is suspended and pending. The TED (Traffic Engineering Database) CSPF (Constrained Shortest Path First) job enters a state from which it cannot recover until the rpd process is restarted. This defect is observed only in the 20.2R1 and 20.2R1-EVO releases. |
PR Number | Synopsis | Category: PTX10K platform specific fabric PRs |
1506866 | Scapa: Small packet loss observed randomly during SIB offline Product-Group=evo |
On a PTX10008, six SIBs are required to carry line-rate traffic, with no fabric redundancy. Even when the ingress traffic rate is such that five SIBs are sufficient to carry it, a graceful SIB offline might result in a small transient traffic loss until the system reroutes traffic around the fabric paths that went offline. |
1510726 | PTX10008: When a SIB is taken offline quickly after it comes online, not all fabric links recover when the SIB is brought back up later Product-Group=evo |
On a PTX10008, when a SIB is taken offline quickly after it comes online, not all fabric links recover when the SIB is brought back up at a later time. |
1510763 | In a scenario where there are multiple 'degraded_fabric_reachability_to_peer_pfe' to different SIBs, on SIB restart see inconsistency in errors that get cleared. Product-Group=evo |
In a scenario where there are multiple 'degraded_fabric_reachability_to_peer_pfe' errors to different SIBs, a SIB restart shows inconsistency in which errors get cleared. Errors remain even though a few planes recover from fabric degradation. |
1510766 | SIB LINK Error alarms are getting cleared after recovery of few fault planes by restarting SIB even though there are some planes which are in Fault state to different SIB. Product-Group=evo |
SIB link error alarms under 'show system alarms' are cleared after recovery of the few planes corresponding to a restarted SIB, even though there are active link faults to other SIBs. |
1512270 | Both degraded_fabric_reachability_to_peer_pfe and degraded_fabric_condition_on_pfe are seen on a same PFE Product-Group=evo |
In scenarios with fabric degradation enabled, both the degraded_fabric_reachability_to_peer_pfe and degraded_fabric_condition_on_pfe errors are seen on the same PFE. The PFE is disabled and all interfaces on the PFE go down as a result of this condition. |
1512814 | Fabric cards may sometimes get stuck in the 'offlining' state if offlined/onlined in quick succession Product-Group=evo |
Fabric cards may sometimes get stuck in the 'offlining' state if they are offlined/onlined in quick succession without being allowed to reach a steady state. |
1519402 | SCAPA: Longevity: after "request system application node re0 app "fabricHub" restart" , interface drop and Major dp_1_zfo_intr_dp1_fabcell_drop Error Product-Group=evo |
A fabricHub restart or GRES might result in interface flaps and major ASIC errors. |
PR Number | Synopsis | Category: PTX10K specific platform PRs |
1513067 | [PDT] [SCAPA] evo-cda-bt core is seen on FPC0 after rebooting the DUT. Product-Group=evo |
An evo-cda-bt core might occasionally be seen during ASIC initialization while the FPC is in the process of coming online. |
PR Number | Synopsis | Category: Issues related to control plane security |
1467467 | SSH service is unresponsive after setting system services ssh key-exchange dh-group1-sha1. Product-Group=evo |
When the SSH service is configured with certain key-exchange algorithms, the SSH server becomes unresponsive. For example: set system services ssh key-exchange dh-group1-sha1. |
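As an illustration, the trigger and one possible recovery path look like the following (a sketch; it assumes console or other out-of-band access, since SSH itself is unresponsive):

    set system services ssh key-exchange dh-group1-sha1    (configuration that triggers the issue)
    delete system services ssh key-exchange                (remove the offending algorithm)
    commit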
PR Number | Synopsis | Category: PTX10003 Hardware Generic |
---|---|---|
1510587 | [interface] [brackla_ifd] 400G interface link up issue Product-Group=evo |
A 400G interface sometimes takes a long time to come up. |
PR Number | Synopsis | Category: PFE SW evo-pfemand,packet-io on BRCM platforms running EVO |
1513109 | [cos] [scheduler] sapphire:regression:servicability scheduler on LAG with 2 member links doesn't work Product-Group=evo |
|
PR Number | Synopsis | Category: Express BT PFE L3 Features |
1491770 | Offline then online of all SIBs can cause traffic blackhole Product-Group=evo |
When all SIBs are taken offline, use chassis restart to bring them back online. Failure to restart can cause traffic to be silently discarded (without notification). |
PR Number | Synopsis | Category: EVO Class of Services |
1500722 | The cosd process might crash and generate core when the wildcard interface is used Product-Group=evo |
On a system running Junos OS Evolved, the class-of-service configuration does not bind to the intended interfaces when interface wildcard matching (such as "et-0/*/*") is used. The wildcard class-of-service interface configuration, combined with catastrophic interface events (deletion/addition of many IFDs), might also result in a cosd process crash and core. There might be traffic impact because the right traffic-control profile may not be installed on the IFDs. |
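For illustration, a wildcard binding of the kind described looks like the following (a sketch; the traffic-control-profile name tcp-example and its shaping rate are hypothetical). Listing interfaces explicitly instead of using the wildcard is one way to avoid the wildcard-matching path:

    set class-of-service traffic-control-profiles tcp-example shaping-rate 5g
    set class-of-service interfaces et-0/*/* output-traffic-control-profile tcp-example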
PR Number | Synopsis | Category: management ethernet related issues - mgmt-ethd daemon |
1458228 | The aggregated Ethernet LACP Distributed Mode:Tracking PR for display of FPC in Distribution address in run show ppm adjacencies. Product-Group=evo |
A new display is added: the 'show ppm adjacencies' output now includes the FPC slot in the distribution address. |
PR Number | Synopsis | Category: System Management High availability |
1514060 | "show system switchover" output can be inaccurate in the initial 360 seconds after a switchover Product-Group=evo |
The output of "request chassis routing-engine master switch check" and the output of "show system switchover" might be out of sync in the first 360 seconds after a switchover. Both CLI outputs need to be checked to decide whether the system is ready for switchover. |
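For example, before initiating a switchover in that window, check both outputs (per the note above, 'show system switchover' is checked on the backup) and proceed only if both indicate readiness:

    user@router> request chassis routing-engine master switch check
    user@router> show system switchover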
PR Number | Synopsis | Category: EVO L2 Control Plane PRs |
1457825 | The switch-options configuration not available on lean rpd images. Product-Group=evo |
The switch-options configuration is not available on lean rpd images. |
PR Number | Synopsis | Category: eventd, syslog infra issues |
1513447 | Stale processes are lying around holding deleted files Product-Group=evo |
This issue can potentially happen when someone presses Ctrl+C while a command is running; in this case, the command is "show system boot-messages". When this issue happens, the workaround is to find the PIDs of the "journalctl -b" processes and kill them. |
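A minimal shell sketch of that workaround (assumes shell access on the Routing Engine; the pgrep/kill pattern is illustrative, and <pid> is a placeholder for each reported process ID):

    pgrep -af "journalctl -b"    # list leftover journalctl processes and their PIDs
    kill <pid>                   # kill each PID reported above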
PR Number | Synopsis | Category: mgd, ddl, odl infra issues |
1485419 | [firewall] [core] Evo-Scapa-Commit takes longer than 20 seconds to go through Product-Group=evo |
Commit takes longer than 20 seconds to go through. |
1513163 | [firewall] [filter_installation] Scapa : Commit is taking longer with basic AE config Product-Group=evo |
For vScapa devices, commit time when loading the baseline configuration using the load override approach averages 20-25 seconds, and loading the AE configuration using load override averages 25-30 seconds. On a stable system, commit times can be expected to be on the lower side for both the baseline and AE configurations. Commit times on physical Scapa devices are also expected to be on the lower side of these averages. Load override is the most expensive of the load configuration approaches: it discards the existing candidate configuration and loads the new configuration, all system processes reparse the configuration, and all translation and commit scripts are run. See https://www.juniper.net/documentation/en_US/junos/topics/topic-map/junos-config-files-loading.html. The recommendation is to use the "load update" option wherever possible, which is more efficient and optimized. The load update operation compares the candidate configuration with the new configuration data and changes only the parts of the candidate configuration that differ from the new configuration. Better commit times can be seen with load update compared with load override. |
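For example, from configuration mode (the file name is hypothetical):

    [edit]
    user@router# load update /var/tmp/candidate.conf
    user@router# commit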
PR Number | Synopsis | Category: Express PFE CoS Features |
1473844 | EVO:SCAPA [SNP]: Jflow Sampling Traffic is not rate limited in host pipe Product-Group=evo |
J-Flow sampling traffic is not rate-limited in host pipe. |
1478811 | [cos] [scheduler] scapa: scheduler with all queues oversubscribed, max latency differs across queues (21 ms to 29 ms) Product-Group=evo |
CoS scheduler: with all queues oversubscribed, the maximum latency varies among queues (21 ms to 29 ms). |
1485478 | In strict-priority mode, with any form of rate-limiter/shaper configured, it might create some accuracy gaps in the scheduler across queues. Product-Group=evo |
BGP started creating routes pointing to a list next hop with INDIRECT members inside after the Comcast N+1 feature. The resolver treats the list NH as a leaf node and does not do loop detection inside list NH members (day-1 behavior). This causes wedges in the Packet Forwarding Engine because loop detection does not happen. |
1501083 | [cos] [scheduler] [Regression] EVO SCAPA: In undersubs scenario @ 80% of total ingress load, Tail drops (VOQ drops) are consistently seen on a High priority queue (AF4) with "shared" buffer configs (strict-priority-scheduler is NOT configured), due to no load balancing among the child links of AE bundle. Product-Group=evo |
If the number of flows is small, load balancing might not work properly across AE members. If the traffic rate exceeds the line limit, tail drops are seen. |
1501252 | Accuracy of Queue Drop statistics on a multi-FPC system Product-Group=evo |
Queue drop statistics are zero or incorrect when the ingress and egress IFDs are all on different FPCs. |
1503292 | In strict-priority mode, in certain scenarios with considerably more traffic on the low-priority queues, the low-priority queues may undermine higher-priority queues. Product-Group=evo |
In strict-priority mode, in certain scenarios where there is considerably more traffic on the low-priority queues, the low-priority queues may undermine higher-priority queues. |
1506855 | Shapers applied on interface output queues, through either the transmit-rate "exact" or the "rate-limit" knob, may sometimes not achieve the expected output scheduler accuracy. Product-Group=evo |
Shapers applied on interface output queues, through either the transmit-rate "exact" or the "rate-limit" knob, may sometimes not achieve the expected output scheduler accuracy. |
1513451 | SCAPA: Longevity: VIQ wedge detected on local & remote FPCS, after ungraceful FPC Reboot(node reboot/packet-io restart) and unplanned aftmand core Product-Group=evo |
If an FPC reboots ungracefully (including via 'node reboot fpc'), there is a traffic impact or the host path itself becomes non-functional; as a result, the FPC may not be usable. |
1514396 | BT design limits a number of drop profile points on BT-based platforms to at most 2 points Product-Group=evo |
BT design limits the number of drop profile points on BT-based platforms to at most 2 points. The same RLI also puts the following restrictions on configuring 2 points: 1. The point 1 (left point) drop-probability value must be <= 25%. 2. The point 2 (right point) drop-probability value must be > the point 1 drop-probability value. 3. The point 2 fill-level value must be >= 1.2 x the point 1 fill-level value. See RLI 38556 for details and updates on this issue. |
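A hypothetical two-point drop profile that satisfies all three restrictions (the profile name dp-two-point and the values are illustrative): point 1 sits at fill-level 40 with drop-probability 20 (<= 25%), and point 2 sits at fill-level 60 (>= 1.2 x 40 = 48) with drop-probability 80 (> 20):

    set class-of-service drop-profiles dp-two-point fill-level 40 drop-probability 20
    set class-of-service drop-profiles dp-two-point fill-level 60 drop-probability 80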
1514491 | [cos] [core] EVO SCAPA: After FPC restart multiple times, FRU is not online immediately followed by the evo-cda-bt.fpc core on that FPC4 starting Cda::BtAsicInstance::asicInit --> cdaZephyrAsicInit --> zephyr_init --> zephyr_chip_init --> zephyr_chip_setup Product-Group=evo |
|
1515806 | [cos] [scheduler] EVO SCAPA: Continuous evo-cda-bt CDA syslog error messages are observed during the negative triggers (AE configs disable/enable and FPC restart) Product-Group=evo |
|
1517461 | Continuous evo-cda-bt syslog error messages "CDA :CoS: expr_cos_zephyr_ifd_ps_check_l4_pkt_cnt.516: ZHCHIP[1] unable to drain the OQs for ifd" are seen during the AE configs and AE member links disable/enable Product-Group=evo |
When configuring and disabling AE legs, the message "unable to drain the OQs for ifd" can be seen. These messages are informational and do not impact the operation of the system. |
PR Number | Synopsis | Category: Express PFE FW Features |
1495118 | PTX10003: Offline/Online of FPCs is not fully supported Product-Group=evo |
The FPC offline/online feature is not fully supported on the PTX10003 series. An FPC can still be taken offline, but bringing the FPC or its associated ports back online requires a reboot of the system. It is recommended not to use FPC offline/online until this feature is completely supported. In some cases with scaled configurations and heavy churn in the system, offlining an FPC also disabled packet forwarding on other FPCs. To recover, reboot the system. |
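For reference, the offline operation itself works; bringing the FPC and its ports back requires a full reboot (the slot number here is hypothetical):

    request chassis fpc slot 0 offline
    request system reboot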
1497856 | Colored packets are not policed as expected due to policer inaccuracy for the color-blind single-rate three-color policer at lower bandwidth limits Product-Group=evo |
Colored packets are not policed as expected due to policer inaccuracy for the color-blind single-rate three-color policer at lower bandwidth limits. |
1499294 | On a QFX5220 :: show firewall cli shows error error: communication failure with /re0/evo-pfemand/ when unique filter is applied over all physical interfaces. Product-Group=evo |
The show firewall CLI reports "error: communication failure with /re0/evo-pfemand/" when a unique filter is applied over all physical interfaces. |
1503145 | There is a discrepancy of 22 bytes for the same exact packets between the firewall filter in EVO and in Junos Product-Group=evo |
There is a discrepancy of 22 bytes for the exact same packets between the firewall filter in Junos OS Evolved and in Junos OS. From an RPM perspective, probes are sent and received correctly. Issue is in the byte counts reported by the firewall filter. |
PR Number | Synopsis | Category: Express PFE L3 Multicast |
1513474 | Out of order delete messages are seen when doing AE link flap Product-Group=evo |
The following error messages might be seen when doing an AE link flap: PE1.r0.P1-fpc7 evo-aftmand-bt[16680]: [Critical] Em: Possible out of order deletion of AftNode AftNode details - AftExprNhFwd token:507436 group:0 nodeMask:0xffffffffffffffff flabel:18308 oifToken:507451 proto: index:2 vpfe:0 oqGroup:0 May 14 02:25:12 paulo.PE1.r0.P1-fpc7 resiliencyd[12980]: Cmerror Op Clear: btchip: chip_num:4 intr_full_name:hostsys_hostif_hifregs_hostif_local_intr4_grp0_status_wnack3 intr_bit_pos:7 (URI: /fpc/7/evo-cda-bt/0/cm/0/btchip/4/hostsys_hostif_hifregs_hostif_local_intr4_grp0_wnack3) To confirm there is no traffic impact, execute the following CLI and expect empty output: show sandbox tokens zombie yes |
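A quick verification, using the command exactly as given above (empty output indicates no stale tokens and no traffic impact):

    user@router> show sandbox tokens zombie yes
    (no output expected)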
PR Number | Synopsis | Category: Express PFE MPLS Features |
1472908 | Traffic loss of up to 400 ms can be seen in MPLS FRR scenario Product-Group=evo |
Traffic loss of up to 400 ms is seen during the MPLS FRR test with the following scale: 600K IPv4 routes, 40K IPv6 routes, 19K transit LSPs, 1.6K ingress LSPs, and 10K BGP-LU routes. |
1511788 | show mpls lsp statistics values are incorrect for lsp-packets Product-Group=evo |
Basically "show mpls lsp stats" collects LSP stats from PEF on a single synchronous request basis. Due to high load of requests, PFE can be slow to reply with the stats. Due to the slowness, the lsp stats output can be displayed prematurely with slow processing of LSP stats by RPD. This appears incorrect LSP stats. |
PR Number | Synopsis | Category: Health-Monitoring related issues |
1514105 | "show system errors active" command with/without fru filters would give empty output when there have been no errors ever in the system Product-Group=evo |
When there have been no errors ever in the system, the show system errors active command, with or without FRU filters, gives empty output; that is, no counts are shown. This does not affect functionality, since what is shown is correct. When FRUs are offlined, errors that were raised on them get deleted from the DB. If all the errors that were raised get deleted, the same situation as above arises, and the show system errors active command, with or without FRU filters, gives empty output. |
PR Number | Synopsis | Category: Path computation client daemon |
1481462 | [northstar_pccd] [tag_rro_mismatch] EVO-Brackla : After interface flap,RRO mismatch between pcep and mpls(p2mp) Product-Group=evo |
There is a minor data mismatch in RRO between PCE and PCC. |
PR Number | Synopsis | Category: RPD Next-hop issues including indirect, CNH, and MCNH |
1501935 | VPN traffic gets black-holed in a cornered L3VPN scenario Product-Group=evo |
In a certain corner-case L3VPN configuration, where there are two EBGP sessions (one multihop and one single-hop) and one IBGP session between the CE and PE, "equal-external-internal" may not work correctly with the L3VPN composite-next-hop knob enabled. |
PR Number | Synopsis | Category: Resource Reservation Protocol |
1518968 | EVO:Scapa: [updated] After graceful switchover, ingress RSVP sessions go down and take up to 5 minutes to come up. Product-Group=evo |
After a graceful switchover, ingress RSVP sessions on the new master go down and take up to 5 minutes to come up. |
PR Number | Synopsis | Category: PTX10K platform specific fabric PRs |
1486023 | [fabric] [fabrictag] SCAPA:EVO Restarting fabspoked-pfe (when it is online/active) several times leads to 100% traffic loss Product-Group=evo |
Restarting the fabspoked-pfe application for the line card restarts the line card. |
1497212 | [fabric] [PTX10008]: 6 seconds convergence time is seen during fabric card removal Product-Group=evo |
On a PTX10008, six SIBs are required to carry line-rate traffic, with no fabric redundancy. Even when ingress traffic rate is such that five SIBs are sufficient to carry ingress traffic (for example, traffic is less than 1280 Gbps), ungraceful SIB failures result in transient loss of traffic, till system failure handling is triggered. In Junos OS Release 20.1R2, failure handling may result in about 4-6 seconds of traffic loss. We recommend that you take the fabric cards offline by using the request chassis sib offline command before removing the SIBs for maintenance. |
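For example, before pulling a SIB for maintenance (the slot number is hypothetical; exact syntax may vary by platform):

    request chassis sib slot 0 offline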
1511910 | Fabric degradation errors/alarms are not raised when sib hits power fault during run time Product-Group=evo |
With fabric degradation detection enabled, if a fabric card goes down due to a fault at run time, relevant alarms/errors may not be raised and related fault action will not be taken. |
1511918 | fabric_down_condition_on_pfe errors not cleared when FPC is offlined after fabricHub app restart Product-Group=evo |
fabric_down_condition_on_pfe errors are not cleared when the FPC is offlined after a fabricHub app restart. |
1512271 | degraded_fabric_reachability_to_peer_pfe are not regenerated on PFE after degraded_fabric_condition_on_pfe is cleared on the same PFE. Product-Group=evo |
With fabric degradation enabled, once degraded_fabric_condition_on_pfe is cleared on a PFE, Evo applications fail to regenerate degraded_fabric_reachability_to_peer_pfe on the same PFE. |
1512272 | SIB <> FPC Link Errors seen prior to switchover do not get cleared when switchover is followed by SIB restart. Product-Group=evo |
SIB <> FPC link errors seen prior to a switchover do not get cleared when the switchover is followed by a SIB restart. |
1515790 | For fabspoked-pfe, 'Restart Supported' and 'Restart Node' flags are incorrect in the output of 'show system applications app' Product-Group=evo |
For fabspoked-pfe, the 'Restart Supported' and 'Restart Node' flags are incorrect in the output of 'show system applications app fabspoked-pfe detail'. The show command indicates that the app can be restarted without a node reboot, but restarting the app leads to a node reboot. |
PR Number | Synopsis | Category: PTX10K Line Card specific interface PRs |
1514058 | [interface] [scapa_ifd] Scapa: Madison optics: Inphi link staying down and not coming up post fpc restart/picd restart Product-Group=evo |
After a picd app restart or an FPC restart, Inphi colored optics do not link up. |
PR Number | Synopsis | Category: PTX10K RE EVO Issues |
1496895 | Noticed File transmit rate over WAN port is low for EVO platforms Product-Group=evo |
The copying of files to the RCB over WAN ports is slow. This is observed across all platforms running Junos OS Evolved. |
1503158 | [fabric] [generic_evo] Scapa : [[EVO-SCAPA] : PDT: DCDCEdge-VPNTunnelMulticastL3L2: hwdre.re core in evoapp_run on new master RE1 after plug out RE0 ] Product-Group=evo |
Routing Engine removal and insertion can rarely result in an hwdre process restart (crash), with no functional impact. |
1503269 | [generic_evo] - Scapa: RCB: Switchover Status is not reflecting as expected at "show system switchover" in new backup post performing mastership switchover Product-Group=evo |
According to the current implementation, the switchover status displayed by the show system switchover command on the backup considers only the readiness of the configuration database, the object database, and the applications' ready state. It does not check the output of the request chassis routing-engine master switch check command. As a result, in some cases the output of the two commands can differ on switchover readiness for a short duration. This is day-1 behavior in Junos OS Evolved. |
PR Number | Synopsis | Category: Configuration mgmt, ffp, load-action, commit processing |
1513142 | [ui] [generic_evo] Scapa : PDT - SCAPA - commit synchronize succeeded on master with commit complete on both REs, but backup RE has a different config Product-Group=evo |
On a dual-RE box, by default, the commit process synchronizes the configuration across both REs. Rarely, the commit synchronization process malfunctions and cannot synchronize the RE0 configuration to RE1. In that scenario, the following steps synchronize the RE0 configuration to RE1: 1. Log in to the RE1 shell. 2. rm /config/.mgdInitialized 3. systemctl restart mgd-init 4. systemctl restart mgd 5. systemctl restart config-sync |
PR Number | Synopsis | Category: Express ZX PFE L3 Features |
1489139 | Brackla: Leaf to Spine Link protection, after ae disable on peer node, Traffic loss is seen on Static vrf stream entering tunnel on Brackla PE Product-Group=evo |
The local repair time for fast reroute is 50 ms. If the system has a scaled configuration or is heavily loaded, the local repair time may exceed 50 ms; in this case, a 65 ms local repair time is seen. |