
[vMX] High CPU utilization for vFP on Hypervisor


Article ID: KB35116 | KB Last Updated: 01 Nov 2019 | Version: 1.0
Summary:

This article provides details about the high CPU utilization reported on the hypervisor for the vFP VM.

Solution:

In performance mode, typically one lcore (logical core) is assigned per port, and one or more lcores are allocated to the worker threads. Each such lcore executes a function that runs in a tight loop and tries to process packets whether or not any packets are actually available. Processing packets here means flushing the TX queue, receiving packets from the RX queue, and so on. Even when the loop finds no packets to process, it immediately iterates and polls again, so that particular CPU shows 100% utilization. Because these CPUs are dedicated to the worker threads, the CPU utilization reported on the hypervisor is correspondingly high.
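The pattern can be illustrated with a minimal C sketch of a poll-mode worker loop. This is not the actual riot source; the function names (poll_rx_queue, process_packets, flush_tx_queue) are placeholders used only to show the structure.

/* Placeholder stubs standing in for the real RX/TX/forwarding routines,
 * which are internal to the riot application and not shown here. */
static int  poll_rx_queue(void)    { return 0; }   /* 0 = nothing received this pass */
static void process_packets(int n) { (void)n; }
static void flush_tx_queue(void)   { }

/* Performance mode: the worker thread pinned to an lcore polls forever.
 * It never sleeps or yields, so the lcore shows 100% utilization on the
 * hypervisor even when no traffic is flowing. */
static void worker_loop_performance_mode(void)
{
    for (;;) {
        int n = poll_rx_queue();     /* try to receive packets from the RX queue */
        if (n > 0)
            process_packets(n);
        flush_tx_queue();            /* try to flush out the TX queue */
        /* nothing found this iteration: loop straight back and poll again */
    }
}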

However, in lite mode, the same infinite function is executed and tries to process packets (RX, TX), but if no packet is found in the current iteration, the thread goes to sleep for a short time before checking again. This frees up the CPU for that period, which is why this behavior is not observed in lite mode.
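For comparison, a sketch of the same loop in lite mode (reusing the placeholder stubs above), assuming the thread sleeps for a short, implementation-defined interval when an iteration finds no work; the interval shown is only illustrative.

#include <unistd.h>   /* usleep() */

/* Lite mode: when an iteration finds no packets, the thread sleeps
 * briefly and releases the CPU, so the lcore does not sit at 100%. */
static void worker_loop_lite_mode(void)
{
    for (;;) {
        int n = poll_rx_queue();
        if (n > 0) {
            process_packets(n);
            flush_tx_queue();
        } else {
            usleep(100);   /* illustrative sleep; frees the CPU for a short time */
        }
    }
}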

Sample outputs to verify the details:

Below are the two processes that run on the vFP and consume CPU. In this example, the vFP is configured with 10 CPUs, 16 GB of memory, and 2 NICs.

root@localhost:~# ps -aux | grep riot

Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
root      1360  0.0  0.0   9368   916 ?        S    12:04   0:00 sh /usr/share/pfe/start_dpdk_riot.sh 0x0BAA
root      1373  0.0  0.0   9592  1340 ?        S    12:04   0:00 sh start_riot.sh
root      1495  676  0.4 44804808 74444 ?      Sl   12:04  15:47 /home/pfe/riot/build/app/riot -c 0x3ff -n 2 --log-level=5 -w 13:00.0 -w 1b:00.0 -- --rx (0,0,0,3),(1,0,1,4), --tx (0,3),(1,4), --w 5,6,7,8,9 --f 1 --l2_mode 1 --rpio local,3000,3001 --hostif local,3002 --bsz (32,32),(64,64),(512,512)
root      1914  0.0  0.0   4408   508 ttyS0    R+   12:06   0:00 grep riot

root@localhost:~# ps -aux | grep vmxt
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
root      1570 19.6  1.2 2199404 207700 ?      Ssl  12:04   0:28 /usr/share/pfe/vmxt -N -C 4
root      1945  0.0  0.0   4408   504 ttyS0    S+   12:07   0:00 grep vmxt
root@localhost:~#

Next, checking the "top" output for the above riot process shows the lcore threads:

root@localhost:~# top -H -p 1495
top - 12:07:36 up 4 min,  1 user,  load average: 6.98, 3.28, 1.27
Tasks:  12 total,   7 running,   5 sleeping,   0 stopped,   0 zombie
Cpu(s): 66.4%us,  1.6%sy,  0.0%ni, 32.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16431908k total, 13421092k used,  3010816k free,    18628k buffers
Swap:        0k total,        0k used,        0k free,  2340312k cached
 
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
1512 root      20   0 42.7g  72m 2444 R  100  0.5   2:51.78 lcore-slave-3
1513 root      20   0 42.7g  72m 2444 R  100  0.5   2:51.98 lcore-slave-4
1514 root      20   0 42.7g  72m 2444 R  100  0.5   2:51.84 lcore-slave-5
1515 root      20   0 42.7g  72m 2444 R  100  0.5   2:51.87 lcore-slave-6
1516 root      20   0 42.7g  72m 2444 R  100  0.5   2:51.86 lcore-slave-7
1517 root      20   0 42.7g  72m 2444 R  100  0.5   2:51.87 lcore-slave-8
1518 root      20   0 42.7g  72m 2444 R  100  0.5   2:51.86 lcore-slave-9
1510 root      20   0 42.7g  72m 2444 S   12  0.5   0:26.95 lcore-slave-1
1495 root      20   0 42.7g  72m 2444 S    0  0.5   0:13.78 riot
1509 root      20   0 42.7g  72m 2444 S    0  0.5   0:00.00 eal-intr-thread
1511 root      20   0 42.7g  72m 2444 S    0  0.5   0:00.00 lcore-slave-2
1521 root      20   0 42.7g  72m 2444 S    0  0.5   0:00.00 riot
 
root@localhost:~# top -H -p 1570
top - 12:08:02 up 5 min,  1 user,  load average: 7.13, 3.61, 1.44
Tasks:   5 total,   0 running,   5 sleeping,   0 stopped,   0 zombie
Cpu(s): 71.2%us,  1.6%sy,  0.0%ni, 32.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16431908k total, 13421092k used,  3010816k free,    18628k buffers
Swap:        0k total,        0k used,        0k free,  2340312k cached
 
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
1570 root      20   0 2147m 202m 163m S   18  1.3   0:39.12 J-UKERN
1571 root      20   0 2147m 202m 163m S    0  1.3   0:00.01 J-UDOG
1572 root      20   0 2147m 202m 163m S    0  1.3   0:00.02 J-LOG
1573 root     -14   0 2147m 202m 163m S    0  1.3   0:00.23 J-SCHED
1574 root      -4   0 2147m 202m 163m S    0  1.3   0:00.00 J-REMOTE-PIO-EV

root@localhost:~#

As highlighted above, seven lcore threads are running at 100% CPU, and lcore-slave-1 and J-UKERN together use about 30% (12% + 18%). With 10 vCPUs allocated to the vFP, that works out to roughly (7 x 100% + 30%) / 10, or about 73%, which is consistent with the approximately 66-72% overall CPU usage shown in the top output.
