This tutorial provides hands-on experience with the concepts in homework 10.

**************************************
* LINUX Kernel Scheduling Algorithms *
**************************************

This portion of the tutorial covers process scheduling in the Linux kernel.
Understanding how a system handles scheduling is part of good coding practice
and essential to system administration.

For a user process in Unix, scheduling is controlled by process priority,
which is set by a 'nice' value. On Linux, nice values range from -20 (most
favorable scheduling) to 19 (least favorable), giving 40 scheduling priority
values. User processes are assigned a default nice value of 0, which is
scheduling priority 20. A child created by fork(2) inherits its parent's
nice value, and the nice value is preserved across execve(2).

An unprivileged user is not permitted to increase a process's priority by
reducing its nice value. It is, however, possible to decrease priority by
increasing the nice value, which essentially means being nice to all other
users on the system. The superuser can increase the priority of any process.

The degree to which a relative nice value affects the scheduling of processes
varies across Unix systems and, on Linux, across kernel versions. Starting
with kernel 2.6.23, Linux adopted an algorithm that causes relative
differences in nice values to have a much stronger effect: the lowest-priority
nice value (+19) truly provides little CPU to a process whenever there is any
other higher-priority load on the system, and the highest-priority nice value
(-20) delivers most of the CPU to applications that require it (e.g., some
audio applications).

Your job is to execute a program with varying nice values and compare the
impact on performance. Copy this program:

$ cp /home/fac/donna/public_html/cs360/examples/week10/row_reduce.cpp .

This program is very CPU intensive - it takes 30+ seconds with a normal nice
value. Compile the program and execute it with varying nice values, using
time(1) to generate running-time statistics, and record the time for each
trial. Use nice values of 5, 10, 15, and 19. Recall that larger nice values
result in less processor time.

$ g++ -o row_reduce row_reduce.cpp            # note this is a C++ program
$ time ./row_reduce 2>/dev/null
$ time nice -n +5 ./row_reduce 2>/dev/null
$ time nice -n +10 ./row_reduce 2>/dev/null
$ time nice -n +15 ./row_reduce 2>/dev/null
$ time nice -n +19 ./row_reduce 2>/dev/null

Record the values. You should be able to see a trend that backs up Linux's
round-robin scheduling algorithm, which gives some, but not much, precedence
to process priority.

Q: What trend do you see?
Q: From this evidence, can you determine what scheduling algorithm Linux uses?

*******************
* SMP SCHEDULING  *
*******************

In this part of the tutorial you will make some empirical observations about
symmetric multiprocessor (SMP) scheduling in the Linux kernel. You will time
the execution of a multithreaded, CPU-intensive program scheduled across the
CPUs. Copy and compile this program:

$ cp /home/fac/donna/public_html/cs360/examples/week10/row_reduce_thr.c .
$ gcc -o row_reduce_thr row_reduce_thr.c -lm -lpthread

The program row_reduce_thr.c uses threads to perform the same CPU-intensive
task as in part B.
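The course file itself is not reproduced here, but the following minimal
sketch shows the pattern a program like row_reduce_thr.c typically follows:
read a thread count from argv[1], spawn that many CPU-bound worker threads,
and join them. The file name, iteration count, and busy-work loop are
illustrative placeholders, not the actual assignment code.

/*
 * spin_threads.c - hypothetical stand-in for a program like row_reduce_thr.c:
 * spawn N CPU-bound worker threads (N taken from argv[1]) and join them.
 * The busy-work here is arbitrary; the real program performs row reduction.
 *
 *   gcc -o spin_threads spin_threads.c -lm -lpthread
 *   time ./spin_threads 4
 */
#include <math.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define WORK_ITERATIONS 200000000L      /* arbitrary amount of CPU-bound work */

/* Each worker burns CPU independently; no shared state, no locking. */
static void *worker(void *arg)
{
    long id = (long)arg;
    volatile double x = 0.0;

    for (long i = 1; i <= WORK_ITERATIONS; i++)
        x += sqrt((double)i);           /* keep the FPU busy */

    fprintf(stderr, "thread %ld done (x = %f)\n", id, x);
    return NULL;
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s nthreads\n", argv[0]);
        return 1;
    }

    long nthreads = strtol(argv[1], NULL, 10);
    if (nthreads < 1 || nthreads > 64) {
        fprintf(stderr, "nthreads must be between 1 and 64\n");
        return 1;
    }

    pthread_t tid[64];

    /* The kernel is free to schedule each thread on any available CPU. */
    for (long i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (long i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);

    return 0;
}

Because the workers never block or share data, the only thing limiting the
wall-clock time of such a program is how many CPUs the kernel can schedule
the threads onto at once.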
Run row_reduce_thr with 1 to 8 processors utilized, timing each run as
follows:

$ time ./row_reduce_thr 1 2>/dev/null     # this is utilizing 1 processor
$ time ./row_reduce_thr 2 2>/dev/null     # this is utilizing 2 processors
$ time ./row_reduce_thr 3 2>/dev/null     # this is utilizing 3 processors
...
$ time ./row_reduce_thr 8 2>/dev/null     # this is utilizing 8 processors

You can verify that multiple processors are in fact being scheduled for your
threads by using top. Start top in one window:

$ top -H -u {username} -d .5              # start top with a delay of .5 seconds

The '-H' switch shows all threads. Hit 'f' then 'J' to view the processor
number, then hit '1' (one) to view all CPUs. Start row_reduce_thr from
another window. You should be able to see that all eight processors are
being scheduled (the placement is not deterministic).

Record the output from each run. Focus on the user time (user is the time
spent in user mode; sys is the time spent in privileged mode running kernel
code). Since most of the time is in user mode, this statistic is the more
reliable indicator. Compare the outcomes. You should be able to see a trend.

Some things to consider:

Q: What is the trend?
Q: Why do you think this unusual behavior is occurring?
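As a supplement (not part of the assignment), you can also cross-check top's
per-CPU display programmatically: glibc's sched_getcpu(3) reports the CPU a
thread is currently running on. The sketch below spawns a few CPU-bound
threads that periodically print their current CPU; the file name, loop
counts, and output format are illustrative assumptions.

/*
 * which_cpu.c - supplementary sketch: each thread does some CPU-bound work
 * and periodically reports which CPU it is running on, as a programmatic
 * cross-check of what top shows.
 *
 *   gcc -o which_cpu which_cpu.c -lpthread
 *   ./which_cpu 8
 */
#define _GNU_SOURCE              /* for sched_getcpu() */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void *report(void *arg)
{
    long id = (long)arg;
    volatile double x = 0.0;

    /* Do CPU-bound work and report the current CPU after each pass;
     * the scheduler may migrate the thread between passes. */
    for (long pass = 1; pass <= 5; pass++) {
        for (long j = 0; j < 100000000L; j++)
            x += (double)j;
        printf("thread %ld pass %ld on CPU %d\n", id, pass, sched_getcpu());
    }
    return NULL;
}

int main(int argc, char *argv[])
{
    long nthreads = (argc > 1) ? strtol(argv[1], NULL, 10) : 4;
    if (nthreads < 1 || nthreads > 64)
        nthreads = 4;

    pthread_t tid[64];

    for (long i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, report, (void *)i);
    for (long i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);

    return 0;
}

If the threads stay pinned to one CPU or spread across all of them, that is
the same placement behavior you observed in top's per-CPU view.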