The scheduler uses ts_dptbl(4), the time-sharing scheduler (or dispatcher) parameter table, to manage time-sharing LWPs. A default version of ts_dptbl is delivered with the system, and you can change it to suit local needs; before you do, save a backup copy of the default version. ts_dptbl is specified in the space.c file in the /etc/conf/pack.d/ts directory and is automatically built into the kernel as part of system configuration.
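Each row of the parameter table corresponds to one entry of a C structure array in that file. The following sketch shows the entry layout as documented in ts_dptbl(4); it is an approximation, so check your system's sys/ts.h for the exact declaration. The fields correspond to the columns of the example table shown below.

```c
/* Sketch of one ts_dptbl entry, after the tsdpent structure in
 * ts_dptbl(4). Check sys/ts.h for the authoritative declaration. */
typedef short pri_t;            /* priority type, as in sys/types.h */

typedef struct tsdpent {
    pri_t ts_globpri;   /* glbpri: global dispatch priority             */
    int   ts_quantum;   /* qntm:   time quantum, in clock ticks         */
    pri_t ts_tqexp;     /* tqexp:  new priority when the quantum expires */
    pri_t ts_slpret;    /* slprt:  new priority on return from sleep    */
    short ts_maxwait;   /* mxwt:   seconds runnable before recompute    */
    short ts_lwait;     /* lwt:    new priority after waiting mxwt secs */
} tsdpent_t;
```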
You can change the size and values of ts_dptbl depending on your local needs. The default values have a long history of good performance over a wide range of environments. Changing the values is not likely to help much, and inappropriate values can have a dramatically negative effect on system performance.
If you do decide to change ts_dptbl, we recommend that you include at least 40 time-sharing global priorities. A range this large gives the scheduler enough leeway to distinguish LWPs based on their CPU use, which it must do to give good response to interactive processes. The default configuration has 60 time-sharing priorities. This example shows part of a ts_dptbl:
| glbpri | qntm (clock ticks) | tqexp | slprt | mxwt (seconds) | lwt |
|---|---|---|---|---|---|
| 0 | 100 | 0 | 1 | 5 | 1 |
| 1 | 90 | 0 | 2 | 5 | 2 |
| 2 | 80 | 1 | 3 | 5 | 3 |
| 3 | 70 | 1 | 4 | 5 | 4 |
| 4 | 60 | 2 | 5 | 5 | 5 |
| 5 | 50 | 2 | 6 | 5 | 6 |
| 6 | 40 | 3 | 7 | 5 | 7 |
| 7 | 30 | 3 | 8 | 5 | 8 |
| 8 | 20 | 4 | 9 | 5 | 9 |
| 9 | 10 | 4 | 9 | 5 | 9 |
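In space.c, these rows would appear as an array initializer along the following lines. This is an illustrative sketch only: the array name is hypothetical, and the exact layout of your system's file may differ.

```c
/* Illustrative sketch: the example table as a space.c-style initializer.
 * Field order follows the tsdpent_t sketch above: glbpri, qntm, tqexp,
 * slprt, mxwt, lwt. The array name is hypothetical.
 */
tsdpent_t ts_dptbl[] = {
    /* glbpri  qntm  tqexp  slprt  mxwt  lwt */
    {  0,      100,  0,     1,     5,    1 },
    {  1,       90,  0,     2,     5,    2 },
    {  2,       80,  1,     3,     5,    3 },
    {  3,       70,  1,     4,     5,    4 },
    {  4,       60,  2,     5,     5,    5 },
    {  5,       50,  2,     6,     5,    6 },
    {  6,       40,  3,     7,     5,    7 },
    {  7,       30,  3,     8,     5,    8 },
    {  8,       20,  4,     9,     5,    9 },
    {  9,       10,  4,     9,     5,    9 },
};
```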
In the table above, the global priorities run from a high of 9 to a low of 0.
In the table, time slices run from 10 clock ticks for the highest-priority LWPs to 100 clock ticks for the lowest-priority LWPs; with a typical 100 Hz clock, for example, that is 100 milliseconds to 1 second.
It is usually reasonable to lower the priority of a time-sharing LWP whose time slice expires, because the LWP is too CPU-bound for its current priority. A long-running, CPU-intensive LWP is an extreme example: its priority should usually be lowered in favor of LWPs that sleep after a little CPU use and are therefore more likely to be interactive.
In the table above, the tqexp column cuts priorities roughly in half when a time slice expires: the lowest priority (0) stays at 0, priority 1 drops to 0, priorities 2 and 3 drop to 1, and so on.
Generally, it is a good idea to penalize processes that expire their time slices substantially more than they are rewarded for voluntarily relinquishing the CPU. This helps prevent the CPU from being monopolized at high priority by processes that exhibit a burst of interactive behavior, such as reading a large number of disk blocks, and then change their behavior and become CPU bound.
In the table above, the slprt column raises an LWP's priority by 1 when it returns from sleep, except that the highest time-sharing priority (9) stays the same.
In the table above, priorities are recalculated after an LWP has waited 5 seconds (the mxwt column): an LWP that has been runnable for 5 seconds without running has its priority raised by 1 (the lwt column), except that the highest time-sharing priority (9) stays the same. This keeps runnable low-priority LWPs from being starved indefinitely.
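To make these transitions concrete, the following user-space sketch replays the rules described above against the example table: an expired time slice moves an LWP to its tqexp priority, a sleep return moves it to slprt, and waiting mxwt seconds moves it to lwt. This only illustrates the table's semantics; it is not kernel code, and the struct and names are invented for the example.

```c
#include <stdio.h>

/* Minimal stand-in for one parameter-table row; the kernel's own
 * types are in sys/ts.h. Indexed by global priority (glbpri).     */
struct ent { int qntm, tqexp, slprt, mxwt, lwt; };

static const struct ent tbl[10] = {
    /* qntm  tqexp  slprt  mxwt  lwt       glbpri */
    { 100,   0,     1,     5,    1 },   /* 0 */
    {  90,   0,     2,     5,    2 },   /* 1 */
    {  80,   1,     3,     5,    3 },   /* 2 */
    {  70,   1,     4,     5,    4 },   /* 3 */
    {  60,   2,     5,     5,    5 },   /* 4 */
    {  50,   2,     6,     5,    6 },   /* 5 */
    {  40,   3,     7,     5,    7 },   /* 6 */
    {  30,   3,     8,     5,    8 },   /* 7 */
    {  20,   4,     9,     5,    9 },   /* 8 */
    {  10,   4,     9,     5,    9 },   /* 9 */
};

int main(void)
{
    int pri = 9;    /* start at the highest time-sharing priority */

    /* A CPU-bound LWP: each expired time slice moves it to tqexp,
     * roughly halving its priority and lengthening its quantum.   */
    for (int slice = 1; slice <= 4; slice++) {
        pri = tbl[pri].tqexp;
        printf("after slice %d expires: priority %d, next quantum %d ticks\n",
               slice, pri, tbl[pri].qntm);
    }

    /* The LWP sleeps (say, on I/O) and wakes: slprt raises it by 1. */
    pri = tbl[pri].slprt;
    printf("after sleep return: priority %d\n", pri);

    /* It then stays runnable for mxwt (5) seconds without running,
     * so its priority is recomputed to lwt, again rising by 1.      */
    int waited = tbl[pri].mxwt;
    pri = tbl[pri].lwt;
    printf("after waiting %d seconds: priority %d\n", waited, pri);
    return 0;
}
```

Starting from priority 9, the CPU-bound phase steps the LWP down through priorities 4, 2, 1, and 0, with its quantum growing from 10 ticks toward 100; the sleep return then lifts it back to 1, and the 5-second wait lifts it to 2, illustrating the penalty/reward asymmetry described above.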