Improving Goodput by Coscheduling CPU and Network Resources

The industry often measures the success of a network connection by its throughput. Throughput, however, counts every bit that crosses the wire; goodput counts only the useful application-level data delivered, excluding protocol overhead and retransmissions. For that reason goodput may be the more important metric.

This article discusses co-scheduling techniques for CPU and network resources that improve goodput in Condor pools, and describes how these techniques are implemented in Pollux, a real-world HTCondor cluster scheduler.

Co-scheduling

Coscheduling is the practice of scheduling related processes to run at the same time on a concurrent system. Because all the processes in a group are started together, no member of the group blocks while waiting to communicate with a peer that has not yet been scheduled. In its strictest form, where the whole group runs simultaneously or not at all, coscheduling is known as gang scheduling; it is one of the most widely used optimization techniques in parallel computing.
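
The all-or-nothing idea behind gang scheduling can be sketched in a few lines. The code below is an illustrative toy dispatcher, not HTCondor's implementation: a job's tasks are started only when enough free slots exist to run the entire gang at once; otherwise the whole job is deferred rather than started partially.

```python
# Toy gang-scheduling dispatcher: a job starts only if ALL of its tasks
# can run simultaneously; otherwise the whole gang waits.

def gang_schedule(jobs, free_slots):
    """jobs: list of (name, num_tasks); returns (started, deferred)."""
    started, deferred = [], []
    for name, num_tasks in jobs:
        if num_tasks <= free_slots:
            free_slots -= num_tasks   # start every task of the gang together
            started.append(name)
        else:
            deferred.append(name)     # defer the whole gang, never run it partially
    return started, deferred

started, deferred = gang_schedule(
    [("mpi_a", 4), ("mpi_b", 3), ("mpi_c", 2)], free_slots=6)
print(started)   # ['mpi_a', 'mpi_c']
print(deferred)  # ['mpi_b']
```

Note that "mpi_b" is deferred even though three of the six slots could have been given to it piecemeal: running part of a gang would leave those tasks blocked on their unscheduled peers, which is exactly what coscheduling avoids.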

The main motivation for this work is to improve goodput through coscheduling. The authors report that this approach improved the goodput of their Condor pool by up to 40%.

In addition to enabling coscheduling, the patch set introduces a new kernel command-line option called “cosched_max_level.” This parameter specifies the highest level of the scheduling domain hierarchy at which coscheduling should occur. The default value of this option is zero, which disables coscheduling altogether.
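
As a hypothetical illustration of how such a boot parameter would be enabled on a GRUB-based distribution (the option name comes from the patch set above; the surrounding steps vary by distribution):

```shell
# In /etc/default/grub, append the option to the kernel command line,
# e.g. to allow coscheduling up to level 1 of the scheduling-domain
# hierarchy (level 0, the default, disables it):
GRUB_CMDLINE_LINUX="quiet cosched_max_level=1"

# Then regenerate the boot configuration and reboot:
sudo update-grub && sudo reboot
```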

Coordination

A major challenge in user-centric distributed massive multiple-input multiple-output (D-mMIMO) networks is co-scheduling CPU and network resources. A computationally efficient coordination algorithm is needed that improves spectral efficiency and energy efficiency without degrading overall network performance. The paper proposes a two-stage heuristic to reduce inter-CPU coordination. The first stage dynamically selects a primary access point (AP) for each user. The second stage applies a fine-tuning algorithm that drops the least beneficial inter-CPU coordination links. Simulation results show that the heuristic achieves a good compromise between spectral efficiency and energy efficiency.
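
The two-stage idea can be sketched as follows. This is a loose illustration under assumed criteria (strongest channel gain for the primary AP, weakest cross-CPU links dropped first); the function name, inputs, and thresholds are inventions for this sketch, not the authors' exact algorithm.

```python
# Hedged sketch of a two-stage serving-cluster heuristic for D-mMIMO.
# Stage 1: pick the strongest AP as the user's primary (fixing the primary CPU).
# Stage 2: prune APs hosted on other CPUs, keeping only the strongest few,
#          to cap costly inter-CPU coordination.

def select_serving_cluster(gains, cpu_of_ap, max_cross_cpu_aps):
    """gains: {ap_id: channel gain} for one user; cpu_of_ap: {ap_id: cpu_id}."""
    primary = max(gains, key=gains.get)          # stage 1: strongest AP
    primary_cpu = cpu_of_ap[primary]
    same_cpu = [ap for ap in gains if cpu_of_ap[ap] == primary_cpu]
    cross_cpu = sorted((ap for ap in gains if cpu_of_ap[ap] != primary_cpu),
                       key=gains.get, reverse=True)
    # stage 2: drop the weakest cross-CPU links beyond the allowed budget
    return same_cpu + cross_cpu[:max_cross_cpu_aps]

cluster = select_serving_cluster(
    gains={"ap1": 0.9, "ap2": 0.4, "ap3": 0.7, "ap4": 0.2},
    cpu_of_ap={"ap1": "cpuA", "ap2": "cpuA", "ap3": "cpuB", "ap4": "cpuC"},
    max_cross_cpu_aps=1,
)
print(cluster)  # ['ap1', 'ap2', 'ap3'] -- the weak cross-CPU link ap4 is dropped
```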

A common assumption is that raising a processor's internal clock speed raises its goodput, but this is not always the case. A higher clock speed requires more power, so the trade-off between goodput and power consumption must be balanced. A sophisticated processor may adjust its clock speed automatically, via dynamic voltage and frequency scaling (DVFS), to balance power consumption against goodput.
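
A back-of-the-envelope calculation shows why the trade-off exists: dynamic CPU power scales roughly with C·V²·f, and raising the frequency usually requires raising the voltage too, so power grows faster than any goodput gain. The operating points below are made-up illustrative numbers, not measurements.

```python
# Dynamic power roughly follows C * V^2 * f. Compare goodput-per-watt at two
# hypothetical DVFS operating points, optimistically assuming goodput scales
# linearly with frequency.

def dynamic_power(capacitance, voltage, freq_ghz):
    return capacitance * voltage ** 2 * freq_ghz

low  = {"f": 2.0, "v": 0.9}   # 2.0 GHz at 0.9 V (hypothetical)
high = {"f": 3.0, "v": 1.2}   # 3.0 GHz at 1.2 V (hypothetical)

for point in (low, high):
    power = dynamic_power(1.0, point["v"], point["f"])
    print(f'{point["f"]} GHz: power={power:.2f}, goodput/watt={point["f"]/power:.2f}')
```

With these numbers the 3.0 GHz point burns roughly 2.7× the power of the 2.0 GHz point for only 1.5× the (best-case) goodput, so goodput-per-watt drops at the higher clock.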


Scheduling algorithms

In computer multitasking, scheduling algorithms determine which process gets the CPU next. They balance goals such as maximizing CPU utilization and throughput while ensuring fairness (equal time for processes of equal priority). In practice these goals often conflict (for example, throughput versus response time), so a good scheduling algorithm, including one that co-schedules CPU and network resources, must strike an acceptable balance rather than maximize any single goal at the expense of the rest.

A common scheduling method is Shortest Job First (SJF). Each process has a burst time, the amount of CPU time it needs to complete its next stretch of execution (real systems must estimate this from past behavior). When the CPU becomes available, the scheduler selects the process in the ready queue with the shortest burst time. SJF can be non-preemptive or preemptive; the preemptive variant is known as Shortest Remaining Time First.
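
A minimal non-preemptive SJF scheduler can be written directly from that description. Burst times here are given up front for simplicity, which real systems cannot assume:

```python
# Non-preemptive Shortest Job First: whenever the CPU frees up, run the
# ready process with the smallest burst time to completion.

def sjf(processes):
    """processes: list of (name, arrival_time, burst_time).
    Returns a list of (name, start, finish) in execution order."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival time
    clock, schedule = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                   # CPU idle: jump to the next arrival
            clock = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])          # shortest burst wins
        pending.remove(job)
        name, _, burst = job
        schedule.append((name, clock, clock + burst))
        clock += burst                  # non-preemptive: run to completion
    return schedule

print(sjf([("A", 0, 7), ("B", 2, 4), ("C", 4, 1)]))
# [('A', 0, 7), ('C', 7, 8), ('B', 8, 12)]
```

Note how C, despite arriving last, jumps ahead of B once A finishes, because its burst is shorter; this is also the mechanism by which long jobs like B can be starved when short jobs keep arriving.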

SJF has the advantage of being easy to implement, but it may perform poorly if the scheduler is not preemptive, and it can starve long processes if shorter ones continuously enter the queue. To avoid this, the system can use a multilevel queue that schedules processes at different priority levels.
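
A multilevel queue can be sketched as a fixed set of FIFO queues, one per priority level, with the dispatcher always serving the highest-priority non-empty queue. This is a bare-bones illustration; production schedulers typically add aging or feedback (moving long-waiting jobs to higher levels) to fully rule out starvation.

```python
from collections import deque

class MultilevelQueue:
    """Fixed priority levels, each a FIFO; level 0 is the highest priority."""

    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, process, level):
        self.queues[level].append(process)

    def dispatch(self):
        for q in self.queues:           # scan from highest priority down
            if q:
                return q.popleft()      # FIFO within a level
        return None                     # nothing ready

mlq = MultilevelQueue()
mlq.enqueue("interactive_shell", 0)
mlq.enqueue("batch_report", 2)
mlq.enqueue("editor", 0)
print(mlq.dispatch())  # interactive_shell
print(mlq.dispatch())  # editor
print(mlq.dispatch())  # batch_report
```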
