Your boss thinks you can speed up an application you are writing by running it on multiple processors in parallel.
Your application performs k independent tasks that each take 1 millisecond to run on a single processor.
Distributing a task across several processors does not make it run faster, but running different tasks on
different processors may indeed make your application faster.
Unfortunately, when your application runs on more than one processor, communication overheads between processors become a significant
factor. In particular, every pair of processors spends overhead milliseconds communicating with each other before work on the tasks can begin. Worse, because the processors share a single bus for communications, different pairs of processors cannot communicate in parallel. For example, if overhead is 2 milliseconds and you run your application on 3 processors, there will
be a 6 millisecond delay before the actual computations begin: 2 milliseconds for processors 1 and 2 to communicate, 2 milliseconds for processors 1 and 3 to communicate, and 2 milliseconds for processors 2 and 3 to communicate. Note that, once the initial communications
phase has completed, no further communications are required, even if each processor is executing multiple tasks.
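In other words, the startup delay for p processors is overhead * p * (p - 1) / 2 milliseconds, one exchange per pair, serialized on the shared bus. A minimal sketch of that calculation (the function name is illustrative, not part of the problem statement):

    # Startup delay before any task runs: one 'overhead'-long exchange per pair
    # of processors, and the pairs cannot communicate in parallel.
    def startup_delay(p, overhead):
        return overhead * p * (p - 1) // 2

    # With overhead = 2 and p = 3: 2 * 3 = 6 milliseconds, matching the example above.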
Your task is to determine the number of processors that will run the k tasks in the minimum amount of time, given a cost of overhead
milliseconds of communication per pair of processors.
If several configurations of processors will achieve the minimum amount of time, prefer the configuration with the smallest number of
processors.
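One straightforward way to search for the answer (a sketch under the assumptions above, with illustrative names, not a prescribed method) is to try every processor count from 1 to k, since using more than k processors only adds overhead, and keep the smallest count that attains the minimum total time:

    def best_processor_count(k, overhead):
        # One processor: no communication overhead, k milliseconds of work.
        best_p, best_time = 1, k
        for p in range(2, k + 1):
            # Total time = pairwise startup delay + time for the busiest processor,
            # which executes ceil(k / p) one-millisecond tasks.
            t = overhead * p * (p - 1) // 2 + (k + p - 1) // p
            # Strict '<' keeps the smallest p when several counts tie.
            if t < best_time:
                best_p, best_time = p, t
        return best_p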