Concurrency can be simulated on a single processor using the technique of time-slicing, giving each process a chance to run for
some small, fixed interval before switching to the next process. For example, two processes might take turns executing for 10 milliseconds each until both have finished. However, switching between processes is expensive, so it is sometimes desirable to allow a
process to execute more than one time slice in a row. On the other hand, to preserve the illusion of parallel execution, it is desirable
that no process have to wait too long before getting a chance to run. One way to balance these conflicting desires is to set a limit K on the number of time slices a process might wait between turns. More precisely, no process that still needs more time slices will ever wait more than K time slices before getting a chance to run.
Consider two processes, A and B, that both need to run for N time slices. Given N and K, your task is to determine how many ways there are to schedule A and B such that A gets the first time slice and B gets the last time slice. For example,
if N=3 and K=3, there are six possible schedules: AAABBB, AABABB, AABBAB, ABAABB, ABABAB, and ABBAAB.
If K is reduced to 2, then the AAABBB schedule is eliminated, leaving five possible schedules.
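For small N, the counts above can be checked by brute force: enumerate every arrangement of N A-slices and N B-slices, keep only those that start with A and end with B, and verify that every maximal run except the last is at most K slices long (during any earlier run, the other process is still waiting). The sketch below is one possible illustration, not part of the required solution; the function name is made up here.

```python
from itertools import combinations

def count_schedules_brute_force(n, k):
    """Count valid schedules by enumerating all placements of the
    n A-slices among 2n slots (feasible only for small n)."""
    total = 0
    for a_positions in combinations(range(2 * n), n):
        schedule = ['B'] * (2 * n)
        for p in a_positions:
            schedule[p] = 'A'
        if schedule[0] != 'A' or schedule[-1] != 'B':
            continue
        # Split the schedule into maximal runs of the same process.
        runs = []
        run_len = 1
        for prev, cur in zip(schedule, schedule[1:]):
            if cur == prev:
                run_len += 1
            else:
                runs.append(run_len)
                run_len = 1
        runs.append(run_len)
        # Every run except the final one must be at most k slices,
        # because the other process still needs time during it.
        if all(r <= k for r in runs[:-1]):
            total += 1
    return total

print(count_schedules_brute_force(3, 3))  # 6
print(count_schedules_brute_force(3, 2))  # 5
```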
Your task is to write a method that takes two integers, N (the number of time slices needed by each process) and K (the maximum number of time slices an unfinished process is willing to wait),
and returns the number of possible schedules as a long (see examples).
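One way to count the schedules without enumerating them is to observe that a valid schedule alternates m runs of A with m runs of B, for some m; every run is capped at K except B's final run, during which A has already finished. The answer is then a sum over m of products of composition counts. The following is a hedged sketch of that approach, assuming this run-based decomposition; the function names are illustrative, not prescribed by the problem.

```python
from functools import lru_cache

def count_schedules(n, k):
    """Count schedules of two n-slice processes where A runs first,
    B runs last, and no unfinished process waits more than k slices."""

    @lru_cache(maxsize=None)
    def compositions(total, parts, cap):
        # Ordered ways to write `total` as a sum of `parts` positive
        # integers, each at most `cap`.
        if parts == 0:
            return 1 if total == 0 else 0
        return sum(compositions(total - first, parts - 1, cap)
                   for first in range(1, min(cap, total) + 1))

    result = 0
    for m in range(1, n + 1):  # m = number of runs per process
        a_ways = compositions(n, m, k)  # every A-run is capped at k
        # B's final run is uncapped (A has finished by then), so
        # split it off and cap only the first m-1 runs.
        b_ways = sum(compositions(n - last, m - 1, k)
                     for last in range(1, n + 1))
        result += a_ways * b_ways
    return result

print(count_schedules(3, 3))  # 6
print(count_schedules(3, 2))  # 5
```

Python integers are unbounded, so the result fits naturally; in a language such as Java the same recurrence would accumulate into a long, matching the required return type.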