The Developer’s Cry

Yet another blog by a hobbyist programmer

Work, stress, and management

Now that we have entered the multi-core age, we should all be writing multi-threaded code. Even though it’s nothing new, I’d still like to devote another blog post to it. I came up with an extremely generic model that applies in most cases. It is based on the producer-consumer model, but with a twist: the standard producer-consumer is often a bit too simplistic in practice.

The Worker

Suppose we have a main thread that has a bunch of work to do. The work consists of a number of tasks that are all alike, for instance scaling a batch of images. It doesn’t really matter what kind of work it is, as long as you have a number of units that must be processed.
A single work unit is abstracted into a work object. Each work object is processed by a process function, which is executed by a worker thread that handles the work units one by one. The work is handed to the worker by the main thread; the main thread acts as a manager.
This is a standard producer-consumer paradigm where the producer injects work, while the consumer takes work.

The Stressful Worker

Now imagine that processing a work unit takes some time, and the main thread doesn’t have much else to do. The main thread will then be quick to hand out more work, piling it up while the worker is still processing units one by one. This overburdens the worker.
A solution to this problem is to have the manager wait until the worker has finished. You can optimize this by handing out a number of work units before waiting for the workload to drop below a certain threshold. Note that the workload is really just the length of the work queue (when all work units are alike).
Another solution is to have the manager co-operate and do some of the work himself. This changes the model (and the code!) a lot, since in this case it’s better to have the worker pull in work by itself rather than having it handed down. Yet another solution is to have the manager manage multiple worker threads, one for each available core in the system. Depending on the type of work, it may or may not be easy to make use of multiple threads.
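To make that last variant concrete, here’s a small Python sketch of my own: one worker per core, all pulling from a shared queue. The process function and the numeric work units are placeholders; a None sentinel plays the role of a "time to go home" flag.

```python
import os
import queue
import threading

results = []

def process(work):
    # Placeholder for real work, e.g. scaling one image.
    results.append(work * 2)

def worker(q):
    # Each worker pulls units until it sees the None sentinel:
    # the "time to go home" signal.
    while True:
        work = q.get()
        if work is None:
            break
        process(work)

def run(pile_of_work):
    q = queue.Queue()
    n = os.cpu_count() or 1        # one worker per available core
    threads = [threading.Thread(target=worker, args=(q,)) for _ in range(n)]
    for t in threads:
        t.start()
    for work in pile_of_work:      # the manager hands out the work
        q.put(work)
    for _ in threads:              # one sentinel per worker
        q.put(None)
    for t in threads:
        t.join()

run(range(10))
```

Note that this only helps if the work units can actually be processed independently.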

Some pseudo-code

Here’s some pseudo-code to illustrate what I’ve been talking about.

The naive implementation, which overburdens the worker:

# the manager (producer)

for work in pile_of_work {
    worker.give_work(work)
}
worker.time_to_go_home = 1

# the worker (consumer)

forever {
    work = get_work()
    if not work {
        if time_to_go_home {
            exit
        }
        continue
    }
    process(work)
}
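In real Python the naive version could look like this (a sketch of my own: a None sentinel stands in for the time_to_go_home flag, and queue.Queue.get blocks instead of busy-waiting):

```python
import queue
import threading

q = queue.Queue()        # unbounded: nothing stops the pile from growing
done = []

def worker():
    while True:
        work = q.get()   # blocks instead of busy-waiting
        if work is None: # the "time to go home" sentinel
            break
        done.append(work)

t = threading.Thread(target=worker)
t.start()
for work in range(100):
    q.put(work)          # the manager never waits: the queue just grows
q.put(None)
t.join()
```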

The stressful worker, with a kind manager:

# the manager (producer)

for work in pile_of_work {
    while worker.workload >= max_load {
        sleep a bit or do something else
    }
    worker.give_work(work)
}
worker.time_to_go_home = 1

# the worker (consumer)

forever {
    work = get_work()
    if not work {
        if time_to_go_home {
            exit
        }
        continue
    }
    workload += 1
    process(work)
    workload -= 1
}
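Translated to runnable Python (again a sketch of my own: qsize() approximates the workload counter, and a None sentinel replaces time_to_go_home):

```python
import queue
import threading
import time

MAX_LOAD = 4                          # threshold before the manager backs off
q = queue.Queue()
results = []

def worker():
    while True:
        work = q.get()
        if work is None:              # time to go home
            break
        results.append(work)          # stand-in for process(work)

def manager(pile_of_work):
    for work in pile_of_work:
        while q.qsize() >= MAX_LOAD:  # workload too high: back off
            time.sleep(0.001)         # "sleep a bit or do something else"
        q.put(work)
    q.put(None)

t = threading.Thread(target=worker)
t.start()
manager(range(50))
t.join()
```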

Clever people will note that sleeping is ugly, and that the manager should instead block until the worker notifies it that it is ready for more. This is best implemented using condition variables:

# the manager (producer)

for work in pile_of_work {
    while worker.workload >= max_load {
        manager_condition.wait()
    }
    worker.give_work(work)
}
worker.time_to_go_home = 1

# the worker (consumer)

forever {
    work = get_work()
    if not work {
        if time_to_go_home {
            exit
        }
        continue
    }
    workload += 1
    process(work)
    workload -= 1

    if workload < max_load {
        # signal I'm ready for more
        manager_condition.signal()
    }
}
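In real code the condition variable must be paired with a mutex, or a signal sent between the check and the wait is simply lost. Here is a Python sketch of my own using threading.Condition, which bundles the lock and the condition (processing is reduced to appending to a list):

```python
import collections
import threading

MAX_LOAD = 4
cond = threading.Condition()
workqueue = collections.deque()
done = False
results = []

def give_work(work):
    # The manager blocks while the worker is overburdened.
    with cond:
        while len(workqueue) >= MAX_LOAD:
            cond.wait()
        workqueue.append(work)
        cond.notify_all()

def time_to_go_home():
    global done
    with cond:
        done = True
        cond.notify_all()

def worker():
    while True:
        with cond:
            while not workqueue and not done:
                cond.wait()
            if not workqueue:         # done, and nothing left to do
                return
            work = workqueue.popleft()
            cond.notify_all()         # a slot freed up: wake the manager
        results.append(work)          # "process" outside the lock

t = threading.Thread(target=worker)
t.start()
for w in range(50):
    give_work(w)
time_to_go_home()
t.join()
```

Note that both the check and the wait happen while holding the lock; that is what makes the handshake race-free.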

In this last case, I let the worker decide if he’s ready to take on more work. Python programmers get off easy: there happens to be a standard Queue class that implements the above work queue, complete with a maximum size that blocks the producer when the worker is being overburdened.
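For completeness, here’s the Queue version: queue.Queue(maxsize=N) makes put() block once N items are waiting. The None sentinel is my own stand-in for the time_to_go_home flag.

```python
import queue
import threading

q = queue.Queue(maxsize=4)   # put() blocks once 4 items are waiting
results = []

def worker():
    while True:
        work = q.get()
        if work is None:     # time to go home
            break
        results.append(work)

t = threading.Thread(target=worker)
t.start()
for work in range(20):
    q.put(work)     # blocks automatically when the worker falls behind
q.put(None)
t.join()
```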