I just stumbled upon the article "The Problem with Threads" (PDF) by Edward A. Lee, professor at UC Berkeley:
In this technical report he raises some interesting points. Indeed, we all know how difficult it is to write "clean" multi-threaded applications, particularly when data has to be shared. He analyses the problems in detail and illustrates that even quite simple patterns like the observer pattern are very tricky to implement in a truly thread-safe way.
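To see why, consider a minimal Java sketch in the spirit of the paper's observer example (my own illustration, not Lee's exact code). Synchronizing both methods protects the listener list, but notifying listeners while holding the lock is a classic deadlock trap:

```java
import java.util.ArrayList;
import java.util.List;

interface ValueListener {
    void valueChanged(int newValue);
}

// A value holder that looks thread-safe at first glance.
class ValueHolder {
    private final List<ValueListener> listeners = new ArrayList<>();
    private int value;

    public synchronized void addListener(ValueListener l) {
        listeners.add(l);
    }

    public synchronized int getValue() {
        return value;
    }

    // Danger: listeners are called while this object's lock is held.
    // If a listener tries to acquire another lock that a second thread
    // already holds while that second thread is blocked waiting for
    // *this* lock, the two threads deadlock.
    public synchronized void setValue(int newValue) {
        value = newValue;
        for (ValueListener l : listeners) {
            l.valueChanged(newValue);
        }
    }
}
```

The obvious "fix", copying the listener list and notifying outside the lock, avoids the deadlock but introduces a subtler problem: a listener can still be notified after it has been removed.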
What is particularly dangerous is the fact that a programmer might think he or she has produced a thread-safe implementation but has actually overlooked issues that can lead to deadlocks or race conditions. He discusses the inherent complexity of concurrent programming, and I would add the resulting complexity of testing such multi-threaded programs. They tend to behave non-deterministically, with strong dependencies on the operating system or virtual machine implementation (such as different thread scheduling on different platforms) and on hardware differences (single-processor/single-core vs. multi-processor/multi-core machines). Tests could run fine on one machine and (occasionally) fail on others.
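A tiny Java sketch of this non-determinism (my own example, not from the paper): two threads incrementing a shared counter without synchronization can lose updates, and how often they do depends entirely on the scheduler and the hardware.

```java
// Two threads increment a shared counter 100,000 times each.
// Without synchronization the final count is usually less than
// 200,000, but how far off it is varies from run to run and from
// machine to machine -- exactly the kind of bug that hides in tests.
public class LostUpdate {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write, not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("counter = " + counter); // rarely 200000
    }
}
```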
To sum up, many bugs in such programs will not be found and may only emerge with new (different) hardware or usage scenarios, or as Lee puts it:
"I conjecture that most multi-threaded general-purpose applications are, in fact, so full of concurrency bugs that as multi-core architectures become commonplace, these bugs will begin to show up as system failures. This scenario is bleak for computer vendors: their next generation of machines will become widely known as the ones on which many programs crash.
These same computer vendors are advocating more multi-threaded programming, so that there is concurrency that can exploit the parallelism they would like to sell us. Intel, for example, has embarked on an active campaign to get leading computer science academic programs to put more emphasis on multi-threaded programming. If they are successful, and the next generation of programmers makes more intensive use of multithreading, then the next generation of computers will become nearly unusable."
He further analyses ways to "prune" nondeterminism through software engineering processes, specific test procedures, or explicit language support for threads as in Java, along with the problems associated with each. He also suggests alternatives to threads and other ways to implement concurrent programs, such as coordination languages. He concludes:
"Concurrent programming models can be constructed that are much more predictable and understandable than threads. They are based on a very simple principle: deterministic ends should be accomplished with deterministic means. Nondeterminism should be judiciously and carefully introduced where needed, and should be explicit in programs. This principle seems obvious, yet it is not accomplished by threads. Threads must be relegated to the engine room of computing, to be suffered only by expert technology providers."