Martin Brown
John said: I don't think the culture, in academia or in business, plans
for quality. The big issue that Computer Science should be addressing,
the issue where they could actually affect society in a meaningful way,
is programming quality.

They have, but the young new programmers don't often get the chance to
practice what they have been taught when they go into industry.
It's my opinion that high-quality software is on average cheaper to
make than buggy software.
That has been known for decades: IBM's 1981 analysis of formal reviews
as a method of early detection of specification errors highlighted it
early on, and a popularised version appears in "The Mythical Man Month".
Formal reviews saved them a factor of 3 in bug fixing costs and reduced
the defects found after shipping by a factor of 4. It still isn't
standard practice except in the best places.
In the UK, the problem with every large government computer project stems
from the fact that the managers are innumerate civil servants. The new
ID card thing should be hilarious. One database to bind them all...
The real difficulty is in persuading customers that they do not want a
quick hack to give them something ASAP. Most times that is exactly what
they ask for even when they don't yet know what they want it to do. It
is the job of the salesman to get the order in ASAP so guess what happens...
Add in a few of the political factors like the guys who can see that if
the computer system works right they will be out of a job and you can
see how the specifications end up corrupt at the outset.
If you tried to build hardware with the sort of "specifications" some
major software projects start out with, you would have a rat's nest of
components miles high by the time it was delivered.
Consider an X-Y plot: the X-axis is programmer experience; the Y-axis is bug
density. The engineering units are admittedly fuzzy, but go with the
concept.
So far so good. The problem is that the variation of the position of
these lines on the graph for different individuals is more than an order
of magnitude. That is, the worst programmers have a defect curve that is
more than 10x higher than the best practitioners'. And there are not
enough good or excellent programmers. You cannot change the availability
of highly intelligent and trained staff, so you have to make the tools
better.
BTW don't blame the programmers for everything - a lot of the problems
in the modern software industry stem from late-injected feature creep.
I think the popular languages and culture tend to make the curve droop
down from the initial start, but then flatten out or even start to curve
back up. More experienced programmers go faster and use trickier
constructs and the newest tools, which keeps their bug rate up.
The newest tools like static testing and complexity analysis are all
about detecting as many common human errors at compile time as possible.
It is telling that the toolset for this is *only* available in the most
expensive corporate version of MS development tools.
When they should be in the one sold to students!!! A problem with
student projects is that they are small enough that any half decent
candidate can rattle them off with or without the right approach.
Now consider plotting graphs in different colors for different
languages: Fortran, Cobol, C, C++, Ada, Perl, Python, Java. Are we
making progress?
If you want a safe, well-behaved curve then something like one of Wirth's
languages, Pascal or Modula-2, is about as good as it gets (for that
generation of language). Java is pretty good for safety, if a bit slow,
and Perl has its uses if you like powerful, dense, cryptic code. APL is
even terser and has seen active service in surprising places.
Domain specific languages or second generation languages augmented by
well tested libraries can be way more productive.
Mathematica is one example.
Regards,
Martin Brown