Maker Pro

another bizarre architecture


Rich Grise

Many years ago, a friend of mine made a nice little joke program. You
could feed uncommented code into it and it would produce code with very
nice comments. It would look at two instructions and look up phrases
based on them and the random number generator. You knew you were in
trouble when they seemed to be making sense.

I once wrote a "poetry generator" which did pretty much the same - random
noun, random verb, random adverb, random adjective, and so on. It came out
kind of like Haiku, but some of the results were kind of spooky. ;-)
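For anyone curious, the trick really is that small. A minimal sketch in C of the same idea, with invented word lists (the original's vocabulary and structure are unknown):

```c
/* A minimal sketch of the "poetry generator" idea: pick a random word
 * from each part-of-speech list and glue them into a line. The word
 * lists here are invented for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static const char *adjectives[] = { "cold", "bright", "hollow", "slow" };
static const char *nouns[]      = { "moon", "wire", "stack", "crow" };
static const char *verbs[]      = { "hums", "falls", "waits", "burns" };

#define PICK(list) ((list)[rand() % (int)(sizeof(list) / sizeof((list)[0]))])

/* Write one adjective-noun-verb line into buf; returns buf. */
char *poem_line(char *buf, size_t len)
{
    snprintf(buf, len, "the %s %s %s",
             PICK(adjectives), PICK(nouns), PICK(verbs));
    return buf;
}
```

Seed rand() differently each run and the "spooky" lines come free.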

Cheers!
Rich
 

Terry Given

Rich said:
I once wrote a "poetry generator" which did pretty much the same - random
noun, random verb, random adverb, random adjective, and so on. It came out
kind of like Haiku, but some of the results were kind of spooky. ;-)

Cheers!
Rich

years ago at uni, we had to do a C code metric for a group project.
turns out it's kinda impossible to do properly thanks to the
pre-processor (it's easy to bollocks up code that way), but we counted
lines of comments, and used an X-bar-R chart to measure their
distribution, figuring that dispersed comments are probably better than
a single essay at the beginning.
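The statistics part of that metric is just subgroup means and ranges. A sketch of the X-bar/R computation, assuming the comment counts have already been tallied per chunk of source (how the source is chunked - per function, per N lines - is an assumption left to the caller):

```c
/* Sketch of the comment-dispersion metric: split per-chunk comment
 * counts into subgroups and compute each subgroup's mean (X-bar) and
 * range (R), as plotted on an X-bar-R chart. */
#include <stddef.h>

/* Compute the mean and range of one subgroup of comment counts.
 * Requires n > 0. */
void xbar_r(const int *counts, size_t n, double *xbar, int *range)
{
    int min = counts[0], max = counts[0], sum = 0;
    for (size_t i = 0; i < n; i++) {
        sum += counts[i];
        if (counts[i] < min) min = counts[i];
        if (counts[i] > max) max = counts[i];
    }
    *xbar  = (double)sum / (double)n;
    *range = max - min;
}
```

A file with dispersed comments gives small ranges; one big essay up front gives one huge range and a string of zeros.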

Cheers
Terry
 

John Larkin

John, this is comp.arch.embedded - the answer is *always* "it depends".
For some products, it is vital to hit the schedule - even if there are
known bugs or problems. Perhaps you ship the hardware now, and upgrade
the software later - perhaps you ship the whole thing even with its
outstanding problems. For other products, you have to set the highest
possible standards, and quality cannot be lowered for any purpose. I
have no idea what sort of systems VV works with - they could well be of
the sort where issues such as cost and timing are more important than
quality and reliability.

There are two methodologies to consider:

1. Write a lot of code fast. Once you get a clean compile, start
testing it on the hardware and look for bugs. Keep fixing bugs until
it's time that you have to ship. Intend to find the rest of the bugs
later, which usually means when enough customers complain.

2. Write and comment the code carefully. Read through it carefully to
look for bugs, interactions, optimizations. Fix or entirely rewrite
anything that doesn't look right. Figure more review time than coding
time. NOW fire it up on the hardware and test it.

Method 2, done right, makes it close to impossible to ship a product
with bugs, because most of the bugs are found before you even run the
code. Nobody can walk up and say "we have to ship it now, we'll finish
debugging later."

Method 2 is faster, too.

John
 

Rich Grise

There are two methodologies to consider:

1. Write a lot of code fast. Once you get a clean compile, start
testing it on the hardware and look for bugs. Keep fixing bugs until
it's time that you have to ship. Intend to find the rest of the bugs
later, which usually means when enough customers complain.

2. Write and comment the code carefully. Read through it carefully to
look for bugs, interactions, optimizations. Fix or entirely rewrite
anything that doesn't look right. Figure more review time than coding
time. NOW fire it up on the hardware and test it.

Method 2, done right, makes it close to impossible to ship a product
with bugs, because most of the bugs are found before you even run the
code. Nobody can walk up and say "we have to ship it now, we'll finish
debugging later."

Method 2 is faster, too.

Isn't it so sad, when you realize that the vast majority - well, maybe
they're only half-vast - use Method 1, and still get paid! =:-O

Thanks,
Rich
 

David Brown

John said:
There are two methodologies to consider:

1. Write a lot of code fast. Once you get a clean compile, start
testing it on the hardware and look for bugs. Keep fixing bugs until
it's time that you have to ship. Intend to find the rest of the bugs
later, which usually means when enough customers complain.

2. Write and comment the code carefully. Read through it carefully to
look for bugs, interactions, optimizations. Fix or entirely rewrite
anything that doesn't look right. Figure more review time than coding
time. NOW fire it up on the hardware and test it.

Method 2, done right, makes it close to impossible to ship a product
with bugs, because most of the bugs are found before you even run the
code. Nobody can walk up and say "we have to ship it now, we'll finish
debugging later."

Method 2 is faster, too.

John

Method 2 is an ideal to strive for, but it is not necessarily possible -
it depends on the project. In some cases, you know what the program is
supposed to do, you know how to do it, and you can specify, design, code
and even debug the software before you have the hardware. There's no
doubt that leads to the best software - the most reliable, and the most
maintainable. If you are making a system where you have the time,
expertise (the customer's expertise - I am taking the developer's
expertise for granted here :), and budget to support this, then that is
great.

But in many cases, the customer does not know what they want until you
and they have gone through several rounds of prototyping, viewing, and
re-writing. As a developer, you might need a lot of trial and error
getting software support for your hardware to work properly. Sometimes
you can do reasonable prototyping of the software in a quick and dirty
way (like a simulation on a PC) to establish what you need, just like
breadboarding to test your electronics ideas, but not always. A
"development" project is, as the name suggests, something that changes
with time. Now, I am not suggesting that Method 1 is a good thing -
just that your two methods are black and white, while reality is often
somewhat grey. What do you do when a customer is asking for a control
system for a new machine he is designing, but is not really sure how it
should work? Maybe the mechanics are not finished - maybe they can't be
finished until the software is also in place. You go through a lot of
cycles of rough specification, rough design, rough coding, rough testing
with the customer, and repeat as needed. Theoretically, you could then
take the finished system, see what it does, write a specification based
on that, and re-do the software from scratch to that specification using
Method 2 above. If the machine in question is a jet engine, then that's
a very good idea - if it is an automatic rose picker, then it's unlikely
that the budget will stretch.

I think a factor that makes us appear to have different opinions here is
the question of who is the customer. For most of my projects, we make
electronics and software for a manufacturer who builds it into their
system and sells it on to end users. You, I believe, specify and design
your own products which you then sell to end users. From our point of
view, you are your own customers. It is up to the customer (i.e., the
person who knows what the product should do) to give good
specifications. As a producer of high-end technical products, you might
be able to give such good specifications - for many developers, their
customers are their company's marketing droids or external customers,
and they don't have the required experience. As a developer, you can do
the best you can with the material you have - but don't promise perfection!

Programming from specifications is like walking on water - it's easy
when it's frozen.
 

CBFalconer

David said:
John Larkin wrote:
.... snip ...

Method 2 is an ideal to strive for, but it is not necessarily
possible - it depends on the project. In some cases, you know
what the program is supposed to do, you know how to do it, and you
can specify, design, code and even debug the software before you
have the hardware. There's no doubt that leads to the best
software - the most reliable, and the most maintainable. If you
are making a system where you have the time, expertise (the
customer's expertise - I am taking the developer's expertise for
granted here :), and budget to support this, then that is great.

Many moons ago I had to develop some software to handle an existing
mechanical and chemical monstrosity. I rolled up a system and
recorded a file with about 15 minutes of events, each time
labelled. Then I went away and developed the software.

It replaced the various interrupt routines with a 'read next event'
function, which then simulated the occurrence of the appropriate
interrupt. I developed the whole system based on this nicely
repeatable input. Then I added the interrupt routines, which were
all simple, and used the internal time stamping mechanism.
Everything worked on first roll-out. Everything was developed
top-down.

Note that I could easily bugger the input file (actually a copy) to
simulate various faults.
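The dispatch end of that scheme can be sketched roughly like this - the event format and handler table here are invented for illustration, not taken from the original system:

```c
/* A rough sketch of the replay idea: recorded (timestamp, event-id)
 * pairs stand in for the real interrupts during development. */
#include <stddef.h>

struct event { unsigned long t_ms; int id; };

typedef void (*handler_fn)(unsigned long t_ms);

/* Feed each recorded event to the handler the real interrupt would
 * have invoked; returns the number of events dispatched. */
size_t replay(const struct event *evlog, size_t n,
              handler_fn handlers[], size_t n_handlers)
{
    size_t dispatched = 0;
    for (size_t i = 0; i < n; i++) {
        int id = evlog[i].id;
        if (id >= 0 && (size_t)id < n_handlers && handlers[id]) {
            handlers[id](evlog[i].t_ms);
            dispatched++;
        }
    }
    return dispatched;
}

/* Demo handler: just counts calls (a real one would update state). */
static unsigned long demo_calls;
static void demo_handler(unsigned long t_ms) { (void)t_ms; demo_calls++; }
```

The payoff is exactly what CBFalconer describes: a perfectly repeatable input stream, and fault injection is just editing a copy of the file.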

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>

"A man who is right every time is not likely to do very much."
-- Francis Crick, co-discoverer of DNA
"There is nothing more amazing than stupidity in action."
-- Thomas Matthews
 

John Larkin

Method 2 is an ideal to strive for, but it is not necessarily possible -
it depends on the project. In some cases, you know what the program is
supposed to do, you know how to do it, and you can specify, design, code
and even debug the software before you have the hardware. There's no
doubt that leads to the best software - the most reliable, and the most
maintainable. If you are making a system where you have the time,
expertise (the customer's expertise - I am taking the developer's
expertise for granted here :), and budget to support this, then that is
great.

But in many cases, the customer does not know what they want until you
and they have gone through several rounds of prototyping, viewing, and
re-writing. As a developer, you might need a lot of trial and error
getting software support for your hardware to work properly. Sometimes
you can do reasonable prototyping of the software in a quick and dirty
way (like a simulation on a PC) to establish what you need, just like
breadboarding to test your electronics ideas, but not always. A
"development" project is, as the name suggests, something that changes
with time. Now, I am not suggesting that Method 1 is a good thing -
just that your two methods are black and white, while reality is often
somewhat grey. What do you do when a customer is asking for a control
system for a new machine he is designing, but is not really sure how it
should work? Maybe the mechanics are not finished - maybe they can't be
finished until the software is also in place. You go through a lot of
cycles of rough specification, rough design, rough coding, rough testing
with the customer, and repeat as needed. Theoretically, you could then
take the finished system, see what it does, write a specification based
on that, and re-do the software from scratch to that specification using
Method 2 above. If the machine in question is a jet engine, then that's
a very good idea - if it is an automatic rose picker, then it's unlikely
that the budget will stretch.

I think a factor that makes us appear to have different opinions here is
the question of who is the customer. For most of my projects, we make
electronics and software for a manufacturer who builds it into their
system and sells it on to end users. You, I believe, specify and design
your own products which you then sell to end users. From our point of
view, you are your own customers. It is up to the customer (i.e., the
person who knows what the product should do) to give good
specifications. As a producer of high-end technical products, you might
be able to give such good specifications - for many developers, their
customers are their company's marketing droids or external customers,
and they don't have the required experience.

Programming from specifications is like walking on water - it's easy
when it's frozen.

There's no doubt that the task to be performed may change; it usually
does. But that doesn't mean you have to use Method 1 on whatever code
is changed. On the contrary, the easiest way to create bugs is to dash
off quick changes without taking into account the entire context and
interactions. A change in the spec is no excuse for abandoning careful
coding practices.

What we do is write the manual first, and get the customer to agree
that's what he wants, and use the manual as our design spec. Changes
may still happen, but the mechanism is to edit the manual and get
agreement again, then change the code. Carefully, without hacks.

And, when we're done, we have a manual!
As a developer, you can do
the best you can with the material you have - but don't promise perfection!

The surest way to have bugs is to assume their inevitability. We
expect, from ourselves, perfect, bug-free code, and tell our customers
that they can expect bug-free products from us. When people say "all
software has bugs" it really means that *their* software has bugs.

John
 

John Larkin

Many moons ago I had to develop some software to handle an existing
mechanical and chemical monstrosity. I rolled up a system and
recorded a file with about 15 minutes of events, each time
labelled. Then I went away and developed the software.

It replaced the various interrupt routines with a 'read next event'
function, which then simulated the occurrence of the appropriate
interrupt. I developed the whole system based on this nicely
repeatable input. Then I added the interrupt routines, which were
all simple, and used the internal time stamping mechanism.
Everything worked on first roll-out. Everything was developed
top-down.

Note that I could easily bugger the input file (actually a copy) to
simulate various faults.

I don't like interrupts. The state of a system can become
unpredictable if important events can happen at any time. A
periodically run, uninterruptable state machine has no synchronization
problems. Interrupts to, say, put serial input into a buffer, and
*one* periodic interrupt that runs all your little state blocks, are
usually safe. Something like interrupting when a switch closes can get
nasty.
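That structure can be sketched in a few lines of C - names and table size here are illustrative, not from any particular product:

```c
/* A sketch of the structure described above: one periodic tick runs
 * every registered state block to completion, in order, so nothing
 * can change shared state behind a block's back. */

typedef void (*state_block_fn)(void);

#define MAX_BLOCKS 8
static state_block_fn blocks[MAX_BLOCKS];
static int n_blocks;

/* Register a state block to be run every tick; returns 0 on success,
 * -1 if the table is full or fn is null. */
int register_block(state_block_fn fn)
{
    if (!fn || n_blocks >= MAX_BLOCKS) return -1;
    blocks[n_blocks++] = fn;
    return 0;
}

/* Called from the single periodic timer interrupt (or a polled main
 * loop): runs each block in turn, uninterrupted. */
void run_all_blocks(void)
{
    for (int i = 0; i < n_blocks; i++)
        blocks[i]();
}

/* Demo block: just counts its invocations. */
static int demo_ticks;
static void demo_block(void) { demo_ticks++; }
```

Because every block runs to completion inside the one tick, the blocks can share state freely without locks or disable/enable dances.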

John
 

Terry Given

John said:
I don't like interrupts. The state of a system can become
unpredictable if important events can happen at any time. A
periodically run, uninterruptable state machine has no synchronization
problems. Interrupts to, say, put serial input into a buffer, and
*one* periodic interrupt that runs all your little state blocks, are
usually safe. Something like interrupting when a switch closes can get
nasty.

John

Which is, of course, event-driven software. Which is (AIUI) what Windows
is all about. Perhaps that explains it.

Cheers
Terry
 

Rich Grise

I don't like interrupts. The state of a system can become
unpredictable if important events can happen at any time. A
periodically run, uninterruptable state machine has no synchronization
problems. Interrupts to, say, put serial input into a buffer, and
*one* periodic interrupt that runs all your little state blocks, are
usually safe. Something like interrupting when a switch closes can get
nasty.

So, when you're testing an engine, do you sample all of your inputs
fast enough to keep it from blowing up if, say, some valve breaks or
it loses a turbine blade? Or is that the kind of thing that even a
vectored interrupt couldn't be fast enough to handle?

And, just out of curiosity, have you ever actually blown up an engine?
I've heard ULs about such things - or maybe one of your customers,
prompting them to buy your controller? ;-)

Thanks,
Rich
 

John Larkin

So, when you're testing an engine, do you sample all of your inputs
fast enough to keep it from blowing up if, say, some valve breaks or
it loses a turbine blade? Or is that the kind of thing that even a
vectored interrupt couldn't be fast enough to handle?

Polling at, say, 1000 times a second is plenty fast enough to save an
engine, if it can be saved. In the case of an overspeed, the actual
rpm limit check would be done in software, so there's no overspeed
interrupt as such. If you snap a shaft in the hp stage of a big steam
turbine, the blades *will* fly off before you can close the steam
valve.
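An rpm limit check of this kind is just a latched bounds test run from the 1 kHz poll. A rough sketch, where the latching behaviour and names are assumptions for illustration, not the actual controller:

```c
/* Sketch of an in-software rpm limit check: run once per 1 kHz tick,
 * with no overspeed interrupt at all. */

static int tripped;   /* latched overspeed trip: set once, stays set */

/* Call each tick with the latest rpm sample; returns nonzero if the
 * trip is (or was already) latched. */
int overspeed_check(unsigned rpm, unsigned limit)
{
    if (rpm > limit)
        tripped = 1;  /* latch: cleared only by explicit reset */
    return tripped;
}

void overspeed_reset(void)
{
    tripped = 0;
}
```

The latch matters: a trip must not clear itself just because the next sample happens to read low.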
And, just out of curiosity, have you ever actually blown up an engine?

No, but I almost ripped a ship off the dock at Avondale Shipyards, on
the Mississippi. I got it up to almost 60 RPM while tweaking a
nonlinear function generator (120 RPM is full power.)
I've heard ULs about such things - or maybe one of your customers,
prompting them to buy your controller? ;-)

We are doing overspeed trip stuff these days. That's scairy.

My customers do deliberately destroy a jet engine at full speed, to
make sure the blade containment is adequate. The big commercial jet
engines have a huge band of epoxy-kevlar wrapped around the front of
the engine to catch the big (~12-foot) fan blades.

John
 

Jim Thompson

Polling at, say, 1000 times a second is plenty fast enough to save an
engine, if it can be saved. In the case of an overspeed, the actual
rpm limit check would be done in software, so there's no overspeed
interrupt as such. If you snap a shaft in the hp stage of a big steam
turbine, the blades *will* fly off before you can close the steam
valve.


No, but I almost ripped a ship off the dock at Avondale Shipyards, on
the Mississippi. I got it up to almost 60 RPM while tweaking a
nonlinear function generator (120 RPM is full power.)


We are doing overspeed trip stuff these days. That's scairy.

My customers do deliberately destroy a jet engine at full speed, to
make sure the blade containment is adequate. The big commercial jet
engines have a huge band of epoxy-kevlar wrapped around the front of
the engine to catch the big (~12-foot) fan blades.

John

I have spun communication satellites on a test stand at Honeywell
Satellite Systems Division.

I stood behind a huge building pillar in case it decided to do
untoward things ;-)

...Jim Thompson
 

Rich Grise

No, but I almost ripped a ship off the dock at Avondale Shipyards, on
the Mississippi. I got it up to almost 60 RPM while tweaking a
nonlinear function generator (120 RPM is full power.)

Tim the Toolman Taylor all over! ;-D ;-D ;-D

Thanks!
Rich
 

joseph2k

John said:
The PDP-11 was stunningly beautiful in its cleanliness and symmetry.
The preferred radix was octal, and the instruction set and addressing
modes fit perfectly into octal digits. I can still assemble a bit from
memory...

123722 = add byte, source absolute address, destination indirect
register 2, autoincrement

Its instruction set was the basis for C. 68K has more registers and is
a 32-bit machine, but is less orthogonal and nothing you can easily
assemble from memory. Only its MOVE instruction has the
source/destination symmetry that nearly all PDP-11 opcodes had.

John

I suspect you would have liked the NS32000: 32-bit, 16 registers, hex
assembly, near-perfect instruction set symmetry. Real purty in my book.
 

joseph2k

Terry said:
Rich said:
[....]

The worst are comments that don't tell you anything, usually caused by
some supervisor ordering a bunch of code grunts, "Your code WILL be
commented!" and you get this:

LABEL1: MOV BP,SP ; move the contents of the stack pointer to the base
; pointer register

Many years ago, a friend of mine made a nice little joke program. You
could feed uncommented code into it and it would produce code with very
nice comments. It would look at two instructions and look up phrases
based on them and the random number generator. You knew you were in
trouble when they seemed to be making sense.


I once wrote a "poetry generator" which did pretty much the same - random
noun, random verb, random adverb, random adjective, and so on. It came
out kind of like Haiku, but some of the results were kind of spooky. ;-)

Cheers!
Rich

years ago at uni, we had to do a C code metric for a group project.
turns out it's kinda impossible to do properly thanks to the
pre-processor (it's easy to bollocks up code that way), but we counted
lines of comments, and used an X-bar-R chart to measure their
distribution, figuring that dispersed comments are probably better than
a single essay at the beginning.

Cheers
Terry

I do some of each: at the top I document the interface restrictions. In
the code I document the algorithms.
 

John Larkin

I suspect you would have liked the NS32000: 32-bit, 16 registers, hex
assembly, near-perfect instruction set symmetry. Real purty in my book.

And what we all got was x86 for the big stuff and 8051 for the small
stuff.

John
 

Terry Given

joseph2k said:
John Larkin wrote:




I suspect you would have likes the NS32000, 32 bit, 16 registers, hex
assembly, near perfect instruction set symmetry. real purty in my book.

Wow, I'm not the only person to have ever heard of them.

Weren't they used in the MicroVAX?

Cheers
Terry
 

Colin Paul Gloster

Vladimir Vassilevsky wrote:

"[..]

Deadlines. That's another reason for the software to be the far from
perfect.

[..]"
John Larkin replied:

"So, you will actually release software that you know is buggy, or that
you haven't tested, because of some schedule? [..]"


In 2001 I learnt that the Rosetta spacecraft of the European Space
Agency (WWW.ESA.int/SPECIALS/Rosetta/) (which is to intercept a
comet in 2014 and had still been under development in 2001) had
extremely unacceptable onboard software. Someone explained to me that
the software was written without a good hold on requirements and that
ground testing was not indicative of what it would be doing. Rosetta
was launched in 2004. A lot of updating of software was planned for
the period between launch and interception as it was argued that
software could be fixed during the many years when cruising. However,
this attitude was partially related to a need to launch the spacecraft
by a deadline so that orbits would make it feasible to reach a
particular comet, a deadline which was not adhered to, so Rosetta is
actually cruising to a different comet than had been planned in
2001. (The cruise's duration for the original plan was nine years.)

Regards,
Colin Paul Gloster
 

David Brown

John Larkin wrote:
I don't like interrupts. The state of a system can become
unpredictable if important events can happen at any time. A
periodically run, uninterruptable state machine has no synchronization
problems. Interrupts to, say, put serial input into a buffer, and
*one* periodic interrupt that runs all your little state blocks, are
usually safe. Something like interrupting when a switch closes can get
nasty.

John

Interrupts should normally only be used when polling is unsuitable -
either because you need fast reactions, or because you don't want to
have to add polling mechanisms. I've seen programs where pressing a
switch triggers an interrupt causing action such as messages on a
screen, which is daft. The person pressing the switch will not notice
the delay polling would cause; it completely messes up the timing of
important interrupts, and can cause conflicts with the main program
loop (which may be writing to the screen at the time). Very bad.

On the other hand, I often use a software timer setup where program
modules can register a function to be called regularly in the
background. When the hardware timer underlying this system times out,
its interrupt handler calls all the pending software timer functions
after first re-enabling interrupts. In this way, I can have predictable
periodic functions that may take significant run time (though less than
a timer tick in total) without affecting fast interrupts, and without
having to be polled in the main loop. Of course, sometimes the timer
function merely sets a flag that *is* polled in the main loop - it
depends on the situation. I've even had programs where the main loop is
nothing but a "sleep" instruction, with everything handled in interrupts
this way.
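A rough sketch of such a software-timer table, with illustrative names (the interrupt re-enabling is platform-specific and only noted in a comment):

```c
/* Sketch of the software-timer scheme described above: modules
 * register a function and a period in ticks, and the hardware timer
 * interrupt calls whatever is due. */

typedef void (*timer_fn)(void);

#define MAX_TIMERS 8
static struct {
    timer_fn fn;
    unsigned period;     /* ticks between calls */
    unsigned countdown;  /* ticks until next call */
} timers[MAX_TIMERS];

/* Register fn to run every 'period' ticks; returns 0 on success. */
int timer_register(timer_fn fn, unsigned period)
{
    if (!fn || period == 0) return -1;
    for (int i = 0; i < MAX_TIMERS; i++) {
        if (!timers[i].fn) {
            timers[i].fn = fn;
            timers[i].period = timers[i].countdown = period;
            return 0;
        }
    }
    return -1;  /* table full */
}

/* Body of the hardware timer interrupt. On real hardware this would
 * first re-enable interrupts so fast ISRs keep running underneath. */
void timer_tick(void)
{
    /* enable_interrupts();  -- platform-specific, omitted */
    for (int i = 0; i < MAX_TIMERS; i++) {
        if (timers[i].fn && --timers[i].countdown == 0) {
            timers[i].countdown = timers[i].period;
            timers[i].fn();
        }
    }
}

/* Demo callback: counts how many times it has run. */
static unsigned demo_runs;
static void demo_fn(void) { demo_runs++; }
```

As noted above, a callback can either do its work directly (if it finishes well inside a tick) or just set a flag for the main loop to poll.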
 

David Brown

John said:
There's no doubt that the task to be performed may change; it usually
does. But that doesn't mean you have to use Method 1 on whatever code
is changed. On the contrary, the easiest way to create bugs is to dash
off quick changes without taking into account the entire context and
interactions. A change in the spec is no excuse for abandoning careful
coding practices.

Indeed - I am not advocating Method 1, just saying that a pure Method 2
is not always possible or even appropriate.
What we do is write the manual first, and get the customer to agree
that's what he wants, and use the manual as our design spec. Changes
may still happen, but the mechanism is to edit the manual and get
agreement again, then change the code. Carefully, without hacks.

And, when we're done, we have a manual!

On some products, we can do that - and there is no doubt that it is best
for everyone when that is the case. Unfortunately it is not always
possible - it depends highly on the customer, the type of project, and
limitations such as time and budget.
The surest way to have bugs is to assume their inevitability. We
expect, from ourselves, perfect, bug-free code, and tell our customers
that they can expect bug-free products from us. When people say "all
software has bugs" it really means that *their* software has bugs.

I mostly agree. There is certainly no excuse for saying that all
software has bugs - and there is no reason for coding in a way which
makes bugs almost inevitable. But on some sorts of systems, you *do*
have to assume that there may be bugs in the software. To a fair
extent, it does not really matter if the failure is due to the software,
the electronics, or outside effects (unexpected temperature changes,
physical damage, cosmic rays, or whatever) - you have to assume the
possibility of the control system failing. That's all part of risk
analysis.

Bug-free code is about quality. Top quality products cost time and
money, and are thus not always appropriate - "good enough" is, after
all, good enough. Perhaps they will save time and money in the long
run, but perhaps not - and perhaps the customer has little money at the
start and would rather deal with long term costs even if they turn out
to be higher. For some jobs, the appropriate quality for the software
is zero bugs, but in many cases you can tolerate minor flaws as long as
the job gets done. The real dangers with software bugs, unlike other
flaws in a system, is that they are often hidden, and they can have
unexpectedly wide consequences. Modularisation and isolation of
software parts can be a big win here - your critical control loops can
be bug-free, while glitches in display code might be tolerated.

"There are two ways of constructing a software design; one way is to
make it so simple that there are obviously no deficiencies, and the
other way is to make it so complicated that there are no obvious
deficiencies. The first method is far more difficult."
- C. A. R. Hoare
 