Maker Pro

Computer programmers' habits in electronics


Rich Grise, but drunk

I am employed; I have been programming for money for 13 years or
so (not counting the gigs I had as a student).


Yes, if I am hired to design stuff, I'd better produce a design, sure.

i

Got any overflow work?
Thanks,
Rich
 
Rich said:
Well, the first step to correcting a problem is to admit that you have
the problem.

Slapdash, slipshod, by-guess-and-by-gosh might get you through opening
a file and writing to it, but it's stupid to try to get from point A to
point B when you don't even know where point B is. "Gawrsh, let's write
a program!" "Whut's it gonna do?" "Dunno yet, but we'll shore find out
when Uh'm done with it!"

And all of the roadmaps in the world aren't going to do any good if
you don't even know where point _A_ is.


Two words for ya: job security.

:)
 

Ignoramus10397

The biggest programming job I have worked on was one that topped (I estimate)
30,000 lines. There were two of us on that, and we needed only informal
get-togethers to split the tasks reasonably smoothly.

Sounds quite sensible. Judicious use of interprocess communication
protocols often helps separate "the task" into smaller, separate
pieces.
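For instance (a made-up sketch, not anything from this thread; the names `worker` and `run_demo` are invented), a socketpair and a tiny newline-delimited protocol are enough to keep two halves of a job behind a narrow interface:

```python
import socket
import threading

def worker(conn):
    # This side owns one piece of the job: square each number it is
    # sent. The line protocol is the only thing the two sides share.
    buf = conn.makefile("rw")
    while True:
        line = buf.readline().strip()
        if line == "done":
            break
        buf.write("%d\n" % (int(line) ** 2))
        buf.flush()
    conn.close()

def run_demo(numbers):
    here, there = socket.socketpair()
    # a thread stands in for the second process; a real fork/exec
    # would use the socket the same way
    t = threading.Thread(target=worker, args=(there,))
    t.start()
    buf = here.makefile("rw")
    results = []
    for n in numbers:
        buf.write("%d\n" % n)
        buf.flush()
        results.append(int(buf.readline()))
    buf.write("done\n")
    buf.flush()
    t.join()
    here.close()
    return results

print(run_demo([1, 2, 3]))  # [1, 4, 9]
```

Either side can now be rewritten, or moved into a genuinely separate process, without touching the other, which is the separation being described.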

There are many consultant gurus going around inventing methodologies,
selling books, etc., whereas what is mostly needed is good
programmers, attention to detail, and separation of tasks between both
people and code.

A quick check suggests that I wrote about 50k lines of code here at
work. Automatically generated lines are not counted.

I am not saying that documentation or design is never needed. Surely,
sometimes it is. But not as often as is usually suggested. Stacks of
paper are not really a substitute for well-written code.

i
Of course, what makes the job I am currently working on even easier is that I'm
designing the h/w along with the s/w. However, I *will* be using circuit diagrams!


--
 
Rich Grise wrote:

When you build a house, do you buy a bunch of lumber and shingles and
bricks and start throwing them into a pile to see if a house comes out?

That reminds me... this might work in certain third-world countries.

Then again, if the house falls down in an earthquake or storm, the
locals might just execute the one who built it...
Good Luck!

That's fer shuure!!!

Madman Mike
 

Ignoramus10397

your head and then produce a circuit from it (which I sort of do too).
I would suggest that electrical drawings and documentation be produced
for all personal and job related projects, and here is why.

I find that once I have an idea of a circuit and put it down on paper
(or on a computer) I can better visualise it and make modifications and
tweaks to make it work or work better (fewer parts, more efficient,
more rugged, etc).
Having the circuits documented is also helpful if you want to produce
something similar in the future.

For job related projects (especially if you are doing contract work)
drawings and documentation are a must. There is nothing worse than
when a piece of electrical equipment malfunctions at a site and a
technician (or contractor) is called in to try and fix it and no
documentation or circuit diagrams exist. Much time and money is wasted
(and stress created).

I also find for medium to large jobs that having documented test
results on circuit operation and performance (voltage and current
measurements, timing measurements, digital scope printouts of
waveforms, etc) is useful, especially if the circuit does not work the
first time. This data is also useful for checking if a circuit actually
works as intended, and is a powerful tool for fault-finding the
circuits if (or when) they break down in the field.

Unlike software code (which I have also done), electronics have a lot
more potential problems which can cause an otherwise fine circuit that
works on the lab bench to fail (immediately or after a time) when put
into the field (component drift and aging, environmental conditions,
noise, harsh or unexpected use by end user, etc, etc).

In the end it really comes down to what works best for you and your
employer (although you only need to be burned once by a project where
documentation would have saved you to realise just how useful a tool it
is).

Thanks. What you say makes sense. Yes, there is a difference between
at least some kinds of software and circuits.

There are many problems with documentation, the main one being that it
is usually out of date or wrong. It is nice to have, though, when it
is correct and well written.

i
 

Dirk Bruere at Neopax

PeteS said:
The last *major* piece of software I was involved in was the
diagnostics for a video-on-demand system. The diagnostics alone totaled
over 800,000 lines of code written by 10+ people over some years.

Without the spec (tightly written) and a 'black box' diagram (which I
use both for hardware and software, however apparently trivial the
project) we would never have got it done. I also designed the
diagnostics for the next gen system (massively distributed processes)
and helped design the actual system hardware, which helped enormously.

That system, incidentally, was a 'massively parallel system' with up to
320 parallel processors (I love the C construct 'where(x)' :)

We wrote both the host and target code. The host code included not only
task control but our own drivers onto our own boards in UltraSparcs
which connected to the target. Without the spec and a diagram of what
does what, we would have been lost.

That's not to say I haven't done drivers by the seat of my pants, but
generally I sketch out what the driver is supposed to do so I (or
others) can write the interface to it while I am still debugging the
low level driver.

I've done a lot of code and hardware since then, but no code on the
scale of that system. That said, doing any non-trivial project without
some semblance of documentation is generally bound to have lasting
consequences (the 'why is that part there?' syndrome).

Depends what you mean by documentation.
I document the code with plenty of comments because I know that I will likely
have to return to it in a year or two when I've forgotten everything.
Any 'real' documentation can wait.

--
Dirk

The Consensus:-
The political party for the new millennium
http://www.theconsensus.org
 

Ignoramus10397

Don't spend time documenting = man-years of work wasted down the road
fixing stupid problems and soothing pissed off customers.

Do spend time documenting = no product at that vital trade show, no
customers, company goes down the tubes, end of story.

The excuse that the requirements change has been answered by Extreme
Programming. Shoddy work remains shoddy work.

Would you agree if I replaced "lack of documentation" with "not
understandable software"? I agree that stuff that is written in a
manner that is hard to understand needs at least documentation,
although I feel that it is better to rewrite it.

In many, if not most, instances, there is a way to write code that is
basically self-explanatory or needs just a few comments on top and
some sprinkled in.
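To illustrate the kind of thing meant here (an invented example, not code from the thread; the function and constant names are hypothetical):

```python
# Names carry the intent; only the non-obvious choice gets a comment.
OVERTIME_MULTIPLIER = 1.5  # time-and-a-half, per the (hypothetical) contract

def overtime_pay(hours_worked, hourly_rate, regular_hours=40):
    # pay only for hours beyond the regular workweek
    extra_hours = max(0, hours_worked - regular_hours)
    return extra_hours * hourly_rate * OVERTIME_MULTIPLIER

print(overtime_pay(45, 10))  # 75.0
```

The one-line equivalent with names like f, a and b computes the same thing, but it would need exactly the block of comments being argued about here.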

i
 

PeteS

Ignoramus10397 said:
Would you agree if I replaced "lack of documentation" with "not
understandable software"? I agree that stuff that is written in a
manner that is hard to understand needs at least documentation,
although I feel that it is better to rewrite it.

I would not agree, honestly. I just did 3 fairly trivial CPLD designs
(all related) that need their own spec document so everyone can agree
(you / I will not be the only ones in the code) what the thing is
supposed to do. Apart from that, I (like many designers) juggle
multiple designs - when I go back to something, it's nice to be able to
read the spec document to remind myself of just what I am trying to
achieve in the minimum possible time.
In many, if not most, instances, there is a way to write code that is
basically self-explanatory or needs just a few comments on top and
some sprinkled in.

i
There's a difference between self-documenting code (a good thing[tm])
that specifies what and how the code is achieving something and a spec
document that specifies what the code is supposed to achieve in the
first place.

Cheers

PeteS
 

Ignoramus10397

Well, the first step to correcting a problem is to admit that you have
the problem.

Slapdash, slipshod, by-guess-and-by-gosh might get you through opening
a file and writing to it, but it's stupid to try to get from point A to
point B when you don't even know where point B is. "Gawrsh, let's write
a program!" "Whut's it gonna do?" "Dunno yet, but we'll shore find out
when Uh'm done with it!"

I think that you both have a good point and at the same time got
carried away a little.

In real life, it is often unknown just what the end result _should_
be. And if people think that they know, they are usually wrong.

And all of the roadmaps in the world aren't going to do any good if
you don't even know where point _A_ is.

But you could find out by starting out...
Then start paying attention, and start figuring out, "What is it I'm
trying to do here?" and that sort of thing. Maybe read a book or two.

Absolutely true.
You already know the answer to that one, from experience. If it's more
than a few wires, you _have_ to have some kind of a plan.

When you build a house, do you buy a bunch of lumber and shingles and
bricks and start throwing them into a pile to see if a house comes out?

I never built a house, but I built a shed this way; it stands and is
very functional. A house is a perfect example of where such a
haphazard approach would not work, for many reasons. We do know,
however, that many houses have all sorts of issues even though they
were designed.

We are not living in a simple world.

i
 

Jim Thompson

Would you agree if I replaced "lack of documentation" with "not
understandable software"? I agree that stuff that is written in a
manner that is hard to understand needs at least documentation,
although I feel that it is better to rewrite it.

In many, if not most, instances, there is a way to write code that is
basically self-explanatory or needs just a few comments on top and
some sprinkled in.

i

My oldest son is a bit of a software guru. He _insists_ that I write
documentation as comments right in the code... why did you do this...
why did you do that... what outcome is expected... etc.

He's a real stickler for details.

He makes BUCKETS of money... far more than I do, so I'm sure that he
is correct.

...Jim Thompson
 

Ignoramus10397

As a circuit designer I've always liked "block" diagramming of a
system before I begin, so I don't create redundant (or useless)
circuit chunks.

So I find it hard to fathom how you can write software without some
similar organizing scheme.

I once took a course at the community college in Pascal (that will
date me :)

The instructor insisted on using "outlining" which, to me, was trying
to write raw code without any sense of direction.

When I kept using block diagramming she got pissed at me and started
giving me F's on the assignments, in spite of the resulting code being
quite compact.

Then I skipped the final since I couldn't care less about the credit.

So I got an F for the course.

The dean, Shirley something or other, wrote me a letter expressing
concern for my academic future.

I sent her back a note, "Surely Shirley, Aren't you capable of reading
my records? I already possess a Masters in electrical engineering."

She didn't reply ;-)

That's a nice story.

i
 

Ignoramus10397

My oldest son is a bit of a software guru. He _insists_ that I write
documentation as comments right in the code... why did you do this...
why did you do that... what outcome is expected... etc.

He's a real stickler for details.

He makes BUCKETS of money... far more than I do, so I'm sure that he
is correct.

I am very much for adding comments to the code. They are always very
helpful, as

1) the comments are where the documentation belongs semantically
2) they are more likely to be kept up to date

i
 

Tim Williams

Are you talking about how programming often involves sitting down and
pissing and twiddling with the code until it does approximately what you set
out to do, without too many bugs?

In my earlier years, I did a lot of that. I wrote a ray caster pseudo-3D
walkthrough program, even with textures (okay, so 8x8 isn't much for a
texture, but still), and today I look at the QuickBasic code I wrote and I
just think

Huh?

I know I did it basically by iterating a line, segment by segment (yeah,
slow) from the viewpoint until it intersects a wall, and that the decimal
fraction of where it intersects determines what column of pixels to draw
from the texture in memory, and that distance times the cosine of the ray's
angle determines drawn height on screen, but damned if I can read the code
and figure out exactly how I did it.
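A rough sketch of that segment-by-segment march (a reconstruction in Python with invented names, not Tim's QuickBasic; `cast_ray` and the grid-of-'#' map are assumptions for illustration):

```python
import math

def cast_ray(grid, px, py, ray_angle, view_angle, step=0.02, max_dist=20.0):
    """March along the ray in small segments until a wall cell is hit.

    Returns (texture_column_fraction, drawn_height) or None on a miss.
    """
    dx = math.cos(ray_angle) * step
    dy = math.sin(ray_angle) * step
    x, y, dist = px, py, 0.0
    while dist < max_dist:
        x += dx
        y += dy
        dist += step
        if grid[int(y)][int(x)] == '#':
            # the fractional part of the hit position along the wall
            # picks the column of pixels from the texture
            frac = x % 1.0 if abs(dy) > abs(dx) else y % 1.0
            # distance corrected by the cosine of the ray's angle off
            # the view direction sets the drawn height (no fish-eye)
            height = 1.0 / (dist * math.cos(ray_angle - view_angle))
            return frac, height
    return None
```

Stepping a fixed `step` at a time is exactly the slow part being confessed to above; a DDA that jumps cell boundary to cell boundary would be the usual fix.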

And not only that, but once a piece of code is down, whether or not it works
"well", it works, so you're not apt to change it ("if it ain't broke don't
fix it").

I think this compares well to electronics. Maybe you haven't figured it out
yet, but you're new, too.

I don't know what anyone else calls it, but I'd like to call it "simulator
syndrome". You start with a basic pretense of a circuit, maybe some
theoretical setup that forms the heart of your project. Then you
crystallize parts around that, twiddling until it works to your
satisfaction. So now you have a circuit, that works, in the simulator. The
problem is, the simulator is an idealized reality, and like so many
philosophies[*], it can just crumble to a stinking bucket of shit when you
print it with real components.

[*] Homer Simpson quote:
"In theory! In theory, communism works. In theory!"

Even without a simulator, you can still fall into this fallacy. I
personally have built a preamplifier, that worked, by using single base bias
resistors from +V to base. You're supposed to use a voltage divider, so
bias current is set by that voltage and the emitter resistor. It'll work
either way, but my way won't work very well with different transistors (even
from the same batch) or across a temperature range, or across much time for
that matter!
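To put rough numbers on that (component values here are invented for illustration, not from the preamp in question): with a single base resistor the collector current tracks the transistor's beta one-for-one, while the divider-plus-emitter-resistor version barely moves when beta triples.

```python
def ic_single_resistor(vcc, rb, beta, vbe=0.65):
    # base current is fixed by Rb alone, so Ic scales 1:1 with beta
    ib = (vcc - vbe) / rb
    return beta * ib

def ic_divider(vcc, r1, r2, re, beta, vbe=0.65):
    # Thevenin-reduce the divider, then solve the base-emitter loop;
    # feedback through the emitter resistor makes Ic nearly
    # beta-independent
    vth = vcc * r2 / (r1 + r2)
    rth = r1 * r2 / (r1 + r2)
    ib = (vth - vbe) / (rth + (beta + 1) * re)
    return beta * ib

for beta in (100, 300):
    print(beta,
          ic_single_resistor(12, 1e6, beta) * 1000,   # mA, scales 3x
          ic_divider(12, 47e3, 10e3, 1e3, beta) * 1000)  # mA, ~flat
```

Same batch-to-batch spread in beta; one design shrugs it off and the other rides it.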

I think these situations are analogous to the mindless programming I
described earlier. It's like using, well hell, anything QBasic at all -- I
don't think I've *ever* compiled an .EXE under 30 kilobytes! What
bloatware. And you can print "Hello World!" in probably about one
thousandth of that in pure machine code (compiled ASM), a large
percentage of that being the message itself. They accomplish the exact
same tasks, so
clearly you can say they both "work", so what's the difference, who cares if
it's 29,970 bytes bigger?

On the other hand, the methodical approach to programming and electronics
involves laying things out in convenient blocks and working each relatively
discrete unit separately. Recently (last year) I delved back into
programming, writing a 3D-point-placing program (again, still in QB) to go
with a fly-through-space program I made years ago. I started with the
original program's engine, but because I needed two separate systems, edit
and test, I chucked that in a subroutine, copied its control loop (IF Keyboard$
= "A" THEN ...) and changed, added and removed keypress functions.

Since all the type defs need to be there, plus a few extra, I kept that on
the main module. I added a bunch more shared variables, as is required.
QBasic sucks at memory (one could argue it is I who doesn't know how to use
it, fair enough ;) so for the main data storage I initialized an array using
all remaining available memory, about 36kB, and then I merely adjust the
maximum index value according to how many points are in the loaded file.
When initializing a new point, its value is reset to default (0,0,0) so,
although the old data remains in memory, you can never access it.
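The same trick translates straight out of QB (a hypothetical sketch, not the original code; `PointStore` and `MAX_POINTS` are invented names): preallocate one fixed buffer and move only the logical length, resetting each slot to the default as it is handed out.

```python
MAX_POINTS = 3000  # stand-in for "all remaining available memory"

class PointStore:
    def __init__(self):
        # allocate once; stale data may linger past the logical end,
        # but nothing past self._count is ever reachable
        self._buf = [(0.0, 0.0, 0.0)] * MAX_POINTS
        self._count = 0

    def add_point(self):
        # a newly issued slot is reset to the default (0, 0, 0)
        self._buf[self._count] = (0.0, 0.0, 0.0)
        self._count += 1
        return self._count - 1

    def set_point(self, index, xyz):
        assert 0 <= index < self._count
        self._buf[index] = xyz

    def get_point(self, index):
        assert 0 <= index < self._count
        return self._buf[index]
```

The point of the design is the same as described above: allocation cost is paid once up front, and "loading a file" is just setting the count.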

Since the screen fonts suck on a graphical display, I already planned to add
a font I created some time ago. To keep the main module *relatively* neat,
I put the display redraw under a sub, which erases, draws all the lines and
windows and stuff, then adds text bits (menu, vital information, etc.) and
finally places dots representing the vertices in view of each window (all
three of which can be scrolled and zoomed individually). Mouse was a
required part so I dropped that in...actually I didn't since the program had
it to begin with. ;) I did add a function that selects the nearest point
when you click the mouse on a viewing pane, not to mention all the clickable
areas for entering data. BTW, 657 lines, about 22KB source, 70KB compiled.
The 3D viewer alone (which I started from, and which is contained within
said) is 9KB code.

But for all of the immense complexity, I completed it in record time, a few
months maybe. The progression of 3D programs I've made took years of
on-and-off tinkering and twiddling. Likewise, of my electronics, two of my
tube amplifiers...
http://webpages.charter.net/dawill/tmoranwms/Elec_Frankenhouse.html
http://webpages.charter.net/dawill/tmoranwms/Elec_Revision3.html
(See, there's a reason the latter is named what it is :)
...have been subject to a range of tinkerings. The pages above should be
the latest model, while back schematics may be under my schematic
collection. But my most recent project, the induction heater, has been a
logical progression: theory model, large theory model, switching model,
model sized prototype, final prototype. And in that, I've been keeping the
more complicated circuit down to more basic elements.

For example:
http://webpages.charter.net/dawill/Images/Induction_Heater_Draft1.gif
- It's like, what the **** is this? Besides my habit of drawing tight,
confusing schematics, it's hard to see what's going on because there's just
so much going on. It doesn't even fit on one screen! (Shut up Jim ;-)
This, on the other hand:
http://webpages.charter.net/dawill/tmoranwms/Elec_IndHeat6.html
is broken into five separate images, and most of them (especially #3)
include a lot of air between sections, letting you see the further
discrete circuits much more easily: transistor switches, comparators,
amplifiers, followers.

The first one I drew as a composite of six sheets of paper I scribbled my
first draft on. The second [set] I drew after a few suggestions and
revisions, but I think it represents the block theory better than the first,
which represents chaos rather than the order it's supposed to be.

If you should take any class at all, I suggest a high level math course,
with a good professor. Learn problem solving tactics. The thing about math
is, you can only do so many things with an equation, so think about it,
consider which tool fits the nut the best, then give it a twist. And
everything in math is derived from everything else: calculus seems
intimidating yet is so wholly intuitive and can be expressed in terms of the
most basic theorems and arithmetic. The only thing hard about it is
remembering all the identities and how to derive them, but if you think
about them you can cover those too. A lot applies to electronics: analyze
the situation, figure out what functions you need to perform, then figure
out which circuit elements can be combined to serve that function.

I could go on and on, happily because it's such a pleasure to make things
and use logic, but alas, I've more or less hit the stopping point of what I
wanted to say. If it makes any sense at all...

Tim
 

Richard Henry

Ignoramus10397 said:
Would you agree if I replaced "lack of documentation" with "not
understandable software"? I agree that stuff that is written in a
manner that is hard to understand needs at least documentation,
although I feel that it is better to rewrite it.

In many, if not most, instances, there is a way to write code that is
basically self-explanatory or needs just a few comments on top and
some sprinkled in.

In what languages are you coding?
 

Ignoramus10397

Are you talking about how programming often involves sitting down and
pissing and twiddling with the code until it does approximately what you set
out to do, without too many bugs?

That often happens indeed.
In my earlier years, I did a lot of that. I wrote a ray caster pseudo-3D
walkthrough program, even with textures (okay, so 8x8 isn't much for a
texture, but still), and today I look at the QuickBasic code I wrote and I
just think

Huh?

I know I did it basically by iterating a line, segment by segment (yeah,
slow) from the viewpoint until it intersects a wall, and that the decimal
fraction of where it intersects determines what column of pixels to draw
from the texture in memory, and that distance times the cosine of the ray's
angle determines drawn height on screen, but damned if I can read the code
and figure out exactly how I did it.

And not only that, but once a piece of code is down, whether or not it works
"well", it works, so you're not apt to change it ("if it ain't broke don't
fix it").

Yes, especially if it is packaged into a nice callable function, class,
or whatever (encapsulated).
I think this compares well to electronics. Maybe you haven't figured it out
yet, but you're new, too.

Indeed I am a complete newbie to electronics.
I don't know what anyone else calls it, but I'd like to call it
"simulator syndrome". You start with a basic pretense of a circuit,
maybe some theoretical setup that forms the heart of your project.
Then you crystallize parts around that, twiddling until it works to
your satisfaction. So now you have a circuit, that works, in the
simulator. The problem is, the simulator is an idealized reality,
and like so many philosophies[*], it can just crumble to a stinking
bucket of shit when you print it with real components.

Very well said.
I think these situations are analogous to the mindless programming I
described earlier. It's like using, well hell, anything QBasic at all -- I
don't think I've *ever* compiled an .EXE under 30 kilobytes! What
bloatware. And you can print "Hello World!" in probably about one
thousandth of that in pure machine code (compiled ASM), a large
percentage of that being the message itself. They accomplish the exact
same tasks, so
clearly you can say they both "work", so what's the difference, who cares if
it's 29,970 bytes bigger?

Yes. Plus, what's 29,970 bytes these days?
On the other hand, the methodical approach to programming and
electronics involves laying things out in convenient blocks and
working each relatively discrete unit separately.

Which is definitely a good practice when it comes to programming.
But for all of the immense complexity, I completed it in record time, a few
months maybe. The progression of 3D programs I've made took years of
on-and-off tinkering and twiddling. Likewise, of my electronics, two of my
tube amplifiers...
http://webpages.charter.net/dawill/tmoranwms/Elec_Frankenhouse.html
http://webpages.charter.net/dawill/tmoranwms/Elec_Revision3.html

Looks cute.
For example:
http://webpages.charter.net/dawill/Images/Induction_Heater_Draft1.gif
- It's like, what the **** is this? Besides my habit of drawing tight,
confusing schematics, it's hard to see what's going on because there's just
so much going on. It doesn't even fit on one screen! (Shut up Jim ;-)
This, on the other hand:
http://webpages.charter.net/dawill/tmoranwms/Elec_IndHeat6.html
is broken into five separate images, and most of them (especially #3)
include a lot of air between sections, letting you see the further
discrete circuits much more easily: transistor switches, comparators,
amplifiers, followers.

Yes, indeed splitting stuff into blocks works well.
If you should take any class at all, I suggest a high level math course,
with a good professor. Learn problem solving tactics. The thing about math
is, you can only do so many things with an equation, so think about it,
consider which tool fits the nut the best, then give it a twist. And
everything in math is derived from everything else: calculus seems
intimidating yet is so wholly intuitive and can be expressed in terms of the
most basic theorems and arithmetic.

Of all my college classes, I liked calculus the best.
The only thing hard about it is remembering all the identities and
how to derive them, but if you think about them you can cover those
too. A lot applies to electronics: analyze the situation, figure
out what functions you need to perform, then figure out which
circuit elements can be combined to serve that function.

I could go on and on, happily because it's such a pleasure to make things
and use logic, but alas, I've more or less hit the stopping point of what I
wanted to say. If it makes any sense at all...

Tim
i
 

Geoff

I love the C construct 'where(x)' :)

There's a 'where(x)' construct in C? How is it implemented?
Where is this documented?

Surely you mean "while(x)", yes?

Methinks you have been programming Fortran or LISP too much.
 

Ignoramus10397

Forget about designing electronics!

Well, I am not trying to get into this professionally. Computer
programming is better for me anyway. I just need some electronic
pieces from time to time, that's as far as I am willing to go.

i
 