Maker Pro

How to develop a random number generation device


John Larkin

I guess that makes me a Software Engineer. I design, write, and deliver
bug-free code that does exactly what it is instructed to do, no more,
no less.

Admittedly, I've never done a whole OS, but all one would need to do
is apply the same principle of "design it before you start to build
it". :)

I think part of the blame for crappy programmers these days can be
traced back to BASIC - the language where the script kiddie starts
entering his program before it's even written! :)

No, most bad programmers these days first learn C.

The first non-assembly language I used was Basic-Plus, and I still
program in PowerBasic, and assembly!

There's nothing wrong with Basic, especially the modern versions.
Given an adequate language that doesn't positively force bad habits,
the programmer is what matters.

Interesting, but I don't really design a program first, other than
some rough notions; I start coding it bottom-up, and design the
structure along the way. What matters is the final product, which I
get to by lots of reading and re-writing until it's perfect. Works for
me.

I do write the manuals first.

Read "Dreaming in Code" by Scott Rosenberg.


John
 

krw

That may be true, but it is no reason to throw away a drive when it
has the lowest probability of failing in favor of one with a higher
likelihood.
We bought a *lot* of identical drives when we bought the batch of PCs.
I don't want to worry about PCs for another 4 or 5 years maybe.

If they're all identical, you better hope it's a good design/lot. I
think I have what must be the last IBM 75GXP (a.k.a. "Death Star") on
the planet that still works. I don't trust it though.
 

John Larkin

Sounds like you have things under control. Just out of curiosity-- do
you have an "IT guy" or are things kept running by real engineers?

We have a young guy who started casually helping Mo with some computer
problems at her work, and then he started helping us, and then we
hired him, and now he's our marketing manager and "IT" guy. He manages
the internet connection and the company file server and the Google ads
and such. The engineers mostly futz with their own PCs, once he does
the basic setup.


John
 

MooseFET

Programmers have pretty much proven that they cannot write bug-free
large systems.
In every other area, humans make mistakes and yet we seem surprised
that programmers do too.

In most other areas of endeavour small tolerance errors do not so
often lead to disaster. Boolean logic is less forgiving. And fence
post errors which even the best of us are inclined to make are very
hard to spot. You see what you intended to write and not what is
actually there. Walkthroughs and static analysis tools can find these
latent faults if budget permits.
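A minimal C sketch of the sort of fence-post slip meant here (the
off-by-one bound is shown only as a comment; the function is invented
for illustration):

/* Fence-post illustration: the loop must cover buf[0] .. buf[n-1].
   Writing "i <= n" instead of "i < n" reads almost identically in a
   walkthrough, which is exactly why these slip past review. */
#include <stddef.h>

static int sum_buf(const int *buf, size_t n)
{
    int total = 0;
    /* for (size_t i = 0; i <= n; i++)   <-- the classic off-by-one */
    for (size_t i = 0; i < n; i++)       /* correct bound */
        total += buf[i];
    return total;
}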

Static analysis tools can only find some bugs. Some code has to be
stepped through to see if it ever gets stuck or goes into a loop. I'm
thinking of things like:

while (X > 1) {
    if (X % 2 == 0)      /* X is even */
        X = X / 2;
    else
        X = 3 * X + 1;
}


It is really hard to see whether for some values of X this sticks in a
loop or not.
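A minimal C sketch of that loop with a step cap bolted on, since there
is no obvious bound to test against (the cap and the starting value 27
are arbitrary choices for illustration):

#include <stdio.h>

/* Iterate the 3n+1 loop from above, but give up after 'cap' steps,
   since in general nobody can prove that the loop terminates.
   Returns the number of steps taken, or -1 if the cap is hit. */
static long collatz_steps(unsigned long long x, long cap)
{
    long steps = 0;
    while (x > 1) {
        if (steps++ >= cap)
            return -1;                           /* possible non-termination */
        x = (x % 2 == 0) ? x / 2 : 3 * x + 1;    /* may overflow for huge x  */
    }
    return steps;
}

int main(void)
{
    printf("starting from 27: %ld steps\n", collatz_steps(27, 1000000));
    return 0;
}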

Some practitioners, Donald Knuth for instance, have managed to produce
virtually bug-free non-trivial systems (TeX).

Given the right motivation, I suspect that quite a few programmers
could do it. The problem is partly the one you point out below and
partly that those guys already have jobs. There is a lot of quite
good code being written. It doesn't get noticed because of the
mountains of crap it is hiding in.
OTOH the current
business paradigm is ship it and be damned. You can always sell
upgrades later. Excel 2007 is a pretty good current example of a
product shipped way before it was ready. Even Excel MVPs won't defend
it.

I have always worked in an environment where bugs are not allowed. I
don't have a perfect record, but I'm sure that my rate of making bugs
is way lower than that of the average programmer in an environment
where bugs are allowed. Practice helps.

Software programming hasn't really had the true transition to a hard
engineering discipline yet. There hasn't been enough standardisation
of reliable component software parts for sale off the shelf equivalent
in complexity to electronic ICs that really do what they say on the
tin and do it well.

.... and further: In a lot of ways, we need better languages. Back in
the days of DOS I was helping someone fix a program that used a
library for working with the serial port, and by the time we got done
we no longer used the library. I don't think there was really much
wrong with the library; it was just that we couldn't figure out how to
make it do what was needed. It had a thick book full of documentation
of the dozens of functions it contained. Going cover to cover several
times we simply couldn't find the needed routines or how to call them.

We needed a "get the next character from COM1 or any change in the
modem status in the order they happened please" function.
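Roughly this sort of interface, sketched in C, was what was missing;
every name and field here is hypothetical, not the actual library's:

/* Hypothetical interface: hand back serial events in arrival order,
   whether they are received characters or modem-status changes.
   All names and fields are invented for illustration. */

enum com_event_kind { COM_EVENT_CHAR, COM_EVENT_MODEM_STATUS };

struct com_event {
    enum com_event_kind kind;
    unsigned char ch;            /* valid when kind == COM_EVENT_CHAR         */
    unsigned char modem_status;  /* valid when kind == COM_EVENT_MODEM_STATUS */
};

/* Block until the next event on COM1, fill *ev, return 0 on success
   or -1 on error.  One call, one ordered stream of events. */
int com1_get_next_event(struct com_event *ev);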


By comparison, thanks to Whitworth & Co, mechanical engineering has
standardised nuts and bolts out of a huge arbitrary parameter space.
Nobody these days would make their own nuts and bolts from scratch
with randomly chosen pitch, depth and diameter.

Perhaps not today, but back in 1998 someone in China was making his
own bolts etc. They were threaded with an odd pitch, roughly the
typical US sizes rounded off to the nearest metric thing his lathe
could do. He was threading into plastic; the standard metric threads
wouldn't hold, and he couldn't find the right sort of insert.

How about calling them modules. Re-usable components with a clearly
defined and documented external interface that do a particular job
extremely well. NAGLIB, the IJG JPEG library or FFTW are good examples
although arguably in some cases the external user interface is more
than a bit opaque.

I like the idea of modules. Maybe we could have a programming system
that is more like designing analog hardware. You indicate where the
data goes, putting down the modules you need and wiring them up.

Have you ever played with "artsbuilder"? It is sort of what I'm
thinking of.

And the very complex hardware is invariably designed using software
tools. The problem is not that it is impossible to make reliable
software. The problem is that no-one wants to pay for it.

I suspect that we are paying for it many times over. It is just that
people can't see the money going out of their pocket.
In hardware chip design the cost of fabricating a batch of total junk
is sufficiently high and painful that the suits will usually allow
sufficient time for testing before putting a new design into full
production. Not so for software where upgrade CDs and internet
downloads are seen as dirt cheap.

Also: "software is magic. Hardware has all those transistor thingies
in it."


[...]
For N<=4 full multiprocessing is fairly tractable, and after that it
suddenly gets a lot harder. And I have known one multiprocessing
mainframe box where N=3 was perfect and the advertised upgrade to N=4
was a dog's dinner. Worth pointing out here that very few software
programs these days are that heavily CPU bound.

Did you ever work on an Intel Series 4 development system? It was
multiprocessor and multiuser. In both cases "multi" meant 1.5. It
had an 8086 and an 8080 in it. If the user on the 8086 started a
compile, the 8080 froze up.

Most CPUs in PCs these days are vastly overpowered for the normal day
to day work. Only a few power users and gamers really push them hard.

You obviously have seen how fast I typed all this :)


I rarely need all of the power of my PC. Sometimes though a run on
LTSpice will take all night. I rarely press the gas all the way to
the floor on my car too.
 

John Larkin

John said:
John Larkin wrote:
On Mon, 17 Sep 2007 18:40:35 +0200, David Brown

John Larkin wrote:
On Sun, 16 Sep 2007 22:07:42 +0200, David Brown

John Larkin wrote:
On Sep 15, 11:09 am, John Larkin
[....]
architecture. In a few years we'll have, say, 1024 processors on a
chip, and something new will be required to manage them. It will be a
thousand times simpler and more reliable than Windows.
I think that the number of virtual cores will grow faster than the
number of real cores. With extra register banks and a bit of clever
design, a single ALU can look like two slightly slower ones.

I expect to see multicore machines with fewer actual floating-point
ALUs than integer ALUs.

Sounds sort of like Sun's Niagara chips, which have (IIRC) 8 cores, each
with 4 threads, but only a few floating point units. For things like
web serving, it's ideal.

Yup. Low-horsepower tasks can just be a thread on a multithread core,
and many little tasks don't need a dedicated floating-point unit.

My point/fantasy is that OS design should change radically if many,
many real or virtual CPUs are available. One CPU would be the manager,
and every task, process, or driver could have its own, totally
confined and protected, CPU, and there would be no context switching
ever, and few interrupts in fact.

That's not going to work for Linux, anyway - there is a utility thread
spawned per cpu at the moment (work is underway to avoid this, because
it is a bit of a pain when you have thousands of cpus in one box).

However, there is no point in having a cpu (or even a virtual cpu)
dedicated to each task. Many sorts of tasks spend a lot of time
sleeping while waiting for other events - a cpu in this state is a waste
of resources.
Only if you think of a CPU as a valuable resource. As silicon shrinks,
a CPU becomes a minor bit of real estate. It makes sense to use it
when there's something to do, and put it to sleep when there's not.
Lots of power gets saved by not doing context switches.

CPUs *are* a valuable resource - modern cpu cores take up a lot of
space, even when you exclude things like the cache (which take more
space, but cost less per mm^2 since you can design in a bit of
redundancy and thus tolerate some faults).

The more CPUs you have, the more time and space it costs to keep caches
and memory accesses coherent. There are some sorts of architectures
which work well with multiple CPU cores, but these are not suitable for
general purpose computing.

My point is that large numbers of CPU cores *will* become common and
cheap, and we need a new type of OS to take advantage of this new
reality. Done right, it could be simple and astoundingly secure and
reliable.

I would be very surprised to see a system where the number of CPU cores
was greater than the number of processes. I expect to see the number of
cores increase, especially for server systems, but I don't expect to see
systems where it is planned and expected that most cores will sleep most
of the time.

Well, I remember 64-bit static rams, and 256-bit DRAMS. I can't see
any reason we couldn't have 256 or 1024 cpu's on a chip, especially if
a lot of them are simple integer RISC machines.

You can certainly get 1024 CPUs on a chip - there are chips available
today with hundreds of cores. But there are big questions about what
you can do with such a device - they are specialised systems. To make
use of something like that - you'd need a highly parallel problem (most
desktop applications have trouble making good use of two cores - and it
takes a really big web site or mail gateway to scale well beyond about
16 cores). You also have to consider the bandwidth to feed these cores,
and be careful that there are no memory conflicts (since cache coherency
does not scale well enough).

No, no, NO. You seem to be assuming that we'd use multiple cores the
way Windows would use multiple cores. I'm not talking about solving
big math problems; I'm talking about assigning one core to be a disk
controller, one to do an Ethernet/stack interface, one to be a printer
driver, one to be the GUI, one to run each user application, and one
to be the system manager, the true tiny kernel and nothing else.
Everything is dynamically loadable, unloadable, and restartable. If a
core is underemployed, it sleeps or runs slower; who cares if
transistors are wasted? This would not be a specialized system, it
would be a perfectly general OS with applications, but no process
would hog the machine, no process could crash anything else, and it
would be fundamentally reliable.

That would be an absurd setup. There is some justification for wanting
multiple simple cores in server systems (hence the Sun Niagara chips),
but not for a desktop system. The requirements for a disk controller, a
browser, and Doom are totally different. With a few fast cores like
today's machines, combined with dedicated hardware (on the graphics
card), you get a pretty good system that can handle any of these. With
your system, you'd get a chip with a couple of cores running flat out
(but without a hope of competing with a ten year old PC, as they could
not have comparable bandwidth, cache, or computing resources in each
core), along with a few hundred cores doing practically nothing. In
fact, most of the cores would *never* be used - they are only there in
case someone wants to do a few extra things at the same time since you
need a core per process.
This is not about performance; hardly anybody needs gigaflops. It's
all about reliability.

Until you can come up with some sort of justification, however vague, as
to why you think one cpu per process is more reliable than context
switches, this whole discussion is useless.

You define yourself by the ideas you refuse to consider. So I suppose
you'll still be running Windows 20 years from now.

John
 

Martin Brown

[....]
Programmers have pretty much proven that they cannot write bug-free
large systems.
In every other area, humans make mistakes and yet we seem surprised
that programmers do too.
In most other areas of endeavour small tolerance errors do not so
often lead to disaster. Boolean logic is less forgiving. And fence
Software programming hasn't really had the true transition to a hard
engineering discipline yet. There hasn't been enough standardisation

Compare a software system to an FPGA. Both are complex, full of state
machines (implicit or explicit!), both are usually programmed in a
hierarchical language (C++ or VHDL) that has a library of available
modules, but the FPGAs rarely have bugs that get to the field, whereas
most software rarely is ever fully debugged.

I think that hardware engineers get a better grounding in logical
design (although I haven't looked at modern CS syllabuses so I may be
out of date).

But it is mostly a cultural thing. Software houses view minimum time
to market and first mover advantage to gain maximum market share as
more important than correct functionality. And it seems they are
right. Just look at Microsoft Windows vs IBM's OS/2: a triumph of superb
marketing over technical excellence!

And I have bought my fair share of hardware that made it onto the
market bugs and all too. My new fax machine caught fire. Early V90
modems that only half work etc.
So, computers should use more hardware and less software to manage
resources. In fact, the "OS kernel" of my multiple-CPU chip could be
entirely hardware. Should be, in fact.

You are treating the symptoms and not the disease. Strongly typed
languages already exist that would make most of the classical errors
of C/C++ programmers go away. Better tools would help in software
development, but until the true cost of delivering faulty software is
driven home the suits will always go for the quick buck.

Regards,
Martin Brown
 

Martin Brown

Static analysis tools can only find some bugs. Some code has to be
stepped through to see if it ever gets stuck or goes into a loop. I'm
thinking of things like:

while (X > 1) {
    if (X % 2 == 0)      /* X is even */
        X = X / 2;
    else
        X = 3 * X + 1;
}

It is really hard to see whether for some values of X this sticks in a
loop or not.

Halting problems are intrinsically hard. Nothing much you can do about
that.
So hard, in fact, that these hailstone numbers are the subject of the
Collatz conjecture: every starting value is expected to terminate
eventually in the repeating pattern 4 - 2 - 1 - 4 .... ad infinitum.

However no proof exists. And a brute force search out to around 2^58
failed to find any numbers that didn't.
I have always worked in an environment where bugs are not allowed. I
don't have a perfect record, but I'm sure that my rate of making bugs
is way lower than that of the average programmer in an environment
where bugs are allowed. Practice helps.

Depends what you are working on. If a failure can have mission
critical implications then people pay a lot more attention. If the
screen refresh gets garbled and a few icons go missing nobody really
cares.
... and further: In a lot of ways, we need better languages. Back in

Better languages already exist, but almost no-one uses them.
We needed a "get the next character from COM1 or any change in the
modem status in the order they happened please" function.
I like the idea of modules. Maybe we could have a programming system
that is more like designing analog hardware. You indicate where the
data goes, putting down the modules you need and wiring them up.

Already exists: Niklaus Wirth's minimalist language Modula-2 more or
less fits the bill. A small, simple language with very tightly defined
module interfaces, opaque types and low-level generic I/O primitives. It
never really caught on.
Logitech (now of mouse fame) sold commercial versions of the ETH
Zurich M2 compiler for PCs in the mid 80's.

http://www.modula2.org/
http://www.cfbsoftware.com/modula2/

Those links have some of the history. Ada tried to be everything to
all men and became bloated as a result.
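A rough C analogue of the opaque-type, tight-module-interface idea
(not Modula-2 syntax; the FIFO is just an invented example):

/* ---- fifo.h: the interface callers see ---- */
typedef struct fifo fifo;             /* opaque: layout hidden from callers */

fifo *fifo_create(unsigned capacity);
int   fifo_put(fifo *f, int value);   /* 0 on success, -1 if full  */
int   fifo_get(fifo *f, int *value);  /* 0 on success, -1 if empty */
void  fifo_destroy(fifo *f);

/* ---- fifo.c: the only place the representation is known ---- */
#include <stdlib.h>

struct fifo {
    unsigned head, tail, count, capacity;
    int *slots;
};

fifo *fifo_create(unsigned capacity)
{
    fifo *f;
    if (capacity == 0) return NULL;
    f = malloc(sizeof *f);
    if (!f) return NULL;
    f->slots = malloc(capacity * sizeof *f->slots);
    if (!f->slots) { free(f); return NULL; }
    f->head = f->tail = f->count = 0;
    f->capacity = capacity;
    return f;
}

int fifo_put(fifo *f, int value)
{
    if (f->count == f->capacity) return -1;
    f->slots[f->tail] = value;
    f->tail = (f->tail + 1) % f->capacity;
    f->count++;
    return 0;
}

int fifo_get(fifo *f, int *value)
{
    if (f->count == 0) return -1;
    *value = f->slots[f->head];
    f->head = (f->head + 1) % f->capacity;
    f->count--;
    return 0;
}

void fifo_destroy(fifo *f)
{
    if (f) { free(f->slots); free(f); }
}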
Have you ever played with "artsbuilder"? It is sort of what I'm
thinking of.

No. But I was once a fan of Nassi-Shneiderman diagrams which
encapsulate program logic in a visual form. Sadly the graphical tools
of the day were not really up to it. Sceptics called them nasty
spiderman diagrams.

Regards,
Martin Brown
 

John Larkin


Programmers have pretty much proven that they cannot write bug-free
large systems.
In every other area, humans make mistakes and yet we seem surprised
that programmers do too.
In most other areas of endeavour small tolerance errors do not so
often lead to disaster. Boolean logic is less forgiving. And fence
Software programming hasn't really had the true transition to a hard
engineering discipline yet. There hasn't been enough standardisation

Compare a software system to an FPGA. Both are complex, full of state
machines (implicit or explicit!), both are usually programmed in a
hierarchical language (C++ or VHDL) that has a library of available
modules, but the FPGAs rarely have bugs that get to the field, whereas
most software rarely is ever fully debugged.

I think that hardware engineers get a better grounding in logical
design (although I haven't looked at modern CS syllabuses so I may be
out of date).

Hardware can be spaghetti too, and can be buggy and nasty, if one does
asynchronous design. But in proper synchronous design, controlled by
state machines, immensely complex stuff just works. It's sort of
ironic that in a big logic design, 100K gates and maybe 100 state
machines, everything happens all at once, every clock, across the
entire chip, and it works. Whereas with software, there's only one PC (program counter),
only one thing happens, at a single location, at a time, and usually
nobody can predict the actual paths, or write truly reliable code.

But it is mostly a cultural thing. Software houses view minimum time
to market and first mover advantage to gain maximum market share as
more important than correct functionality. And it seems they are
right. Just look at Microsoft Windows vs IBM's OS/2: a triumph of superb
marketing over technical excellence!

And I have bought my fair share of hardware that made it onto the
market bugs and all too. My new fax machine caught fire. Early V90
modems that only half work etc.

You are treating the symptoms and not the disease. Strongly typed
languages already exist that would make most of the classical errors
of C/C++ programmers go away. Better tools would help in software
development, but until the true cost of delivering faulty software is
driven home the suits will always go for the quick buck.

No, I am making the true observation that complex digital logic
designs are usually bug-free, simple software systems have a chance of
being so, and complex software systems never are.

John
 

David Brown

John said:
[....]

Programmers have pretty much proven that they cannot write bug-free
large systems.
In every other area, humans make mistakes and yet we seem surprised
that programmers do too.
In most other areas of endeavour small tolerance errors do not so
often lead to disaster. Boolean logic is less forgiving. And fence
post errors which even the best of us are inclined to make are very
hard to spot. You see what you intended to write and not what is
actually there. Walkthroughs and static analysis tools can find these
latent faults if budget permits.

Some practitioners, Donald Knuth for instance, have managed to produce
virtually bug-free non-trivial systems (TeX). OTOH the current
business paradigm is ship it and be damned. You can always sell
upgrades later. Excel 2007 is a pretty good current example of a
product shipped way before it was ready. Even Excel MVPs won't defend
it.
Unless there's some serious breakthrough - which is
really prohibited by the culture
I think there really is a fundamental limitation that makes it such
that the programming effort becomes infinite to make a bug free large
system. We do seem to be able to make bug free small systems,
however.
Software programming hasn't really had the true transition to a hard
engineering discipline yet. There hasn't been enough standardisation
of reliable component software parts for sale off the shelf equivalent
in complexity to electronic ICs that really do what they say on the
tin and do it well.

By comparison thanks to Whitworth & Co mechanical engineering has
standardised nuts and bolts out of a huge arbitrary parameter space.
Nobody these days would make their own nuts and bolts from scratch
with randomly chosen pitch, depth and diameter. Alas they still do in
software:(

Compare a software system to an FPGA. Both are complex, full of state
machines (implicit or explicit!), both are usually programmed in a
hierarchical language (C++ or VHDL) that has a library of available
modules, but the FPGAs rarely have bugs that get to the field, whereas
most software rarely is ever fully debugged.

There are a few points here. If you take a "typical" embedded card with
an FPGA and a large program, you'll find the software part has orders of
magnitude more lines of programmer-written code than the FPGA. In an
FPGA, the space is often taken with pre-written code (such as an
embedded processor or other high-level macros), and much of the
remaining space is taken by multiple copies of components. Although
getting each line of the FPGA code right is harder than getting each
line of the C/C++ right, there are fewer lines in total. And for various
reasons (not all of which are understood), studies show that the rate of
bugs in programs is roughly proportional to the number of lines, almost
independent of the language and of the type of programming. Weird, but
apparently true. That's part of the reason for using higher level
languages like Python (or MyHDL for FPGA design) rather than C++ - not
only do programmers typically code faster, they make fewer mistakes.

Of course, an FPGA project typically involves a lot more comprehensive
testing than a typical C++ project, and is typically better planned
(with less feature creep), both of which are critical to getting low bug
rates.

However, you have to remember that hardware (and FPGA) and software do
different jobs. Sometimes there are jobs that can be implemented well
in either, but that's seldom the case. And when there is, it is
generally much faster to develop a software solution. What is often
missing in the software side is a commitment of time and resources to
proper development and testing that would mean the development took
longer, but gave a more reliable result (and thus often saves money in
the long term). With FPGA design, if you don't make such a commitment,
your project will never work at all - thus it is more likely that a
released product is nearly bug-free.
So, computers should use more hardware and less software to manage
resources. In fact, the "OS kernel" of my multiple-CPU chip could be
entirely hardware. Should be, in fact.

There are certainly benefits in putting some of an OS in hardware - but
the hardware can never be as flexible as software. If you want an
example of a device with a hardware OS, have a look at
http://www.innovasic.com/fido.htm (it's 68k based, so you'll like it).
I've seen other cpus with OS hardware - typically it is to make task
switching more predictable so that hardware devices like timers and
UARTs can be simulated in software.
Yes. The bug level is proportional to the ease of making revisions.
That's why programmers type rapidly and debug, literally, forever.

No, a lack of commitment to proper design strategy and testing is why
software developers typically start in the middle of a project and never
properly finish it. The ease of making revisions and sending out
updates is part of why such a commitment is never made - managers
believe it is cheaper to ship prototype software and let users do the
testing.

The number of bugs is roughly proportional to the lines of code. It's
the debugging and testing (or lack thereof) that is often the problem,
combined with structural failures due to lack of design.
 

Rich Grise

line of the C/C++ right, there are fewer lines in total. And for various
reasons (not all of which are understood), studies show that the rate of
bugs in programs is roughly proportional to the number of lines, almost
independent of the language and of the type of programming. Weird, but
apparently true.

Well, duh. People, in general, make some percentage of mistakes, i.e., the
absolute number of mistakes will be proportional to the amount of work
performed, regardless of whether it's programming or building a boat.

With hardware, there's a lot more rigorous checking on the way - their
guidelines aren't as nebulous as software specs. ;-)

But software that's designed right shouldn't have any bugs; saying
"Oh, software _always_ has bugs" is just a lame excuse for sloppy design.

Admittedly, using defective tools isn't any help, but real programmers
don't use M$ Windoze anyway. ;-)

Cheers!
Rich
 

Rich Grise

Static analysis tools can only find some bugs. Some code has to be
It didn't take me any time at all to see that this has no bounds checking;
what happens if someone passes X=1.42857 to it?

This little example-oid was clearly written by one of those lame
programmer-wannabees who keeps sniveling "All Software Has Bugs And
There's Nothing You Can Do About It!!!" as an excuse for his
incompetence/laziness.

Cheers!
Rich
 

David Brown

John said:
John said:
On Mon, 17 Sep 2007 23:04:03 +0200, David Brown

John Larkin wrote:
On Mon, 17 Sep 2007 18:40:35 +0200, David Brown

John Larkin wrote:
On Sun, 16 Sep 2007 22:07:42 +0200, David Brown

John Larkin wrote:
On Sep 15, 11:09 am, John Larkin
[....]
architecture. In a few years we'll have, say, 1024 processors on a
chip, and something new will be required to manage them. It will be a
thousand times simpler and more reliable than Windows.
I think that the number of virtual cores will grow faster than the
number of real cores. With extra register banks and a bit of clever
design, a single ALU can look like two slightly slower ones.

I expect to see multicore machines with fewer actual floating-point
ALUs than integer ALUs.

Sounds sort of like Sun's Niagara chips, which have (IIRC) 8 cores, each
with 4 threads, but only a few floating point units. For things like
web serving, it's ideal.

Yup. Low-horsepower tasks can just be a thread on a multithread core,
and many little tasks don't need a dedicated floating-point unit.

My point/fantasy is that OS design should change radically if many,
many real or virtual CPUs are available. One CPU would be the manager,
and every task, process, or driver could have its own, totally
confined and protected, CPU, and there would be no context switching
ever, and few interrupts in fact.

That's not going to work for Linux, anyway - there is a utility thread
spawned per cpu at the moment (work is underway to avoid this, because
it is a bit of a pain when you have thousands of cpus in one box).

However, there is no point in having a cpu (or even a virtual cpu)
dedicated to each task. Many sorts of tasks spend a lot of time
sleeping while waiting for other events - a cpu in this state is a waste
of resources.
Only if you think of a CPU as a valuable resource. As silicon shrinks,
a CPU becomes a minor bit of real estate. It makes sense to use it
when there's something to do, and put it to sleep when there's not.
Lots of power gets saved by not doing context switches.

CPUs *are* a valuable resource - modern cpu cores take up a lot of
space, even when you exclude things like the cache (which take more
space, but cost less per mm^2 since you can design in a bit of
redundancy and thus tolerate some faults).

The more CPUs you have, the more time and space it costs to keep caches
and memory accesses coherent. There are some sorts of architectures
which work well with multiple CPU cores, but these are not suitable for
general purpose computing.

My point is that large numbers of CPU cores *will* become common and
cheap, and we need a new type of OS to take advantage of this new
reality. Done right, it could be simple and astoundingly secure and
reliable.

I would be very surprised to see a system where the number of CPU cores
was greater than the number of processes. I expect to see the number of
cores increase, especially for server systems, but I don't expect to see
systems where it is planned and expected that most cores will sleep most
of the time.

Well, I remember 64-bit static rams, and 256-bit DRAMS. I can't see
any reason we couldn't have 256 or 1024 cpu's on a chip, especially if
a lot of them are simple integer RISC machines.

You can certainly get 1024 CPUs on a chip - there are chips available
today with hundreds of cores. But there are big questions about what
you can do with such a device - they are specialised systems. To make
use of something like that - you'd need a highly parallel problem (most
desktop applications have trouble making good use of two cores - and it
takes a really big web site or mail gateway to scale well beyond about
16 cores). You also have to consider the bandwidth to feed these cores,
and be careful that there are no memory conflicts (since cache coherency
does not scale well enough).
No, no, NO. You seem to be assuming that we'd use multiple cores the
way Windows would use multiple cores. I'm not talking about solving
big math problems; I'm talking about assigning one core to be a disk
controller, one to do an Ethernet/stack interface, one to be a printer
driver, one to be the GUI, one to run each user application, and one
to be the system manager, the true tiny kernel and nothing else.
Everything is dynamically loadable, unloadable, and restartable. If a
core is underemployed, it sleeps or runs slower; who cares if
transistors are wasted? This would not be a specialized system, it
would be a perfectly general OS with applications, but no process
would hog the machine, no process could crash anything else, and it
would be fundamentally reliable.
That would be an absurd setup. There is some justification for wanting
multiple simple cores in server systems (hence the Sun Niagara chips),
but not for a desktop system. The requirements for a disk controller, a
browser, and Doom are totally different. With a few fast cores like
today's machines, combined with dedicated hardware (on the graphics
card), you get a pretty good system that can handle any of these. With
your system, you'd get a chip with a couple of cores running flat out
(but without a hope of competing with a ten year old PC, as they could
not have comparable bandwidth, cache, or computing resources in each
core), along with a few hundred cores doing practically nothing. In
fact, most of the cores would *never* be used - they are only there in
case someone wants to do a few extra things at the same time since you
need a core per process.
This is not about performance; hardly anybody needs gigaflops. It's
all about reliability.
Until you can come up with some sort of justification, however vague, as
to why you think one cpu per process is more reliable than context
switches, this whole discussion is useless.

You define yourself by the ideas you refuse to consider. So I suppose
you'll still be running Windows 20 years from now.

I run windows (on desktops) and Linux (on a desktop, a laptop, and a
bunch of servers, and on a fairly high-reliability automation system I
am working on), and I'd use something else if I needed an OS in my
embedded systems. If something better came along, I'd use that -
whatever is the right tool for the job.

The relevant saying is "keep an open mind, but not so open that your
brains fall out". I'm happy to accept that doing things in hardware is
often more reliable than doing things in software (I work with small
embedded systems - I know when reliability is important, and I know
about achieving it in practical systems). But what I am not willing to
accept is claims that you alone understand the way to make all computers
reliable, using a hardware design that is obviously (to me, anyway)
impractical, and you offer no justification beyond repeating claims that
"hardware is always more reliable than software", and therefore you can
practically guarantee that the future of computing will be dominated by
single task per core processors.

I believe I have been open minded - I've tried to point out the problems
with your ideas, and why I think it is impractical to design such chips,
and why they would be impractical for general purpose computing even if
they were made. I've repeatedly asked for justification for your
claims, and received none of relevance. I am more than willing to
discuss these ideas more if you can justify them - but until then, I'll
continue to view massively multi-core chips as useful for some
specialised tasks but inappropriate for general purpose (and desktop in
particular) computing.

I seem to remember previous discussions reaching similar conclusions -
you had a pretty way-out theory, leading to an interesting discussion
but ending with me giving up in frustration, and you calling me
closed-minded. These sorts of ideas are good for making people think,
but scientific minds are naturally sceptical until given solid evidence
and justification.

Best regards,

David
 

krw

The Cell BE processor beats it by a factor of at least three.

As if you'd know your ass from a hole in the ground, Dimbulb.

The Cell processor uses a rather simple PPC processor and attached
processors tuned specifically for FP performance. It's not a general
purpose processor. The M$ X-Box 360 uses what is essentially three
of the cores to eke out its performance.
 

krw

On Sep 15, 11:09 am, John Larkin
[....]
architecture. In a few years we'll have, say, 1024 processors on a
chip, and something new will be required to manage them. It will be a
thousand times simpler and more reliable than Windows.
I think that the number of virtual cores will grow faster than the
number of real cores. With extra register banks and a bit of clever
design, a single ALU can look like two slightly slower ones.
Not register banks, just a couple of bits in the rename register
files.
I think you mistook my point. You would have as many sets of registers
as there are virtual CPUs, perhaps plus some. When a task hits a
point where it needs to wait, its ALU section starts doing the work
for the lower priority task. This could be all hardware, so no context
switching time other than perhaps a clock cycle would be needed.

I don't think I did. My point is that you don't need banks of
registers, simply use the renaming that's already there and a couple
of bits to mark which registers are renamed to which virtual CPUs.
No context switch and no bank switching. All the hardware is already
there. More registers are needed in the register files but multiple
copies of the unused ones aren't.

Yes, I think I mistook your point.


The way I had imagined it was that the registers of the virtual CPUs
that are not currently running would be in a different place than the
ones that are actually being used. My concern was with not increasing
the fan-in and fan-out of the busses on the ALU, so that there would be
no increase in the loading and hence delay in those circuits.

If they're "somewhere" else, they have to be un/re/loaded. That
takes substantial time. You're going to have to figure out which
registers to un/re/load at that point. Remember, if you want to switch
virtual CPUs at any time, you're going to have to not only
save/re/load all architected registers but renamed registers, unless
you plan on quiescing/flushing the execution unit between virtual
CPU switches.
I also imagined the register exchanging having its own set of busses.
Perhaps I was too worried about bus times and not worried enough about
ALU times.

More busses => more register file ports, which is worse than adding
registers to the file.
You may have a point here. I've never actually measured the sizes of
such things. I was thinking back to the designs of bit slice
machines.

A lot of things change when transistors become less expensive than
the wires between them. ;-)
The throughput continues to grow fairly quickly but you end up with a
pipeline. When the circuit gets to a certain point, the stages become
equivalent to a multiplier circuit.

Pipelines lose quickly because you have to subtract (clock_jitter +
setup/hold) * pipe_stages from throughput. The P-IV is a good
example of this. ...about the only thing it's a good example of,
other than how *not* to architect a processor.
BTW:
There are four ways of getting to a sqrt() function. If you are doing
it on a microcontroller or other machine where dividing is very
costly, Newton's method is the slowest. If you have a fast multiply,
finding 1/sqrt(X) is much quicker.

Like many problems, start with a lookup table.
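One way to cash that out in C: seed a guess cheaply (here the widely
published integer bit trick stands in for a lookup table) and refine
1/sqrt(x) with multiply-only Newton steps; the accuracy comments are
rough:

#include <stdint.h>
#include <string.h>

/* Approximate 1/sqrt(x) for x > 0 using only multiplies and adds
   after a cheap integer seed; two Newton steps give roughly
   single-precision accuracy.  sqrt(x) is then just x * y. */
static float rsqrt_approx(float x)
{
    uint32_t i;
    float y;
    float half = 0.5f * x;

    memcpy(&i, &x, sizeof i);          /* reinterpret the float's bits    */
    i = 0x5f3759dfu - (i >> 1);        /* crude seed for 1/sqrt(x)        */
    memcpy(&y, &i, sizeof y);

    y = y * (1.5f - half * y * y);     /* Newton step: no division        */
    y = y * (1.5f - half * y * y);     /* second step tightens the result */
    return y;
}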
 

Nobody

It didn't take me any time at all to see that this has no bounds checking;
what happens if someone passes X=1.42857 to it?

In a statically-typed language, that isn't an option (unless you're
seriously suggesting that X would be declared as a floating-point variable).
This little example-oid was clearly written by one of those lame
programmer-wannabees who keeps sniveling "All Software Has Bugs And
There's Nothing You Can Do About It!!!" as an excuse for his
incompetence/laziness.

No, it's a simple example of something which can't easily be proven to
terminate. Most static analysis tools don't even try to address
non-termination.

Pointing out that some bugs can't be eliminated by static analysis isn't
the same thing as suggesting that they can't be caught at all.

Having said that, most of the bugs which occur in the wild are of a kind
which could easily be caught using better tools. More powerful type
systems (e.g. those typically found in functional languages) would go a
long way, as would design-by-contract (as in Eiffel).
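As a crude C-flavoured sketch of the design-by-contract idea, with
asserts standing in for Eiffel's built-in require/ensure clauses (the
function is just an invented example):

#include <assert.h>

/* Integer square root with its contract spelled out as asserts. */
static int int_sqrt(int x)
{
    assert(x >= 0);                        /* require: non-negative input */

    int r = 0;
    while ((r + 1) * (r + 1) <= x)         /* fine as a sketch; (r+1)^2 can
                                              overflow for x near INT_MAX  */
        r++;

    assert(r * r <= x && (r + 1) * (r + 1) > x);   /* ensure: floor(sqrt(x)) */
    return r;
}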
 

MooseFET

It didn't take me any time at all to see that this has no bounds checking;
what happens if someone passes X=1.42857 to it?

X is an integer. Most other readers understood what I meant by the
comment.

This little example-oid was clearly written by one of those lame
programmer-wannabees who keeps sniveling "All Software Has Bugs And
There's Nothing You Can Do About It!!!" as an excuse for his
incompetence/laziness.

No, it is an example of a proof that you can't check your code by
running it through a compiler or static checker, no matter how clever
that checker may be, unless you are willing to have the compiler take
weeks to compile. Problems of this sort require either that a new
method be found or that all cases be explored.
 

MooseFET

On Wed, 19 Sep 2007 05:04:34 -0700, Martin Brown


But in proper synchronous design, controlled by
state machines, immensely complex stuff just works. It's sort of
ironic that in a big logic design, 100K gates and maybe 100 state
machines, everything happens all at once, every clock, across the
entire chip, and it works. Whereas with software, there's only one PC (program counter),
only one thing happens, at a single location, at a time, and usually
nobody can predict the actual paths, or write truly reliable code.

4G of RAM * 8 bits is a lot more bits than 100K gates. You need to
keep your sizes equal if you want to make a fair comparison.
 
 