Maker Pro

How to develop a random number generation device


MooseFET

What's to re-read? Exploitation requires the write to succeed, which
requires that the overrun occur into memory that is writable by the
task.

It also requires that the write cause a bad thing(tm) to happen.

I, however, am now bored. I think I'll go read some electronics-related
ones.
 

John Larkin

You're probably better off going quite a bit longer than that between
replacements. Believe me, there are two ends of a bath tub. They
gave me a new Dell (hmm, maybe there is a common thread here) on my
first day. The disk drive died that afternoon and I lost a couple of
days work (couldn't get it replaced right away). It could have been
far worse though.


Actually, with RAID, I can just let one of the drives fail, and pop in
a new one when it does. No panic.

John
 

John Larkin

John said:
John Larkin wrote:
On Sun, 16 Sep 2007 22:07:42 +0200, David Brown

John Larkin wrote:
On Sep 15, 11:09 am, John Larkin
[....]
architecture. In a few years we'll have, say, 1024 processors on a
chip, and something new will be required to manage them. It will be a
thousand times simpler and more reliable than Windows.
I think that the number of virtual cores will grow faster than the
number of real cores. With extra register banks and a bit of clever
design, a single ALU can look like two slightly slower ones.

I expect to see multicore machines with fewer actual floating-point
ALUs than integer ALUs.

Sounds sort of like Sun's Niagara chips, which have (IIRC) 8 cores, each
with 4 threads, but only a few floating point units. For things like
web serving, it's ideal.

Yup. Low-horsepower tasks can just be a thread on a multithread core,
and many little tasks don't need a dedicated floating-point unit.

My point/fantasy is that OS design should change radically if many,
many real or virtual CPUs are available. One CPU would be the manager,
and every task, process, or driver could have its own, totally
confined and protected, CPU, and there would be no context switching
ever, and few interrupts in fact.

That's not going to work for Linux, anyway - there is a utility thread
spawned per cpu at the moment (work is underway to avoid this, because
it is a bit of a pain when you have thousands of cpus in one box).

However, there is no point in having a cpu (or even a virtual cpu)
dedicated to each task. Many sorts of tasks spend a lot of time
sleeping while waiting for other events - a cpu in this state is a waste
of resources.
Only if you think of a CPU as a valuable resource. As silicon shrinks,
a CPU becomes a minor bit of real estate. It makes sense to use it
when there's something to do, and put it to sleep when there's not.
Lots of power gets saved by not doing context switches.

CPUs *are* a valuable resource - modern cpu cores take up a lot of
space, even when you exclude things like the cache (which take more
space, but cost less per mm^2 since you can design in a bit of
redundancy and thus tolerate some faults).

The more CPUs you have, the more time and space it costs to keep caches
and memory accesses coherent. There are some sorts of architectures
which work well with multiple CPU cores, but these are not suitable for
general purpose computing.

My point is that large numbers of CPU cores *will* become common and
cheap, and we need a new type of OS to take advantage of this new
reality. Done right, it could be simple and astoundingly secure and
reliable.

I would be very surprised to see a system where the number of CPU cores
was greater than the number of processes. I expect to see the number of
cores increase, especially for server systems, but I don't expect to see
systems where it is planned and expected that most cores will sleep most
of the time.

Well, I remember 64-bit static rams, and 256-bit DRAMS. I can't see
any reason we couldn't have 256 or 1024 cpu's on a chip, especially if
a lot of them are simple integer RISC machines.

You can certainly get 1024 CPUs on a chip - there are chips available
today with hundreds of cores. But there are big questions about what
you can do with such a device - they are specialised systems. To make
use of something like that - you'd need a highly parallel problem (most
desktop applications have trouble making good use of two cores - and it
takes a really big web site or mail gateway to scale well beyond about
16 cores). You also have to consider the bandwidth to feed these cores,
and be careful that there are no memory conflicts (since cache coherency
does not scale well enough).

No, no, NO. You seem to be assuming that we'd use multiple cores the
way Windows would use multiple cores. I'm not talking about solving
big math problems; I'm talking about assigning one core to be a disk
controller, one to do an Ethernet/stack interface, one to be a printer
driver, one to be the GUI, one to run each user application, and one
to be the system manager, the true tiny kernel and nothing else.
Everything is dynamically loadable, unloadable, and restartable. If a
core is underemployed, it sleeps or runs slower; who cares if
transistors are wasted? This would not be a specialized system, it
would be a perfectly general OS with applications, but no process
would hog the machine, no process could crash anything else, and it
would be fundamentally reliable.

This is not about performance; hardly anybody needs gigaflops. It's
all about reliability.
That's a conjecture plucked out of thin air. Of course a dedicated OS
designed to be limited but highly reliable is going to be more reliable
than a large general-purpose OS that must run on all hardware and
support all sorts of software - but that has absolutely nothing to do
with the number of cores!

Programmers have pretty much proven that they cannot write bug-free
large systems. Unless there's some serious breakthrough - which is
really prohibited by the culture - the answer is to have the hardware,
which people *do* routinely get right, take over most of the functions
that an OS now performs. One simple way to do that is to have a CPU
per process. It's going to happen.

When I was just a sprout, my old mentor Melvin Goldstein told me "in
these integrated circuit things, one day transistors could cost a
penny each." I thought he was crazy. OK, one day CPUs will cost 5
cents each, and Windows is not the ultimate destiny of computing.

Hey, he wrote a book!

http://www.amazon.com/Physics-Foibles-physics-computer-students/dp/1553957768


John
 

John Larkin

Have you actually tested this? What happens if you don't have the same
kind of drive available? I don't think it will work until you replace
the drive.

We bought a *lot* of identical drives when we bought the batch of PCs.
I don't want to worry about PCs for another 4 or 5 years maybe.

The hot-swap raid thing works great. Pull either of the C: drives, pop
in another drive, blank or not, and it begins automatically cloning
the live os to the "new" drive, online. It takes about an hour, after
which they are identical. We've tested it in all sorts of situations,
and it works.

I can also pull one of the c: drives from my work machine and take it
home, and run it as d:, or boot and run the whole OS as c:

John
 

Spehro Pefhany

Actually, with RAID, I can just let one of the drives fail, and pop in
a new one when it does. No panic.

John

Have you actually tested this? What happens if you don't have the same
kind of drive available? I don't think it will work until you replace
the drive.



Best regards,
Spehro Pefhany
 

Nobody

A messed-up data segment is still the data segment. It shouldn't be
possible to execute it as code.

Since the 286 there have been goodies like 4 levels of privilege, separate
LDTs for every process, and different segment rights for code, data and
stack. In theory, that should allow for pretty solid protection;
in practice, however, it was (and still is!) unused for simplicity,
software-compatibility and performance reasons.

Agreed.

Some of it is dictated by the language: contrary to what used to be a
commonly-held belief amongst DOS programmers, C does not have any
concept of "near" and "far" pointers. If you want to use multiple data
segments, *all* data pointers have to be segment:offset (48 bits on 32-bit
CPUs). One data segment (data, bss, rodata, stack) and one code segment
wouldn't be a problem, though.

Some of it is dictated by portability: x86 has segmented memory; most
other CPUs don't. If you want a single code base to run on multiple
architectures, you can't assume segmented memory. This doesn't have much
impact upon user space, but the Linux kernel could get quite messy if it
had to allow for disjoint code and data spaces.

OTOH, segmentation doesn't necessarily get you all that much that you
don't get with page-level controls (on x86, the inability to map pages
write-only is a problem). On newer CPUs, you can implement W^X (write or
execute but not both) at page level. On older CPUs, you can put the code
first and make the code segment end immediately after the code (all
segments must have the same base address to get a single flat address
space), but this can cause problems for dynamically-mapped code (dlopen()
etc). A compromise is to make the code segment end before the bottom of
the stack, which protects against stack-based injection but not the heap
or data segment (an attacker would have to find some other vector to get
the code called, as you can't trash the return address with a heap overrun).
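
As a concrete illustration of the page-level W^X idea, here is a minimal
sketch using POSIX mmap()/mprotect(); the buffer contents, size and error
handling are invented for illustration, and a hardened kernel may refuse
the PROT_EXEC request outright.

/* Minimal W^X sketch: a page starts out writable (not executable),
 * then is flipped to executable (not writable), so it is never both
 * at the same time. Illustrative only. */
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t len = 4096;                       /* one page, for illustration */
    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memset(buf, 0xC3, len);                  /* fill with x86 'ret' opcodes */

    /* Drop the write permission before granting execute. */
    if (mprotect(buf, len, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");                  /* a hardened kernel may refuse this */
        return 1;
    }
    puts("page is now read+execute, no longer writable");
    return 0;
}

Writing to buf after the mprotect() call would then fault, which is the
whole point.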

You could prevent heap overruns if malloc() used a separate segment for
every block, but there would be a significant performance hit (malloc()
would require a context switch), and you're limited to 8192 (IIRC) local
descriptors per process (16 bits for the selector minus 1 bit for
global/local and 2 bits for the privilege level leaves 13 bits).
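
For anyone counting along, the 8192 figure falls straight out of the
selector format; a sketch in C (the macro names are invented):

/* x86 segment selector: 16 bits.
 *   bits 0-1  : RPL, requested privilege level (2 bits)
 *   bit  2    : TI, table indicator (0 = GDT, 1 = LDT)
 *   bits 3-15 : descriptor index (13 bits) -> 2^13 = 8192 entries
 */
#define SEL_RPL(sel)    ((unsigned)(sel) & 0x3u)
#define SEL_IS_LDT(sel) (((unsigned)(sel) >> 2) & 0x1u)
#define SEL_INDEX(sel)  ((unsigned)(sel) >> 3)   /* 0 .. 8191 */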

Theoretically you could use the same approach for local (stack) arrays,
but the performance hit would be even worse.
 

Nobody

My God! You've got to quit using MICRO$~1 web servers!

Windows vs Linux doesn't come into it:

http://www.google.com/search?q=apache "buffer overflow"

C is C, whichever OS you run the program on.

Beyond that, the fact that the web is based around many "small"
transactions means that there is a significant performance gain to be had
from putting everything in one process (e.g. mod_php rather than spawning
an interpreter for each request), thereby eliminating process boundaries
which would otherwise provide some protection.
 

Nobody

It is possible to declare every data object in a program as a separate
segment. That is what the LDT was intended for. Of course, there will be a
lot of overhead, and compatibility issues too.

One problem with that is that you're limited to 8192 segments per process.

In theory, you could use segments only for "active" objects, and have
something like the Local{Lock,Unlock} of 8086-mode Windows. But apart from
producing really ugly code (and adding overhead), it only helps to the
extent that the code chooses to make use of it.

Some code can use a lot of arrays, e.g. an array of structures, each of
which contains an array of characters. Chances are that the programmer
will use a segment for the larger array and leave the character arrays as
just a range of bytes within the segment.

If you can accept mechanisms which impose significant constraints on
coding, you may as well just forbid the use of arrays in favour of an
opaque "vector" type whose accessor methods/functions perform bounds
checking.

Both methods work just as well (i.e. they work if you use them, and don't
work if you don't use them), but the OS-level option adds a lot more
overhead.

The realistic approach to eliminating buffer overruns is not to write word
processors and web browsers in a language which was designed for writing
an OS kernel and device drivers. If arrays are a distinct type, having
both a start and end (to allow bounds checking), and pointer arithmetic is
impossible (or at least not actively encouraged), buffer overruns would be
an obscure theoretical issue rather than an everyday occurrence.
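
Such an opaque, bounds-checked "vector" type can even be sketched in C
itself; the names below are invented for illustration rather than taken
from any real library:

#include <stdio.h>
#include <stdlib.h>

/* Opaque vector of ints: the length travels with the data, and every
 * access goes through a checking accessor. Illustrative sketch only. */
typedef struct {
    int    *data;
    size_t  len;
} vec_t;

vec_t *vec_new(size_t len)
{
    vec_t *v = malloc(sizeof *v);
    if (!v) return NULL;
    v->data = calloc(len, sizeof *v->data);
    v->len  = v->data ? len : 0;
    return v;
}

int vec_get(const vec_t *v, size_t i)
{
    if (i >= v->len) {                       /* out of range: fail loudly */
        fprintf(stderr, "vec_get: index %zu out of range\n", i);
        abort();
    }
    return v->data[i];
}

void vec_set(vec_t *v, size_t i, int val)
{
    if (i >= v->len) {                       /* out of range: fail loudly */
        fprintf(stderr, "vec_set: index %zu out of range\n", i);
        abort();
    }
    v->data[i] = val;
}

An overrun then ends the process immediately instead of quietly trampling
a return address.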
 

Nobody

A decent OS, using decent hardware, should enforce isolation of code,
stack, and data, in itself and in all lower-priority processes. It
should be impossible for data to ever be executed, anywhere, or for
code segments to be altered,

I'm with you up to this point.
and buffer overflow exploits should be impossible.

But this is a separate issue.

If you have W^X (write or execute but not both), code injection is
impossible, but that isn't the only type of buffer overflow exploit
(although it's probably the most powerful).
This ain't even hard, except for the massive legacy
issues.

They aren't all that massive. Most programs don't need executable
stack/heap, and don't care about exactly where particular memory regions
are mapped.

Most of the code which does care tends to be in a handful of programs and
libraries. IIRC, implementing W^X on Linux required some changes to the
signal-handling code and hardly broke any binaries (except for emulators,
and code written in Objective-C, which uses thunks quite extensively).
Right. A good hardware+OS architecture should prevent this, too. Bad
code should crash, terminated by the OS, not take over the world, or
send all your email contacts to some guy in Poland.

The problem here is that the OS doesn't know where one buffer ends and
another begins. Intra-process buffer overruns are primarily an issue with
the language.

In C, an array is represented by its start address; bounds checking is the
responsibility of the programmer. That isn't necessarily a bad decision
for a language which was meant to be one step above assembler, but it
doesn't make sense for writing applications.
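
A tiny (hypothetical) example of what that means in practice:

/* The array decays to a bare pointer; nothing records its length, so
 * the off-by-one write below compiles cleanly and silently corrupts
 * whatever happens to sit after buf on the stack. */
void fill(void)
{
    char buf[8];
    for (int i = 0; i <= 8; i++)    /* fence-post error: index 8 is out of bounds */
        buf[i] = 'A';
}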
 

MooseFET

On Sep 17, 7:55 pm, John Larkin
Programmers have pretty much proven that they cannot write bug-free
large systems.

In every other area, humans make mistakes and yet we seem surprised
that programmers do too.
Unless there's some serious breakthrough - which is
really prohibited by the culture

I think there really is a fundamental limitation that makes it such
that the programming effort becomes infinite to make a bug free large
system. We do seem to be able to make bug free small systems,
however.

This suggests a rephrasing of your point as "it is better to use
multiple simple systems" connected in some way rather than just
calling it multiple cores or CPUs.
- the answer is to have the hardware,
which people *do* routinely get right, take over most of the functions
that an OS now performs.

Very complex hardware is likely to have the same problems as very
complex software. We need to think of ways to use many copies of
much simpler hardware.
One simple way to do that is to have a CPU
per process. It's going to happen.

This is exactly the path or perhaps even N CPUs per process, where N
 

John Larkin

On Sep 17, 7:55 pm, John Larkin


In every other area, humans make mistakes and yet we seem surprised
that programmers do too.


I think there really is a fundamental limitation that makes it such
that the programming effort becomes infinite to make a bug free large
system. We do seem to be able to make bug free small systems,
however.

This suggests a rephrasing of your point as "it is better to use
multiple simple systems" connected in some way rather than just
calling it multiple cores or CPUs.

OK, rephrase it. Then start making the kinds of chips and OS's that I
suggest.
Very complex hardware is likely to have the same problems as very
complex software. We need to think of ways to use many copies of
much simpler hardware.

That's what I proposed: arrays of simple RISC machines, a smattering
of more powerful CPUs or floating-point units, all on a chip around a
central cache.
This is exactly the path or perhaps even N CPUs per process, where N

But we don't need performance. We need simplicity and reliability.

John
 

Martin Brown

[....]
Programmers have pretty much proven that they cannot write bug-free
large systems.

In every other area, humans make mistakes and yet we seem surprised
that programmers do too.

In most other areas of endeavour small tolerance errors do not so
often lead to disaster. Boolean logic is less forgiving. And fence
post errors which even the best of us are inclined to make are very
hard to spot. You see what you intended to write and not what is
actually there. Walkthroughs and static analysis tools can find these
latent faults if budget permits.

Some practitioners, Donald Knuth for instance, have managed to produce
virtually bug-free non-trivial systems (TeX). OTOH the current
business paradigm is ship it and be damned. You can always sell
upgrades later. Excel 2007 is a pretty good current example of a
product shipped way before it was ready. Even Excel MVPs won't defend
it.
I think there really is a fundamental limitation that makes it such
that the programming effort becomes infinite to make a bug free large
system. We do seem to be able to make bug free small systems,
however.

Software programming hasn't really had the true transition to a hard
engineering discipline yet. There hasn't been enough standardisation
of reliable, off-the-shelf component software parts, equivalent in
complexity to electronic ICs, that really do what they say on the
tin and do it well.

By comparison thanks to Whitworth & Co mechanical engineering has
standardised nuts and bolts out of a huge arbitrary parameter space.
Nobody these days would make their own nuts and bolts from scratch
with randomly chosen pitch, depth and diameter. Alas they still do in
software:(
This suggests a rephrasing of your point as "it is better to use
multiple simple systems" connected in some way rather than just
calling it multiple cores or CPUs.

How about calling them modules. Re-usable components with a clearly
defined and documented external interface that do a particular job
extremely well. NAGLIB, the IJG JPEG library or FFTW are good examples
although arguably in some cases the external user interface is more
than a bit opaque.
Very complex hardware is likely to have the same problems as very
complex software. We need to think of ways to use many copies of
much simpler hardware.

And the very complex hardware is invariably designed using software
tools. The problem is not that it is impossible to make reliable
software. The problem is that no-one wants to pay for it.

In hardware chip design the cost of fabricating a batch of total junk
is sufficiently high and painful that the suits will usually allow
sufficient time for testing before putting a new design into full
production. Not so for software where upgrade CDs and internet
downloads are seen as dirt cheap.
This is exactly the path or perhaps even N CPUs per process, where N

For N<=4 full multiprocessing is fairly tractable and after that it
suddenly gets a lot harder. And I have known one multiprocessing
mainframe box where N=3 was perfect and the advertised upgrade to N=4
was a dog's dinner. Worth pointing out here that very few software
programs these days are that heavily CPU bound.

Most CPUs in PCs these days are vastly overpowered for the normal day
to day work. Only a few power users and gamers really push them hard.

Regards,
Martin Brown
 

David Brown

John said:
Why would a *system* care about the latency of one processor accessing
memory? The system only cares about net performance. As it stands now,
only one *process* can access memory at a time (because all processes
share a single CPU) and they all suffer from context switching
overhead. Multiple CPUs never context switch, so *must* be overall
faster.

It is certainly true that it matters little if a process is delayed
because it is swapped out of the cpu, or because the cpu it is running
on has slow access to memory. But unless your new architecture is an
improvement in speed (since it is unlikely to be more power efficient,
or cheaper, and is not inherently more reliable), then there is no point
in making it.

There is no reason to suppose your massively multiple core will be
faster. Your shared memory will be a huge bottleneck in the system -
rather than processes being blocked by limited cpu resources, they will
be blocked by memory access.

You also seem to be under the impression that context switches are a
major slowdown - in most cases, they are not significant. On server
systems with two or four cores, processes typically get to run until
they end or block for some other reason - context switches are a tiny
fraction of the time spent. If you want a faster system serving more
web pages or database queries, you have to boost the whole system - more
I/O bandwidth, more memory bandwidth (this is why AMD's devices scale
much better than Intel's), more memory, etc. Simply adding extra cpu
cores will make little difference beyond the first few. For desktops,
the key metric is the performance on a single thread - dual cores are
only useful (at the moment) to make sure that processor-intensive
threads are not interrupted by background tasks.

For almost every mainstream computing task, it is more efficient to use
fewer processors running faster (although it is seldom worth getting the
very fastest members of a particular cpu family) - you can get more work
out of 2 cores at 2 GHz than 4 cores at 1 GHz. In a chip designed
around many simple cores, each core is going to be a lot slower than a
few optimised fast cores can be.
It's the OS's that we have problems with. Hardware is cheap and
reliable; software is expensive and buggy. So we should shift more of
the burden to hardware.

If you shift the complexity to hardware, you'd get hardware that is
expensive and buggy.

Have you anything to back up this belief in cheap and reliable hardware?
Certainly some hardware is cheap and reliable, being relatively simple
- but the same applies to software.
The IBM Cell structure is a hint of the future.

The Cell is a specialised device - only the one "master" cpu can run
general tasks. The eight smaller cpu's are only useful for specialised
dedicated tasks (such as the graphics processing in games). This is
precisely as I have described - massively multi-core devices exist, but
they are only suitable for specialised tasks.
Sure, but Moore's Law keeps going, in spite of a pattern of people
refusing to accept its implications.

Moore's Law is not like the law of gravity, you know. You can't quote
it as "proof" that a simple solution to your shared cache problem will
be developed!
Far fewer than software bugs. Hardware design, parallel execution of
state machines, is mature, solid stuff. Software just keeps bloating.

It's just that the costs of finding and fixing errors in hardware
are so much higher than for software, so that more effort goes into
getting it right in the first place. But the result is that a given
feature is vastly more expensive to develop in hardware than software.
A hardware version of the Vista kernel may well be more reliable than
the software version - but it would take several centuries to design,
simulate and test, and cost millions per device.
So let's make it simple, waste some billions of CMOS devices, and get
computers that always work. We'll save a fortune.





My XP is currently running about 30 processes, with a browser, mail
client, a PDF datasheet open, and Agent running. A number of them are
not really necessary.

How many on yours?

At the moment, I've got 51 processes and a total of about 476 threads
(it's the number of threads that's important here) on my XP-64 desktop,
excluding any that task manager is not showing. There is a svchost.exe
service with 80 threads on its own, and firefox has 24 threads.
 

John Larkin

[....]
Programmers have pretty much proven that they cannot write bug-free
large systems.

In every other area, humans make mistakes and yet we seem surprised
that programmers do too.

In most other areas of endeavour small tolerance errors do not so
often lead to disaster. Boolean logic is less forgiving. And fence
post errors which even the best of us are inclined to make are very
hard to spot. You see what you intended to write and not what is
actually there. Walkthroughs and static analysis tools can find these
latent faults if budget permits.

Some practitioners, Donald Knuth for instance, have managed to produce
virtually bug-free non-trivial systems (TeX). OTOH the current
business paradigm is ship it and be damned. You can always sell
upgrades later. Excel 2007 is a pretty good current example of a
product shipped way before it was ready. Even Excel MVPs won't defend
it.
I think there really is a fundamental limitation that makes it such
that the programming effort becomes infinite to make a bug free large
system. We do seem to be able to make bug free small systems,
however.

Software programming hasn't really had the true transition to a hard
engineering discipline yet. There hasn't been enough standardisation
of reliable, off-the-shelf component software parts, equivalent in
complexity to electronic ICs, that really do what they say on the
tin and do it well.

By comparison thanks to Whitworth & Co mechanical engineering has
standardised nuts and bolts out of a huge arbitrary parameter space.
Nobody these days would make their own nuts and bolts from scratch
with randomly chosen pitch, depth and diameter. Alas they still do in
software:(

Compare a software system to an FPGA. Both are complex, full of state
machines (implicit or explicit!), and both are usually programmed in a
hierarchical language (C++ or VHDL) that has a library of available
modules, but FPGAs rarely have bugs that get to the field, whereas
most software is rarely ever fully debugged.

So, computers should use more hardware and less software to manage
resources. In fact, the "OS kernel" of my multiple-CPU chip could be
entirely hardware. Should be, in fact.

How about calling them modules. Re-usable components with a clearly
defined and documented external interface that do a particular job
extremely well. NAGLIB, the IJG JPEG library or FFTW are good examples
although arguably in some cases the external user interface is more
than a bit opaque.


And the very complex hardware is invariably designed using software
tools. The problem is not that it is impossible to make reliable
software. The problem is that no-one wants to pay for it.

In hardware chip design the cost of fabricating a batch of total junk
is sufficiently high and painful that the suits will usually allow
sufficient time for testing before putting a new design into full
production. Not so for software where upgrade CDs and internet
downloads are seen as dirt cheap.

Yes. The bug level is proportional to the ease of making revisions.
That's why programmers type rapidly and debug, literally, forever.

Most CPUs in PCs these days are vastly overpowered for the normal day
to day work. Only a few power users and gamers really push them hard.

Yes. So let's use that horsepower to buy reliability.

John
 

David Brown

John said:
John said:
On Mon, 17 Sep 2007 18:40:35 +0200, David Brown

John Larkin wrote:
On Sun, 16 Sep 2007 22:07:42 +0200, David Brown

John Larkin wrote:
On Sep 15, 11:09 am, John Larkin
[....]
architecture. In a few years we'll have, say, 1024 processors on a
chip, and something new will be required to manage them. It will be a
thousand times simpler and more reliable than Windows.
I think that the number of virtual cores will grow faster than the
number of real cores. With extra register banks and a bit of clever
design, a single ALU can look like two slightly slower ones.

I expect to see multicore machines with fewer actual floating-point
ALUs than integer ALUs.

Sounds sort of like Sun's Niagara chips, which have (IIRC) 8 cores, each
with 4 threads, but only a few floating point units. For things like
web serving, it's ideal.

Yup. Low-horsepower tasks can just be a thread on a multithread core,
and many little tasks don't need a dedicated floating-point unit.

My point/fantasy is that OS design should change radically if many,
many real or virtual CPUs are available. One CPU would be the manager,
and every task, process, or driver could have its own, totally
confined and protected, CPU, and there would be no context switching
ever, and few interrupts in fact.

That's not going to work for Linux, anyway - there is a utility thread
spawned per cpu at the moment (work is underway to avoid this, because
it is a bit of a pain when you have thousands of cpus in one box).

However, there is no point in having a cpu (or even a virtual cpu)
dedicated to each task. Many sorts of tasks spend a lot of time
sleeping while waiting for other events - a cpu in this state is a waste
of resources.
Only if you think of a CPU as a valuable resource. As silicon shrinks,
a CPU becomes a minor bit of real estate. It makes sense to use it
when there's something to do, and put it to sleep when there's not.
Lots of power gets saved by not doing context switches.

CPUs *are* a valuable resource - modern cpu cores take up a lot of
space, even when you exclude things like the cache (which take more
space, but cost less per mm^2 since you can design in a bit of
redundancy and thus tolerate some faults).

The more CPUs you have, the more time and space it costs to keep caches
and memory accesses coherent. There are some sorts of architectures
which work well with multiple CPU cores, but these are not suitable for
general purpose computing.

My point is that large numbers of CPU cores *will* become common and
cheap, and we need a new type of OS to take advantage of this new
reality. Done right, it could be simple and astoundingly secure and
reliable.

I would be very surprised to see a system where the number of CPU cores
was greater than the number of processes. I expect to see the number of
cores increase, especially for server systems, but I don't expect to see
systems where it is planned and expected that most cores will sleep most
of the time.

Well, I remember 64-bit static rams, and 256-bit DRAMS. I can't see
any reason we couldn't have 256 or 1024 cpu's on a chip, especially if
a lot of them are simple integer RISC machines.
You can certainly get 1024 CPUs on a chip - there are chips available
today with hundreds of cores. But there are big questions about what
you can do with such a device - they are specialised systems. To make
use of something like that - you'd need a highly parallel problem (most
desktop applications have trouble making good use of two cores - and it
takes a really big web site or mail gateway to scale well beyond about
16 cores). You also have to consider the bandwidth to feed these cores,
and be careful that there are no memory conflicts (since cache coherency
does not scale well enough).

No, no, NO. You seem to be assuming that we'd use multiple cores the
way Windows would use multiple cores. I'm not talking about solving
big math problems; I'm talking about assigning one core to be a disk
controller, one to do an Ethernet/stack interface, one to be a printer
driver, one to be the GUI, one to run each user application, and one
to be the system manager, the true tiny kernel and nothing else.
Everything is dynamically loadable, unloadable, and restartable. If a
core is underemployed, it sleeps or runs slower; who cares if
transistors are wasted? This would not be a specialized system, it
would be a perfectly general OS with applications, but no process
would hog the machine, no process could crash anything else, and it
would be fundamentally reliable.

That would be an absurd setup. There is some justification for wanting
multiple simple cores in server systems (hence the Sun Niagara chips),
but not for a desktop system. The requirements for a disk controller, a
browser, and Doom are totally different. With a few fast cores like
today's machines, combined with dedicated hardware (on the graphics
card), you get a pretty good system that can handle any of these. With
your system, you'd get a chip with a couple of cores running flat out
(but without a hope of competing with a ten year old PC, as they could
not have comparable bandwidth, cache, or computing resources in each
core), along with a few hundred cores doing practically nothing. In
fact, most of the cores would *never* be used - they are only there in
case someone wants to do a few extra things at the same time since you
need a core per process.
This is not about performance; hardly anybody needs gigaflops. It's
all about reliability.

Until you can come up with some sort of justification, however vague, as
to why you think one cpu per process is more reliable than context
switches, this whole discussion is useless.
Programmers have pretty much proven that they cannot write bug-free
large systems. Unless there's some serious breakthrough - which is
really prohibited by the culture - the answer is to have the hardware,
which people *do* routinely get right, take over most of the functions
that an OS now performs. One simple way to do that is to have a CPU
per process. It's going to happen.

Do you have any hints of a suggestion that anyone else thinks this is
the case?
 

Rich Grise

Actually, with RAID, I can just let one of the drives fail, and pop in
a new one when it does. No panic.

How do you know which is the bad one?

Thanks,
Rich
 

John Larkin

How do you know which is the bad one?

If a drive is bad it complains at bios/boot time. We're running XP,
which unfortunately doesn't have the online RAID utilities, but the
bios seems to do everything we really need.

John
 

Spehro Pefhany

We bought a *lot* of identical drives when we bought the batch of PCs.
I don't want to worry about PCs for another 4 or 5 years maybe.

The hot-swap raid thing works great. Pull either of the C: drives, pop
in another drive, blank or not, and it begins automatically cloning
the live os to the "new" drive, online. It takes about an hour, after
which they are identical. We've tested it in all sorts of situations,
and it works.

I can also pull one of the c: drives from my work machine and take it
home, and run it as d:, or boot and run the whole OS as c:

John

Sounds like you have things under control. Just out of curiosity-- do
you have an "IT guy" or are things kept running by real engineers?


Best regards,
Spehro Pefhany
 

YD

No, he is right. The Harvard architecture may be an exception because
"trying to execute data" may be meaningless in that sort of processor.


Again what he said is true. The "DOS days" include the early
versions of Windows. One version of the joke had a JPEG file that
really did have (what was claimed to be) a virus in it, thanks to
someone's electron microscope.

Here's a good example:
http://www.armageddononline.org/image/virus2.JPG

- YD.
 

Rich Grise

Yes. The bug level is proportional to the ease of making revisions.
That's why programmers type rapidly and debug, literally, forever.

I guess that makes me a Software Engineer. I design, write, and deliver
bug-free code that does exactly what it is instructed to do, no more,
no less.

Admittedly, I've never done a whole OS, but all one would need to do
is apply the same principle of "design it before you start to build
it". :)

I think part of the blame for crappy programmers these days can be
traced back to BASIC - the language where the scriptkiddie starts
entering his program before it's even written! :)

Cheers!
Rich
 