John Larkin wrote:
Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700
If every task in turn has its own CPU, there will be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.
He's reinventing what didn't work well at all.
Times have changed, guys.
Not really. The proportions of speed seem to remain the same.
When CPU chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you assume 4 threads per core?
Furthermore,
it is extremely insecure to insist the computer system have
a single-point failure which includes the entire running
monitor.
Fewer points of failure must be better than many points.
You need to think some more. If the single-point failure is
the monitor, you have no security at all.
Fewer security
vulnerabilities must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.
Its MTBF, hardware and software, could
be a million hours.
That doesn't matter at all if the monitor is responsible for the
world power grid.
Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't take water, or falling into a fault caused
by an earthquake or a bomb or a United Nations quarantine
[can't think of the word where a nation or group is
declared undesirable].
The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSs, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.
/BAH
Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.
You did not read what I wrote. Those virtual OS spaces you were talking
about are applications w.r.t. the monitor that is running.
How can an OS ever be reliable when twelve zillion Chinese video card
manufacturers are hacking device drivers that run in OS space?
You seem to be confusing OSes with monitors.
The top-level OS should be small, simple, absolutely in charge of the
entire system, totally protected, and never crash.
Why not?
If it's absolutely in charge of the entire system, then it has to
be able to access all of hardware, including the other cores. This
implies that some kind of comm protocol and/or pathway has to go
from all those other cores to the master core. This sucks w.r.t.
performance. The system will run only as fast as the master core
can send/receive bits. All other cores will constantly be in
"master-core I/O wait".
/BAH
All the kernel has to do is set up privileges and load tasks into
processors. It should of course do no applications processing itself
(which would imply running dangerous user code) and has no reason to
move application data either. Stack driver CPUs, device driver CPUs,
file system CPUs do the grunt work, in their own protected
environments. There can be one shared pool of main memory for the apps
to use, as long as access mapping is safe.
Memory can be shared as needed for processes to communicate. But
nobody can trash the kernel.
What's wrong with that?
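
Something like this toy dispatcher is all I'm asking the kernel to be.
It's a host-side sketch with made-up names (core_desc, task_desc,
grant_window), not real kernel code:

/* Sketch of a "kernel only assigns work" scheme: one record per core,
 * each core runs exactly one task in its own protection domain, and
 * the kernel's only jobs are picking a core and granting a window of
 * the shared memory pool.  It never touches application data. */
#include <stdio.h>
#include <stdbool.h>

#define NCORES 8

enum role { ROLE_KERNEL, ROLE_DRIVER, ROLE_FILESYS, ROLE_APP };

struct task_desc {
    const char *name;
    enum role   role;
    size_t      mem_base;   /* window into the shared pool */
    size_t      mem_len;
};

struct core_desc {
    bool             busy;
    struct task_desc task;
};

static struct core_desc cores[NCORES];
static size_t pool_next = 0;            /* bump allocator over the shared pool */

/* Grant a private window of the shared memory pool to a task. */
static void grant_window(struct task_desc *t, size_t len)
{
    t->mem_base = pool_next;
    t->mem_len  = len;
    pool_next  += len;
}

/* The kernel's whole job: find an idle core and pin the task to it. */
static int load_task(struct task_desc t, size_t mem_len)
{
    for (int i = 1; i < NCORES; i++) {  /* core 0 is reserved for the kernel */
        if (!cores[i].busy) {
            grant_window(&t, mem_len);
            cores[i].busy = true;
            cores[i].task = t;
            printf("core %d <- %-8s mem [%zu..%zu)\n",
                   i, t.name, t.mem_base, t.mem_base + t.mem_len);
            return i;
        }
    }
    return -1;                          /* no idle core left */
}

int main(void)
{
    cores[0].busy = true;
    cores[0].task = (struct task_desc){ "kernel", ROLE_KERNEL, 0, 0 };

    load_task((struct task_desc){ "e1000", ROLE_DRIVER,  0, 0 }, 1 << 16);
    load_task((struct task_desc){ "fs",    ROLE_FILESYS, 0, 0 }, 1 << 20);
    load_task((struct task_desc){ "app-a", ROLE_APP,     0, 0 }, 1 << 22);
    load_task((struct task_desc){ "app-b", ROLE_APP,     0, 0 }, 1 << 22);
    return 0;
}

The kernel's tables stay that small, and it never maps application
data into its own space.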
Even Intel is headed in that direction: multiple (predicted to be
hundreds, maybe thousands) of multi-threaded cores on a chip,
surrounding a central L2 cache that connects to DRAM and stuff, each
with a bit of its own L1 cache to reduce bashing the shared L2 cache.
Does anybody think we're going to keep running thirty-year-old OS
architectures on a beast like this?
John