All the kernel has to do is set up privileges and load tasks onto
processors. It should of course do no application processing itself
(which would mean running dangerous user code), and it has no reason to
move application data either. Stack-driver CPUs, device-driver CPUs,
and file-system CPUs do the grunt work, each in its own protected
environment. There can be one shared pool of main memory for the apps
to use, as long as the access mapping is safe.
Memory can be shared as needed for processes to communicate. But
nobody can trash the kernel.
What's wrong with that?
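
To make the shared-memory IPC point concrete, here's a minimal
user-land sketch, assuming POSIX shared memory (the region name
"/demo_region" is made up for illustration): processes can map the
same page and communicate through it, while the kernel's own memory
is never exposed to either side.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create (or open) a named shared region; a second process
         * opens the same name to see the same memory. */
        int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

        /* Only this page is shared; no mapping onto kernel data
         * structures exists, so a buggy app can't trash the kernel. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "hello from one process");
        printf("%s\n", p);

        munmap(p, 4096);
        close(fd);
        shm_unlink("/demo_region");
        return 0;
    }

A second process just shm_open()s the same name; neither party ever
gets a window onto the kernel, which is the whole point.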
Even Intel is headed in that direction: many multi-threaded cores
on a chip (predicted to be hundreds, maybe thousands), surrounding
a central shared L2 cache that connects out to DRAM, each core with
a bit of its own L1 cache to reduce bashing the shared L2.
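
A small illustration of why that per-core L1 matters: if per-core
data lands on one cache line, every update ping-pongs the line
through the shared cache. The usual fix is to pad each core's slot
out to a full line. A hedged C sketch (assuming 64-byte lines and a
GCC-style alignment attribute; all names are invented):

    #include <stdint.h>

    #define CACHE_LINE 64   /* assumed line size; common on x86 */
    #define NCORES     64

    /* One counter per core, each padded to its own cache line so
     * updates stay in that core's L1 instead of bouncing the line
     * through the shared L2 ("bashing" it, as above). */
    struct percore_counter {
        volatile uint64_t count;
        char pad[CACHE_LINE - sizeof(uint64_t)];
    };

    static struct percore_counter counters[NCORES]
        __attribute__((aligned(CACHE_LINE)));

    void bump(int core) { counters[core].count++; }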
What you're describing has existed in some form for many years;
OK, not from Intel and Microsoft, but industry has been using
multi-processor systems as standard for over 10 years -- nearer
20, actually. Take an industrial OS such as Solaris: it was
running 64-processor systems 15 years ago, and 256-processor
systems not long afterwards.
What's happened more recently is that systems such as an E10k,
which were the size of your garage, have shrunk down to fit on a
chip; but other than being faster and less power hungry, they
aren't really so different from those 15-year-old multi-processor
systems. Can't recall exact dates, but Sun started shipping
8-core, 64-thread chips around 4 years ago, and that brought the
cost of massive multi-processor systems down to 1/20th of what it
had been. Nothing particularly significant needed doing to the OS
to run on these -- they just looked rather like an E10k on a chip,
and the OS had been supporting such systems for 15 years.
However, the low price point of such a system makes it available
and attractive to vastly more apps than could ever afford an E10k,
and in many cases it's those apps that need to be rethought to run
well on a system with 64 or more processors. That's already been
done for most Enterprise apps.
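
The typical rethink is fairly mechanical: carve serial work into
independent per-thread chunks with no shared writes. A hedged
pthreads sketch (names and sizes invented for illustration):

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 64
    #define N        (1 << 20)

    static double data[N];
    static double partial[NTHREADS];

    /* Each thread sums its own slice into its own result slot; no
     * shared writes, so the work scales across many cores instead
     * of serialising on one lock. */
    static void *worker(void *arg)
    {
        long t = (long)arg;
        long lo = t * (N / NTHREADS), hi = lo + (N / NTHREADS);
        double s = 0.0;
        for (long i = lo; i < hi; i++)
            s += data[i];
        partial[t] = s;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);
        double total = 0.0;
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];
        }
        printf("sum = %f\n", total);
        return 0;
    }

Build with -lpthread; the point is simply that each thread owns its
slice and its result slot, so nothing contends.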
Does anybody think we're going to keep running thirty-year-old OS
architectures on a beast like this?
If you're talking about Windows, it needs to catch up with the
OSes which have been doing this for years, if it wants to play
in this space.
At a recent Intel presentation on Nehalem, Intel said Solaris
is currently the best-performing OS on Nehalem processors, and
that's in large part because it's been optimised for multi-
processor systems for donkey's years: it knows how best to
schedule threads on such systems and how to deal with things
like different cores having different memory locality, different
speeds (power-aware scheduling), and so on. Redmond is putting
a lot of work into this area now, and Windows is catching up,
now that these features are finally becoming available in the
commodity PC hardware it runs on, rather than just the
Enterprise systems of the last ~20 years.
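
For a taste of what "dealing with memory locality" looks like from
user land, Solaris exposes CPU binding directly via
processor_bind(2). A hedged sketch (the choice of processor id 0 is
arbitrary; normally you'd let the scheduler make this call, as
described above):

    #include <sys/types.h>
    #include <sys/processor.h>
    #include <sys/procset.h>
    #include <stdio.h>

    int main(void)
    {
        processorid_t old;
        /* Pin the calling LWP to processor 0 so its working set
         * stays local to that CPU; obind returns the previous
         * binding, if any. */
        if (processor_bind(P_LWPID, P_MYID, 0, &old) != 0) {
            perror("processor_bind");
            return 1;
        }
        printf("bound; previous binding was %d\n", (int)old);
        return 0;
    }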