jmfbahciv
JosephKK said:Alas, I am trying to get them up to the point where they have the
interest to even read something like this newsgroup.
Some days you win; some days you lose.
/BAH
JosephKK said:But then the master CPU is not a real master any more. Think it through.
John Larkin wrote:
Walter Bushell wrote:
On Mon, 25 May 2009 16:26:50 -0400, Peter Flass wrote:
John Larkin wrote:
The ultimate OS should maybe be hardware, fpga probably, or an
entirely separate processor that runs nothing but the os.
CDC-6600.
In a few years, when most any decent CPU has 64 or so cores, I suspect
we'll have one of them run just the OS. But Microsoft will f*** that
up, too.
John
Why only one? Surely the kernel will be multithreaded.
You meant to say reentrant.
/BAH
Well that too.
Not "too" but first.
/BAH
An OS nanokernel needs to be neither.
Wrong; then you don't have an OS.
You are playing with words. An OS should be hierarchical, with the
top-most thing (what I call the kernel) being absolutely in charge of
the system.
But it cannot have that kind of control with more than [number picked
out of the air as a guesstimate] 8 threads or processes or services.
It could, in my opinion should, run on a dedicated CPU.
It is impossible to do this without allowing the other CPUs to be
able to make their own decisions about what they're processing.
It's times like this that I regret that "duh" has fallen into disuse.
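As an aside on the multithreaded-versus-reentrant exchange above, here is a
minimal C illustration of the distinction; the function names are invented and
are not from any of the systems being discussed.

#include <stdio.h>
#include <stddef.h>

/* Non-reentrant: the static buffer is shared state, so two CPUs (or a
 * nested/interrupted call) executing this at once will trample each
 * other's result. */
static char fmt_buf[32];
char *format_id_bad(unsigned id)
{
    sprintf(fmt_buf, "job-%u", id);
    return fmt_buf;
}

/* Reentrant: all state lives in caller-supplied storage, so any number
 * of CPUs can be inside this routine at the same time. */
char *format_id_ok(unsigned id, char *buf, size_t len)
{
    snprintf(buf, len, "job-%u", id);
    return buf;
}

int main(void)
{
    char buf[32];
    puts(format_id_bad(7));
    puts(format_id_ok(8, buf, sizeof buf));
    return 0;
}

A kernel can be multithreaded without being reentrant, which is why the order
of the two requirements is being argued about here.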
Which way is it? Does the master CPU/Core control the resources or
not? When and where does it become throughput limited? Why? When
does latency become the problem? What are the acceptable response
delays for various cases? Think things through.
JosephKK said:Usenet normal. Propagation often causes such skews as well.
In your setup, is there anything that requires one CPU
Morten said:Exactly what QNX does. Except it has a loose-coupled processor
model as the base. It works on, and can exploit, tight-coupled machines
too. It runs a kernel process of 15 KILO bytes on each processor,
and this supports the message passing core, process/thread scheduling,
signals, memory management, and the bottom half of nameservice.
This nameservice is how you find where the file systems,
guis etc. reside. You see, they can, with some support from the
process, migrate to different places.
The _only_ critical component is the nameservice, and this
can be backed up and migrated too. So, it is indeed possible
to have a QNX cluster ("OS") running for longer than any
component part has ever existed.
A system that is to run on thousands of processors must
adhere to similar design principles. Microkernel by "need
to have" principles. Totally distributed, no core parts.
Able to work well without having shared memory access, but
also able to use it if available. No irreplaceable "core"
or "Boss" cpu. And only a need-to-have loading of this
processor to bootstrap dictionaries of what services are
running on the cluster.
I wouldn't see it as unrealistic to run QNX clusters with
tens of thousands of processors, in clusters of 32/64 etc
with tight coupling, so they can run the file system, network,
graphics etc. subsystems there.
I have even seen QNX workgroup clusters where all the
machines in the workgroup actually ran the same OS, and
the system looked to user processes as if it was a large
system with many graphics terminals.
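To make the send/receive/reply model described above concrete, here is a
minimal sketch using the QNX Neutrino flavour of the API (the post is
describing the older, network-distributed QNX lineage; this sketch stays on
one node and hard-wires the channel instead of going through the nameservice).

#include <stdio.h>
#include <pthread.h>
#include <sys/neutrino.h>

static int chid;                       /* channel the "service" listens on */

static void *server(void *arg)
{
    char msg[64];
    struct _msg_info info;
    (void)arg;

    for (;;) {
        /* Block until some client sends; rcvid identifies that client. */
        int rcvid = MsgReceive(chid, msg, sizeof(msg), &info);
        if (rcvid < 0)
            break;
        printf("service got: %s\n", msg);
        /* Reply unblocks the client and carries the answer back. */
        MsgReply(rcvid, 0, "done", 5);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    char reply[16];

    chid = ChannelCreate(0);
    pthread_create(&tid, NULL, server, NULL);

    /* The client side.  On a real QNX cluster the connection would be
     * looked up through the nameservice (name_attach()/name_open())
     * rather than hard-wiring the channel id like this. */
    int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);
    MsgSend(coid, "hello", 6, reply, sizeof(reply));
    printf("client got: %s\n", reply);

    ConnectDetach(coid);
    return 0;
}

The send call blocks until the service replies, which is part of why the
kernel can stay tiny: scheduling, blocking, and data copying all hang off
the one message-passing primitive.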
Anne said:they let me play disk engineer in bldg 14&15 in the late 70s & early 80s
... some past posts
http://www.garlic.com/~lynn/subtopic.html#disk
there was a joke that I worked a 4-shift week, 1st shift in sjr/bldg.28, 2nd
shift in bldgs. 14&15, 3rd shift in stl/bldg.90, and 4th shift at HONE.
part of what kicked it off was that all the test cells were running
"stand-alone", dedicated machine time (one at a time). They had tried
MVS ... for possibly doing multiple testing concurrently ... but MVS
(at the time) had an MTBF of 15 minutes. Basically these were devices under
development and tended to have error rates that wouldn't be found in
normal business operation.
I set myself the task of rewriting the i/o supervisor so that it was completely
bulletproof and would never fail ... allowing on-demand, concurrent/multiple testing
... significantly improving productivity. One of the problems was that I
happened to mention the MVS MTBF number in an internal report describing
the effort. Even though it wasn't for public consumption ... it still
brought down the wrath of the MVS organization on me (informally I was
told that any corporate-level awards or anything else at the corporate
level would be blocked by the MVS organization).
Another informal example (old email) of statements that the MVS
organization objected to (even when they were purely for internal
consumption):
http://www.garlic.com/~lynn/2007.html#email801015
... basically prior to product ship, a collection of 57 normally
expected 3380 errors was specified ... and with hardware aid ... they
could be generated on demand. All resulted in MVS crashing ... and in
65% of the cases there was no indication of what the problem was that
forced the re-IPL.
It contributed to my periodically being told that I didn't have a
career with the company.
Possibly the largest (virtual machine) time-sharing service during
the period was HONE. It had started out with cp67 so that young branch
office SEs could work with operating systems after the 23jun69
unbundling announcement. misc. past posts mentioning unbundling:
http://www.garlic.com/~lynn/submain.html#unbundle
It eventually transitioned to providing online world-wide sales &
marketing support. The multiple cp67 systems (in the US) transitioned to
vm370, and clones started to be created at various places around the
world. In the late 70s, the various US HONE datacenters were consolidated
in a single place (multiple loosely-coupled SMP processors). That HONE
operation had something approaching 40,000 defined users in the 1980
timeframe. misc. past posts mentioning HONE (&/or APL)
http://www.garlic.com/~lynn/subtopic.html#hone
Andrew said:It uses round-robin. The only priority interrupt the OS core
needs is power-on reset.
OS core tells process on core-A to pause.
HOW!!!!?
Any process on core-B is told to stop.
Sub-OS on core-B is told to accept new app.
Sub-OS on core-A is told to copy app's data and code to core-B's areas.
When the copy is complete, the sub-OS on core-B is told to run the app.
The sub-OS on core-A is placed in a waiting state or whatever is needed.
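If the core-to-core signalling is something like the polled mailbox scheme
described further down in the thread, the step list above can be read as a
fixed command sequence driven by the OS core. A sketch, with hypothetical
send_cmd()/wait_ack() helpers and made-up command names:

#include <stdio.h>

enum cmd {
    CMD_PAUSE_APP,      /* core-A: pause the running application        */
    CMD_STOP_CURRENT,   /* core-B: stop whatever process it is running  */
    CMD_ACCEPT_APP,     /* core-B: sub-OS prepares to receive a new app */
    CMD_COPY_APP,       /* core-A: copy the app's code and data over    */
    CMD_RUN_APP,        /* core-B: start the migrated application       */
    CMD_WAIT            /* core-A: enter the waiting state              */
};

/* Stubs so the sketch compiles and runs; a real system would post these
 * through the per-core mailbox or serial link the hardware provides. */
static void send_cmd(int core, enum cmd c) { printf("core-%d <- cmd %d\n", core, c); }
static void wait_ack(int core)             { printf("core-%d acked\n", core); }

/* The OS core's view of moving one app from core_a to core_b. */
void migrate_app(int core_a, int core_b)
{
    send_cmd(core_a, CMD_PAUSE_APP);    wait_ack(core_a);
    send_cmd(core_b, CMD_STOP_CURRENT); wait_ack(core_b);
    send_cmd(core_b, CMD_ACCEPT_APP);   wait_ack(core_b);
    send_cmd(core_a, CMD_COPY_APP);     wait_ack(core_a);  /* copy complete */
    send_cmd(core_b, CMD_RUN_APP);      wait_ack(core_b);
    send_cmd(core_a, CMD_WAIT);
}

int main(void) { migrate_app(0, 1); return 0; }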
You have a main pool and sub-pools for each core. The cores only
request more memory from the main pool when their sub-pool is empty.
Main pool in the OS core; one sub-pool on each core.
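A rough sketch of that main-pool/sub-pool arrangement, with per-core free
lists refilled in batches from the shared pool; block sizes, counts, and
names here are all invented for illustration.

#include <stdio.h>
#include <stddef.h>
#include <pthread.h>

#define NCORES  8
#define NBLOCKS 1024
#define BATCH   32             /* blocks moved from the main pool per refill */

struct block { struct block *next; };

static struct block     arena[NBLOCKS];      /* backing storage */
static struct block    *main_pool;           /* shared free list (in the OS core) */
static pthread_mutex_t  main_lock = PTHREAD_MUTEX_INITIALIZER;
static struct block    *sub_pool[NCORES];    /* one private free list per core */

static void seed_main_pool(void)             /* done once at "boot" */
{
    for (int i = 0; i < NBLOCKS; i++) {
        arena[i].next = main_pool;
        main_pool = &arena[i];
    }
}

/* Move up to BATCH blocks from the main pool into one core's sub-pool. */
static void refill(int core)
{
    pthread_mutex_lock(&main_lock);
    for (int i = 0; i < BATCH && main_pool != NULL; i++) {
        struct block *b = main_pool;
        main_pool = b->next;
        b->next = sub_pool[core];
        sub_pool[core] = b;
    }
    pthread_mutex_unlock(&main_lock);
}

/* Fast path touches only the core's own list; the shared pool (and its
 * lock) is visited at most once per BATCH allocations. */
void *core_alloc(int core)
{
    if (sub_pool[core] == NULL)
        refill(core);
    struct block *b = sub_pool[core];
    if (b != NULL)
        sub_pool[core] = b->next;
    return b;                                /* NULL means out of memory */
}

void core_free(int core, void *p)
{
    struct block *b = p;
    b->next = sub_pool[core];
    sub_pool[core] = b;
}

int main(void)
{
    seed_main_pool();
    void *p = core_alloc(3);                 /* first call triggers a refill */
    printf("core 3 got %p\n", p);
    core_free(3, p);
    return 0;
}

Freed blocks stay in the owning core's sub-pool here; a fuller version would
also return surplus blocks to the main pool so one core cannot hoard everything.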
Yes but humans do a lot of silly things.
JosephKK said:Let's see, gamers are always compute hungry; they will drive the
market.
Spice, 3D modeling, and other engineering applications are
niche markets and will remain so. Office applications cannot use 1
GHz processors effectively, so multi-GHz and multicore are wasted on
them, just more idle time (where do you think the impetus for all the
eye-heroin comes from?).
And did you notice academia's decade (and more) long lack of much of
anything useful to show for it? A lot like AI. Computing academia's
darling products just don't seem to be able to find usefulness in the
real world.
jmfbahciv said:Yep. That seemed to be true in our neck of the woods, too. The
problem was that a majority of people couldn't understand this.
jmfbahciv said:We didn't have to think about it; that was the situation with
our master/slave implementation.
There was a Datamation article written by Alan Wilson about
our SMP implementation. I have no idea how you could find
it online but it should be somewhere out there. My hardcopy
is packed away in one of my still-unpacked boxes. It might be a good
idea to read it if you can find it.
Andrew said:Interrupts are not required since you can do that communication by
round-robin polling. You just need a fixed shared memory area with a separate
word/block/register for each core. Some of the requests could be
implemented by the cores setting a request-to-talk bit in a hardware
register. An alternative is serial links between the cores. This has
to be defined in the high-level design of the hardware.
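A sketch of that polled-mailbox scheme: one slot per core in a fixed
shared-memory area, a request-to-talk flag the core sets, and an OS core that
sweeps the slots round-robin instead of taking interrupts. The field names
and the toy handler are invented for illustration.

#include <stdio.h>
#include <stdint.h>
#include <stdatomic.h>
#include <pthread.h>

#define NCORES 8

struct mailbox {
    atomic_uint request;   /* core sets to 1: "request to talk"      */
    uint32_t    opcode;    /* what the core wants (alloc, I/O, ...)  */
    uint32_t    arg;
    uint32_t    result;    /* OS core writes the answer here         */
    atomic_uint done;      /* OS core sets to 1 when result is valid */
};

static struct mailbox mbox[NCORES];   /* the fixed shared-memory area */

/* A worker core posts a request and spins until the OS core answers. */
static uint32_t core_call(int core, uint32_t opcode, uint32_t arg)
{
    struct mailbox *m = &mbox[core];
    m->opcode = opcode;
    m->arg    = arg;
    atomic_store(&m->done, 0);
    atomic_store(&m->request, 1);
    while (atomic_load(&m->done) == 0)
        ;                             /* no interrupt, just wait */
    return m->result;
}

/* The OS core: an endless round-robin sweep over the mailboxes. */
static void os_core_loop(uint32_t (*handler)(uint32_t, uint32_t))
{
    for (;;) {
        for (int core = 0; core < NCORES; core++) {
            struct mailbox *m = &mbox[core];
            if (atomic_load(&m->request) == 1) {
                m->result = handler(m->opcode, m->arg);
                atomic_store(&m->request, 0);
                atomic_store(&m->done, 1);
            }
        }
    }
}

/* Toy stand-ins so the sketch runs as an ordinary two-thread program. */
static uint32_t toy_handler(uint32_t op, uint32_t arg) { return op + arg; }
static void *os_core(void *arg) { (void)arg; os_core_loop(toy_handler); return NULL; }

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, os_core, NULL);
    printf("core 0 got %u\n", (unsigned)core_call(0, 40, 2));  /* expect 42 */
    return 0;              /* process exit also stops the sweeping thread */
}

The while-loop in core_call is the cost of the scheme: a worker core burns
cycles until the single OS core gets around to its slot.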
Anne said:old account about battling for a 30% raise so that I would be earning
the same as the offers to new hires that I was interviewing to work
under my direction.
jmfbahciv said:But your idea is to have the Boss CPU have control of the whole
system; if it does, then the other CPUs cannot run the device drivers
without the Boss knowing about it. Having control of the
system means that the scheduling for I/O and memory management
has to be done by the Boss, not the other CPUs. Thus, when
a slave CPU needs any resources, it has to ask the Boss for
them. This will cause the system to grind almost to a halt
because the other CPUs will be in a constant wait state waiting
for the Boss to service their requests.
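To put rough, invented numbers on that objection: if the Boss needs even 2
microseconds to field one request and 63 slave CPUs each generate 10,000
requests per second, that is 630,000 requests per second, or about 1.26
seconds of Boss work per second of wall-clock time. The Boss is already
saturated, and every slave spends most of its time queued up waiting. The
figures are made up, but the serialization point they illustrate is exactly
what is being argued about here.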
jmfbahciv said:And those are applications because they don't really run at the
exec level of the computer system.
Yes it was a Sequent.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Nice stuff... I was at Pyramid where they were a major competitor... one of
my friends from my pre-Pyramid jobs was at Sequent.
Never got the time to get together and swap tech info on the
similarities/differences. A lot of the Pyramid stuff seems to have moved to
SunOS/Solaris like the disk suite stuff and cluster stuff.
Bill
jmfbahciv said:There is no reason to be insulting. You don't know what
you're talking about now.
John said:[snip...] [snip...] [snip...]
She said "really." She's being a timeshare snob just because I had a
PDP-11 and she had a VAX.
John
/BAH
Peter said:At least you weren't told they were moving your job to India, and, oh by
the way, you have to train the people that will be doing it.
jmfbahciv said:JosephKK said:[snip...] [snip...] [snip...]
And did you notice academia's decade (and more) long lack of much of
anything useful to show for it? A lot like AI. Computing academia's
darling products just don't seem to be able to find usefulness in the
real world.
I don't know about that. Certainly the ones who keep touting attempts to
emulate human thinking fail. I still don't understand why anybody
would want to build gear that emulated human-style thinking.
Charles Richmond said:Hint: You do *not* have to train the people who will take your
job. Just refuse to do it. If they fire you, heck, you would be laid
off anyway!!!