Maker Pro

My Vintage Dream PC


jmfbahciv

JosephKK said:
Alas, i am trying to get them up to the point where they have the
interest to even read such as this newsgroup.

Some days you win; some days you lose :).

/BAH
 

jmfbahciv

JosephKK said:
John Larkin wrote:

John Larkin wrote:

Walter Bushell wrote:
Walter Bushell wrote:
On Mon, 25 May 2009 16:26:50 -0400, Peter Flass

John Larkin wrote:
The ultimate OS should maybe be hardware, fpga probably, or an
entirely separate processor that runs nothing but the os.

CDC-6600.
In a few years, when most any decent CPU has 64 or so cores, I suspect
we'll have one of them run just the OS. But Microsoft will f*** that
up, too.

John
Why only one? Surely the kernel will be multithreaded.
You meant to say reentrant.

/BAH
Well that too.
Not "too" but first.

/BAH
An os nanokernal needs to be neither.
Wrong; then you don't have an OS.


You are playing with words. An OS should be hierarchical, with the
top-most thing (what I call the kernal) being absolutely in charge of
the system.
But it cannot have that kind of control with more than [number picked
out of the air as a guesstimate] 8 threads or processes or services.

It could, in my opinion should, run on a dedicated CPU.
It is impossible to do this without allowing the other CPUs to be
able to make their own decisions about what they're processing.

It's times like this that I regret that "duh" has fallen into disuse.
But then the master CPU is not a real master any more. Think it
through.

Oh, goody. You're getting it. I don't know how to explain it
any better to John.
Which way is it? Does the master CPU/Core control the resources or
not? When and where does it become throughput limited? Why? When
does latency become the problem? What is the acceptable response
delays for various cases? Think things through.

Thanks.

/BAH
 

jmfbahciv

JosephKK said:
Usenet normal. Propagation often causes such skews as well.

It strikes me that you would understand the stuff better if
you looked at a CREF listing of UUOSYM. In the left hand
side of each page, the actual bits each line generated are
displayed. We never shipped those kinds of listings so
I don't know if the listing is online at all. If you were
in my group, I'd send you to the machine room to look
at the listing...but then you may never come back because
it was in the cabinet with all the TOPS-10 monitor listings :).

/BAH
 

jmfbahciv

Morten said:
Exactly what QNX does. Except it has a loose-coupled processor
model as the base. It works, and can exploit, tight-coupled machines
too. It runs a kernel process of 15 KILO bytes on each processor,
and this supports the message passing core, process/thread scheduling,
signals, memory management, and the bottom half of nameservice.

This nameservice is where you find where the file systems,
guis etc reside. You see, they can; with some support from the
process, migrate to different places.

The _only_ critical component is the nameservice, and this
can be backed up and migrated too. So, it is indeed possible
to have a QNX cluster ("OS") running for longer than any
component part has even existed.

A system that is to run on thousands of processors must
adhere to similar design principles. Microkernel by "need
to have" principles. Totally distributed, no core parts.
Able to work well without having shared memory access, but
also able to use it if available. No unreplacable "core"
or "Boss" cpu. And only a need-to-have loading of this
processor to bootstrap dictionaries of what services are
running on the cluster.
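The design principle described above (a replicated nameservice with no irreplaceable "Boss" node) can be sketched in a few lines. This is illustrative only, not QNX's actual API; the class and function names are invented for the example:

```python
# Illustrative only -- not QNX's actual API. A replicated name registry:
# every node holds a copy of the name table, so any surviving node can
# answer lookups and the "nameservice" outlives any single machine.

class NameNode:
    def __init__(self):
        self.names = {}   # service name -> node where it currently runs

    def register(self, service, where, peers):
        """Record a service locally and replicate the entry to peer nodes."""
        self.names[service] = where
        for p in peers:
            p.names[service] = where

def lookup(service, nodes):
    """Ask any live node; all replicas give the same answer."""
    for n in nodes:
        if service in n.names:
            return n.names[service]
    return None
```

Because every replica carries the full table, losing the node that originally registered a service does not lose the registration.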

I wouldn't see it as unrealistic to run QNX clusters with
tens of thousands of processors, in clusters of 32/64 etc
with tight coupling; so they can run file system, network,
graphics etc subsystems there.

I have even seen QNX workgroup clusters where all the
machines in the workgroup actually ran the same OS, and
the system looked to user processes as if it were a large
system with many graphics terminals.
In your setup, is there anything that requires one CPU
to be the Boss? In our implementation, the wallclock
time was one datum.

/BAH
 

jmfbahciv

Anne said:
they let me play disk engineer in bldg 14&15 in the late 70s & early 80s
... some past posts
http://www.garlic.com/~lynn/subtopic.html#disk

there was joke that i worked 4-shift week, 1st shift in sjr/bldg.28, 2nd
shift in bldgs. 14&15, 3rd shift in stl/bldg.90, and 4th shift at HONE.

Part of what kicked it off was that all the test cells were running
"stand-alone", dedicated machine time (one at a time). They had tried
MVS ... for possibly doing multiple testing concurrently ... but MVS
(at the time) MTBF was 15-minutes. Basically these were devices under
development and tended to have error rates that wouldn't be found in
normal business operation.

I took on a task to rewrite the i/o supervisor so that it was completely
bullet-proof and would never fail ... allowing on-demand, concurrent/multiple
testing ... significantly improving productivity. One of the problems was that I
happened to mention the MVS MTBF number in an internal report describing
the effort. Even tho it wasn't for public consumption ... it still
brought down the wrath of the MVS organization on me (informally I was
told that any corporate level awards or anything else at the corporate
level would be blocked by the MVS organization).

Another informal example (old email) of statements that the MVS
organization objected to (even when they were purely for internal
consumption):
http://www.garlic.com/~lynn/2007.html#email801015

... basically prior to product ship, a collection of 57 normally
expected 3380 errors were specified ... and with hardware aid ... they
could be generated on demand. All resulted in MVS crashing ... and in
65% of the cases there was no indication of what was the problem that
forced the re-IPL.

It contributed to my being periodically told that I didn't have a
career with the company.

Yup. You were trying to go against that mindset; I didn't find this
work fun but it had to be done.
Possibly the largest (virtual machine) time-sharing service during
the period was HONE. It had started out with cp67 for branch office
young SEs being able to work with operating systems after 23jun69
unbundling announcement. misc. past posts mentioning unbundling:
http://www.garlic.com/~lynn/submain.html#unbundle

It eventually transitioned to providing online world-wide sales &
marketing support. The multiple cp67 (in the US) transitioned to vm370
and clones started to be created at various places around the world. In
the late 70s, the various US HONE datacenters were consolidated in
single place (multiple loosely-coupled SMP processors). That HONE
operation had something approaching 40,000 defined users in the 1980
timeframe. misc. past posts mentioning HONE (&/or APL)
http://www.garlic.com/~lynn/subtopic.html#hone

/BAH
 

jmfbahciv

Andrew said:
It uses round-robin. The only priority interrupt on the OS core needs
is power on reset.

Nope. There also has to be a priority interrupt mechanism so that
the other CPUs can get the Boss' attention. There has to be a way
to do that. Usually it involves some kind of memory which
contains the data required to 1. identify who called, 2. identify
what has to be done, 3. hold the address of where the caller expects
to have the answer, and 4. ring the caller back to say the action is
completed.
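The four-item request record described above amounts to a shared-memory mailbox. A minimal sketch, with invented names, of what each entry would carry and how the Boss would service it:

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Request:
    """One entry in the shared mailbox a slave CPU uses to signal the Boss."""
    caller_id: int        # 1. who called
    action: str           # 2. what has to be done
    reply_addr: int       # 3. where the caller expects the answer
    done: bool = False    # 4. set when the Boss "rings the caller back"

class Mailbox:
    """Toy single-threaded model of the shared request queue."""
    def __init__(self):
        self.queue = deque()

    def post(self, caller_id, action, reply_addr):
        req = Request(caller_id, action, reply_addr)
        self.queue.append(req)
        return req

    def service_one(self):
        """Boss pops one request, performs it, and acknowledges completion."""
        if not self.queue:
            return None
        req = self.queue.popleft()
        # ... perform req.action on behalf of req.caller_id ...
        req.done = True   # ring the caller back
        return req
```

A real implementation would need atomic queue operations, but the record layout is the point here.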
OS core tells process on core-A to pause.
HOW!!!!?

Any process on core-B is told to stop.
Sub-OS on core-B is told to accept new app.
Sub-OS on core-A is told to copy app's data and code to core-B's areas.
When copy complete sub-OS on core-B is told to run Appl.
Sub-OS on core-A is placed in waiting state or what ever is needed.
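The migration sequence listed above can be written out as a toy simulation; the `Core` class and state names are hypothetical, chosen only to mirror the steps:

```python
# Toy simulation of the migration sequence described above.

class Core:
    def __init__(self, name):
        self.name = name
        self.app = None
        self.state = "idle"

def migrate(app, core_a, core_b):
    """Move `app` from core_a to core_b following the listed steps."""
    core_a.state = "paused"    # OS core tells process on core-A to pause
    core_b.state = "stopped"   # any process on core-B is told to stop
    # sub-OS on core-B accepts the new app; core-A copies data and code over
    core_b.app = app
    core_a.app = None
    core_b.state = "running"   # when the copy completes, core-B runs the app
    core_a.state = "waiting"   # core-A is placed in a waiting state
```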

And how is all this communication between the CPUs done? Is there
a common memory? Where is that? Is way off in real core^Wdratmemory
or in some submemory system associated with all CPUs?
You have a main pool and sub-pools for each core. The cores only
request more memory from the main pool when their sub-pool is empty.

Man, now you have data base (we called them page maps and page map
pages) that has to be read/written by each and every CPU.
Main pool in OS core. Sub-pool one on each core.
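The main-pool/sub-pool scheme can be sketched as follows. The refill batch size and class names are assumptions; the point is that a core only bothers the Boss when its own sub-pool runs dry:

```python
class MainPool:
    """Toy model of the Boss core's main page pool."""
    def __init__(self, pages):
        self.free = pages

    def grant(self, n):
        n = min(n, self.free)
        self.free -= n
        return n

class SubPool:
    """Per-core sub-pool; goes back to the main pool only when empty."""
    REFILL = 4   # hypothetical refill batch size

    def __init__(self, main):
        self.main = main
        self.free = 0

    def alloc_page(self):
        if self.free == 0:   # only then does this core bother the Boss
            self.free = self.main.grant(self.REFILL)
            if self.free == 0:
                raise MemoryError("main pool exhausted")
        self.free -= 1
```

Batching the grants is what keeps the Boss off the fast path: most allocations touch only core-local state.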


Yes but humans do a lot of silly things.

The laws of physics don't change just because the new young snots
are beginning to learn them.

c is slow.

/BAH
 

jmfbahciv

JosephKK said:
Let's see, gamers are always compute hungry, they will drive the
market.

Actually I consider them the test bed. They are wonderful about
that kind of work.
Spice, 3D modeling and other engineering applications are
niche markets and will remain so. Office applications cannot use 1
GHz processors effectively, so multi GHz and Multicore are wasted on
them, just more idle time (Where do you think the impetus for all the
eye-heroin comes from?).

Ptui. If those system owners were running an OS that gave
them the ability to do more than one thing at a time, they'd figure
out ways to maximize their CPU resource usage.

And did you notice academia's decade (and more) long lack of much of
anything useful to show for it? A lot like AI. Computing academia's
darling products just don't seem to be able to find usefulness in the
real world.

I don't know about that. Certainly the ones who keep trying to
emulate human thinking fail. I still don't understand why anybody
would want to build gear that emulated human-style thinking.

/BAH
 

Anne & Lynn Wheeler

jmfbahciv said:
Yep. That seemed to be true in our neck of the woods, too. The
problem was that a majority of people couldn't understand this.

re:
http://www.garlic.com/~lynn/2009h.html#71 My Vintage Dream PC

besides the lines about "having no career" and Boyd's reference
regarding deciding to "be or do" ... a couple of the other
lines that they used were

"the best I could hope for was to not be fired and allowed to do it
again"

"they would have forgiven me for being wrong, but they were never
going to forgive me for being right".

old account about battling for a 30% raise so that I would be earning
the same as the offers to new hires that I was interviewing to work
under my direction.
http://www.garlic.com/~lynn/2007j.html#75 IBM Unionization
http://www.garlic.com/~lynn/2007j.html#83 IBM Unionization
http://www.garlic.com/~lynn/2007j.html#94 IBM Unionization

misc past posts referencing the line about "forgive you for being wrong
but were never going to forgive for being right"
http://www.garlic.com/~lynn/2002k.html#61 arrogance metrics (Benoits) was: general networking
http://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
http://www.garlic.com/~lynn/2003i.html#71 Offshore IT
http://www.garlic.com/~lynn/2004k.html#14 I am an ageing techy, expert on everything. Let me explain the
http://www.garlic.com/~lynn/2007.html#26 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista
http://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer
http://www.garlic.com/~lynn/2007k.html#3 IBM Unionization
http://www.garlic.com/~lynn/2007r.html#6 The history of Structure capabilities
http://www.garlic.com/~lynn/2008c.html#34 was: 1975 movie "Three Days of the Condor" tech stuff
http://www.garlic.com/~lynn/2008m.html#30 Taxes
http://www.garlic.com/~lynn/2008m.html#41 IBM--disposition of clock business
http://www.garlic.com/~lynn/2009e.html#27 Microminiaturized Modules
http://www.garlic.com/~lynn/2009g.html#56 Old-school programming techniques you probably don't miss
 

Peter Flass

jmfbahciv said:
We didn't have to think about it; that was the situation with
our master/slave implementation.

There was a Datamation article written by Alan Wilson about
our SMP implementation. I have no idea how you could find
it online but it should be somewhere out there. My hardcopy
is packed in one of my still-unpacked boxes. It might be a good
idea to read it if you can find it.

Unfortunately, most Datamation stuff isn't online. A few articles have
been scanned and/or re-keyed by individuals, but Google hasn't yet
picked up the magazine. Sometimes the ads are more interesting than the
articles.
 

Peter Flass

Andrew said:
Interrupts are not required since you can do that communication by
round-robin. You just need a fixed shared memory area with a separate
word/block/register for each core. Some of the requests could be
implemented by the cores setting a request-to-talk bit in a hardware
register. An alternative is serial links between the cores. This has
to be defined in the high-level design of the hardware.
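The interrupt-free scheme described above — one shared word per core, a request-to-talk bit, and a round-robin scan by the OS core — can be modeled in a few lines. The flag value and names are assumptions for illustration:

```python
# Toy model of interrupt-free signaling: each core owns one word in shared
# memory and sets a request-to-talk bit; the OS core polls round-robin.

RTT_BIT = 0x1   # hypothetical request-to-talk flag

class SharedArea:
    def __init__(self, ncores):
        self.words = [0] * ncores   # one word per core

    def request(self, core):
        self.words[core] |= RTT_BIT

    def poll_round_robin(self, start=0):
        """Scan cores in fixed order from `start`; return first requester."""
        n = len(self.words)
        for i in range(n):
            core = (start + i) % n
            if self.words[core] & RTT_BIT:
                self.words[core] &= ~RTT_BIT   # acknowledge by clearing
                return core
        return None
```

Advancing `start` past the last core serviced is what makes the scan fair rather than favoring low-numbered cores.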

I think you've just re-invented CICS.
 

Peter Flass

Anne said:
old account about battling for a 30% raise so that I would be earning
the same as the offers to new hires that I was interviewing to work
under my direction.

At least you weren't told they were moving your job to India, and, oh by
the way, you have to train the people that will be doing it.
 

Joe Pfeiffer

jmfbahciv said:
But your idea is to have the Boss CPU have control of the whole
system; if it does, then the other CPUs cannot run the device drivers
without the Boss knowing about it. Having the control of the
system means that the scheduling for I/O and memory management
has to be done by the Boss, not the other CPUs. Thus, when
a slave CPU needs any resources, it has to ask the Boss for
it. This will cause the system to grind down to almost a halt
because the other CPUs will be in a constant wait state waiting
for the Boss to service their requests.

No, the part he's right about is that it really is possible to
distribute a lot of the functionality of a conventional kernel (even if
he can't spell it) among several services, so the microkernel doesn't
have to be involved with the details of the IO. The slave does have to
ask the boss for the resources as you say, but only once at system
startup and never again.

The part he's missing is in thinking this is new, and thinking it'll
somehow work better for general computing now on a dozen cores or in ten
years on 1000 cores than it did in 1990 on one core.
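The "ask once at startup, never again" point can be made concrete with a small sketch. All names here are invented; it only illustrates the pattern of a kernel granting a driver its resources once, after which the driver's I/O path never re-enters the kernel:

```python
# Sketch of one-time resource granting: the microkernel hands a driver
# its IRQ and MMIO region at startup; afterwards the driver runs alone.

class Microkernel:
    def __init__(self, irqs, mmio):
        self.free_irqs = set(irqs)
        self.free_mmio = set(mmio)

    def grant(self, irq, region):
        """Called once per driver at system startup."""
        self.free_irqs.remove(irq)
        self.free_mmio.remove(region)
        return {"irq": irq, "mmio": region}

class Driver:
    def __init__(self, grant):
        self.grant = grant   # owned for the life of the system

    def handle_io(self):
        # services its device directly via the granted region;
        # the kernel is never consulted on this path
        return f"driver on irq {self.grant['irq']} did I/O"
```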
 

Joe Pfeiffer

jmfbahciv said:
And those are applications because they don't really run at the
exec level of the computer system.

It is better to think of them as an OS that runs on hardware that
doesn't really exist.
 

Jeff Strickland

420 posts on Vintage Dream PC in 13 days. That averages out to 32 per day.
 

Bill Pechter

Yes it was a Sequent.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Nice stuff... I was at Pyramid where they were a major competitor... one of
my friends from my pre-Pyramid jobs was at Sequent.

Never got the time to get together and swap tech info on the
similarities/differences. A lot of the Pyramid stuff seems to have moved to
SunOS/Solaris like the disk suite stuff and cluster stuff.

Bill
 

Charles Richmond

jmfbahciv said:
John said:
[snip...] [snip...] [snip...]

She said "really." She's being a timeshare snob just because I had a
PDP-11 and she had a VAX.

John
There is no reason to be insulting. You don't know what you're
talking about now.

/BAH

It should be spelled "VAXX", because we all *know* it is a "four letter
word". ;-)
 

Charles Richmond

Peter said:
At least you weren't told they were moving your job to India, and, oh by
the way, you have to train the people that will be doing it.

Hint: You do *not* have to train the people who will take your job. Just
refuse to do it. If they fire you, heck, you would be laid off anyway!!!
 

Charles Richmond

jmfbahciv said:
JosephKK said:
[snip...] [snip...] [snip...]

And did you notice academia's decade (and more) long lack of much of
anything useful to show for it? A lot like AI. Computing academia's
darling products just don't seem to be able to find usefulness in the
real world.

I don't know about that. Certainly the ones who keep trying to
emulate human thinking fail. I still don't understand why anybody
would want to build gear that emulated human-style thinking.

That reminds me of a story:

Once upon a time, scientists decided they would create a computer that
would think like a human. So they gathered together thousands of
CPU's with local memory, and networked them into one giant machine. When
all of this was assembled and turned on, something immediately started
printing out on the printer. The operator ripped off the printout and read:

"That reminds me of a story."


;=)
 
 

Joe Pfeiffer

Charles Richmond said:
Hint: You do *not* have to train the people who will take your
job. Just refuse to do it. If they fire you, heck, you would be laid
off anyway!!!

Makes a difference to when you lose your job, and whether you're
eligible for unemployment while you look for the next one.
 