On Wed, 03 Jun 2009 22:46:37 -0700, JosephKK wrote:
On Fri, 29 May 2009 09:13:22 -0400, jmfbahciv wrote:
Peter Flass wrote:
Scott Lurndal wrote:
What you will see going forward is that the operating
system(s) never really touch the real hardware anymore
and a VMM of some sort manages and coordinates the
hardware resources amongst the "operating system(s)",
while the operating systems are blissfully unaware and
run applications as they would normally.
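Roughly, the mechanism is trap-and-emulate: the guest OS is
deprivileged, its privileged instructions trap into the VMM, and the
VMM emulates them against per-guest virtual hardware state before
resuming the guest. A minimal sketch in C (illustrative only: the trap
kinds, port numbers, and handler names here are invented, not any real
VMM's interface):

#include <stdio.h>

/* Hypothetical decoded trap from a deprivileged guest. */
typedef enum { TRAP_IO_OUT, TRAP_HALT } trap_kind;

typedef struct {
    trap_kind kind;
    unsigned port;      /* decoded operands of the trapping instruction */
    unsigned value;
} vm_exit;

static unsigned virtual_ports[256];   /* per-guest virtual device state */

/* The VMM's exit handler: the guest believes it touched hardware,
 * but only this code ever decides what the "hardware" does. */
static int handle_vm_exit(const vm_exit *e)
{
    switch (e->kind) {
    case TRAP_IO_OUT:
        virtual_ports[e->port & 0xff] = e->value;  /* emulate the OUT */
        return 1;                                  /* resume this guest */
    case TRAP_HALT:
        return 0;                                  /* dispatch another guest */
    }
    return 0;
}

int main(void)
{
    /* Pretend the guest executed OUT 0x60,0x1F and then HLT. */
    vm_exit out = { TRAP_IO_OUT, 0x60, 0x1f };
    vm_exit hlt = { TRAP_HALT, 0, 0 };

    handle_vm_exit(&out);
    printf("guest wrote 0x%X to virtual port 0x60\n", virtual_ports[0x60]);
    printf("guest halted; resume it? %d\n", handle_vm_exit(&hlt));
    return 0;
}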
We've seen this since CP-67 in, what, 1968? BTDT.
If the OS doesn't touch the hardware, then it's not the
monitor, but an app.
I think this one is currently an open-ended argument. What
do you call an application that hosts hundreds of
dynamically loaded user applications?
A daemon.
That is far from the usual definition of a daemon. Check
your dictionaries.
Since we implemented a few of them, I know what the
functionality of our daemons was. You asked me what I
would have called them. I told you.
Yes, you have. I basically come from the nuxi model.
Particularly when that application used to be an OS in
its own right?
Which one are you talking about? The emulators are running
as an app.
You are missing the boat here. In the current world there
are several cases of things like VirtualBox, which run
BSD, Solaris, MSwin XP, FreeDOS (as applications) and all
their (sub)applications "simultaneously" (time-sharing,
and supporting multiple CPU cores). This would place it at
the monitor level you have referenced.
No. Those are running as apps w.r.t. the computer system
they are executing on. Those apps will never (or should
never) be running at the exec level (what the hell does
Unix call "exec level"?) of the computer system. That is
exclusively the address space and instruction execution
of the monitor (or kernel) running on that system.
It is kernel space in the *nix world.
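Concretely, that split means a user-mode program cannot execute
privileged instructions at all; the CPU traps to the kernel instead.
A tiny demo in C (assumes x86 and a GCC-style compiler on Linux,
where the privileged HLT raises a fault that arrives as a signal;
other platforms will differ):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static const char msg[] =
    "trapped: privileged instruction refused in user mode\n";

static void trapped(int sig)
{
    (void)sig;
    /* write() is async-signal-safe; we land here because the CPU
     * refused to run the instruction outside kernel space. */
    (void)write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(0);
}

int main(void)
{
    signal(SIGSEGV, trapped);  /* Linux delivers the #GP fault as SIGSEGV */
    signal(SIGILL,  trapped);  /* some platforms report SIGILL instead   */

    __asm__ volatile ("hlt");  /* fine in the kernel, fatal in an app */

    puts("never reached: we are not at exec level");
    return 0;
}

Everything that isn't the monitor gets at the hardware the same way:
by trapping or by asking, never by executing at the kernel's level
itself.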
In the olden Unix world. I'm beginning to have some doubts
based on what's been written here. It looks like a lot of
things get put into the kernel which shouldn't be there
(if I believe everything I've been told).
Terminology is failing here.
It's not a confusion of terminology. It's more a confusion
about the software level at which a piece of code is
executing. I run into this confusion all the time. I think
it's caused by people assuming that Windows is the monitor.
It never was.
MSwin never was much of a proper OS. Just remember that
there are more things claiming to be an OS besides Multics,
RSTS, TOPS-10, VMS, MVS, VM/CMS, Unix(es), and MSwin.
MS got Cutler's flavor of VMS and called it NT. They started
out with a somewhat [emoticon's bias alert here] proper
monitor but spoiled it when Windows' developers had to have
direct access to the nether parts of the monitor.
/BAH
Yep, just like they ruined Win 3.1 by insisting on embedding
the 32-bit mode within the GUI, and insisting on internals
access. Yet more of the Tiny BASIC mentality.
Nah. It got started a long time ago, when the founders of MS
discovered a listing of DAEMON.MAC (of TOPS-10) in a dumpster
and believed they had a listing of the TOPS-10 monitor when they
read the comment
"Swappable part of TOPS-10 monitor". It was a user mode program
that ran with privs.
/BAH
Wishful thinking. They were not smart enough to recognize the
value of such a document, let alone understand it, even if
they did find one.
You are wrong. They were clever enough; they simply didn't take
enough time learning about what they were using. I guesstimate
that one more month of study and they would have learned about
how buffer mode I/O should work.
Are you so very sure? They used DMA to read/write the floppy
in the original PC, and used DMA again for the XT fixed disk.
They got into the habit of mucking with BIOS and DOS from the
beginning; the hardware could not detect it, let alone prevent
it. Then, when better hardware came along, they would not give
up the foolish practices, and still haven't.
You don't know history.
/BAH
Which history do you know? I have been watching computers since the
core days and even worked with core computers.
Unless I am mistaken, the DEC-10/PDP-10 used only silicon RAM.
Early PDP-10s were core memory for sure. I don't know about
later ones... probably DRAM by that time.
I still have some core planes around here somewhere... They were
electronically nasty, amps of X-Y drive and millivolts of sense line
voltage, all tangled up, temperature and pattern sensitivity all over
the place. Some people did it right, some didn't.
John
I do not find your description credible.
The only explanation for that is that you didn't work much with fast
core memory at the electrical level. Did you?
Or rather absurdly and
Lots of minicomputers had flakey core memory. As I said, some people
got it right and some didn't.
I know damn well that drive currents were not
Half-select (X or Y drive, or inhibit winding) currents ranged up to
maybe half an amp each. A core stack might have many such drivers.
That's a lot of fast stuff to have woven into a sense winding that
might make 50 mV on a good day.
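For anyone who never met coincident-current selection: every core sits
at the crossing of one X line and one Y line, each carrying a
half-select current, and only the single core that receives both
halves crosses the switching threshold. A sketch in C (the 0.35 A and
0.5 A figures are placeholders of roughly the right order, not
measurements from any particular stack):

#include <stdio.h>

#define N        4      /* a tiny N x N core plane */
#define I_HALF   0.35   /* half-select current per line, amps (illustrative) */
#define I_SWITCH 0.50   /* core switching threshold, amps (illustrative) */

int main(void)
{
    int xsel = 2, ysel = 1;   /* address the core at (2,1) */

    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++) {
            double i = (x == xsel ? I_HALF : 0.0)
                     + (y == ysel ? I_HALF : 0.0);
            if (i > I_SWITCH)
                printf("core (%d,%d) switches: %.2f A\n", x, y, i);
            else if (i > 0.0)
                printf("core (%d,%d) half-selected: %.2f A, stays put\n",
                       x, y, i);
        }
    return 0;
}

Hence the margin problem: the sense winding threads every core on the
plane, so those fast hundreds-of-milliamp drive edges couple straight
into the line that has to resolve a few tens of millivolts of flux
change.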
The temperature sensitivity is exaggerated
Exaggerated by whom? Not me. The cores were temperature
sensitive; the temperature had to be sensed and compensated
for, and banging on one address the right way could seriously
heat one core.
as is the
Well, pattern sensitivity diagnostics were standard. Keep your trimpot
tool handy.
in fact early silicon DRAM had far worse thermal
and pattern sensitivities.
The semis got fixed.
"Quirks"...
http://www.psych.usyd.edu.au/pdp-11/core.html
Yup. PDP-11 core memory was indeed quirky.
The worst core memories were dense and fast. At low density (big
cores, few cores per sense line) and slow access times (a few usec)
they could be very reliable.
One jukebox that I know of stored user selections in _big_
cores... one core per record (where "record" is a 45 rpm
platter, not a C struct). Pressing the letter:number select
buttons magnetized one core - XY-select! - and the mechanical
platter scanner selected and read out the cores as it passed
the record storage slots. If the power cord was yanked, as
tends to happen in certain bars, the paid-for selections
wouldn't be lost.
John
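That jukebox scheme is easy to model. Pressing letter+number
coincidently selects exactly one core in a letter-by-number plane; the
scanner reads each slot's core out destructively as it passes. A toy
version in C (the 20x10 button layout is a guess for illustration; the
static array merely stands in for the magnetics, whose nonvolatility
is what made the trick work):

#include <stdio.h>

#define LETTERS 20   /* letter buttons drive the X lines */
#define NUMBERS 10   /* number buttons drive the Y lines */

static int core[LETTERS][NUMBERS];   /* one core per 45 rpm record */

/* A letter+number press half-selects one X and one Y line,
 * magnetizing exactly the core at their intersection. */
static void select_record(int letter, int number)
{
    core[letter][number] = 1;
}

/* The platter scanner reads each slot's core destructively as it
 * passes; a set core means "this one is paid for: play it". */
static int scan_slot(int letter, int number)
{
    int paid_for = core[letter][number];
    core[letter][number] = 0;    /* destructive readout clears it */
    return paid_for;
}

int main(void)
{
    select_record(3, 6);         /* patron pays for selection D7 */

    /* ...power yanked and restored; real cores keep their state... */

    for (int l = 0; l < LETTERS; l++)
        for (int n = 0; n < NUMBERS; n++)
            if (scan_slot(l, n))
                printf("playing record %c%d\n", 'A' + l, n + 1);
    return 0;
}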