Maker Pro

My Vintage Dream PC

jmfbahciv

JosephKK said:
Lucky bird.

<grin> Some days we thought we were privileged. Other days...
We were in the business to sell the hardware we manufactured,
not use it. We had to justify each exhale before we got the
gear we needed.
I had to develop software on every one of the various
systems I had to integrate (except the Lexitron technical word
processing systems; BTW, they used 8080s and dual daisy-wheel
printers). Plus no kernel access (I could have had it on the PCs but could
see the maintenance nightmares it would bring).

That's why I eventually got myself into the monitor group.
Being able to poke the running monitor gave one a sense
of superiority.

/BAH
 
jmfbahciv

Archimedes' Lever said:
You are incorrect. If you attempt to test the logic state with
external gear or re-energize the device, you are no longer in "Pull Power
Cord" state. While it IS in "Pull Power Cord" state, the device is
non-functional, therefore all is zero for all intents and purposes.

You have to leave "Pull Power Cord" state to examine the logic state of
the memory, and it will be whatever it was last set to, but that is not
the state I put it in. When the power is gone, all readings are zero.

Even though they retain their condition, you can't read it without
removing the state.

I still win.

No, you lose. You have failed my test.

/BAH
 
jmfbahciv

FatBytestard said:
So... I still win.

The machine must remain in "Pull Power Cord" state. If you attempt to
read the logic level externally or re-energize it, it is no longer in
"Pull Power Cord" state. While it IS in "Pull Power Cord" state, one
cannot read the logic level, therefore all is zero, for all intents and
purposes.

I am still the winner.

No. You have not done the problem as requested.

/BAH
 
jmfbahciv

JosephKK said:
JosephKK said:
JosephKK wrote:

JosephKK wrote:

Peter Flass wrote:
Scott Lurndal wrote:
What you will see going forward is that the operating system(s) never
really touch the real hardware anymore and a VMM of some sort manages
and coordinates the hardware resources amongst the "operating
system(s)", while the operating systems are blissfully unaware and run
applications as they would normally.
We've seen this since CP-67 in, what, 1968? BTDT.

If the OS doesn't touch the hardware, then it's not the monitor,
but an app.

I think this one is currently an open ended argument. What do you
call an application that hosts hundreds of dynamically loaded user
applications?
A daemon.
That is way far from the usual definition of daemon. Check your
dictionaries.
Since we implemented a few of them, I know what the functionality
of our daemons were. You asked me what I would have called them.
I told you.
Yes you have. I basically come from the nuxi model.
Particularly when that application used to be an OS in
its own right?
Which one are you talking about? The emulators are running as
an app.
You are missing the boat here, in the current world there are several
cases of things like virtualbox, which run things like BSD, Solaris,
MSwin XP, Freedos, (as applications) and all their (sub)applications
"simultaneously" (time sharing, and supporting multiple CPU cores).
This would place it at the monitor level you have referenced.
No. Those are running as apps w.r.t. the computer system they are
executing on. Those apps will never (or should never) be running
at the exec level (what the hell does Unix call "exec level"?)
of the computer system. That is exclusively the address space
and instruction execution of the monitor (or kernel) running
on that system.
It is kernel space in the *nix world.
In the olden Unix world. I'm beginning to have some doubts based
on what's been written here. It looks like a lot of things
get put into the kernel which shouldn't be there (if I believe
everything I've been told).
Terminology is failing here.
It's not a confusion of terminology. It's more a confusion of
the software level a piece of code is executing at. I run into
this confusion all the time. I think it's caused by people
assuming that Windows is the monitor. It never was.

MSwin never was much of a proper OS. Just remember that there are
more things claiming to be an OS besides Multics, RSTS, TOPS-10, VMS,
MVS, VM-CMS, Unix(es), and MSwin.
MS got Cutler's flavor of VMS and called it NT. They started out with
a somewhat [emoticon's bias alert here] proper monitor but spoiled
it when Windows' developers had to have direct access to the
nether parts of the monitor.

/BAH
Yep, just like they ruined Win 3.1 by insisting on embedding the 32-bit
mode within the GUI, and insisting on internals access.
More yet of the Tiny BASIC mentality.
Nah. It got started a long time ago, when the founders of MS discovered
a listing of DAEMON.MAC (of TOPS-10) in a dumpster and believed they
had a listing of the TOPS-10 monitor when they read the comment
"Swappable part of TOPS-10 monitor". It was a user mode program
that ran with privs.

/BAH

Wishful thinking. They were not smart enough to recognize the value
of such a document, let alone understand it, even if they did find
such.

You are wrong. They were clever enough; they simply didn't take
enough time learning about what they were using. I guesstimate
that one more month of study and they would have learned about
how buffer mode I/O should work.
They got into the habit of mucking with BIOS and DOS from the
beginning, the hardware could not detect it let alone prevent it.
Then, when better hardware came along they would not give up the
foolish practices, and still haven't.

You don't know history.

/BAH
 
jmfbahciv

Archimedes' Lever said:
For all intents and purposes, a computer that is OFF has zero potential
to compute, so zero is how its state can be described.

Your solution does not satisfy the problem I presented.

/BAH
 
Richard Cranium

Right. The bitch lies, and I call her on her lie, and all you little
wimpy twits want to set your filters because the bad man called the poor
woman a ditz for making up one of her lies about him?
What about your lies, Archie? Tell us about your statement of celibacy
- and do not ask for a cite without first denying that you said it.
Then tell us about your statement of how you satisfy women by "going
all night long". Tell us about your pre-teen accomplishments; your
infancy deeds. Wow us with your success stories that occurred before
you were even born!

You're a buffoon Archie ... a phony and a liar. You run like the
coward you are from the puzzle challenge because you cannot fathom
failing to solve it in front of others. You profess to be an
all-knowing fountain of inexhaustible knowledge. The reality is that
you look up to pond scum.
 
Anne & Lynn Wheeler

jmfbahciv said:
Let me see if I can explain better. When we increased the CPU speed,
the system became I/O bound. When we increased the disk controller
speed, the same system would become CPU-bound. When we increased the
speed of the CPU, the same system became I/O bound... The same things
happen in today's biz. Hardware developers concentrate on solving
the problem of today. So if the CPU needs to be speeded up, they'll
work on speeding up the CPU. Then, when that's done, the performance
lags show that the I/O needs to be sped up. So the next project is to
produce a faster peripheral. This gets out to the field and, all of a
sudden, the CPU performance sucks. It's a cycle.

in the mid-70s i started making comments about i/o slowing down
significantly ... part of being able to see this was possibly having
started (when I was an undergraduate in the 60s) doing dynamic adaptive
resource management ... and something I called "scheduling to the
bottleneck". a recent reference (also mentions "re-releasing" a
"resource management" product in the mid-70s, SHARE having called for
making the cp67 "wheeler scheduler" available for vm370):
http://www.garlic.com/~lynn/2009h.html#76
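
(A minimal sketch, in Python, of the general "scheduling to the bottleneck"
idea -- a toy illustration only, not the actual cp67/vm370 code: watch the
utilization of each resource, call the busiest one the bottleneck, and favor
runnable tasks that demand the least of it. All names and numbers below are
made up for the example.)

    # toy "schedule to the bottleneck" sketch -- illustration only.
    # resources are named; each task carries an estimated demand per resource.

    def pick_next(tasks, utilization):
        """tasks: list like [{"name": "t1", "demand": {"cpu": 0.2, "disk": 0.8}}, ...]
        utilization: dict like {"cpu": 0.95, "disk": 0.40} (fraction busy)."""
        bottleneck = max(utilization, key=utilization.get)   # busiest resource
        # run first the task that asks least of the bottleneck resource
        return min(tasks, key=lambda t: t["demand"].get(bottleneck, 0.0))

    if __name__ == "__main__":
        util = {"cpu": 0.95, "disk": 0.40}
        tasks = [
            {"name": "number-cruncher", "demand": {"cpu": 0.9, "disk": 0.1}},
            {"name": "file-copier",     "demand": {"cpu": 0.1, "disk": 0.9}},
        ]
        print(pick_next(tasks, util)["name"])   # -> file-copier (cpu is the bottleneck)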

there is a reference to a comparison that I did in the early 80s, between a
"current" system and a nearly 15-year-earlier system doing essentially the
same type of workload. My comment was that the relative system thruput
of disks had declined by an order of magnitude over the period.
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

some disk division executives took exception and assigned the
performance group to refute the statement ... but after a couple weeks they
came back and effectively said that i had slightly understated the
problem. the issue was that processor power had increased approx. 50
times ... but disk thruput had increased only by 3-5 times (resulting in
net relative system thruput decline of a factor of 10 times).

the performance group turned the study into a SHARE report recommending
disk configuration suggestions to improve system thruput ... references
to presentation B874 at SHARE 63, 8/18/84:
http://www.garlic.com/~lynn/2002i.html#18
http://www.garlic.com/~lynn/2006f.html#3
extract from the abstract for the presentation
http://www.garlic.com/~lynn/2006o.html#68

a little topic drift: a recent reference/post about getting to play disk
engineer:
http://www.garlic.com/~lynn/2009h.html#68

As I've mentioned regarding relational databases ... the amount of real
storage started to dramatically increase in the late 70s ... and
systems started to leverage the additional real memory for caching and
other techniques as a method of compensating for the disk thruput bottleneck.

in the 70s ... there was a little contention between the '60s database
product group in STL (bldg 90) and the system/r (original
relational/sql) group ... misc. posts mentioning system/r
http://www.garlic.com/~lynn/subtopic.html#systemr

with the older style database group claiming that the "implicit" index
(for locating a record) in rdbms doubled the physical disk storage of a
typical database and significantly increased the number of disk i/os (as
part of reading the index to find a record location). The system/r group
pointed out that the physical record pointers that were part of the data
significantly increased the manual management of "60s" databases.

going into the 80s ... the disk space significantly increased &
price/bit significantly decreased (mitigating rdbms disk space penalty),
available real memory significantly increased (allowing rdbms indexes to
be cached, significantly reducing the disk i/o penalty), and DBMS people
skills became relatively scarce and their cost significantly increased. All of
this shifted various trade-offs vis-a-vis 60s DBMS and RDBMS.
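
(A minimal sketch, again just an illustration and not System/R code, of why an
index cached in real memory changes the trade-off: with the index resident, a
record lookup costs one disk access instead of an index read plus a data read.
The file layout and record format are made up for the example.)

    # illustrative sketch: key -> byte-offset index kept in memory, so a lookup
    # is a single seek + read.

    import io

    def build_index(f):
        """Scan a file of b'key,value' lines once; keep key -> byte offset in memory."""
        index, pos = {}, 0
        for line in f:
            key = line.split(b",", 1)[0]
            index[key] = pos
            pos += len(line)
        return index

    def lookup(f, index, key):
        """With the index already in memory, fetching a record is one access."""
        f.seek(index[key])
        return f.readline().rstrip(b"\n").split(b",", 1)[1]

    if __name__ == "__main__":
        data = io.BytesIO(b"alpha,1\nbeta,2\ngamma,3\n")   # stand-in for a disk file
        idx = build_index(data)                    # one pass; the index then lives in RAM
        print(lookup(data, idx, b"beta").decode()) # -> 2: one "disk" access, not two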

Note however, there is still quite a bit of use of 60s DBMS technology
.... especially in various large financial and/or business critical
operations. a few recent references:
http://www.garlic.com/~lynn/2009g.html#15 Confessions of a Cobol programmer
http://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
http://www.garlic.com/~lynn/2009h.html#1 z/Journal Does it Again
http://www.garlic.com/~lynn/2009h.html#27 Natural keys vs Aritficial Keys

above also mentions that when Jim left for Tandem, he was handing a lot
of stuff off to me ... including consulting with the STL 60s DBMS group and
talking to customers about System/R ... a couple of old email references:
http://www.garlic.com/~lynn/2007.html#email801006
http://www.garlic.com/~lynn/2007.html#email801016

note: measuring access latency in number of processor cycles ... the number
of processor cycles of latency to access real memory today is comparable to
the 60s number of processor cycles of latency to access disk ... and today's
caches are larger than 60s total real memory.
 
Ahem A Rivet's Shot

You have no idea what you're talking about. Do you really believe
that a system cannot print a listing, retrieve mail, transfer a file,
allow the users to edit, play music, calculate a standard deviation,
and run a screen saver at the same time? To do all of this requires
sharing all system resources, including the CPU(s) during the same
wallclock time slice.

Sounds like a fairly normal light workload for one of my BSD boxes.
 
Patrick Scheible

jmfbahciv said:
I noticed. This is why your OS ideas have, as the top priority,
to have complete control of all aspects of system computing.


You have no idea what you're talking about. Do you really believe
that a system cannot print a listing, retrieve mail, transfer a file,
allow the users to edit, play music, calculate a standard deviation,
and run a screen saver at the same time? To do all of this requires
sharing all system resources, including the CPU(s) during the same
wallclock time slice.

This is not a particularly heavy system load. Even a Windows box
should be able to do this, except that it would be only one user
editing at a time. On my Windows XP box at work I'm often editing
several different Word and Excel documents, have a couple of web
browsers, a very large and demanding special-purpose database
application, and Google Earth running at once.

-- Patrick
 
Scott Lurndal

John Larkin said:
So if functions are not assigned to processors in, say, a 256-CPU,
1024-thread chip, do we just run hundreds of copies of the OS?

No, one copy of the OS[*]. The OS may or may not have threads
of its own, but generally the OS can be considered a subroutine
of an application thread in terms of execution context.

[*] Since, as you point out, all the cpus see the same memory,
even in a ccNUMA box.
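
(A small experiment sketching the "subroutine of an application thread" point
-- an illustration only, assuming a Linux-style /dev/zero: CPU time spent
inside the read() system call is charged to the calling process, while
sleeping is not.)

    # suggestive experiment: kernel work done on our behalf shows up as our own
    # process CPU time, i.e. the OS runs in the caller's execution context.

    import os, time

    def cpu_seconds(fn):
        start = time.process_time()        # user + system CPU time of this process
        fn()
        return time.process_time() - start

    def read_a_lot():
        fd = os.open("/dev/zero", os.O_RDONLY)
        for _ in range(2000):
            os.read(fd, 1 << 20)           # 1 MiB per read, all copied inside the kernel
        os.close(fd)

    print("kernel copy work, charged to us:", cpu_seconds(read_a_lot))
    print("sleeping, charged to us:        ", cpu_seconds(lambda: time.sleep(1)))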

scott
 
Walter Bushell

John Larkin said:
Hard drives costing what they do, we could almost dump it all to a
hot-plug drive and physically archive them.

John

Oh, yes. Compared to the cost of having someone write them, I suppose you
_could_ give the job to an intern. And the data is much easier to
verify later if it's on a hard disk. Even USB drives can be bought for
around $100 a terabyte, and AFAIK the data retention is better; naked
drives would be better still.
 
Walter Bushell

John Larkin said:
Well, we've never had to do a full restore. I do occasionally dig into
The Cave and pull an old CD or DVD backup, like when I'm working at
home and want to see an older design, or the rare occasion when a file
is missing or otherwise wrong in the M:\LIB directory. The files are
always there, no problems so far.

They're just files burned into DVDs. Not a lot can go wrong. Doing
zero-based weekly backups onto write-once media gives us a lot of
redundancy.

John

I'd checksum them though. That way one could do a quick check now and
again. I remember one company that lost its data even though it was
backed up to two drives, because the writing machine had a bad SCSI
board.
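
(A minimal sketch of what I mean, assuming nothing fancier than SHA-256 over
each file: write a manifest when you make the backup, re-run it later and
diff. The script name and paths below are just illustrative choices.)

    # build (or later re-build and compare) a checksum manifest for a backup tree.

    import hashlib, os, sys

    def sha256_of(path, bufsize=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def manifest(root):
        """Yield (relative_path, digest) for every file under root."""
        for dirpath, _, names in os.walk(root):
            for name in sorted(names):
                full = os.path.join(dirpath, name)
                yield os.path.relpath(full, root), sha256_of(full)

    if __name__ == "__main__":
        # usage:  python checkmanifest.py /mnt/backup > manifest.txt
        #         ... later ...
        #         python checkmanifest.py /mnt/backup | diff - manifest.txt
        for rel, digest in manifest(sys.argv[1]):
            print(f"{digest}  {rel}")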
 
Walter Bushell

FatBytestard said:
With hard drives as cheap as they are these days, having shadow drives
is really the best way. eSATA and even USB drives both write data way
faster than a DVD writer does, and have ZERO likelihood of writing an
errant bit, for the most part.

Chez Watt?

It's either zero or it's non-zero. If you're a one-in-a-million guy in
China, there are a thousand people just like you.
 
Walter Bushell

FatBytestard said:
Of course real companies have hundreds of servers and facilities in
several cities, and they have the right backup mediums being used at all
times, and no third parties are ever involved.

Few real companies around, I suppose.
 
Walter Bushell

Clifford Heath said:
Pyramid Computers (RIP) was a counterexample. We had clients
with 20-40 CPUs, and they scaled almost linearly.

The point here though is that serious parallel programs are very
expensive to write, so it only happens in infrastructure where the
payoff is large - DBMS, server farms, some graphics. Not "normal"
programs at any rate.

Teaching programmers to write in languages like Erlang is the only
possible path around the cost differential.

Clifford Heath.

A lot of pre-parallelized APIs for handling sound, graphics, and video,
plus a general math library, go a long way.
 
Walter Bushell

FatBytestard said:
No. The dope that compares a DOS-based application's print job
compilation time with a graphics-based, scaled-font-rendered graphical
print job compilation is the idiot. And NO, I am not talking about the
GUI you are in, I am talking about the utter crispness and registration
of what prints out.

You bend the brain of any kid trying to learn what is going on by
reading your crap as well.

Close to my pet comparison: the modern workstation cannot print out 600
lpm, but it can output graphics displaying the same data faster than
they could be produced in the 7090 days. Anybody remember the Calcomp
plotter? Or the Engineering Aide who spent days graphing from the
massive output to produce something humanly understandable from same?
 
Walter Bushell

Ahem A Rivet's Shot said:
Sounds like a fairly normal light workload for one of my BSD boxes.

This is laptop-level work. Now if you have to convert video formats, it
may load the system.
 
Joe Pfeiffer

Patrick Scheible said:
This is not a particularly heavy system load. Even a Windows box
should be able to do this, except that it would be only one user
editing at a time. On my Windows XP box at work I'm often editing
several different Word and Excel documents, have a couple of web
browsers, a very large and demanding special-purpose database
application, and Google Earth running at once.

See what she was responding to -- no, it isn't a heavy load, but it
demonstrates that timesharing is alive and well. I suspect John meant
multi-user computers.
 
Ahem A Rivet's Shot

This is laptop-level work. Now if you have to convert video formats, it
may load the system.

Most recent laptops have more processing power than my BSD boxes
(AMD64 3200 and AMD XP 1700) but yes, I'll happily swap calculating a
standard deviation for transcoding video without worrying about the
responsiveness of the system or interruptions to the music - I'd start to
get cautious about load if I had to transcode the video in real time.
 