Maker Pro

My Vintage Dream PC


jmfbahciv

Patrick said:
This is not a particularly heavy system load. Even a Windows box
should be able to do this, except that it would be only one user
editing at a time. On my Windows XP box at work I'm often editing
several different Word and Excel documents, have a couple of web
browsers, a very large and demanding special-purpose database
application, and Google Earth running at once.
Do you consider cutting and pasting from many files into one a "single
edit"? I don't. Do you consider having more than one file
displayed (like you guys describe when looking at
listings) a single edit? I don't.

/BAH
 

jmfbahciv

John said:
Well, the opposite of control is no control.

Wrong! That's why you don't understand what I'm talking about.
That's not timesharing, that's a single-user multitasking OS.

Your small computer thinking is showing.
Timesharing is (was) using a central computer system to service
multiple users who used essentially dumb terminals. Now all the users
have monstrous amounts of local compute power and some to huge amounts
of local storage. They have their own printers, too, or networked
printers that don't go through a mainframe.

But if you want to use the word "timesharing system" for a single-user
PC, that's your definition.

It doesn't matter how many humans are connected. When I was working,
I'd have 5-10 jobs running; in certain cases, a lot more. I was
a single user but I had the machine humming doing 10 "users'" worth
of work.

I ran real timesharing systems, and hung around a lot more, even
PDP-10s.

But you didn't code them, design them, nor grasp any idea about
how they worked.
Windows ain't one, and cloud computing isn't either.

Windows is one app.
I once planned a timesharing system that had a CPU per user, a rack
full of LSI-11 boards, one per modem, doing the simple stuff ahead of
the Big Iron. That would have been cool, but then PCs came along.

The first PDP-10 I met, a 48K KA-10, had that; its front end was a
PDP-8/E. This was in 1969.

/BAH
 

jmfbahciv

John said:
I was interested in discussing (this is a discussion group) what
future OSs might look like, and how multicore chips (which are
absolutely on the silicon peoples' roadmaps) would affect OS and
systems design.

Not many people seem to want to think about that.

<snip>

What you are asking is the equivalent of asking what a wheel will look
like in 20 years. No matter what kind of pretty crepe paper you thread
through the spokes, a wheel's specification will remain the same...
because its function is to be a _wheel_!

/BAH
 

jmfbahciv

John said:
I guess my point, coming full circle, is that a "big" OS has millions
of lines of code, written by hundreds of people, running at the "ring
0" level where any error crashes the whole system. A secure OS must be
a tiny chunk of code, absolutely insulated from damage by more complex,
less reliable stuff.

Why are you so intent on it being tiny? That has absolutely nothing
to do with security and bug resistance. If your trade-off favors
tiny, you are going to have lots of bugs and very bad performance.
Kernels have to be the size they need to be to get their work done.
Period.

/BAH
 

jmfbahciv

Kim said:
X-rays use quite low-energy fields. I've never had any problems with my mobile,
for example, when it has gone through an X-ray machine.

In satellites the energy of the particles is much higher and they still
work just fine. It's just a question of design and compromises
between cost and reliability; nothing comes for free.


The EMC requirements are not defined by kiddies but by standards
organizations. And the tests must be made in accredited labs
to get the paperwork done and the permit to sell the products.
Especially for telecom equipment the limits are not always easy
to achieve.

The kiddie might have been someone with no knowledge beyond how to
connect the stove, who read ready-made answers from the manual and
imagined something when the manual had no answer. You don't put real
engineers on calls to customers. They are shielded by many layers of
customer service and escalation protocols.

Was this the house where you had all kinds of wacky problems with
the electrical connections?

Nope. That's the house I've just moved into. And all of that has been
fixed; four electrician man-days' worth of work.
Maybe the stove was just not correctly
connected (grounded, for example), and the filtering did not work.

I never found out why the stove interfered. The previous one did
not interfere. One of the first things I had to do to that house
when I moved in was redo all the wiring.

/BAH
 

jmfbahciv

Kim said:
Single-user, single-owner systems are the most common ones, but they
still run multiple tasks in parallel. But this is an old thread; you just
don't believe that anything made during the last 20 years can support many
users or tasks.

Wrong. I have no idea how you got this notion.
Don't bring Windows 3.1 into the discussion like always;
it is ancient already...

I have used Unix machines that have had 500-1000 users logged in at the same
time, each user with tens of tasks in parallel, etc. But maybe I have been
dreaming. The XP box in front of me has had a few months of uptime and runs
many processes in parallel all the time. I have X emulation on the other
monitor, connected to a big Linux cluster that runs hundreds of users doing
parallel tasks all the time.

It's you who is still thinking that a single task is the norm; it is not.

Sigh! Are you confusing me with John?

/BAH
 

jmfbahciv

Kim said:
If two CPUs are used, internal diagnostics are needed for fault
detection. If three CPUs are used, they can vote on which one is incorrect.
In the most critical systems there are three parallel systems,
implemented on different platforms by multiple independent teams. The
results from the systems go through a voting process.

Unfortunately, it's not quite that simple. The votes can only
be as good as the data used to make the decisions. Now think
of two CPUs voting based on bad data and a third
voting based on the correct data.

/BAH
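
A minimal 2-out-of-3 voter, sketched in C, makes /BAH's point concrete. The function name and the sample readings are made up for illustration, not taken from any real fault-tolerant system; note that if two channels are fed the same bad value, they outvote the one correct channel.

#include <stdio.h>
#include <stdbool.h>

/* Vote on three candidate values.  Stores the majority value in *out and
 * returns true when at least two channels agree; returns false when all
 * three disagree (no quorum). */
static bool vote3(int a, int b, int c, int *out)
{
    if (a == b || a == c) { *out = a; return true; }
    if (b == c)           { *out = b; return true; }
    return false;
}

int main(void)
{
    int result;

    /* Channels a and b were both fed the same bad reading (42); channel c
     * has the correct value (17).  The voter picks the bad value --
     * exactly the failure mode described above. */
    if (vote3(42, 42, 17, &result))
        printf("majority value: %d\n", result);
    else
        printf("no quorum: all channels disagree\n");

    return 0;
}

Real voters typically also compare within a tolerance and flag the dissenting channel for maintenance, but the quorum logic is the same.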
 

jmfbahciv

JosephKK said:
JosephKK said:
JosephKK wrote:

JosephKK wrote:

JosephKK wrote:

Peter Flass wrote:
John Larkin wrote:
On Mon, 25 May 2009 16:26:50 -0400, Peter Flass

<snip>
Not very efficient, it would be a return to the bad old days of
master-slave
Which will cause every system to grind to a halt. Think about
networking.

or asymmetric multiprocessing.
Which will still require one CPU to be "boss".

The problem usually isn't
the CPU anyhow, it's memory corruption, and multiple cores don't solve
this problem.
The problem is scheduling. Memory corruption would be isolated to an
app address space, not the monitor's.

IFF the monitor is well designed and written. If MS develops it the
corruption will be coming from the monitor.
MS DOESN'T KNOW HOW TO DEVELOP!! That's the point I've been trying
to make. It's a distribution business and that is rooted deep in
its folklore.

/BAH
I see your point. Get some crap going just well enough to be useful
and pretend it is the second coming.
No. You don't see my point. Distribution has a completely different
set of problems that need to be solved. I worked long and hard
to try to solve the simplest of them (this implies that the simplest
were extremely complicated). When your business is distribution,
then the tradeoffs of all design decisions will be made in favor
of distribution and not anything else.

On-line distribution and support means that backdoors have to be
wide open.

<snip snot>

/BAH
I suspect you and I are talking past each other. Expand some more on
the distribution thing. Especially, clarify how it is different from
the marketeering thing. I sure do not follow you yet.
If I'm doing OS development and I need a UUO which gives me access to
the nether regions of all customer systems so I can distribute a file
FOO.EXE, my proposal would be turned down by TOPS-10 but accepted by NT.
Assume that FOO.EXE is an executable that is required to run on
all systems.

NT has to make the tradeoff decision because their business is to
distribute FOO.EXE while TOPS-10's is to provide general timesharing to
users without allowing disasters to happen. Allowing someone outside
the customer site to copy any file at any time to any place was
considered a security risk.

/BAH
A bit clearer. Make that 2 bits.
[relieved emoticon here] Good.
The anti-security consequences of
M$ design decisions are quite well documented. This gets closer to the
why.
If you look at them with the assumption of their primary business, it
gets even clearer.
And still they (M$) wonder why there are the likes of Wine. And the
growing willingness to lock them up in a VM.
None of their work, AFAICT, has enough paranoia. Putting pieces
of an app's context directly into the kernel would have given
us the willies.

/BAH

Not to mention their persistent willingness to let almost any
application mess with system internals. Makes my skin crawl.

Yep. If one can do it, any old virus code can do it.

/BAH
 

jmfbahciv

JosephKK said:
jmfbahciv said:
Andrew Swallow wrote:
JosephKK wrote:
On Mon, 01 Jun 2009 19:29:38 +0100, Andrew Swallow

Andrew Swallow wrote:
<snip>

You can complete your memory cycle so the data and instruction buses
are clear. The normal action when the watchdog timer for the OS core
goes off is to restart all the cores. Some debugging information
may be kept, but that is it.

<snip>
Andrew Swallow
I will not ever let you design any life and safety critical stuff that
i have anything to do with.
If it is that safety critical you have at least two chips and the
second one takes over.
But the second one is the one that is doing the I/O required
to "save" the data. [I'm using John's proposal of master/slave].

After the restart the chip may recover.

A watchdog timer on the entire chip going off means a total crash, an
infinite loop, or too small a timeout.

/BAH
High reliability = the 2nd CPU is a backup, not a slave.

Andrew Swallow

And just exactly how do you decide which CPU is faulty?

<grin> I can't remember how many hours we spent talking
about just this problem.

/BAH
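
For what it's worth, here is a tiny C simulation of the watchdog-plus-backup arrangement being argued about above. The tick counts and the "primary hangs at tick 4" stand-in are invented for illustration, and the sketch deliberately dodges the hard question /BAH raises (deciding which CPU is actually at fault) by assuming the CPU that stops kicking the watchdog is the broken one.

#include <stdio.h>

#define TIMEOUT_TICKS 3   /* missed kicks before the primary is declared dead */

int main(void)
{
    int missed = 0;
    int active = 0;                      /* 0 = primary CPU, 1 = backup CPU */

    for (int tick = 0; tick < 10; tick++) {
        /* Stand-in for the real system: pretend the primary hangs and
         * stops kicking the watchdog from tick 4 onward. */
        int primary_kicked = (tick < 4);

        if (primary_kicked)
            missed = 0;
        else
            missed++;

        if (active == 0 && missed >= TIMEOUT_TICKS) {
            active = 1;                  /* fail over to the backup CPU */
            printf("tick %d: watchdog expired, backup takes over\n", tick);
        }

        printf("tick %d: active CPU = %s\n", tick,
               active == 0 ? "primary" : "backup");
    }
    return 0;
}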
 

jmfbahciv

JosephKK said:
JosephKK said:
JosephKK wrote:

JosephKK wrote:

JosephKK wrote:

Peter Flass wrote:
Scott Lurndal wrote:
What you will see going forward is that the operating system(s) never
really touch the real hardware anymore; a VMM of some sort manages and
coordinates the hardware resources amongst the "operating system(s)",
while the operating systems are blissfully unaware and run applications
as they normally would.

We've seen this since CP-67 in, what, 1968? BTDT.

If the OS doesn't touch the hardware, then it's not the monitor,
but an app.

I think this one is currently an open ended argument. What do you
call an application that hosts hundreds of dynamically loaded user
applications?
A daemon.
That is way far from the usual definition of daemon. Check your
dictionaries.
Since we implemented a few of them, I know what the functionality
of our daemons were. You asked me what I would have called them.
I told you.
Yes you have. I basically come from the nuxi model.
Particularly when that application used to be an OS in
its own right?
Which one are you talking about? The emulators are running as
an app.
You are missing the boat here, in the current world there are several
cases of things like virtualbox, which run things like BSD, Solaris,
MSwin XP, Freedos, (as applications) and all their (sub)applications
"simultaneously" (time sharing, and supporting multiple CPU cores).
This would place it at the monitor level you have referenced.
No. Those are running as apps w.r.t. the computer system they are
executing on. Those apps will never (or should never) be running
at the exec level (what the hell does Unix call "exec level"?)
of the computer system. That is exclusively the address space
and instruction execution of the monitor (or kernel) running
on that system.
It is kernel space in the *nix world.
In the olden Unix world. I'm beginning to have some doubts based
on what's been written here. It looks like a lot of things
get put into the kernel which shouldn't be there (if I believe
everything I've been told).

Terminology is failing here.
It's not a confusion of terminology. It's more a confusion about
the software level at which a piece of code is executing. I run into
this confusion all the time. I think it's caused by people
assuming that Windows is the monitor. It never was.

MSwin never was much of a proper OS. Just remember that there are
more things claiming to be an OS besides Multics, RSTS, TOPS-10, VMS,
MVS, VM-CMS, Unix(es), and MSwin.
MS got Cutler's flavor of VMS and called it NT. They started out with
a somewhat [emoticon's bias alert here] proper monitor but spoiled
it when Windows' developers had to have direct access to the
nether parts of the monitor.

/BAH
Yep, just like they ruined Win 3.1 by insisting on embedding the 32-bit
mode within the GUI, and insisting on internals access.
More yet of the tiny BASIC mentality.
Nah. It got started a long time ago, when the founders of MS discovered
a listing of DAEMON.MAC (of TOPS-10) in a dumpster and believed they
had a listing of the TOPS-10 monitor when they read the comment
"Swappable part of TOPS-10 monitor". It was a user mode program
that ran with privs.

/BAH
Wishful thinking. They were not smart enough to recognize the value
of such a document, let alone understand it, even if they did find
such.
You are wrong. They were clever enough; they simply didn't take
enough time to learn about what they were using. I guesstimate
that one more month of study and they would have learned
how buffered-mode I/O should work.

Are you so very sure?

Yes, I'm sure.
They used DMA to read/write to the floppy in
the original PC. Used DMA again for the XT fixed disk as well.

Their training began before that.
Which history do you know? I have been watching computers since the
core days and even worked with core computers.
Unless I am mistaken, the DEC-10/PDP-10 used only silicon RAM.

Didn't matter. It is obvious that they didn't learn how the
-10 did buffered mode I/O. Even if these kids had only
written a MACRO-10 user mode program to read/write a disk
file using the arcane UUOs implemented back then, they would
have produced a much better DOS when they finally began to work.

/BAH
 

Anne & Lynn Wheeler

jmfbahciv said:
It doesn't matter how many humans are connected. When I was working,
I'd have 5-10 jobs running; in certain cases, a lot more. I was
a single user but I had the machine humming doing 10 "users'" worth
of work.

one of the traditional requirements for a time-sharing system ... besides
juggling multiple different tasks/jobs concurrently ... was sufficient
security to keep the different "users" protected from each other (and
external forces).

some of the current desktop machines had a design point of stand-alone
kitchen table operation ... where many of the applications took over
complete control of the whole machine ... the environment lacked any
defenses against hostile operation. some number of these evolved into a
multi-machine networked environment ... but again with the orientation
that they were operating in a non-hostile environment. a recent post on
the subject:
http://www.garlic.com/~lynn/2009h.html#28 Computer virus strikes US Marshals, FBI affected

above also mentions getting badgered into interviewing for
position of chief security architect in redmond.

lots of past posts mentioning (virtual machine based) commercial
timesharing service bureaus starting in the 60s
http://www.garlic.com/~lynn/submain.html#timeshare

one of the larger such operations was the internal HONE system providing
online support for world-wide sales & marketing.
http://www.garlic.com/~lynn/subtopic.html#hone

not commercial operations ... but still from the 60s requiring
high-level of security and defenses from possible attacks
http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

while I was doing lots of work on the software as undergraduate in the
60s ... and may have even gotten some requests from the vendor for
particular kind of changes that could have originated from these
particular customers ... i didn't actually learn about them until much
later.
 

jmfbahciv

Anne said:
one of the traditional requirements for a time-sharing system ... besides
juggling multiple different tasks/jobs concurrently ... was sufficient
security to keep the different "users" protected from each other (and
external forces).

This aspect was extremely important in my design to make release tapes.
some of the current desktop machines had a design point of stand-alone
kitchen table operation ... where many of the applications took over
complete control of the whole machine ... the environment lacked any
defenses against hostile operation. some number of these evolved into a
multi-machine networked environment ... but again with the orientation
that they were operating in a non-hostile environment. a recent post on
the subject:
http://www.garlic.com/~lynn/2009h.html#28 Computer virus strikes US Marshals, FBI affected

I never understood why this aspect was even considered, let alone
shipped.


<snip>

/BAH
 

Roland Hutchinson

John said:
JosephKK said:
JosephKK wrote:

JosephKK wrote:

JosephKK wrote:
JosephKK wrote:
Peter Flass wrote:
John Larkin wrote:
On Mon, 25 May 2009 16:26:50 -0400, Peter Flass

<snip>
Not very efficient, it would be a return to the bad old days
of master-slave
Which will cause every system to grind to a halt. Think about
networking.

or asymmetric multiprocessing.
Which will still require one CPU to be "boss".

The problem usually isn't
the CPU anyhow, it's memory corruption, and multiple cores
don't solve this problem.
The problem is scheduling. Memory corruption would be isolated
to an app address space, not the monitor's.

IFF the monitor is well designed and written. If MS develops it
the corruption will be coming from the monitor.
MS DOESN'T KNOW HOW TO DEVELOP!! That's the point I've been trying
to make. It's a distribution business and that is rooted deep in
its folklore.

/BAH
I see your point. Get some crap going just well enough to be
useful and pretend it is the second coming.
No. You don't see my point. Distribution has a completely different
set of problems that need to be solved. I worked long and hard
to try to solve the simplest of them (this implies that the simplest
were extremely complicated). When your business is distribution,
then the tradeoffs of all design decisions will be made in favor
of distribution and not anything else.

On-line distribution and support means that backdoors have to be
wide open.

<snip snot>

/BAH
I suspect you and I are talking past each other. Expand some more on
the distribution thing. Especially, clarify how it is different from
the marketeering thing. I sure do not follow you yet.
If I'm doing OS development and I need a UUO which gives me access to
the nether regions of all customer systems so I can distribute a file
FOO.EXE, my proposal would be turned down by TOPS-10 but accepted by
NT. Assume that FOO.EXE is an executable that is required to run on
all systems.

NT has to make the tradeoff decision because their business is to
distribute FOO.EXE while TOPS-10's is to provide general timesharing to
users without allowing disasters to happen. Allowing someone outside
the customer site to copy any file at any time to any place was
considered a security risk.

/BAH
A bit clearer. Make that 2 bits.
[relieved emoticon here] Good.

The anti-security consequences of
M$ design decisions are quite well documented. This gets closer to the
why.
If you look at them with the assumption of their primary business, it
gets even clearer.

And still they (M$) wonder why there are the likes of Wine. And the
growing willingness to lock them up in a VM.
None of their work, AFAICT, has enough paranoia. Putting pieces
of an app's context directly into the kernel would have given
us the willies.

/BAH

Not to mention their persistent willingness to let almost any
application mess with system internals. Makes my skin crawl.

Yep. If one can do it, any old virus code can do it.

/BAH

Or any device driver that comes with a $9 Chinese serial interface
board.

You are overpaying for your serial interface boards.

--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
.... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )
 

Patrick Scheible

jmfbahciv said:
Do you consider cutting and pasting from many files into one a "single
edit"? I don't. Do you consider having more than one file
displayed (like you guys describe when looking at
listings) a single edit? I don't.

No, working on independent things at the same time or close to the
same time. For instance, to work on the budget, I have open the
report on what we've already spent, separate reports for what we have
various degrees of commitment to but not yet spent, and what we'll
probably be spending for the rest of the year. What we've already
spent is read-only and cut and pasted out of, but the others are
read-write. At the same time, I have open other applications that are
watching for things -- appointments, e-mail, etc. And still other
applications that are idle enough to swap out.

-- Patrick
 

Rostyslaw J. Lewyckyj

John said:
The hardware is exactly identical. HP ProLiant servers, hot-plug, ECC
ram, redundant bios, redundant power supplies and fans, hot-plug RAID.
I bought 16 of them so all of our machines are the same. With about 20
machine-years so far, absolutely nothing has gone wrong.
16 machines (all bought at the same time?) and 20 machine-years so far
==> each machine averages only 20/16 = 1.25 years old, just over a year.
 

Peter Flass

John said:
I guess my point, coming full circle, is that a "big" OS has millions
of lines of code, written by hundreds of people, running at the "ring
0" level where any error crashes the whole system. A secure OS must be
a tiny chunk of code, absolutely insulated from damage by more complex,
less reliable stuff.

(I should tell today's story, the incredible horror that Kontron BIOSs
turned out to be...)



We can only hope. In the long term, software just isn't worth what
Microsoft is charging for it. If two or three people can write a
good-enough word processor, it needn't be a billion-dollar-a-year
franchise.



Oh, the IBM thing happened half a century ago.


Yup. If some cores were device controllers, and only they could do,
say, DMA or tcp/ip, some things like system-takeover buffer overflow
exploits might be made impossible.

John

Plus, a buggy device-driver would only take down one core, not the whole
system.
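
As a sketch of the "only dedicated cores touch the devices" idea John and Peter describe, the small C program below uses two threads to stand in for an application core and an I/O core. The queue size, the names, and the printf standing in for the device access are all illustrative assumptions, not any real OS interface; the application side can only post requests, and only the I/O side performs the operation.

/* build:  cc -pthread ioqueue.c */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define QUEUE_SLOTS 4

struct request { char data[32]; };

static struct request queue[QUEUE_SLOTS];
static int head, tail, count, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

/* Application "core": may only submit requests, never drive the device. */
static void submit(const char *msg)
{
    pthread_mutex_lock(&lock);
    while (count == QUEUE_SLOTS)
        pthread_cond_wait(&not_full, &lock);
    strncpy(queue[tail].data, msg, sizeof queue[tail].data - 1);
    queue[tail].data[sizeof queue[tail].data - 1] = '\0';
    tail = (tail + 1) % QUEUE_SLOTS;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

/* I/O "core": the only place that performs the actual device operation. */
static void *io_core(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !done)
            pthread_cond_wait(&not_empty, &lock);
        if (count == 0 && done) {
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        struct request r = queue[head];
        head = (head + 1) % QUEUE_SLOTS;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);

        printf("io core: wrote \"%s\" to the device\n", r.data);
    }
}

int main(void)
{
    pthread_t io;
    pthread_create(&io, NULL, io_core, NULL);

    submit("packet 1");
    submit("packet 2");

    pthread_mutex_lock(&lock);
    done = 1;                          /* tell the I/O side we are finished */
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
    pthread_join(io, NULL);
    return 0;
}

The payoff of the split is that application code has no path to the device at all; the worst it can do is post a malformed request, which the I/O side can validate before acting on it.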
 

Peter Flass

John said:
OK, explain it.



You keep saying that. You don't explain why.



I read the entire RSTS/E listing, understood it all (except the Basic+
compiler, which DEC didn't write) and added my own disk driver. Did
you do anything like that?

I wrote three RTOSs from the ground up. Ditto?

I can't snip this very well, sorry. The requirements for an RTOS are
very different from those of most other systems. The tasks are carefully
designed to run in bounded time, so that round-robin scheduling is
possible and even desirable. Other systems run a random collection of
interactive stuff and CPU hogs, which are often buggy.
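
A minimal C sketch of what "bounded-time tasks plus round-robin scheduling" can look like; the task names and bodies are placeholders, not any real RTOS. Because every task returns promptly, the worst-case cycle time is just the sum of the individual task bounds, which is what makes the schedule predictable.

#include <stdio.h>

typedef void (*task_fn)(void);

/* Each task does a small, bounded amount of work and returns promptly. */
static void poll_sensor(void)    { puts("poll sensor"); }
static void update_display(void) { puts("update display"); }
static void service_uart(void)   { puts("service uart"); }

static task_fn tasks[] = { poll_sensor, update_display, service_uart };

int main(void)
{
    const int ntasks = (int)(sizeof tasks / sizeof tasks[0]);

    /* A real RTOS loops forever, usually paced by a timer tick; three
     * rounds are enough to show the fixed, predictable ordering. */
    for (int round = 0; round < 3; round++)
        for (int i = 0; i < ntasks; i++)
            tasks[i]();

    return 0;
}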
 