Maker Pro

My Vintage Dream PC

jmfbahciv

John said:
Why should it be? All it would do is set up memory management and
schedule tasks, so it wouldn't be very busy.
HUH?!

Things like file systems
and device drivers and network ports could have their own CPUs.

NONONONO. You are stuck in single-user/system thinking which
is never going to be equivalent to single-task/system.
If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.

Wrong. Actually, that is so wrong it's not even wrong.

Think about email and networking. There shouldn't be 500
copies of that software running if there are 500 emails.

The simpler it is, the less likely that it can crash. In fact, the OS
core should never crash

that has always been the goal. However, when you have something
catastrophic happening, the sane thing to do is stop the whole
system before something really, really bad happens to the bits
on the disk.
and viruses should be flat impossible.

Windows has become a major threat to national security. We need a new
approach.

Getting rid of small computer thinking would be a beginning.

/BAH
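
The 500-emails point above is the classic shared-daemon design: one
service process answers every request in turn, instead of the system
spawning a copy of the mail software per email. A minimal sketch,
assuming a Unix box; the port number and greeting text are invented
for illustration.

/* One copy of the software, many emails: a single accept() loop
 * serves every client. Port 2525 is an arbitrary choice. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(2525);

    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) != 0 ||
        listen(srv, 16) != 0) {
        perror("bind/listen");
        return 1;
    }
    for (;;) {                        /* one process, every email */
        int c = accept(srv, NULL, NULL);
        if (c < 0)
            continue;
        const char msg[] = "220 one copy, at your service\r\n";
        write(c, msg, sizeof msg - 1);
        close(c);
    }
}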
 
jmfbahciv

Richard said:
Not sure what you're referring to. IIRC the overall machine was
simply called the Texas Instruments Silent 700 Electronic Data
Terminal. I don't recall the acoustic modem and thermal printer
having any special designation.

We had a one-word name for it. I simply can't recall the word.

/BAH
 
jmfbahciv

Peter said:
The other way 'round. If you dedicated a core to the OS, it would have
to single-thread. If any core can execute any thread the OS can get
whatever it needs. It's tempting to just dedicate something, but OS
developers decided years ago that the more scheduling flexibility you
have, the better.

Sigh! Now consider what happens when a cosmic ray hits the
core running the OS.

/BAH
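
On today's hardware, "dedicating a core" means setting CPU affinity.
A minimal sketch, assuming Linux (core 0 is an arbitrary pick); it
also shows why the cosmic-ray objection bites: the pinned thread has
nowhere else to go.

/* Pin the calling thread to core 0 -- the mechanism a dedicated
 * OS-core design would rely on. Linux-specific. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                     /* core 0 only */

    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("confined to core 0; if that core dies, so do we\n");
    return 0;
}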
 
jmfbahciv

Ahem said:
Wheee it's MP/M all over again.
He's reinventing what didn't work well at all. Furthermore,
it is extremely insecure to insist that the computer system have
a single point of failure that includes the entire running
monitor.

/BAH
 
Roland Hutchinson

Peter said:
As long as it isn't filled with Trollops booted from CraigsList.

My PeeCee won't boot from CraigsList.

Please, what am I doing wrong?

--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
.... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )
 
Roland Hutchinson

jmfbahciv said:
NONONONO. You are stuck in single-user/system thinking which
is never going to be equivalent to single-task/system.

Wrong. Actually, that is so wrong it's not even wrong.

Think about email and networking. There shouldn't be 500
copies of that software running if there are 500 emails.

that has always been the goal. However, when you have something
catastrophic happening, the sane thing to do is stop the whole
system before something really, really bad happens to the bits
on the disk.

Getting rid of small computer thinking would be a beginning.

How about a class action suit against Microsoft for fraudulently selling a
product unsuitable for the use advertised and trying to cover their
posteriors with an unconscionable contract (the EULA).

I don't understand the law sufficiently to know why this isn't feasible; for
surely if it were feasible someone would have tried by now.

--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
.... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )
 
Peter Flass

John said:
For Pete's sake, timeshare systems ran hundreds of sometimes-malicious
users, ran for weeks at a time, and did good stuff with a megabyte of
memory and one VAX MIP. It was flat impossible for something like a
user buffer overflow exploit to take over the system, because sensible
hardware and OS protections walled off the applications.

That's because VMS wasn't written in C, or you'd have had more than a few
buffer overruns.
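
For anyone who hasn't met the failure mode Peter means, a minimal
sketch of the classic unchecked-buffer bug C makes easy (the names
are invented; the one-line fix is in the comment):

/* strcpy() does no bounds checking: input longer than the buffer
 * overwrites whatever sits next to it on the stack, historically
 * the saved return address -- the hook every overflow exploit
 * hangs from. */
#include <stdio.h>
#include <string.h>

static void greet(const char *name)
{
    char buf[16];
    strcpy(buf, name);  /* BUG: no length check */
    /* fix: snprintf(buf, sizeof buf, "%s", name); */
    printf("hello, %s\n", buf);
}

int main(void)
{
    greet("world");     /* fine; a 500-char name smashes the stack */
    return 0;
}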
 
Rich Grise

Roland Hutchinson said:
I don't understand the law sufficiently to know why this isn't feasible; for
surely if it were feasible someone would have tried by now.

They tried it on IBM some decades ago - ISTR a similar deal vis-a-vis
MICRO$~1 - and they simply buried the court in mountains of paperwork;
by the time it had all been read, the whole thing had become moot
anyway.

Cheers!
Rich
 
Scott Lurndal

Peter Flass said:
That's because VMS wasn't written i C, or you'd have had more than a few
buffer overruns.

VMS has this facility called 'install', which allowed an application to be
configured to run with privileges elevated beyond the user's own privileges.

Early versions of the DEBUG utility didn't bother to reduce the privileges,
so debugging an application which was installed with change mode to KERNEL
privileges would allow anyone complete access to the operating system and
hardware.

So, not even VMS was immune to "exploits to take over the system".

scott
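
The nearest Unix relative of VMS INSTALL is the set-uid bit, and the
DEBUG hole Scott describes has the same shape there. A minimal
sketch, assuming a Unix box (to see the effect, build it and then:
chown root prog; chmod u+s prog):

/* A set-uid binary runs with its owner's privileges, not the
 * caller's -- the Unix analogue of INSTALL. Early VMS DEBUG's
 * mistake was never dropping that privilege. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("real uid %d, effective uid %d\n",
           (int)getuid(), (int)geteuid());

    /* the step early DEBUG skipped: give the privilege back
     * before doing anything on the user's behalf */
    if (setuid(getuid()) != 0)
        perror("setuid");
    printf("after dropping: effective uid %d\n", (int)geteuid());
    return 0;
}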
 
Walter Bushell

John Larkin said:
In the DOS days, there was a joke going around that some jpeg file
included a virus. Rising to the challenge, Microsoft actually made
that possible, and it really happened.

Not only _that_ but made it possible for word processing documents to
spread viruses. Truly brilliant, giving Word macros the ability to do
low-level IO.
 
Christopher C. Stacy

jmfbahciv said:
We had a one-word name for it. I simply can't recall the word.

Oh, you know what? This rings a bell.
Now I can be frustrated, too!
 
jmfbahciv

Walter said:
jmfbahciv said:
On Mon, 25 May 2009 16:26:50 -0400, Peter Flass

John Larkin wrote:
The ultimate OS should maybe be hardware, FPGA probably, or an
entirely separate processor that runs nothing but the OS.

CDC-6600.
In a few years, when most any decent CPU has 64 or so cores, I suspect
we'll have one of them run just the OS. But Microsoft will f*** that
up, too.

John
Why only one? Surely the kernel will be multithreaded.
You meant to say reentrant.

/BAH

Well that too.

Not "too" but first.

/BAH
 
jmfbahciv

John said:
Times have changed, guys.

Not really. The proportions of speed seem to remain the same.
When cpu chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess,... is that going to make things more reliable?

First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you equate 4 threads/core?
Furthermore,

Fewer points of failure must be better than many points.

You need to think some more. If the single-point failure is
the monitor, you have no security at all.
Few security
vuls must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.

It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.
Its MTBF, hardware and software, could
be a million hours.


That doesn't matter at all if the monitor is responsible for the
world power grid.


Hardware basically doesn't break any more; software does.

That is a very bad assumption. You need soft failovers.
Hardware can't take water, nor falling into a fault caused
by an earthquake or a bomb or a United Nations quarantine
[can't think of the word where a nation or group are
declared undesirable].
The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSs, and kill/restart them as they
fail. So why not cut out the middlemen?

Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.

/BAH
 
jmfbahciv

John said:
The timesharing folks fixed those sorts of problems decades ago.

I know. I was one of them.
The
file system prevented applications from using too much disk or the
wrong parts of disk.

For Pete's sake, timeshare systems ran hundreds of sometimes-malicious
users, ran for weeks at a time, and did good stuff with a megabyte of
memory and one VAX MIP. It was flat impossible for something like a
user buffer overflow exploit to take over the system, because sensible
hardware and OS protections walled off the applications.

So that sort of stability is impossible, 20 years later?

Yes, because all of that knowledge has been forgotten. These days
people think in single-user/owner terms.
See above. Windows and Intel are "small computer thinking."

You have some too. :)

/BAH
 
jmfbahciv

John said:
Why can't Wintel understand I and D space separation? DEC memory
management had page attributes that included execute-only, read-only
data, r/w data, and stack. Pages also had user and exec attributes. If
you did the wrong thing, it trapped. The compilers and loaders were
smart enough to put each thing in its own space.

Microsoft seems eager to execute anything. Why do their compilers
indiscriminately mix stacks, buffers, data, and executable code? Maybe
I'm missing something, but it seems to me that even a decent linker
strategy could keep buffers and stacks from growing on top of code
space.

In the DOS days, there was a joke going around that some jpeg file
included a virus. Rising to the challenge, Microsoft actually made
that possible, and it really happened.

Because it's the folklore to not do this. I already wrote about
this a couple days ago. It's easy for me to see why this stuff
happened (or doesn't happen). I don't understand why others here
don't seem to be able to see it.

/BAH
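
The page attributes John describes above survive in modern systems as
per-page protection bits. A minimal sketch using the POSIX mprotect
call (the trap he mentions arrives as SIGSEGV):

/* Map a page read/write, fill it, then revoke the write
 * permission. A later store would trap, just as DEC hardware
 * trapped on a wrong-attribute access. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;

    strcpy(page, "read-only data");
    mprotect(page, pagesize, PROT_READ);   /* write permission gone */

    printf("%s\n", page);                  /* reads still work */
    /* page[0] = 'X';   <- would trap (SIGSEGV), like a DEC
     *                     read-only page attribute */
    munmap(page, pagesize);
    return 0;
}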
 
jmfbahciv

Scott said:
VMS has this facility called 'install', which allowed an application to be
configured to run with privileges elevated beyond the user's own privileges.

Early versions of the DEBUG utility didn't bother to reduce the privileges,
so debugging an application which was installed with change mode to KERNEL
privileges would allow anyone complete access to the operating system and
hardware.

So, not even VMS was immune to "exploits to take over the system".

Have EDDT, will travel.

/BAH
 
jmfbahciv

JosephKK said:
IFF the monitor is well designed and written. If MS develops it, the
corruption will be coming from the monitor.

MS DOESN'T KNOW HOW TO DEVELOP!! That's the point I've been trying
to make. It's a distribution business and that is rooted deep in
its folklore.

/BAH
 
jmfbahciv

JosephKK said:
Let's try this another way.

Good because I had no idea how to start. Thanks.
Each user program sees file IO and
usually has some kind of terminal IO.

Inside the OS there are usually a switcher, a scheduler, a process
manager, a file manager, an IO manager, and other basic parts.
Optional parts relate to hardware configuration and possibly dynamic
hardware configuration and temporary file systems.

Now, how do UUOs and CALLIs relate to the above-mentioned
interfaces? (If at all.)

My user mode code has some buffers I want to be written to the
disk. I do a series of UUOs and CALLIs to convey to the
monitor's file system handler, memory handler, device routines,
and controller device routines that I want my bits to be copied
from here to there and labelled XYZ. I ask the monitor what
date/time it thinks it is by doing a CALLI. I set up certain
rules about subsequent usage of the computer system by telling
the monitor through CALLIs and UUOs. These are the only
ways the monitor and I, the user, communicate.

How is that for a start?

/BAH
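
For anyone who never used a PDP-10: the same conversation with the
monitor survives nearly unchanged as Unix system calls. A minimal
sketch of the example above (the filename XYZ is just /BAH's label):

/* Each call below is a trap into the monitor, the way UUOs and
 * CALLIs were on the PDP-10: copy my bits from here to there and
 * label them XYZ, then tell me what time you think it is. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    char buffer[] = "bits I want copied from here to there";

    int fd = open("XYZ", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;
    write(fd, buffer, sizeof buffer - 1);  /* the "write my buffer" UUO */
    close(fd);

    time_t now = time(NULL);               /* the "what time is it" CALLI */
    printf("the monitor thinks it is %s", ctime(&now));
    return 0;
}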
 