Maker Pro

My Vintage Dream PC

Ahem A Rivet's Shot

The current ones have sufficient problems running multiple processes
(users) simultaneously in the same program.

One current (and very popular) OS has this problem - most current
OSs do not.
 
Peter Flass

TheQuickBrownFox said:
My first scientific calculator was a Commodore. I never thought much
of the game console/computer however. Is that not what became the Amiga?
That actually was a nice piece of gear. The (live) music industry loved
it (so did the studios).

I believe at one point a company in Germany licensed it and sold some.
I also heard something about an upgraded version using a PowerPC chip.
 
Peter Flass

FatBytestard said:
That is the answer to "What have you heard?" You're an idiot.

I have been running it for nearly a year now, and I ran Vista trouble-free
for the last three.

If by "trouble-free" you mean slow and annoying, then yes. I can't say
it has crashed on my Wife's laptop.
 
Peter Flass

FatBytestard said:
He was talking about their next OS, you dippy twit.

Too late, their "next" OS is already in the hands of the customers,
which means nothing in it will be fixed. It has ever been thus.
 
Peter Flass

Mensanator said:
Wanna see what happens when fucktards get
their hands on Linux? It's called Red Hat.
I can't wait to see what MS-Linux is like.

What makes you think they haven't got it working? They took a pretty
standard OS (NT) and added their GUI to it. Apple did the same with
Mach. With all the people working in Redmond, I wouldn't be surprised
if there is a running version in a lab somewhere. What advantage
would they get out of selling it?
 
jmfbahciv

John said:
On any solid CPU design, with ECC RAM, you can run hardware
diagnostics for hundreds of unit-years and never see an error.

Those diags are software.
Try that same test running an OS.

The monitor will find some. Our monitors were the best diags we
had.
Engineers have figured out how to do their part right. Programmers, in
general, haven't.

And the monitor is software. It will have to be able to recover
from a blip, especially if it's in a sensitive spot.

/BAH
 
jmfbahciv

John said:
There's nothing wrong with shared memory, as long as the kernel has
absolute control over mappings. That must of course include all DMA
transfer privileges.

But the kernel CPU *knows* about everything; it just doesn't have to
*do* everything.

It cannot know about everything unless the other CPUs tell it. How
will they tell it? That's what I'm trying to get you to think about.

/BAH
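
To make /BAH's question ("How will they tell it?") concrete, here is a minimal sketch in C of one conventional answer: a per-worker mailbox in memory both sides can see, polled by the supervisor core. Every name, size, and message type below is an invented illustration rather than anything proposed in the thread; it also shows where the serialization /BAH keeps pointing at comes from, since every report funnels through one reader.

/* Minimal sketch: per-worker mailboxes polled by a supervisor core.
 * All names, sizes, and message types are assumptions for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <stdatomic.h>

enum msg_type { MSG_IDLE, MSG_TASK_DONE, MSG_PAGE_FAULT, MSG_IO_REQUEST };

struct mailbox {
    _Atomic uint32_t full;     /* 1 = worker has posted, supervisor must read   */
    uint32_t         type;     /* one of msg_type                               */
    uint64_t         payload;  /* task id, faulting address, device number, ... */
};

#define NWORKERS 64
static struct mailbox boxes[NWORKERS];  /* shared between workers and supervisor */

/* Worker side: post an event, then wait until the supervisor has read it.
 * This is the funnel /BAH is pointing at: every report serializes here. */
static void worker_post(int cpu, uint32_t type, uint64_t payload)
{
    while (atomic_load(&boxes[cpu].full))  /* previous message not consumed yet */
        ;                                  /* the worker stalls                 */
    boxes[cpu].type = type;
    boxes[cpu].payload = payload;
    atomic_store(&boxes[cpu].full, 1);
}

/* Supervisor side: scan every worker and handle one message at a time. */
static void supervisor_poll(void)
{
    for (int cpu = 0; cpu < NWORKERS; cpu++) {
        if (atomic_load(&boxes[cpu].full)) {
            printf("cpu %d reports type %u, payload %llu\n",
                   cpu, boxes[cpu].type, (unsigned long long)boxes[cpu].payload);
            atomic_store(&boxes[cpu].full, 0);
        }
    }
}

int main(void)
{
    worker_post(3, MSG_TASK_DONE, 42);  /* pretend CPU 3 finished task 42 */
    supervisor_poll();                  /* the supervisor learns about it */
    return 0;
}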
 
jmfbahciv

John said:
John said:
John Larkin wrote:

Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700

If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.

He's reinventing what didn't work well at all.
Times have changed, guys.
Not really. The proportions of speed seem to remain the same.

When cpu chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess,... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you equate 4 threads/core?

Furthermore,
it is extremely insecure to insist the computer system have
a single-point failure which includes the entire running
monitor.

Fewer points of failure must be better than many points.
You need to think some more. If the single-point failure is
the monitor, you have no security at all.

Few security
vuls must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.

Its MTBF, hardware and software, could
be a million hours.

That doesn't matter at all if the monitor is responsible for the
world power grid.



Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't take water nor falling into a fault caused
by an earthquake or a bomb or a United Nations quarantine
[can't think of the word where a nation or group are
declared undesirable].

The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSs, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.

/BAH

Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.
You did not read what I wrote. Those virtual OS spaces you were talking
about are applications w.r.t. the monitor which is running.
How can an OS ever be reliable when twelve zillion Chinese video card
manufacturers are hacking device drivers that run in OS space?
You seem to be confusing OSes with monitors.
The top-level OS should be small, simple, absolutely in charge of the
entire system, totally protected, and never crash.

Why not?
If it's absolutely in charge of the entire system, then it has to
be able to access all of hardware, including the other cores. This
implies that some kind of comm protocol and/or pathway has to go
from all those other cores to the master core. This sucks w.r.t.
performance. The system will run only as fast as the master core
can send/receive bits. All other cores will constantly be in
"master-core I/O wait".

/BAH

All the kernel has to do is set up privileges and load tasks into
processors.

How does it do that? And how will it schedule itself? What about the
priority interrupts that will interrupt its own processing?
It should of course do no applications processing itself
(which would imply running dangerous user code) and has no reason to
move application data either.

Who does the moving of an app and its data from one CPU to another?
Stack driver CPUs, device driver CPUs,
file system CPUs do the grunt work, in their own protected
environments. There can be one shared pool of main memory for the apps
to use,

Which CPU will "own" that shared pool? What if the shared pool has
to be exclusive to a few apps and kept from the other apps?
as long as access mapping is safe.

This is why your proposal won't work well. Making that assumption
is not a Good Idea. The access mapping has to be done by all
CPUs. So where does the code that does this reside? In your
proposal, it should be on the CPU where the kernal is running...but
it can't because the other CPUs have to be able to read/write the
common pool of memory. So do they have to submit a request to
the Boss CPU for the data? Now you have a situation where the whole
system is in wait mode for each and every bit of that common pool.
When you include networking and peripherals in the mix, you would
get better answers using a Chinese calculator (I cannot think of the
word for this one today).
Memory can be shared as needed for processes to communicate. But
nobody can trash the kernel.

You can trash the kernel if the data is just right.

/BAH
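
To put /BAH's "wait mode for each and every bit" objection into code, here is a toy C comparison, under invented names and sizes, of the two ways the shared pool could be handled: the boss CPU proxying every access versus the boss granting a mapping once and staying out of the data path. It is a sketch of the tradeoff being argued, not anyone's actual design.

/* Toy comparison (all names and sizes assumed): boss-proxied access versus
 * a one-time grant.  In the proxied scheme every byte costs a boss round
 * trip, which is exactly the serialization /BAH objects to. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define POOL_SIZE (1u << 20)            /* 1 MiB shared pool (assumed)      */
static uint8_t pool[POOL_SIZE];

static unsigned long boss_round_trips;  /* how often the boss is disturbed  */

/* Scheme 1: the boss proxies every access. */
static uint8_t proxied_read(size_t offset)
{
    boss_round_trips++;                 /* one request/response per byte    */
    return pool[offset];
}

/* Scheme 2: the boss grants a window once, then the worker reads directly. */
struct grant { uint8_t *base; size_t length; };

static struct grant boss_grant(size_t offset, size_t length)
{
    boss_round_trips++;                 /* a single policy check and map    */
    struct grant g = { pool + offset, length };
    return g;
}

int main(void)
{
    uint8_t buf[4096];

    boss_round_trips = 0;
    for (size_t i = 0; i < sizeof buf; i++)
        buf[i] = proxied_read(i);
    printf("proxied: %lu boss round trips for %zu bytes\n",
           boss_round_trips, sizeof buf);

    boss_round_trips = 0;
    struct grant g = boss_grant(0, sizeof buf);
    memcpy(buf, g.base, g.length);      /* the worker touches memory itself */
    printf("granted: %lu boss round trip(s) for %zu bytes\n",
           boss_round_trips, g.length);
    return 0;
}

The second scheme is why the argument keeps coming back to who programs the mappings rather than who copies the bytes: once the grant is installed, the boss is no longer in the path at all.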
 
jmfbahciv

John said:
I once wrote an unauthorized disk driver for the RSTS/E timeshare
system.

RSTS wasn't really a timesharing system.
Booting was the really nasty part. The drives and controller
were a fraction of the price of equivalent DEC hardware.

I've written three RTOSs, two compilers, and a couple of hundred
realtime/embedded product apps, and designed the hardware too.

I even designed a CPU from the ground up, two boards of MSI TTL logic.
It was used in shipboard datalogging. It had a 20 KHz clock.

None of this made you intimate with a timesharing system that had
enormous demands for instant gratification. I wasn't trying to
denigrate your experience at the hardware level. I was trying
to point out that a real timesharing OS had different tradeoffs.
I was named in the source of FOCAL-11, for donating about 9 lines of
code. Top that!

One current project is an instrument controller, using a Kontron
Mini-ITX/Linux board talking PCIe to an FPGA and a heap of very fast
analog stuff managed by a VLIW microengine inside the FPGA. Well, the
instructions are long (48 bits) but we really only have two opcodes.

And I think Windows is a piece-o-crap.

For delivering computing services in a timely and secure manner, it
is. For distributing bits from a central place, it's not. However,
the functionality that its customers need and want swamps MS's
support and development infrastructure. So the distribution software
efforts are mainly spent trying to block up the holes created by
the last changes to block those holes. IOW, the efforts have to be
made on distribution problems, not the other computing problems
that are normal to any OS.

/BAH
 
jmfbahciv

John said:
You are playing with words. An OS should be hierarchical, with the
top-most thing (what I call the kernel) being absolutely in charge of
the system.

But it cannot have that kind of control with more than [number picked
out of the air as a guesstimate] 8 threads or processes or services.
It could, in my opinion should, run on a dedicated CPU.

It is impossible to do this without allowing the other CPUs to be
able to make their own decisions about what they're processing.
It
should be as small as practical, because as programs get bigger they
get less reliable. The kernel could and should be written by one
person. Its only function is to manage resources and other processors.
It should not share time with applications, drivers, GUIs, or anything
else. It might be multithreaded but really has no need to be, because
it won't be working very hard.

then it won't be in complete control of the system (if it's not
working very hard).

Other CPUs can, under tight resource control,

You just told me that resources won't be visible to the kernel. So
the Boss CPU cannot be in control of the whole system.
do i/o, file systems,
TCP/IP stacks, GUIs, and user apps. The system design should
absolutely control their access to resources, and it should be
physically impossible for them to corrupt the kernel.

None of this is especially hard to do. It would take conscious design,
not the sort of evolutionary stumbling that has produced most modern
operating systems.

What don't you like about that?

It has nothing to do with my likes nor dislikes. What I'm talking
about is what can and cannot work well.
What do you admire about Windows?
[emoticon stops to think of something]

[20 minutes later, emoticon gives up]

There has to be something but I can't think of one.

/BAH
 
jmfbahciv

Scott said:
I believe in his fractured way, he was suggesting that with a single CPU,
only one thread can execute at a time (absent hyperthreading), so technically,
even with multiple processes, only one will execute at a time.

Oh, I see. Why bother having even two, then?
A non sequitur, to be sure.

The terms task, thread, process, et al. also mean different things to
people from different OS backgrounds

I understand this. I have been doing the translations.
- many mainframe OSs didn't use
the term process as a "container for one or more executable threads",
for instance.

It depended on what the context of the conversation was.



/BAH
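
For readers coming from other OS backgrounds, here is a minimal C/pthreads sketch of the "process as a container for one or more executable threads" usage being discussed: one process, one shared address space, several schedulable threads. The thread count and names are arbitrary, chosen only to illustrate the terminology.

/* Minimal sketch of "process = container for threads" (counts are arbitrary). */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static int shared_counter;                      /* one address space, seen by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long id = (long)arg;
    pthread_mutex_lock(&lock);
    shared_counter++;                           /* every thread updates the same variable */
    printf("thread %ld running inside the same process\n", id);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)         /* one process, NTHREADS schedulable threads */
        pthread_create(&tids[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);

    printf("counter = %d (one container, %d executable threads)\n",
           shared_counter, NTHREADS);
    return 0;
}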
 
jmfbahciv

John said:
What's wrong with waiting, when you have 1024 CPU cores available, and
the limit is thermal? We've got to stop worshipping CPUs and start
worshipping reliability.

What is wrong with it is that 1023 CPUs are idle while waiting for the
1024th to give them something to do.

/BAH
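
A back-of-the-envelope model of that objection, with invented numbers: if one supervisor core hands out all the work, Little's law says only about task_time/dispatch_time workers can ever be busy at once, no matter how many cores exist.

/* Toy model (all numbers assumed): one supervisor dispatching all the work. */
#include <stdio.h>

int main(void)
{
    double dispatch_us = 2.0;     /* supervisor time to hand out one task (assumed) */
    double task_us     = 50.0;    /* how long a worker runs one task (assumed)      */

    /* The supervisor starts at most one task per dispatch_us, so at most
     * task_us/dispatch_us workers can be busy at any instant. */
    double busy_limit = task_us / dispatch_us;

    for (int cores = 64; cores <= 1024; cores *= 2) {
        double busy = cores < busy_limit ? cores : busy_limit;
        printf("%5d cores: about %.0f busy, %.0f idle in supervisor wait\n",
               cores, busy, cores - busy);
    }
    return 0;
}

With these made-up numbers only about 25 cores can ever be doing work; the rest sit in exactly the wait /BAH describes.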
 
jmfbahciv

John said:
John said:
John Larkin wrote:

John Larkin wrote:

Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700

If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.

He's reinventing what didn't work well at all.
Times have changed, guys.
Not really. The proportions of speed seem to remain the same.

When cpu chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess,... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you equate 4 threads/core?

Furthermore,
it is extremely insecure to insist the computer system have
a single-point failure which includes the entire running
monitor.

Fewer points of failure must be better than many points.
You need to think some more. If the single-point failure is
the monitor, you have no security at all.

Few security
vuls must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.

Its MTBF, hardware and software, could
be a million hours.

That doesn't matter at all if the monitor is responsible for the
world power grid.



Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't take water nor falling into a fault caused
by an earthquake or a bomb or a United Nations quarantine
[can't think of the word where a nation or group are
declared undesirable].

The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSs, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.

/BAH

Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.
You did not read what I wrote. Those virtual OS spaces you were talking
about are applications w.r.t. the monitor which is running.
How can an OS ever be reliable when twelve zillion Chinese video card
manufacturers are hacking device drivers that run in OS space?
You seem to be confusing OSes with monitors.

The top-level OS should be small, simple, absolutely in charge of the
entire system, totally protected, and never crash.

Why not?

If it's absolutely in charge of the entire system, then it has to
be able to access all of hardware, including the other cores. This
implies that some kind of comm protocol and/or pathway has to go
from all those other cores to the master core. This sucks w.r.t.
performance. The system will run only as fast as the master core
can send/receive bits. All other cores will constantly be in
"master-core I/O wait".

/BAH
All the kernel has to do is set up privileges and load tasks into
processors. It should of course do no applications processing itself
(which would imply running dangerous user code) and has no reason to
move application data either. Stack driver CPUs, device driver CPUs,
file system CPUs do the grunt work, in their own protected
environments. There can be one shared pool of main memory for the apps
to use, as long as access mapping is safe.

Memory can be shared as needed for processes to communicate. But
nobody can trash the kernel.

What's wrong with that?

Even Intel is headed in that direction: many (predicted to be
hundreds, maybe thousands) multi-threaded cores on a chip,
surrounding a central L2 cache that connects to DRAM and stuff, each
with a bit of its own L1 cache to reduce bashing the shared L2 cache.

Does anybody think we're going to keep running thirty-year-old OS
architectures on a beast like this?
I, for one, think we are going to do just that -- which is not the same
thing as saying that I think it would be a good idea.

Maybe you're right. How depressing.
and then the same kind of OS evolution we went through will happen again.

/BAH
 
jmfbahciv

TheQuickBrownFox said:
Those two parameters are mutually exclusive


This one excludes the first two entirely.


NOTHING is totally protected.


Because folks and bad guys hack at systems. But you already knew that.

The OS should be AWAY from the user.

You just got through posting that this is an old-fogey concept.

/BAH
 
jmfbahciv

FatBytestard said:
The things you post here prove that 1960 is about when your education in
computer science ended as well.

You are petty, and pathetic at the same time.

Again, you demonstrate that you don't know what you're talking about.

/BAH
 
jmfbahciv

JosephKK said:
No, not really. But there is a lot of old cluster technology that is
about to become very useful again. Then again, why bother with tens,
let alone hundreds, of processors? The humans using them cannot keep up
with them. Why would merely wasting more CPU cycles be better? A Core 2 Duo
can support all the eye-heroin available.

You are thinking about PC owners. Now think about all the systems whose
functionality purposely excludes the touch of human hands. Morten's
talking about those.

/BAH
 
jmfbahciv

JosephKK said:
It was the early 1990s when I got to make the tradeoffs between both
types. I had to deal with a mix of platforms and users on the
Corporate net. Very instructive.

Yep :). Now think about providing it all with one, and only
one, system. :)

/BAH
 
jmfbahciv

John said:
My PC is currently running 399 threads, under Win XP. The CPU manages
to manage all 399, and run them, too. So why wouldn't one CPU be able
to manage hundreds of processors when that's all it has to do?
Because your PC isn't actively running all of those threads; those
threads are not simultaneously asking the Boss CPU for attention.

/BAH
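
A toy illustration of /BAH's point, with an invented runnable fraction: of the hundreds of threads a desktop OS reports, nearly all are blocked on I/O, timers, or events at any instant, so only a handful are actually asking the CPU for anything.

/* Toy illustration: how many of 399 threads are runnable at one instant.
 * The 2% runnable fraction is an assumption, not a measurement. */
#include <stdio.h>
#include <stdlib.h>

enum state { RUNNABLE, BLOCKED };

int main(void)
{
    const int nthreads = 399;        /* the figure quoted in the post         */
    const double p_runnable = 0.02;  /* assumed: ~2% of threads ready at once */
    int runnable = 0;

    srand(1);
    for (int i = 0; i < nthreads; i++) {
        enum state s = ((double)rand() / RAND_MAX) < p_runnable ? RUNNABLE : BLOCKED;
        if (s == RUNNABLE)
            runnable++;
    }
    printf("%d threads total, about %d runnable right now;\n"
           "the other %d are not asking the CPU for anything.\n",
           nthreads, runnable, nthreads - runnable);
    return 0;
}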
 
jmfbahciv

JosephKK said:
I think this one is currently an open-ended argument. What do you
call an application that hosts hundreds of dynamically loaded user
applications?

A daemon.
Particularly when that application used to be an OS in
its own right?

Which one are you talking about? The emulators are running as
an app.
Terminology is failing here.

It's not a confusion of terminology. It's more a confusion of
the software level at which a piece of code is executing. I run into
this confusion all the time. I think it's caused by people
assuming that Windows is the monitor. It never was.


/BAH
 