Ahem A Rivet's Shot said:
Can they bore an 8 inch hole through a 3 inch thick steel plate?
Thermite is probably the cheapest way of doing that.
Can they bore an 8 inch hole through a 3 inch thick steel plate?
The current ones have sufficient problems running multiple processes
(users) simultaneously in the same program.
TheQuickBrownFox said:My first scientific calculator was a Commodore. I never thought much
of the game console/computer however. Is that not what became the Amiga?
That actually was a nice piece of gear. The (live) music industry loved
it (so did the studios).
FatBytestard said:That is the answer to "What have you heard?" You're an idiot.
I have been running it for nearly a year now, and I ran Vista trouble-free for the three years before that.
FatBytestard said:He was talking about their next OS, you dippy twit.
Mensanator said:Wanna see what happens when fucktards get
their hands on Linux? It's called Red Hat.
I can't wait to see what MS-Linux is like.
John said:On any solid CPU design, with ECC RAM, you can run hardware
diagnostics for hundreds of unit-years and never see an error.
Try that same test running an OS.
Engineers have figured out how to do their part right. Programmers, in
general, haven't.
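For what it's worth, the sort of diagnostic John is talking about is usually just a pattern test hammered at memory for a very long time. Here is a minimal sketch in C of such a test, a generic walking-ones pass, not any particular vendor's diagnostic:

/* Minimal walking-ones memory test sketch (illustrative only). */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define WORDS (1u << 20)          /* 1M 64-bit words, ~8 MB under test */

int main(void)
{
    uint64_t *buf = malloc(WORDS * sizeof *buf);
    if (!buf) return 1;

    unsigned long errors = 0;
    for (int pass = 0; pass < 10; pass++) {
        for (int bit = 0; bit < 64; bit++) {
            uint64_t pattern = (uint64_t)1 << bit;   /* walking one */
            for (uint32_t i = 0; i < WORDS; i++)
                buf[i] = pattern;
            for (uint32_t i = 0; i < WORDS; i++)
                if (buf[i] != pattern)               /* any flipped bit is a fault */
                    errors++;
        }
    }
    printf("memory test complete, %lu errors\n", errors);
    free(buf);
    return errors != 0;
}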
John said:There's nothing wrong with shared memory, as long as the kernel has
absolute control over mappings. That must of course include all DMA
transfer privileges.
But the kernel CPU *knows* about everything; it just doesn't have to
*do* everything.
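A minimal sketch of what "absolute control over mappings", DMA included, could look like. All of the names here (grant_table, kernel_grant, dma_allowed) are invented for illustration; the point is only that workers and DMA engines get nothing the kernel has not explicitly granted:

/* Sketch: kernel-owned table of memory grants; DMA checked against it.
   All names here are hypothetical, for illustration only. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_GRANTS 64

struct grant {
    int       owner;     /* task or device CPU id */
    uintptr_t base;      /* start of granted window */
    size_t    len;       /* length of window */
    bool      dma_ok;    /* may a DMA engine touch this window? */
};

static struct grant grant_table[MAX_GRANTS];
static int n_grants;

/* Only the kernel calls this; worker CPUs and drivers cannot. */
int kernel_grant(int owner, uintptr_t base, size_t len, bool dma_ok)
{
    if (n_grants == MAX_GRANTS) return -1;
    grant_table[n_grants++] = (struct grant){ owner, base, len, dma_ok };
    return 0;
}

/* Called before programming a DMA engine on behalf of 'owner'. */
bool dma_allowed(int owner, uintptr_t base, size_t len)
{
    for (int i = 0; i < n_grants; i++) {
        const struct grant *g = &grant_table[i];
        if (g->owner == owner && g->dma_ok &&
            base >= g->base && base + len <= g->base + g->len)
            return true;
    }
    return false;   /* no grant covers the request: refuse the transfer */
}

int main(void)
{
    kernel_grant(/*owner=*/3, 0x100000, 0x1000, /*dma_ok=*/true);
    printf("inside grant:  %d\n", dma_allowed(3, 0x100200, 0x100)); /* 1 */
    printf("outside grant: %d\n", dma_allowed(3, 0x200000, 0x100)); /* 0 */
    return 0;
}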
John said:John Larkin wrote:
Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700
If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.
He's reinventing what didn't work well at all.
Times have changed, guys.
Not really. The proportions of speed seem to remain the same.
When CPU chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you assume 4 threads/core?
Furthermore,
it is extremely insecure to insist the computer system have
a single-point failure which includes the entire running
monitor.
Fewer points of failure must be better than many points.
You need to think some more. If the single-point failure is
the monitor, you have no security at all.
A few security vulnerabilities must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.
Its MTBF, hardware and software, could
be a million hours.
That doesn't matter at all if the monitor is responsible for the
world power grid.
Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't take water, or falling into a fault caused
by an earthquake or a bomb, or a United Nations quarantine
[can't think of the word where a nation or group is
declared undesirable].
The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSs, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.
/BAH
Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.
You did not read what I wrote. Those virtual OS spaces you were talking about are applications w.r.t. the monitor which is running.
How can an OS ever be reliable when twelve zillion Chinese video card manufacturers are hacking device drivers that run in OS space?
You seem to be confusing OSes with monitors.
The top-level OS should be small, simple, absolutely in charge of the entire system, totally protected, and never crash.
Why not?
If it's absolutely in charge of the entire system, then it has to be able to access all of the hardware, including the other cores. This implies that some kind of comm protocol and/or pathway has to go from all those other cores to the master core. This sucks w.r.t. performance. The system will run only as fast as the master core can send/receive bits. All other cores will constantly be in "master-core I/O wait".
/BAH
All the kernel has to do is set up privileges and load tasks into processors. It should of course do no applications processing itself (which would imply running dangerous user code) and has no reason to move application data either. Stack driver CPUs, device driver CPUs, and file system CPUs do the grunt work, in their own protected environments. There can be one shared pool of main memory for the apps to use, as long as access mapping is safe.
Memory can be shared as needed for processes to communicate. But nobody can trash the kernel.
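A minimal sketch, in C, of the bookkeeping such a supervisor core would do under this proposal. The names and the one-task-per-core policy are assumptions for illustration: the kernel core only records which worker core runs which task and which memory window it may touch, and does no application work itself.

/* Sketch of a supervisor core's bookkeeping: assign tasks and memory
   windows to worker cores, nothing more. Names are illustrative. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define N_CORES 16

enum role { IDLE, DEVICE_DRIVER, FILE_SYSTEM, COMM_STACK, USER_APP };

struct core_slot {
    enum role role;       /* what this worker core is allowed to be */
    uintptr_t mem_base;   /* the only memory window it may map */
    size_t    mem_len;
    int       task_id;    /* which task image was loaded onto it */
};

static struct core_slot cores[N_CORES];

/* Kernel-only operation: give a core one task and one memory window. */
int load_task(int core, enum role role, int task_id,
              uintptr_t base, size_t len)
{
    if (core <= 0 || core >= N_CORES) return -1;   /* core 0 is the kernel */
    if (cores[core].role != IDLE) return -1;       /* one task per core */
    cores[core] = (struct core_slot){ role, base, len, task_id };
    return 0;
}

int main(void)
{
    load_task(1, DEVICE_DRIVER, 101, 0x10000000, 1 << 20);
    load_task(2, FILE_SYSTEM,   102, 0x10100000, 1 << 20);
    load_task(3, USER_APP,      501, 0x20000000, 64 << 20);
    for (int i = 1; i < N_CORES; i++)
        if (cores[i].role != IDLE)
            printf("core %d: task %d, window %zx bytes\n",
                   i, cores[i].task_id, cores[i].mem_len);
    return 0;
}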
John said:I once wrote an unauthorized disk driver for the RSTS/E timeshare
system.
Booting was the really nasty part. The drives and controller
were a fraction of the price of equivalent DEC hardware.
I've written three RTOSs, two compilers, and a couple of hundred
realtime/embedded product apps, and designed the hardware too.
I even designed a CPU from the ground up, two boards of MSI TTL logic.
It was used in shipboard datalogging. It had a 20 KHz clock.
I was named in the source of FOCAL-11, for donating about 9 lines of
code. Top that!
One current project is an instrument controller, using a Kontron
Mini-ITX/Linux board talking PCIe to an FPGA and a heap of very fast
analog stuff managed by a VLIW microengine inside the FPGA. Well, the
instructions are long (48 bits) but we really only have two opcodes.
And I think Windows is a piece-o-crap.
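For readers who have not met a VLIW microengine: a 48-bit word with only two opcodes is mostly raw control fields. The field split below is made up, since the thread does not give the real layout; it only shows how such a word might be packed:

/* Hypothetical 48-bit VLIW word: 1-bit opcode plus wide control fields.
   The real field layout in the FPGA isn't specified in the thread. */
#include <stdint.h>
#include <stdio.h>

/* opcode: 1 bit (two opcodes), imm: 23 bits, wr_sel: 8, rd_sel: 8, flags: 8 */
static uint64_t pack48(unsigned op, uint32_t imm,
                       unsigned wr_sel, unsigned rd_sel, unsigned flags)
{
    uint64_t w = 0;
    w |= (uint64_t)(op     & 0x1)      << 47;
    w |= (uint64_t)(imm    & 0x7FFFFF) << 24;
    w |= (uint64_t)(wr_sel & 0xFF)     << 16;
    w |= (uint64_t)(rd_sel & 0xFF)     << 8;
    w |=            (flags & 0xFF);
    return w;                     /* only the low 48 bits are meaningful */
}

int main(void)
{
    uint64_t w = pack48(1, 0x123456, 0x0A, 0x0B, 0xC0);
    printf("instruction word: 0x%012llx\n", (unsigned long long)w);
    return 0;
}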
John said:You are playing with words. An OS should be hierarchical, with the
top-most thing (what I call the kernel) being absolutely in charge of
the system.
It could, in my opinion should, run on a dedicated CPU.
It
should be as small as practical, because as programs get bigger they
get less reliable. The kernel could and should be written by one
person. Its only function is to manage resources and other processors.
It should not share time with applications, drivers, GUIs, or anything
else. It might be multithreaded but really has no need to be, because
it won't be working very hard.
Other CPUs can, under tight resource control, do I/O, file systems,
TCP/IP stacks, GUIs, and user apps. The system design should
absolutely control their access to resources, and it should be
physically impossible for them to corrupt the kernel.
None of this is especially hard to do. It would take conscious design,
not the sort of evolutionary stumbling that has produced most modern
operating systems.
What don't you like about that?
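One plausible shape for the kernel-to-worker traffic this scheme implies is a per-core command mailbox, so the kernel core posts short commands and never touches application data. A sketch using C11 atomics; the command set and all names are invented for illustration:

/* Sketch: per-core command mailboxes between a kernel CPU and workers.
   Commands and field names are invented for illustration. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum cmd { CMD_NONE, CMD_LOAD_TASK, CMD_KILL_TASK, CMD_REPORT_STATUS };

struct mailbox {
    _Atomic int cmd;        /* written by kernel, cleared by worker */
    uint32_t    arg;        /* e.g. task id */
};

#define N_WORKERS 4
static struct mailbox box[N_WORKERS];

/* Kernel side: post a command if the previous one has been consumed. */
bool kernel_post(int core, enum cmd c, uint32_t arg)
{
    if (atomic_load(&box[core].cmd) != CMD_NONE)
        return false;                 /* worker hasn't consumed the last one */
    box[core].arg = arg;              /* payload first... */
    atomic_store(&box[core].cmd, c);  /* ...then publish the command */
    return true;
}

/* Worker side: poll for work, acknowledge by clearing the slot. */
void worker_poll(int core)
{
    int c = atomic_load(&box[core].cmd);
    if (c != CMD_NONE) {
        printf("core %d: got command %d, arg %u\n", core, c, box[core].arg);
        atomic_store(&box[core].cmd, CMD_NONE);
    }
}

int main(void)
{
    kernel_post(1, CMD_LOAD_TASK, 42);
    kernel_post(2, CMD_REPORT_STATUS, 0);
    for (int i = 0; i < N_WORKERS; i++)
        worker_poll(i);
    return 0;
}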
[emoticon stops to think of something]
What do you admire about Windows?
Scott said:I believe in his fractured way, he was suggesting that with a single CPU,
only one thread can execute at a time (absent hyperthreading), so technically,
even with multiple processes, only one will execute at a time.
A non sequitur, to be sure.
The terms task, thread, process, et al. also mean different things to
people from different OS backgrounds - many mainframe OSes didn't use
the term process as a "container for one or more executable threads",
for instance.
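As a concrete anchor for the "container for one or more executable threads" usage Scott mentions, here is a minimal POSIX example: one process, three threads, one shared address space.

/* One process, several threads: the usage of "process" Scott describes. */
#include <pthread.h>
#include <stdio.h>

static int shared_counter;                 /* visible to every thread */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    shared_counter++;                      /* same address space, so no IPC needed */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    printf("one process, %d threads ran, counter = %d\n", 3, shared_counter);
    return 0;
}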
John said:What's wrong with waiting, when you have 1024 CPU cores available and
the limit is thermal? We've got to stop worshipping CPUs and start
worshipping reliability.
John said:John Larkin wrote:
John Larkin wrote:
Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700
If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.
He's reinventing what didn't work well at all.
Times have changed, guys.
Not really. The proportions of speed seem to remain the same.
When CPU chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you assume 4 threads/core?
Furthermore,
it is extremely insecure to insist the computer system have
a single-point failure which includes the entire running
monitor.
Fewer points of failure must be better than many points.
You need to think some more. If the single-point failure is
the monitor, you have no security at all.
A few security vulnerabilities must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.
Its MTBF, hardware and software, could
be a million hours.
That doesn't matter at all if the monitor is responsible for the
world power grid.
Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't take water, or falling into a fault caused
by an earthquake or a bomb, or a United Nations quarantine
[can't think of the word where a nation or group is
declared undesirable].
The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSs, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.
/BAH
Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.
You did not read what I wrote. Those virtual OS spaces you were talking
about are applications w.r.t. the monitor which is running.
How can an OS ever be reliable when twelve zillion Chinese video card
manufacturers are hacking device drivers that run in OS space?
You seem to be confusing OSes with monitors.
The top-level OS should be small, simple, absolutely in charge of the
entire system, totally protected, and never crash.
Why not?
If it's absolutely in charge of the entire system, then it has to
be able to access all of the hardware, including the other cores. This
implies that some kind of comm protocol and/or pathway has to go
from all those other cores to the master core. This sucks w.r.t.
performance. The system will run only as fast as the master core
can send/receive bits. All other cores will constantly be in
"master-core I/O wait".
/BAH
All the kernel has to do is set up privileges and load tasks into
processors. It should of course do no applications processing itself
(which would imply running dangerous user code) and has no reason to
move application data either. Stack driver CPUs, device driver CPUs,
and file system CPUs do the grunt work, in their own protected
environments. There can be one shared pool of main memory for the apps
to use, as long as access mapping is safe.
Memory can be shared as needed for processes to communicate. But
nobody can trash the kernel.
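The "memory shared as needed for processes to communicate" part already exists as ordinary POSIX shared memory. A minimal sketch, with an arbitrary segment name and error handling trimmed:

/* Sketch: shared memory between two cooperating processes via POSIX shm.
   Compile with -lrt on older Linux. The segment name is arbitrary. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shared_region";        /* made-up name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 1;
    ftruncate(fd, 4096);
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) return 1;

    if (fork() == 0) {                               /* child: producer */
        strcpy(region, "hello from the other process");
        _exit(0);
    }
    wait(NULL);                                      /* parent: consumer */
    printf("parent read: %s\n", region);

    munmap(region, 4096);
    shm_unlink(name);                                /* clean up the segment */
    return 0;
}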
What's wrong with that?
Even Intel is headed in that direction: multiple (predicted to be
hundreds, maybe thousands) of multi-threaded cores on a chip,
surrounding a central L2 cache that connects to DRAM and stuff, each
with a bit of its own L1 cache to reduce bashing the shared L2 cache.
Does anybody think we're going to keep running thirty-year-old OS
architectures on a beast like this?
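The per-core L1 / shared L2 point shows up in everyday multithreaded C as cache-line padding: per-core data is padded out so that cores don't fight over one line. A sketch, assuming a 64-byte line, which is typical of current x86 parts:

/* Sketch: pad per-core counters to a cache line so that cores updating
   their own counter don't share (and fight over) one line. 64 bytes is
   an assumed line size, typical of current x86 parts. */
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64
#define N_CORES    8

struct padded_counter {
    _Alignas(CACHE_LINE) uint64_t count;
    char pad[CACHE_LINE - sizeof(uint64_t)];   /* keep one counter per line */
};

static struct padded_counter per_core[N_CORES];

int main(void)
{
    /* Each "core" bumps only its own slot; no two slots share a line. */
    for (int core = 0; core < N_CORES; core++)
        for (int i = 0; i < 1000; i++)
            per_core[core].count++;

    uint64_t total = 0;
    for (int core = 0; core < N_CORES; core++)
        total += per_core[core].count;
    printf("sizeof slot = %zu bytes, total = %llu\n",
           sizeof(struct padded_counter), (unsigned long long)total);
    return 0;
}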
John said:I, for one, think we are going to do just that -- which is not the same thing as saying that I think it would be a good idea. And then the same kind of OS evolution we did will happen again.
Maybe you're right. How depressing.
TheQuickBrownFox said:Those two parameters are mutually exclusive.
This one excludes the first two entirely.
NOTHING is totally protected.
Because folks and bad guys hack at systems. But you already knew that.
The OS should be AWAY from the user.
FatBytestard said:The things you post here prove that 1960 is about when your education in
computer science ended as well.
You are petty, and pathetic at the same time.
JosephKK said:No, not really. But there is a lot of old cluster technology that is
about to become very useful again. Then again, why bother with tens,
let alone hundreds, of processors? The humans using them cannot keep up
with them. Why are more wasted CPU cycles better? A Core 2 Duo
can support all the eye-heroin available.
JosephKK said:It was the early 1990s when I got to make the tradeoffs between both
types. I had to deal with a mix of platforms and users on the
Corporate net. Very instructive.
John said:My PC is currently running 399 threads, under Win XP. The one CPU
manages all 399, and runs them, too. So why wouldn't one CPU be able
to manage hundreds of processors when that's all it has to do?
Because your PC isn't actively running all of those threads; those
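John's 399-thread count is easy to reproduce on that vintage of Windows: the Toolhelp snapshot API can walk every thread on the machine. A minimal sketch, with error handling trimmed:

/* Count all threads on a Windows box via the Toolhelp snapshot API
   (available since well before XP). Compile with a Windows toolchain. */
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

int main(void)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return 1;

    THREADENTRY32 te;
    te.dwSize = sizeof(te);
    int threads = 0;
    if (Thread32First(snap, &te)) {
        do {
            threads++;                     /* one entry per thread, all processes */
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
    printf("threads currently on this machine: %d\n", threads);
    return 0;
}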
JosephKK said:I think this one is currently an open-ended argument. What do you
call an application that hosts hundreds of dynamically loaded user
applications?
Particularly when that application used to be an OS in
its own right?
Terminology is failing here.