
My Vintage Dream PC


jmfbahciv

Ahem said:
Pretty clear to me -

Great! Now you can help when I get trapped in a word salad. :)
what we unixy folks would call a syscall
interface.

Sure. But haven't you noticed that these kids don't know
how the monitor of any flavor gets asked for computing
services? They think it just happens as inline EXE code
with no branching.

/BAH
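
For anyone who has not actually seen that syscall interface, a minimal sketch in C,
assuming Linux/x86-64 and the glibc syscall() wrapper (an illustrative example, not
code from any system discussed here): the program never writes to the terminal as
inline EXE code; it traps into the monitor and the monitor does the work.

/* Minimal sketch (Linux assumed): user code does not "just do" I/O inline;
 * it asks the monitor/kernel for the service through the syscall interface. */
#include <sys/syscall.h>   /* SYS_write */
#include <unistd.h>        /* syscall() */
#include <string.h>

int main(void)
{
    const char msg[] = "hello from user mode\n";

    /* The syscall() wrapper loads the call number and arguments, then
     * executes the trap instruction (SYSCALL on x86-64).  Control
     * transfers to the kernel, which performs the write on our behalf
     * and returns the byte count (or a negative error). */
    long n = syscall(SYS_write, 1 /* stdout */, msg, strlen(msg));

    return n == (long)strlen(msg) ? 0 : 1;
}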
 

jmfbahciv

Peter said:
We've seen this since CP-67 in, what, 1968? BTDT.
If the OS doesn't touch the hardware, then it's not the monitor,
but an app.

I've been talking about the monitor piece of an OS.

/BAH
 

Richard Cranium

Sounds like I was moving bits before your Daddy's sperm started
swimming uphill.

/BAH


Just a minor correction. In Archie's case, the sperm fell backwards
downhill.
 

Patrick Scheible

John Larkin said:
Thousands of points of failure are better?

If it's designed correctly, the failure of any of the thousands of
CPUs does not cause the system to go down or even any work to be
interrupted. This is not new technology.

-- Patrick
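
A back-of-the-envelope sketch of why, in C with made-up numbers (the per-CPU
failure probability, pool size, and spare count below are assumptions for
illustration, not figures from this thread): a pool with a little spare capacity
is down orders of magnitude less often than one CPU that must never fail.

/* Compile with -lm.  Illustrative numbers only. */
#include <math.h>
#include <stdio.h>

/* P(exactly k of n fail), each independently with probability p. */
static double binom_pmf(int n, int k, double p)
{
    return exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + k * log(p) + (n - k) * log(1.0 - p));
}

int main(void)
{
    double p_fail = 1e-3;  /* assumed per-CPU failure probability per interval */
    int    n      = 1000;  /* assumed pool size */
    int    spares = 10;    /* assumed spare capacity: tolerate up to 10 dead CPUs */

    /* One CPU that must not fail: the system is down whenever it is. */
    printf("single CPU down:  %.3e\n", p_fail);

    /* Redundant pool: down only if more CPUs fail than there are spares. */
    double up = 0.0;
    for (int k = 0; k <= spares; k++)
        up += binom_pmf(n, k, p_fail);
    printf("pool of %d down: %.3e\n", n, 1.0 - up);
    return 0;
}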
 

Scott Lurndal

jmfbahciv said:
Oh, good grief. You are confused. It sounds like you've changed the
meanings of all those terms.

I believe in his fractured way, he was suggesting that with a single CPU,
only one thread can execute at a time (absent hyperthreading), so technically,
even with multiple processes, only one will execute at a time.

A non sequitur, to be sure.

The terms task, thread, process, et al. also mean different things to
people from different OS backgrounds - many mainframe OSes didn't use
the term process as a "container for one or more executable threads",
for instance.

scott
 

Spehro Pefhany

Why should 20K lines of code be less reliable than 20M lines of code?

It is also intractable since the nanokernel

If only the kernel runs on the kernel CPU, all those objections go
away.

Start thinking multi-CPU. The days of the
single-CPU-that-runs-everything are numbered, even in all but the
simplest embedded systems.

Imagine if your TCP/IP stack ran in its own processor, and never
interrupted your realtime stuff. Ditto another CPU scanning and
linearizing all the analog inputs. All the stuff the main program
needs just magically appears in memory. Wouldn't that make life
simpler?

You must be thinking of some sophisticated mechanisms for memory
allocation, access locking and so on, so that interprocessor
communications via shared memory look atomic from the programmer's
perspective.
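
One common shape for that mechanism, sketched with C11 atomics (the names and
sizes are illustrative assumptions): a single-producer/single-consumer ring
buffer in shared memory. The acquire/release ordering is what makes each
hand-off look atomic to the reader without any locks; the producer would be the
CPU scanning and linearizing the analog inputs, the consumer the realtime main
loop.

/* SPSC ring buffer sketch: one writer core, one reader core, no locks. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdbool.h>

#define RING_SLOTS 256u              /* power of two so indices wrap cheaply */

struct sample_ring {
    _Atomic uint32_t head;           /* written only by the producer core */
    _Atomic uint32_t tail;           /* written only by the consumer core */
    int32_t samples[RING_SLOTS];     /* the shared payload */
};

/* Producer core: publish one linearized sample.  Returns false if full. */
bool ring_put(struct sample_ring *r, int32_t s)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SLOTS)
        return false;                            /* consumer has fallen behind */
    r->samples[head % RING_SLOTS] = s;           /* write the payload first... */
    atomic_store_explicit(&r->head, head + 1,
                          memory_order_release); /* ...then publish it */
    return true;
}

/* Consumer core: take one sample if one is available. */
bool ring_get(struct sample_ring *r, int32_t *out)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail == head)
        return false;                            /* nothing new yet */
    *out = r->samples[tail % RING_SLOTS];        /* read the payload first... */
    atomic_store_explicit(&r->tail, tail + 1,
                          memory_order_release); /* ...then free the slot */
    return true;
}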
 

Roland Hutchinson

John said:
John said:
John Larkin wrote:

Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700

If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.

He's reinventing what didn't work well at all.
Times have changed, guys.
Not really. The proportions of speed seem to remain the same.

When cpu chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you equate 4 threads/core?

Furthermore,
it is extremely insecure to insist that the computer system have
a single point of failure which includes the entire running
monitor.

Fewer points of failure must be better than many points.
You need to think some more. If the single point of failure is
the monitor, you have no security at all.

Few security
vuls must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.

Its MTBF, hardware and software, could
be a million hours.


That doesn't matter at all if the monitor is responsible for the
world power grid.



Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't survive water, or falling into a fault caused
by an earthquake or a bomb or a United Nations quarantine
[can't think of the word where a nation or group is
declared undesirable].

The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSes, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.

/BAH


Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.

You did not read what I wrote. Those virtual OS spaces you were talking
about are applications w.r.t. the monitor which is running.
How can an OS ever be reliable when twelve zillion Chinese video card
manufacturers are hacking device drivers that run in OS space?

You seem to be confusing OSes with monitors.
The top-level OS should be small, simple, absolutely in charge of the
entire system, totally protected, and never crash.

Why not?
If it's absolutely in charge of the entire system, then it has to
be able to access all of the hardware, including the other cores. This
implies that some kind of comm protocol and/or pathway has to go
from all those other cores to the master core. This sucks w.r.t.
performance. The system will run only as fast as the master core
can send/receive bits. All other cores will constantly be in
"master-core I/O wait".

/BAH

All the kernel has to do is set up privileges and load tasks into
processors. It should of course do no applications processing itself
(which would imply running dangerous user code) and has no reason to
move application data either. Stack driver CPUs, device driver CPUs,
file system CPUs do the grunt work, in their own protected
environments. There can be one shared pool of main memory for the apps
to use, as long as access mapping is safe.

Memory can be shared as needed for processes to communicate. But
nobody can trash the kernel.

What's wrong with that?

Even Intel is headed in that direction: multiple (predicted to be
hundreds, maybe thousands) of multi-threaded cores on a chip,
surrounding a central L2 cache that connects to DRAM and stuff, each
with a bit of its own L1 cache to reduce bashing the shared L2 cache.

Does anybody think we're going to keep running thirty year old OS
architectures on a beast like this?

I, for one, think we are going to do just that -- which is not the same
thing as saying that I think it would be a good idea.

--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
.... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )
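
For what it's worth, the "one task, one CPU" part can already be approximated
on stock hardware. A sketch, assuming Linux, glibc, and its non-portable pthread
affinity extension (the core number and task name are arbitrary examples): pin a
worker thread to a dedicated core so the scheduler never context-switches it
elsewhere. A real design would also steer interrupts away from that core, which
this sketch does not do.

/* Compile with -pthread.  Linux/glibc assumed (the *_np calls are GNU extensions). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *network_stack_task(void *arg)
{
    (void)arg;
    /* The dedicated work (say, a polling TCP/IP stack loop) would live here. */
    printf("pinned task running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(3, &set);                 /* this task may run on core 3 only */

    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

    /* The thread starts life restricted to core 3 and stays there. */
    pthread_create(&tid, &attr, network_stack_task, NULL);

    pthread_attr_destroy(&attr);
    pthread_join(tid, NULL);
    return 0;
}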
 

Roland Hutchinson

jmfbahciv said:
But Microsoft is going to fix that with their next OS.

Nyah, they can fix it with a service pack.
Well, there are degrees of "most secure". ;-)

As in: The Right Way (Better is Better), the Wrong Way (Worse is Better --
unless it's the other way around), and the Microsoft Way (If it boots, ship
it!).

--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
.... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )
 

Rostyslaw J. Lewyckyj

John said:
[...]


Thousands of points of failure are better?
It depends on how often that single point fails. :)
 

Rich Grise

Engineers have figured out how to do their part right. Programmers, in
general, haven't.

I consider myself a good programmer; interestingly, when I start to design
some program using pseudocode, it (my pseudocode) always seems to turn to
C. ;-)

Cheers!
Rich
 

Peter Flass

John said:
John said:
John Larkin wrote:

Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700

If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.

He's reinventing what didn't work well at all.
Times have changed, guys.
Not really. The proportions of speed seem to remain the same.

When cpu chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you equate 4 threads/core?

Furthermore,
it is extremely insecure to insist that the computer system have
a single point of failure which includes the entire running
monitor.

Fewer points of failure must be better than many points.
You need to think some more. If the single point of failure is
the monitor, you have no security at all.

Few security
vuls must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.

Its MTBF, hardware and software, could
be a million hours.

That doesn't matter at all if the monitor is responsible for the
world power grid.



Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't survive water, or falling into a fault caused
by an earthquake or a bomb or a United Nations quarantine
[can't think of the word where a nation or group is
declared undesirable].

The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSes, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.

/BAH

Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.
You did not read what I wrote. Those virtual OS spaces you were talking
about are applications w.r.t. the monitor which is running.
How can an OS ever be reliable when twelve zillion Chinese video card
manufacturers are hacking device drivers that run in OS space?
You seem to be confusing OSes with monitors.
The top-level OS should be small, simple, absolutely in charge of the
entire system, totally protected, and never crash.

Why not?
If it's absolutely in charge of the entire system, then it has to
be able to access all of the hardware, including the other cores. This
implies that some kind of comm protocol and/or pathway has to go
from all those other cores to the master core. This sucks w.r.t.
performance. The system will run only as fast as the master core
can send/receive bits. All other cores will constantly be in
"master-core I/O wait".

/BAH

All the kernel has to do is set up privileges and load tasks into
processors. It should of course do no applications processing itself
(which would imply running dangerous user code) and has no reason to
move application data either. Stack driver CPUs, device driver CPUs,
file system CPUs do the grunt work, in their own protected
environments. There can be one shared pool of main memory for the apps
to use, as long as access mapping is safe.

Memory can be shared as needed for processes to communicate. But
nobody can trash the kernel.

What's wrong with that?

Even Intel is headed in that direction: multiple (predicted to be
hundreds, maybe thousands) of multi-threaded cores on a chip,
surrounding a central L2 cache that connects to DRAM and stuff, each
with a bit of its own L1 cache to reduce bashing the shared L2 cache.

Does anybody think we're going to keep running thirty year old OS
architectures on a beast like this?

John

Whatever the OS of 10,560 is, it will probably be called OS/360;-)
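
The "master-core I/O wait" objection quoted above is easy to put numbers on. A
small C sketch with made-up timings (both figures are assumptions for
illustration only): once the supervisor's per-request handling time is the
bottleneck, adding worker cores stops helping.

/* Throughput cap from a single supervisor core.  Illustrative numbers only. */
#include <stdio.h>

int main(void)
{
    double t_master = 2e-6;    /* assumed supervisor handling per request, seconds */
    double t_worker = 100e-6;  /* assumed worker processing per request, seconds */

    for (int n = 10; n <= 640; n *= 2) {
        double worker_rate = n / t_worker;    /* what n workers could deliver */
        double master_rate = 1.0 / t_master;  /* what one supervisor can route */
        double actual = worker_rate < master_rate ? worker_rate : master_rate;
        printf("%3d workers: %8.0f req/s (supervisor cap %8.0f)\n",
               n, actual, master_rate);
    }
    return 0;
}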
 

TheQuickBrownFox

You seem to be deluded by the belief that BAH will listen to reality.
Only those systems that were designed in the 60s to run on hardware of
the 60s are acceptable to her.


Batch process runtime mentality.

She and her 'grasp' of computer science have some serious "been out of
the mainstream for too many decades" issues.
 

TheQuickBrownFox

The top-level OS should be small, simple,

Those two parameters are mutually exclusive.
absolutely in charge of the
entire system,

This one excludes the first two entirely.
totally protected, and never crash.

NOTHING is totally protected.

Because folks and bad guys hack at systems. But you already knew that.

The OS should be AWAY from the user. An administrator (even if that is
the user) should be required to make system-level changes, or to allow any
application that does, on a per-event, single-event-at-a-time basis.

This way, the OS CAN be large. Just be sure to make it secure from ANY
changes that are not done in an administration-specific setting (and
login). Remote administration should go away until systems become less
vulnerable. Then we can start setting things up the way we were headed
before all these gang boy retard virus authors and system hackers made
the scene.



The ACCESS interface is all the user should get access to, and he should
get no utilities that allow further access.
 

FatBytestard

Indeed, Windows 7 (of which you can download the final beta and run it for
free for the next year or so)

Yes. I have several iterations of it, including the RC (build 7100).
Last night, I DL'd the RTM (bld 7137).
is widely held to be, as advertised, the most
secure Microsoft operating system ever.

Like that would be a hard feat to manage. That could be stated as being
true, and it could still be very bad at security. They likely mean that
they have nearly plugged most of the holes though. I lean toward belief.
Just remember that damnation with faint praise is still damnation.

The CEO of LORAL made a remark about how their next satellite was going
to bring in millions. He was consoling an investor about why another
320M was needed to finish the project.

Loral went under, and that satellite system failure was the reason.

One should never project based on what one "knows" as the market is a
very strange animal.

I do believe, however, that this time Billy's boys may have made
something nice. If it fails, they'll have to create a Linux based
implementation to keep up. :) I think they should have developed one
all along anyway.

This one might work pretty well, though.
 

FatBytestard

Both are correct.

Absolutely not.
Similar to "tomato,"

No similarity whatsoever. This is NOT a dialectal thing.
preferring one or the other is just
cultural background.

Preferring the wrong one is just flawed education, and standing behind
it is just stubborn stupidity.

Yeah, they included it because of retarded, morphing dopes. What would
be interesting would be to find out how long ago the term datum began
being incorrectly pronounced. I'd bet that it coincides with when idiots
in the computer science realm began pronouncing "data" incorrectly.


Does this mean that m-w will begin showing the word "Aks" when someone
looks up the word ask? I mean, over half the idiots out there say it
the gang boy retard way, shouldn't M-W "recognize" their stupidity and
include it as a "proper" way to say "ask"?

Same fucking logic. The way we are heading, Obama will have the nation
operating in a socialistic manner before his second term with all the
dipshits around that are willing to accept all the bullshit this
country's "men" have generated in the last 3 decades. I saw this shit
coming, back in the "Cadillac/platform shoe/pimp/retard" days in the
seventies.

Show me Obama's bank accounts in the three years after he got his 49M
Annenberg grant.


Absolutely not (both are not correct). Just because some dope said
that both were correct, and his entire class, and all others that followed
believed the fucking idiot because he had the title "instructor", doesn't
mean that it is true.

It is a HARD "A", as in DAY-TA. It was not EVER "DAT-A" with the "a"
pronounced like that in the word "that". NEVER. ALL idiots that EVER
thought it was, are IDIOTS.

It is just like Bush and several other idiots that pronounce Nuclear
like "Nuke-U-Lar", when it is now, and has ALWAYS been "New-Clear", as it
refers to the nucleus "New-Clee-Us" of an atom.

So, yeah... I KNOW that it is written that way, and the dictionary
claims it as such, but anyone with half a brain hearing both would call
"Dhata" a retarded pronunciation, which it is.
 

FatBytestard

You clearly

Dumbfucks that spout out "clearly" every other time they speak with
someone OBVIOUSLY have serious self-impotence issues. You are one such
dumbfuck.
do not have a clue as to what you are talking about.
PKB.

Please leave or self-destruct.

Go blow your brains out, Joe. Bio fuel is all your carcass would be
good for. Stop telling others what to do, jackass. You have zero
authority, here or anywhere else.
 

FatBytestard

No problem. If core in this context means memory for you, then ECC will
fix that problem. Most mission-critical computers use ECC memory.
You will just get a report in a log that a bit was flipped and was fixed.

If the core means one processor core, then the story is more difficult,
but usually caches are protected by ECC, and datapaths have different forms
of protection. Also, if a cosmic ray hits regular control logic, the
probability of something bad happening is quite low (a 10% derating is
quite often used, because not all logic nodes are relevant for each
cycle).

If I remember correctly, the cosmic neutron/proton effects are also not
as bad as the alpha radiation caused by the semiconductor itself and the
packages.

--Kim

Bingo!
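
For anyone who hasn't seen how that correct-and-log trick works, a toy sketch in
C using a Hamming(7,4) code (real DIMM ECC is SECDED over 64 data bits, but the
principle is the same; this is illustrative code, not a memory-controller
driver): a single flipped bit is located by the syndrome and silently put back.

#include <stdio.h>
#include <stdint.h>

/* Pack 4 data bits into a 7-bit codeword; bit i of the result is code
 * position i+1 in the usual Hamming numbering (parity at 1, 2, 4). */
static uint8_t hamming74_encode(uint8_t data)   /* low 4 bits used */
{
    uint8_t d1 = (data >> 0) & 1, d2 = (data >> 1) & 1,
            d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;      /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;      /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;      /* covers positions 4,5,6,7 */
    return (uint8_t)(p1 | p2 << 1 | d1 << 2 | p3 << 3 |
                     d2 << 4 | d3 << 5 | d4 << 6);
}

/* Correct a single-bit error, if any, and return the 4 data bits. */
static uint8_t hamming74_decode(uint8_t code)
{
    unsigned syndrome = 0;
    for (unsigned pos = 1; pos <= 7; pos++)
        if ((code >> (pos - 1)) & 1)
            syndrome ^= pos;        /* XOR of the positions of all 1 bits */
    if (syndrome) {                 /* nonzero: that position was flipped */
        code ^= (uint8_t)(1u << (syndrome - 1));
        printf("corrected a bit flip at position %u\n", syndrome);
    }
    return (uint8_t)(((code >> 2) & 1) | ((code >> 4) & 1) << 1 |
                     ((code >> 5) & 1) << 2 | ((code >> 6) & 1) << 3);
}

int main(void)
{
    uint8_t code = hamming74_encode(0xB);   /* arbitrary 4-bit data word */
    code ^= 1u << 4;                        /* simulate a cosmic-ray hit on position 5 */
    printf("recovered data: 0x%X\n", hamming74_decode(code));
    return 0;
}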
 

TheQuickBrownFox

Maybe he's a Commodorian.


My first scientific calculator was a Commodore. I never thought much
of the game console/computer, however. Is that not what became the Amiga?
That actually was a nice piece of gear. The (live) music industry loved
it (so did the studios).


My calculator used 100mA with a 1 in the mantissa and both memories. It
burned at 800mA with all 8s in the three locations. Thank God for LCD
technologies.
 