Maker Pro

My Vintage Dream PC

FatBytestard

<snort> You don't know what you're talking about.

That is the answer to "What have you heard?" You're an idiot.

I have been running it for nearly a year now, and I ran Vista trouble
free for the last three.

Just like I said, Miss PKB retard, YOU do not know what is going on,
UNLESS you have it right there in front of you.

How often do you have these bouts of massive skull thickness? Does the
pea that is your brain ever really get a word in edgewise?
 
FatBytestard

Sounds like I was moving bits before your Daddy's sperm started
swimming uphill.

/BAH

The things you post here prove that 1960 is about when your education in
computer science ended as well.

You are petty, and pathetic at the same time.
 
FatBytestard

Oh, good grief. You are confused. It sounds like you've changed the
meanings of all those terms.

/BAH

As if a timeshare twit like you with no modern education could even
tell the difference if your hand was being held through it all.
 
FatBytestard

Sigh! There will have to be communication between cores.

No, there won't. A piece of hardware can have several independently
operating cores on it. No data passing between them is required.


A perfect example is your PC's CPU (well not yours anyway) and your
graphics card's GPU.

There are GPUs out there running distributed computing processes on
them for "folding at home" for example, and the entire block of data gets
processed without ANY need for the system CPU, except to grab another
chunk and start on it, and pass the finished one out to the net.

Your paradigm is no longer valid, Missy.
 
Rostyslaw J. Lewyckyj

John Larkin wrote:
............. multiple back and forth snipped................
Does anybody think we're going to keep running thirty year old OS
architectures on a beast like this?

John
Unless and until something demonstrably better comes along. :)
 
FatBytestard

If it's absolutely in charge of the entire system, then it has to
be able to access all of hardware, including the other cores. This
implies that some kind of comm protocol and/or pathway has to go
from all those other cores to the master core. This sucks w.r.t.
performance. The system will run only as fast as the master core
can send/receive bits. All other cores will constantly be in
"master-core I/O wait".

You're an idiot. I suggest you examine the way the CELL BE CPU
operates its nine cores.

Speed wise, it beats nearly everything out there for broadbanded
data moving. So NO, it is NOT tied to the master core speed, and all data
does NOT have to pass by the master core.
So, AGAIN you are wrong!

Now, if you really do have a brain, and you really do have computer
science knowledge, you will examine this page, and come back here
apologizing to everyone for wearing horse blinders for the last 30 years.

Note that it moves 96 bytes per cycle across its main bus. Each core moves
16 bytes per cycle into and across that bus. No waiting for the master
CPU to manage the traffic. Can you really be that far removed from
reality?


http://domino.research.ibm.com/comm/research.nsf/pages/r.arch.innovation.html
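The per-cycle figures above are easy to sanity-check. A quick back-of-the-envelope in Python: the 96 B/cycle and 16 B/cycle numbers come from the post itself, while the 3.2 GHz clock rate is an assumption based on the commonly published Cell BE spec.

```python
# Back-of-the-envelope check of the Cell BE figures cited above.
# The per-cycle numbers are from the post; the clock is assumed.

EIB_BYTES_PER_CYCLE = 96    # aggregate width of the main (EIB) bus
CORE_BYTES_PER_CYCLE = 16   # each core's port into that bus
CLOCK_HZ = 3.2e9            # assumed Cell BE clock rate

eib_bw_gbs = EIB_BYTES_PER_CYCLE * CLOCK_HZ / 1e9
core_bw_gbs = CORE_BYTES_PER_CYCLE * CLOCK_HZ / 1e9

print(f"aggregate bus bandwidth: {eib_bw_gbs:.1f} GB/s")
print(f"per-core port bandwidth: {core_bw_gbs:.1f} GB/s")
```

At those assumed rates the aggregate bus moves roughly 307 GB/s, with each core's port good for about 51 GB/s, which is why no single master core has to shuttle all the traffic.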
 
TheQuickBrownFox

How many CPUs are being shipped that use CML? Hint: it's in the very,
very low single digits.

John


The point was that she has not been "current" for decades, and it is
pretty obvious.
 
FatBytestard

The only good low-endian is a dead low-endian.

John

I thought it was big and little, not low and high.

Fact is, in this realm, we are ALL "from the endian tribe".

OR "species" should have been the word for "endian", with "tribe" being
about "big or little" and "low or high".
 
TheQuickBrownFox

Another nym of the dimbulb troll:


Another pussy post from the dude whose manhood is waning so badly in
his late years that he has to stab out at others who are not brain
dead.

Stab out, Michael! Stab out. Just point the damned thing at YOUR
chest next time.

We don' need no stinkin' badgers here. Not that you could ever measure
up to even a lowly badger.

You are, however, nothing more than the lazy dog I jumped over to get
here.

Bwuahahahahahaha!
 
Charles Richmond

Greegor said:
John said:
[snip...] [snip...] [snip...]
Simple: use one CPU out of 256 as the ultimate manager, and waste as
much of its cycles as you like. CPUs will soon be free.
So instead of carrying placards that say "FREE THE CHICAGO SEVEN",
we'll be carrying placards that say "FREE THE INTEL 256"??? ;-)

Will one of them actually be GAGGED in the courtroom by the Judge's order?

Instead of "gagged by the judge", you get "deadlocked by the scheduler"
or "blocked by the arbitrator". ;-)

(If the I/O is blocked, the cpu goes to the bottom of the "ready" list.)
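That last parenthetical can be sketched as a round-robin ready list where a task blocked on I/O gets pushed to the back; the task names and the `is_blocked_on_io` predicate below are purely illustrative.

```python
from collections import deque

# A round-robin "ready" list: a task blocked on I/O goes to the
# bottom of the list instead of running. Names are just a gag on
# the placards above, not any real scheduler.

ready = deque(["chicago7", "intel256", "gagged_one"])

def is_blocked_on_io(task):
    # Pretend this one is stuck waiting on the arbitrator.
    return task == "gagged_one"

def schedule():
    """Pop the next task; a blocked one goes to the bottom of the list."""
    task = ready.popleft()
    if is_blocked_on_io(task):
        ready.append(task)      # bottom of the "ready" list
        return None             # nothing runnable this round
    return task

print(schedule())               # "chicago7" gets to run first
```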
 
TheQuickBrownFox

John said:
John Larkin wrote:

John Larkin wrote:

Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700

If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.

He's reinventing what didn't work well at all.
Times have changed, guys.
Not really. The proportions of speed seem to remain the same.

When cpu chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess,... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you equate 4 threads/core?

Furthermore,
it is extremely insecure to insist the computer system have
a single-point failure which includes the entire running
monitor.

Fewer points of failure must be better than many points.
You need to think some more. If the single-point failure is
the monitor, you have no security at all.

Few security
vuls must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about the size of the
code. You have to include its database and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.

Its MTBF, hardware and software, could
be a million hours.

That doesn't matter at all if the monitor is responsible for the
world power grid.



Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't take water nor falling into a fault caused
by an earthquake or a bomb or a United Nations quarantine
[can't think of the word where a nation or group are
declared undesirable].

The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSes, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.

/BAH

Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.
You did not read what I wrote. Those virtual OS spaces you were talking
about are applications w.r.t. monitor which is running.
How can an OS ever be reliable when twelve zillion Chinese video card
manufacturers are hacking device drivers that run in OS space?
You seem to be confusing OSes with monitors.

The top-level OS should be small, simple, absolutely in charge of the
entire system, totally protected, and never crash.

Why not?

If it's absolutely in charge of the entire system, then it has to
be able to access all of hardware, including the other cores. This
implies that some kind of comm protocol and/or pathway has to go
from all those other cores to the master core. This sucks w.r.t.
performance. The system will run only as fast as the master core
can send/receive bits. All other cores will constantly be in
"master-core I/O wait".

/BAH

All the kernel has to do is set up privileges and load tasks into
processors. It should of course do no applications processing itself
(which would imply running dangerous user code) and has no reason to
move application data either. Stack driver CPUs, device driver CPUs,
file system CPUs do the grunt work, in their own protected
environments. There can be one shared pool of main memory for the apps
to use, as long as access mapping is safe.

Memory can be shared as needed for processes to communicate. But
nobody can trash the kernel.

What's wrong with that?
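The division of labor described above, a kernel that only hands out work while workers do all application processing in their own spaces, can be sketched roughly as follows. Threads stand in for dedicated cores here, and every name is illustrative rather than taken from any real system.

```python
import queue
import threading

# Sketch of the scheme above: a tiny "kernel" that only assigns work
# and never touches application data, while workers do the grunt work.
# Threads stand in for dedicated cores; names are illustrative.

def worker(task_q, result_q):
    """A dedicated 'core': runs the application code, not the kernel."""
    while True:
        task = task_q.get()
        if task is None:                # kernel's shutdown sentinel
            break
        result_q.put(task * task)       # the actual application work

def kernel(tasks, n_workers=4):
    """Load tasks onto workers; do no application processing itself."""
    task_q, result_q = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=worker, args=(task_q, result_q))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for t in tasks:
        task_q.put(t)                   # hand out the work...
    for _ in workers:
        task_q.put(None)                # ...then tell everyone to stop
    results = [result_q.get() for _ in tasks]
    for w in workers:
        w.join()
    return results

print(sorted(kernel([1, 2, 3, 4])))     # [1, 4, 9, 16]
```

Note the kernel never reads or writes the task data itself; it only moves handles through the queues, which is the "no applications processing, no moving application data" property being argued for.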

Even Intel is headed in that direction: multiple (predicted to be
hundreds, maybe thousands) of multi-threaded cores on a chip,
surrounding a central L2 cache that connects to DRAM and stuff, each
with a bit of its own L1 cache to reduce bashing the shared L2 cache.

Does anybody think we're going to keep running thirty year old OS
architectures on a beast like this?

John

Whatever the OS of 10,560 is, it will probably be called OS/360;-)


For FOOL circle power computing!
 
TheQuickBrownFox

Greegor said:
John Larkin wrote:

[snip...] [snip...] [snip...]
Simple: use one CPU out of 256 as the ultimate manager, and waste as
much of its cycles as you like. CPUs will soon be free.
So instead of carrying placards that say "FREE THE CHICAGO SEVEN",
we'll be carrying placards that say "FREE THE INTEL 256"??? ;-)

Will one of them actually be GAGGED in the courtroom by the Judge's order?

Instead of "gagged by the judge", you get "deadlocked by the scheduler"
or "blocked by the arbitrator". ;-)

(If the I/O is blocked, the cpu goes to the bottom of the "ready" list.)


Kind of like some of the events that occur in this group.
 
FatBytestard

Wanna see what happens when fucktards get
their hands on Linux? It's called Red Hat.
I can't wait to see what MS-Linux is like.


You just proved that your nym does not match the person.

Red Hat does suck.... now. It was fine before they let bean counters
in with the programmers.

MS-Linux would be far better than fucking "Lindows" was.

Talk about snake oil!
 
Roland Hutchinson

John said:
Start thinking multi-CPU. The day of the
single-CPU-that-runs-everything are numbered, even in all but the
simplest embedded systems.

Imagine if your TCP/IP stack ran in its own processor, and never
interrupted your realtime stuff. Ditto another CPU scanning and
linearizing all the analog inputs. All the stuff the main program
needs just magically appears in memory. Wouldn't that make life
simpler?
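The dedicated-I/O-processor idea above can be sketched with a background scanner that keeps publishing linearized inputs into shared memory, so the main loop just reads values that have already "magically appeared". A thread stands in for the extra CPU, and `read_raw_adc` and `linearize` are made-up stand-ins for real hardware access and calibration.

```python
import threading
import time

# A dedicated "I/O core" (a thread here) scans and linearizes the
# analog inputs continuously; the main program just reads memory.

inputs = {}                      # shared memory the main program reads
lock = threading.Lock()

def read_raw_adc(channel):
    return channel * 100         # stand-in for a real ADC read

def linearize(raw):
    return raw / 1023 * 5.0      # stand-in for a real calibration

def io_core(stop):
    while not stop.is_set():
        for ch in range(4):
            value = linearize(read_raw_adc(ch))
            with lock:
                inputs[ch] = value
        time.sleep(0.001)        # scan period

stop = threading.Event()
threading.Thread(target=io_core, args=(stop,), daemon=True).start()

time.sleep(0.01)                 # main program does its own work...
with lock:
    snapshot = dict(inputs)      # ...and the data is already there
stop.set()
print(snapshot)
```

The main loop never services an interrupt or touches the ADC; it only takes a locked snapshot, which is the "never interrupted your realtime stuff" property being described.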

This starts to sound like another turn of the venerable Wheel of
Reincarnation ( http://catb.org/~esr/jargon/html/W/wheel-of-
reincarnation.html ), only implemented in general-purpose hardware.

--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
.... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )
 
Ahem A Rivet's Shot

Indeed, Windows 7 (of which you can download the final beta and run it
for free for the next year or so) is widely held to be, as advertised,
the most secure Microsoft operating system ever.

ISTR hearing XP described in exactly those terms around the time it
was being launched.
 