Patrick Scheible
John Larkin said: Which doesn't even try to answer the question.
The answer is that the question contains assumptions that are probably
incorrect.
-- Patrick
John Larkin said: I learned a lot from what was successful. I've run
timeshare systems, written RTOSs and compilers and a million or two
lines of code, run all sorts of PCs from Commodore to WinXP, and
designed a lot of computer systems and interfaces and electronics. I
even interfaced an ADC (that I designed) to an IBM 1401, and this week
I'm interfacing an ADC (that I didn't design) to a Linux board via PCI
Express.
What have you done?
The question I raised was merely, what do you think OSs will be like
in 10 or 20 years, given that computer chips are headed to multiple
CPU cores (tens, hundreds) surrounding a central cache?
Patrick Scheible said: Much like racing stripes on a 1980s Pontiac?
Actually, I don't think even C++'s proponents say it's faster to
compile or run. They hope it's faster or easier to write.
Mensanator said: Really? What state do you think it would be?
Archimedes' Lever said: One instruction? - Pull Power Cord.
ALL ZEROS.
I win.
John Larkin said: We do nightly backups to local hard drives and zero-based weekly
backups of *everything* to DVDs. The backups get stashed in three
different locations in two different cities. Seems to work so far.
John
Patrick said: I'm not sure more than 4 cores or so will really catch
on. The benefits seem so marginal for ordinary use. For some games,
maybe. To achieve reliability for a 24x7 application, yes. For certain
applications that parallelize well, yes. But ordinary work isn't
waiting on CPU power much, and typically doesn't parallelize well
anyway. Maybe you have a couple of large applications each sitting on
a core, another one for the OS and GUI, but after that I'm not sure
what there is for them to do.
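[Editorial sketch, not part of the thread: Amdahl's law puts rough
numbers on the diminishing returns Patrick describes. The parallel
fractions below are illustrative assumptions, not measurements of any
real workload.]

/* Amdahl's law: speedup is capped by the serial fraction of the work. */
#include <stdio.h>

static double speedup(double parallel_fraction, int cores)
{
    /* The serial part, (1 - p), never gets faster no matter how many
       cores are added. */
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.75, 0.90 };  /* assumed workloads */
    const int cores[] = { 2, 4, 8, 64 };

    for (size_t i = 0; i < sizeof fractions / sizeof fractions[0]; i++) {
        for (size_t j = 0; j < sizeof cores / sizeof cores[0]; j++)
            printf("p=%.2f  %2d cores: %.2fx   ",
                   fractions[i], cores[j],
                   speedup(fractions[i], cores[j]));
        printf("\n");
    }
    return 0;
}

[A workload that is only 50% parallel cannot reach even a 2x speedup
with 64 cores, which is roughly the marginal benefit being argued
about here.]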
Peter Flass said: One of the goals of AMD when they bought ATI was to use their designs
to incorporate a video controller on the CPU chip. Maybe we will wind
up with a lot of special-purpose processors on one chip, in addition
to video, maybe networking, possibly controllers for other I/O?
That's one heck of a Linux box.
You've never actually worked with actual core, have you, Grasshopper?
On Mon, 01 Jun 2009 19:45:50 -0700, Archimedes' Lever wrote:
Nope.
True CORE memory holds its state across power failures.
And RAM memory may well come up in a random state.
You always make claims to know stuff, and chicken out when asked for
simple details. Like the CML weaseling. Your technical knowledge is
pretty much word salad.
And even in semiconductor memory, I don't think zero voltage means a
logical state of zero.
-- Patrick
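[Editorial sketch, not from the thread, of one reason zero volts need
not mean logical zero: with active-low signaling the de-asserted level
is HIGH, so a bit that reads 0 can be a logical "true". The register
name and bit position below are made up for illustration.]

#include <stdio.h>
#include <stdint.h>

#define STATUS_READY_N  (1u << 3)   /* hypothetical active-low READY bit */

int main(void)
{
    uint8_t status = 0x00;          /* every line sitting at 0 volts */

    /* Bit 3 reads as 0, but because the signal is active-low that
       means READY is asserted, i.e. a logical "true". */
    int ready = !(status & STATUS_READY_N);
    printf("raw bit = 0, logical READY = %d\n", ready);
    return 0;
}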
20+ years ago we were doing real-time NTSC DVE with 768 KB of 12-bit
RAM, on a Z80B. It was a Vital Industries Squeeze Zoom. It filled a
full relay rack, and had a 1000-amp 5-volt power supply, along with
multiple supplies for the analog circuits. It was an amazing design
for the mid '80s. It could also take non-broadcast-quality video and
time-base correct it for broadcast.
You haven't zeroed anything. Try again.
/BAH
Actually, he doesn't know what a zero bit is.
/BAH
If you want to crash this stuff, play dumb and try to
find the hardware configurations. I'm a complete
dufus w.r.t. using GUI shit. So I try clicking here
and there just to get a listing of the hardware the
system has. Crashes every time I try to do this.
I still have no idea how I crash it (which bugs
me...I was paid to notice how to reproduce crashes).
/BAH
Even that is hyperthreaded, is it not? -- so that the processor appears to
the operating system as two processors even though it has a single physical
core.
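[Editorial sketch, assuming a Linux/POSIX system, not from the thread:
the OS counts logical processors, so a hyperthreaded single-core chip
is reported as two.]

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Number of logical processors the OS currently sees online. */
    long online = sysconf(_SC_NPROCESSORS_ONLN);
    printf("logical processors visible to the OS: %ld\n", online);
    /* On a single physical core with hyperthreading enabled this
       typically prints 2. */
    return 0;
}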
JosephKK said: Maybe, maybe not. Traditional big iron hit the wall at
4 to 6 general-purpose CPUs, and I see no change in the fundamentals
that will enable kilocores that don't end up like MP supercomputers.