Maker Pro

My Vintage Dream PC


Patrick Scheible

John Larkin said:
Which doesn't even try to answer the question.

The answer is that the question contains assumptions that are probably
incorrect.

-- Patrick
 

Scott Lurndal

John Larkin said:
I learned a lot from what was successful. I've run timeshare systems,
written RTOSs and compilers and a million or two lines of code, run
all sorts of pc's from Commodore to WinXP, and designed a lot of
computer systems and interfaces and electronics. I even interfaced an
ADC (that I designed) to an IBM 1401, and this week I'm interfacing an
ADC (that I didn't design) to a Linux board via PCI Express.

None of this experience seems to be reflected in your writing.
Until you've written a couple of general-purpose multiprocessor
operating systems, you should actually try some of your ideas out before
espousing them.
What have you done?

A multiprocessor mainframe OS, a distributed unix-based OS on an MPP box,
bits of linux development (kdb, raw scsi devices, et. al.) and two SMP
hypervisors supporting systems with 256 processors and 1TB of memory.

The mainframe OS has been continuously processing ATM transactions and
sorting checks since 1984.
The question I raised was merely, what do you think OSs will be like
in 10 or 20 years, given that computer chips are headed to multiple
CPU cores (tens, hundreds) surrounding a central cache?

The fundamental problem with the idea of an operating system that runs only on
a dedicated core is that it ignores the realities of current computer
architectures such as x86, where the interactions of the operating system and
applications occur via hardware-based privilege-transfer mechanisms (call
gates, interrupt gates). By definition, the operating system must run on the
same core as the transfer mechanism, and once it is running, transferring
control to another processor just adds latency and overhead.

Now, various operating system implementations (mostly microkernel-based),
such as Chorus and Mach, would eliminate that by passing messages between
subsystems, but that behavior wasn't exported to legacy operating environments
(such as Unix-like environments), where system calls are still used to
request service from the OS.

Message passing systems aren't known for speed.
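To put a rough number on that hand-off cost, here is a small Linux-only C
sketch (a rough illustration, not anything Scott posted): it times a trivial
system call made directly on the calling core against handing the same request
to a worker thread pinned to a second core and spinning for the reply. The
core numbers, the spin-loop protocol and the iteration count are arbitrary
choices for the illustration; build with -pthread.

    /* Illustrative only: same-core trap vs. cross-core hand-off.
     * Assumes Linux, glibc, and at least two online CPUs (0 and 1). */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    #define ITERS 100000

    static atomic_int request;            /* 0 = idle, 1 = pending, 2 = done */

    static void pin_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof set, &set);
    }

    static void *server(void *arg)        /* stands in for the "OS core" */
    {
        (void)arg;
        pin_to_cpu(1);
        for (;;) {
            while (atomic_load(&request) != 1)
                ;                         /* wait for a request */
            syscall(SYS_getpid);          /* perform the "service" */
            atomic_store(&request, 2);    /* post the reply */
        }
        return NULL;
    }

    static long long elapsed_ns(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
    }

    int main(void)
    {
        struct timespec t0, t1;
        pthread_t tid;

        pin_to_cpu(0);                    /* the "application core" */

        /* Same-core path: trap straight into the kernel. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++)
            syscall(SYS_getpid);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("same-core syscall  : %lld ns/op\n", elapsed_ns(t0, t1) / ITERS);

        /* Cross-core path: ask a thread on another core to do the same thing. */
        pthread_create(&tid, NULL, server, NULL);
        sleep(1);                         /* crude: let the server pin itself */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++) {
            atomic_store(&request, 1);
            while (atomic_load(&request) != 2)
                ;                         /* spin for the reply */
            atomic_store(&request, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("cross-core hand-off: %lld ns/op\n", elapsed_ns(t0, t1) / ITERS);
        return 0;
    }

Since the worker still ends up making the same trap, whatever the hand-off
costs shows up as pure added latency, which is the point being made above.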

scott
 

Scott Lurndal

Patrick Scheible said:
Much like racing stripes on a 1980s Pontiac?

Actually, I don't think even C++'s proponents say it's faster to
compile or run. They hope it's faster or easier to write.

Properly written C++ is no slower to run than C, either. However,
properly written C++ became impossible after C++ 2.1 (i.e. when
templates, exceptions and the STL came about).

scott
 

Joe Pfeiffer

Mensanator said:
Really? What state do you think it would be?

Back when I was more current on DRAM, half the bits would use 0V to
represent 0 and half would use it to represent 1 to better balance power
requirements.
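Joe is describing what DRAM folks call true cells and anti-cells: half the
array stores its bit inverted, and the data path undoes the inversion, so the
stored charge pattern is balanced while software sees nothing unusual. A
minimal C sketch of the idea; the alternating-bit mask is purely illustrative,
since real parts pick the inversion pattern to suit the physical layout:

    /* Sketch of true-cell / anti-cell storage: bits covered by the mask
     * are kept inverted in the array and flipped back on the way out.
     * The mask value is hypothetical, chosen only for the example. */
    #include <stdint.h>
    #include <stdio.h>

    #define INVERT_MASK 0xAAu   /* pretend every other bit is an anti-cell */

    static uint8_t to_cells(uint8_t data)    { return data ^ INVERT_MASK; }
    static uint8_t from_cells(uint8_t cells) { return cells ^ INVERT_MASK; }

    int main(void)
    {
        uint8_t data  = 0x00;               /* software writes all zeros...      */
        uint8_t cells = to_cells(data);     /* ...but half the cells hold charge */
        printf("charge pattern 0x%02X reads back as 0x%02X\n",
               cells, from_cells(cells));   /* prints 0xAA and 0x00 */
        return 0;
    }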
 

Peter Flass

Archimedes' Lever said:
One instruction? - Pull Power Cord.

ALL ZEROS.

I win.

No, you lose. Remember, this is *core*, not that newfangled
semiconductor stuff. When you plug it back in, the memory is still
there. ISTR some people running a standalone dump after a power-up to
see what was happening when it shut down.
 

Walter Bushell

John Larkin said:
We do nightly backups to local hard drives and zero-based weekly
backups of *everything* to DVDs. The backups get stashed in three
different locations in two different cities. Seems to work so far.

John

A mere terabyte of data takes up something like 200 DVDs; do you have an
automatic writer?
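The arithmetic roughly checks out: a single-layer DVD holds 4.7 GB, so a
terabyte is about 1000 / 4.7, or roughly 213 discs, and still around 120 on
dual-layer 8.5 GB media. A weekly full backup of that size to DVD is a lot of
disc swapping without a changer.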
 

Peter Flass

Patrick said:
I'm not sure more than 4 cores or so will really catch on. The
benefits seem so marginal, for ordinary use. For some games, maybe.
To achieve reliability for a 24x7 application, yes. For certain
applications that parallelize well, yes. But ordinary work isn't
waiting on CPU power much, and typically doesn't parallelize
well anyway. Maybe you have a couple of large applications each
sitting on a core, another one for the OS and GUI, but after that I'm
not sure what there is for them to do.

One of the goals of AMD when they bought ATI was to use their designs to
incorporate a video controller on the CPU chip. Maybe we will wind up
with a lot of special-purpose processors on one chip, in addition to
video, maybe networking, possibly controllers for other I/O?
 

Joe Pfeiffer

Peter Flass said:
One of the goals of AMD when they bought ATI was to use their designs
to incorporate a video controller on the CPU chip. Maybe we will wind
up with a lot of special-purpose processors on one chip, in addition
to video, maybe networking, possibly controllers for other I/O?

That seems like a really good guess -- increasing integration has been
the trend ever since they first put two transistors in a Darlington
configuration. We'll be seeing more and more capable SoCs coming along
for the desktop market all the time.
 

Archimedes' Lever

You've never actually worked with real core, have you, Grasshopper?

I have a photo here somewhere of a hand-loomed core array I unpotted,
stuffed into a small case, that actually stored an event on a ruggedized
power supply.

My remark holds. With the power cord pulled, you will not be able to
read any logic, therefore all is zero. If you apply a device to read the
logic level, or re-energize the unit, you are no longer in the "Pull
Power Cord" state.
 

Archimedes' Lever

On Mon, 01 Jun 2009 19:45:50 -0700, Archimedes' Lever


Nope.
True CORE memory holds its state across power failures.
And RAM memory may well come up in a random state.


While it is in "Pull Power Cord" state, there will be no logic levels.
If you apply a device to read them or re-energize the unit, you are no
longer in "Pull Power Cord" state. My statement still stands.
 

Archimedes' Lever

You always make claims to know stuff,

I do know stuff. I saw the movie. I wish idiots like you, and the
RichTard would eat some.
and chicken out when asked for
simple details.

You're an idiot.
Like the CML weaseling.

I read about CML years ago in EE Times. I constantly follow emerging
technologies, and even occasionally get to incorporate some in our
designs. I usually keep to current gear, though, as that's what is on order.

Get off your little high horse, Johnny.
Your technical knowledge is
pretty much word salad.

You have no basis for your claim, since you do not know me, idiot. That
is why you are on the idiot list.
 

Archimedes' Lever

And even in semiconductor memory, I don't think zero voltage means a
logical state of zero.

-- Patrick


Of course it does. TTL thresholds are a window of acceptable voltage
values. That is why things like "false highs" and "false lows" are
possible, and were more prevalent with 5 volt logic.

It's not nearly as hard to keep false values out at lower-voltage logic
levels, and the transition can occur faster too, so 3.3 volt logic is
faster than 5 volt due to slew rate alone.

Pretty sure 'zero' is low voltage, and "one" is a gain in voltage over
the low logic state. I suppose inverted logic is used as well, though.
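Putting rough numbers on that: standard TTL inputs treat anything below 0.8 V
as low and anything above 2.0 V as high, so the roughly 1.2 V band in between
is undefined, and that is where "false highs" and "false lows" live. As for
slew rate, at an assumed 1 V/ns a full 5 V swing needs about 5 ns while a
3.3 V swing needs about 3.3 ns, which is the speed argument made above. And
the usual convention is indeed positive logic (low voltage = 0); inverted,
active-low signaling is the exception rather than the rule.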
 

Archimedes' Lever

20+ years ago we were doing real-time NTSC DVE with 768 KB of 12-bit
RAM, on a Z80B. It was a Vital Industries 'Squeeze Zoom'. It filled a
full relay rack, and had a 1000 amp 5 volt power supply, along with
multiple supplies for the analog circuits. It was an amazing design
for the mid '80s. It could also take non-broadcast-quality video and
time-base correct it for broadcast.


But you stored it on tape, not hard drives.
 

Archimedes' Lever

You haven't zeroed anything. Try again.

/BAH


With the machine in "Pull Power Cord" state, all are at zero. If you
attach a device to read the logic levels, or re-energize the unit, you
are no longer in "Pull Power Cord" state.

Bah indeed. Bah ha ha, in fact.
 

Archimedes' Lever

If you want to crash this stuff, play dumb and try to
find the hardware configurations. I'm a complete
dufus w.r.t. using GUI shit. So I try clicking here
and there just to get a listing of the hardware the
system has. Crashes every time I try to do this.
I still have no idea how I crash it (which bugs
me...I was paid to notice how to reproduce crashes).

/BAH


You are so full of shit, your eyes are brown, and there is a foul stench
emanating from your mouth, nose, and ears.
 

Archimedes' Lever

Even that is hyperthreaded, is it not? -- so that the processor appears to
the operating system as two processors even though it has a single physical
core.

I don't think so (the "appears as two processors" part).


One has two physical cores, the other only one. Both hyperthread.

The OS likely only sees one of them (the dually) as a multi-processor
installation. One never knows, however.
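This is easy to check on a Linux box: /proc/cpuinfo reports both "siblings"
(logical CPUs per package, hyperthreads included) and "cpu cores" (physical
cores per package), so a single hyperthreaded core shows up as siblings = 2,
cpu cores = 1. A small C sketch, assuming Linux on x86:

    /* Compare what the OS schedules on (logical CPUs) with the number of
     * physical cores, using /proc/cpuinfo. Linux/x86 assumed; only the
     * first package is examined, which is enough for the illustration. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];
        int siblings = -1, cores = -1;
        FILE *f = fopen("/proc/cpuinfo", "r");

        if (!f) {
            perror("/proc/cpuinfo");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            if (siblings < 0) sscanf(line, "siblings : %d", &siblings);
            if (cores < 0)    sscanf(line, "cpu cores : %d", &cores);
        }
        fclose(f);

        printf("online logical CPUs : %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
        printf("siblings per package: %d\n", siblings);
        printf("physical cores/pkg  : %d\n", cores);
        return 0;
    }

With hyperthreading enabled, either machine presents two or more logical CPUs
to the scheduler; the difference is whether those logical CPUs share one
core's execution resources.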
 

Clifford Heath

JosephKK said:
Maybe, maybe not; traditional big iron hit the wall at 4 to 6 GP CPUs,
and I see no change in the fundamentals that will enable kilocores
that do not end up like MP supercomputers.

Pyramid Computers (RIP) was a counterexample. We had clients
with 20-40 CPUs, and they scaled almost linearly.

The point here though is that serious parallel programs are very
expensive to write, so it only happens in infrastructure where the
payoff is large - DBMS, server farms, some graphics. Not "normal"
programs at any rate.
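Amdahl's law puts rough numbers on why the payoff matters: with a parallel
fraction p running on n processors, speedup = 1 / ((1 - p) + p/n). At
p = 0.95, 40 CPUs give about 13.6x and the ceiling is 20x no matter how many
cores you add; even at p = 0.99, a thousand cores buy only about 91x.
Near-linear scaling on 20-40 CPUs, as on the Pyramid machines, already implies
a very small serial fraction, and kilocores demand a smaller one still.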

Teaching programmers to write in languages like Erlang is the only
possible path around the cost differential.

Clifford Heath.
 