Maker Pro

OT Dual core CPUs versus faster single core CPUs?

I use Foxit and CutePDF. Both fast, bulletproof, and free.



Yes. So make the software, especially the OS, simpler. Vista tried
going partly towards the "small kernal" approach for reliability, but
took a big hit on performance by moving the graphics stuff out of
kernal space. If it ran in its own CPU, there would be no penalty. Any
"big kernal" OS (like Windows or Linux) will spend a lot of time
context switching, stack swapping, reloading memory management
hardware, doing interrupts, all the junk you'd not have to do if there
were a CPU per process.

Microsoft's approach to multicore is to make things more complex, not
less. Hell, Microsoft's approach to everything is to make it more
complex. Ironic that the biggest software company on the planet writes
garbage software.



XP seems fairly solid, in most installations. Being a Microsoft
product, there are occasional systems that crash often, for no obvious
reason. XP does boot up a lot faster than 2K.

John


The word is KERNEL.
 
The supervisory CPU can certainly detect some kinds of malfunction of
the program running in the secondary cores... but the ability to
detect _all_ sorts of software malfunction, accurately, and shut down
just those instances which are truly malfunctioning, without
occasionally slaughtering ones which are working correctly, simply
does not and apparently cannot ever exist.


You have the same processes running on a redundant system, tick for
tick. Then, a third oversight code loop looks for stopped processes on
paired sets of computers. When and if it finds one computer that still
is running or still wants to step forward, and the other is frozen or
latched or broken or GONE, the "still running" computer is allowed to
continue. This redundancy works, of course, through only one single
catastrophic failure iteration. If the failure was not catastrophic, the
now fixed computer could be brought back up, and "given" a mirror of the
computer state that it was paired with, and try to re-integrate it as the
redundant set for that pair.
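A minimal sketch of that oversight loop, with every name invented for illustration: each member of a redundant pair reports a monotonically increasing tick counter, and a third supervisor fails any member that stops stepping forward, letting its still-running partner continue.

```python
STALL_LIMIT = 3  # polls a member may go without advancing before it is failed

class Member:
    def __init__(self, name):
        self.name = name
        self.tick = 0        # last counter value reported by this computer
        self.stalls = 0      # consecutive polls with no forward progress
        self.failed = False

def poll(member, new_tick):
    """Record a tick report; a frozen or latched member stops advancing."""
    if new_tick > member.tick:
        member.tick = new_tick
        member.stalls = 0
    else:
        member.stalls += 1
        if member.stalls >= STALL_LIMIT:
            member.failed = True

def survivor(a, b):
    """Return the member allowed to continue alone, or None if neither
    (both healthy: keep running paired; both failed: nothing to save)."""
    if a.failed and not b.failed:
        return b
    if b.failed and not a.failed:
        return a
    return None
```

As the text says, this only rides through one catastrophic failure per pair; re-integrating a repaired member means mirroring the survivor's state back into it.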

When I used to build simulator racks for MD, we did 17 racks (times two,
actually, so 34) for the computers considered mission-critical on the
C-17 (17 out of 54). We made the cables that interconnect the systems
pipe up into the rack and through a peg board that let them create any
short or open combination on a system. The front of each rack had
Delrin slide mounts for two of the computers to slide in, plus the peg
board. So we had a data recorder rack, a HUD rack, etc., etc.
They made sure that their master software would kick in the good computer
in the event of a failure of either unit of any given pair.

So it can be done. The problem is with the master control software one
must write to manage the decision making process as to whether a given
process or computer has stopped running.

That is why I think my distributed computing network scenario would be
better. Then the entire process tree can be assured that it will be
running somewhere, and continue to run somewhere should the current
machine stop working.

Catching a single bit latch lockup is hard to do in a timely manner
however. One reason why we have radiation hardened devices.
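The usual fix for single-bit upsets, and the idea behind a lot of radiation-hardened design, is triple modular redundancy: run three copies and majority-vote every output bit, so one flipped bit is outvoted. A toy sketch (illustrative only):

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote of three redundant words: each output bit is
    the value held by at least two of the three inputs, so a single upset
    bit in any one copy is outvoted."""
    return (a & b) | (a & c) | (b & c)

# A single-event upset flips bit 4 in one copy; the vote restores it.
good = 0b1010_1010
upset = good ^ 0b0001_0000
```

The catch, as noted above, is latency: the voter only helps at the points where you actually vote.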

When you absolutely, positively must have the bits... all of the bits...
and only that set of bits... delivered.
 
I've never taken a programming course. Or one in digital logic.

John


OK. Then my response to the prior horseshit post is this:


In that case, you saw it spelled INCORRECTLY two out of the three
"ways" you claim to have "seen it spelled".
 
R

rickman

I use Foxit and CutePDF. Both fast, bulletproof, and free.

I assume you are aware that neither of these programs is the same as
Acrobat. CutePDF is just a reader. It is also not open source, it is
just free. Likewise Foxit is not open source, it has more features,
but they put "evaluation" stamps on pages you edit. I wonder if I can
remove the stamps with Acrobat... that would be ironic, not to mention
moronic.

I guess the point is that you have a choice of free PDF readers.
Yes. So make the software, especially the OS, simpler. Vista tried
going partly towards the "small kernal" approach for reliability, but
took a big hit on performance by moving the graphics stuff out of
kernal space. If it ran in its own CPU, there would be no penalty. Any
"big kernal" OS (like Windows or Linux) will spend a lot of time
context switching, stack swapping, reloading memory management
hardware, doing interrupts, all the junk you'd not have to do if there
were a CPU per process.

You still have not explained why multiple CPUs are required to make
software more robust. I don't see anything you've said that does not
apply equally to software running on a single CPU. As to the
"penalty", in the case of multi CPUs, the penalty is that each of
these CPUs will run an order of magnitude slower than the single CPU,
perhaps even slower than that. I guess that if we are hitting a wall
for processor performance, then multi processors can be made nearly as
fast as a single processor no matter how much room you have on the die.
But I say there is no need to make CPUs faster. Criminy! These
things do literally ***billions*** of things per second. Just how
much does it take to put images on a CRT or to recalculate a spread
sheet or to display a web page??? I seem to recall that all of these
things ran perfectly well on a 200 MHz CPU 10 years ago. Do I really
have *ANY* need for 10 CPUs each running at 3 GHz??? Not if the
software were written to run better.

I find it funny when a PC takes some seconds to do something and when
I ask one of the other engineers what the heck is the thing doing, I
get answers like, "It takes a lot to do X". Does it really take
****BILLIONS**** of steps to do anything that is not supercomputer
stuff??? I seem to recall that Cray super computers were not as fast
as today's PCs. I know I worked on an array processor that was second
only to the Cray in speed at the time and it did 100 MFLOPS. Now CPUs
exceed that by an order of magnitude and they still trip over their own
feet when displaying a web page!!! No, it's not the hardware, it is
the software. We could all live rich, full lives with 100 MHz 486s if
they would just write the software to run efficiently. Well, maybe
not 486s, but you get the idea. As a case in point, I am still using
a machine I built some 6 years ago and it was a budget build with all
the cheapest and pretty much slowest components of the time. It seems
to do the job just fine even now. The only issue is when various
software bogs it down and sucks the CPU cycles dry... like Acrobat!

Ok, rant over. But you have to admit that this is a little more
interesting than arguing over how to spell chernaell.
 
R

rickman

For which I am extremely grateful.


CutePDF is a PDF creator; it works as a printer driver. It's so fast
that, the first few times you run it, you think it's not working. It is.

Sorry, I was looking at Foxit when I wrote that. My point is that
none of the open source PDF programs actually do what Acrobat does.
The commercial programs that do are paid for programs and I have no
idea if they work any better. Foxit puts stamps on each page it
modifies. I also see similar crappy behavior to Acrobat. For
example, every time I switch focus away from the program and back, the
toolbars become disorganized and take up four rows instead of two. If
the bleeding program can't even remember where I put the toolbars, how
well is it going to handle the hard stuff?

So for the short term, I am stuck with Acrobat. As crappy as it is,
it is the devil I know.

I didn't say it was required. I said that it would, if properly done,
simplify the overall structure and make it easier to build a
crash-proof OS. Of course, anybody with enough programmers and
advanced tools can screw up any architecture.

I don't mean to be rude, but that is not saying anything. "IF
properly done" is a catch phrase that eliminates any meaning. If
operating systems were "properly done" we wouldn't even be having this
discussion. But if you have some basis for saying that multi-
processors actually facilitate correctly working software, I would like
to hear it. So far everything you have posted applies to any
software, on single processors as well as multi-processors.

Why? Transistors keep getting faster, and the real limit to cpu power
is thermal. A CPU doesn't take many transistors, so doing X amount of
computing dissipates about the same power whether it's done on 1 CPU
or 256. Less, if you avoid context switching overhead.

Transistors may get faster, but systems don't. The critical limits
are being reached where logic built at smaller geometries does *NOT*
get faster. I thought this was discussed already. The smaller
geometries require lower voltages, and the lower voltages are reaching
the point where the transistors no longer switch fully on and off.
This is limiting the improvements they can make in speed. Why else
have CPUs been limited to about 3 GHz for the last 5 or 6 years?

Transistors are getting smaller, but the circuits they are in are not
getting significantly faster.

I guess that if we are hitting a wall


Exactly my point. We don't need more speed, and we sure don't need
more complexity. Let's use all those transistors to make things
simpler and more reliable. WIntel was and still is decades behind
other architectures in applying fundamental hardware protections to
computing, hence atrocities like buffer overflow exploits.

I hate to tell you, but using hundreds of processors *is* complexity
compared to using one processor.

A request to print should start printing NOW. It would if a CPU was
assigned to do nothing but print your file.

Why do you say that? Printing on my 1.4 GHz machine often is limited
by the CPU speed. On a chip with hundreds of CPUs, each one will be
slower than a single CPU using the same technology. So that print job
will take longer to process.

Does it really take


Acrobat and Windows are salient examples of what's broken about
computing. Imagine grannie trying to zap a trojan by editing the
registry!

I agree 100% with that statement. But nothing you have said relates
to multi-processors.

Your point about Granny is very valid. I have a friend who is in his
70's. He is the sort that is self taught and been a welder, AC
repairman, electrician and general all around mechanic and machinist.
So the man is not at all dumb. But he has no interest in learning
about computers. They are basically wicked, cantankerous, malodorous
piles of scrap metal. They don't follow any line of normal reasoning
and can only be handled in unique ways that require you to learn a new
mindset. There are some things he would like to do with a PC, such as
record his cassette tapes on to CD. But I can't figure out a way to
educate him on how to do this sort of stuff.

Rick
 
R

rickman

I use Acrobat v4 for creating PDF's and v7 for reading new stuff that
balks at v4. I also use v7 for "signing".

If your toolbars are moving it's most likely cockpit error. Set them
the way you want, then exit and restart so that your settings are
preserved.

Yes, of course, I should have known that software was not capable of
having problems. The source of the trouble has to be me! I guess
that also applies to Microsoft programs!
 
J

JosephKK

My understanding is that these days, it's possible to get more
computing power per watt using a multicore approach. Going to higher
and higher speeds (per core) requires the use of a smaller and smaller
feature size on the chip, and this can increase static power losses.
Lower operating voltages are required to keep from popping through the
thinner insulating layers on the chip, and this generally means that
higher currents are required, which means that I^2*R losses go up and
overall power efficiency goes down.

Using a somewhat lower-technology silicon process with lower losses,
and replicating it several times, can yield the same amount of
computing power at a lower level of energy consumption.
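A back-of-envelope sketch of that trade, using the standard CMOS dynamic-power model P ≈ C·V²·f. The numbers are made up for illustration; no real part is being described.

```python
def dynamic_power(c_eff, vdd, freq):
    """Switched-capacitance model of CMOS dynamic power, in watts:
    P ~ C_eff * Vdd^2 * f."""
    return c_eff * vdd**2 * freq

C = 1e-9  # effective switched capacitance per core, farads (illustrative)

# One core at 3 GHz, which needs 1.3 V to hit that clock...
single = dynamic_power(C, 1.3, 3e9)

# ...versus four cores at 1 GHz each (roughly the same total throughput),
# which can run at a lower supply voltage thanks to the relaxed clock.
quad = 4 * dynamic_power(C, 1.0, 1e9)

print(single, quad)  # the quad delivers the throughput for less power
```

Because voltage enters squared, trading clock speed for core count wins on watts even before counting the reduced leakage of a relaxed process.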

For desktop consumers this may not be all that significant an issue.
For server farms, where the electric bill can be a major portion of
the total expense over time, it can make a big difference. For laptop
owners, it may extend battery run-time or reduce battery weight
significantly.

There would be additional benefits if the chip (or the ACPI BIOS or
the operating system) can halt or even power down the extra core(s)
except when their services are needed.


Any application whose processing can be partitioned, and run in
phases, is a candidate. Image processing (e.g. tweaking photos or
artwork) or audio digital signal processing (e.g. running streaming
MP3 or Ogg Vorbis encoders for a Shoutcast/Icecast stream server)
would be candidates. Cryptographic acceleration (e.g. SSL
connections) might be another.

I think that a significant benefit can arise if the code being run on
any given processor, will fit well into a single core's instruction
cache. If the OS is smart enough to "lock" the thread in question
into a single core, you can avoid a whole lot of icache misses and
reloading which would occur on a single-core processor, and this can
allow the core to run at closer to its theoretical limit.
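On Linux, that "locking" is a CPU-affinity call. A hypothetical helper around it (`os.sched_setaffinity` is Linux-only, so this sketch degrades to a no-op elsewhere):

```python
import os

def pin_to_core(core_id, pid=0):
    """Pin the given process (pid 0 = the caller) to a single CPU core so
    its code and data stay warm in that core's caches. Returns the new
    affinity set on Linux, None on platforms without sched_setaffinity."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(pid, {core_id})
        return os.sched_getaffinity(pid)
    return None
```

The same call, aimed at a runaway process instead of your own, is also how you confine a 100%-CPU offender to one core.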

Data is cached too, and then there is total memory bandwidth, which is
what makes doing video so hard. See Amdahl's Law.
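Amdahl's Law in a few lines, to put numbers on that limit:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's Law: the serial fraction of the work caps the overall
    speedup no matter how many cores you add."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

# With 90% of the work parallelizable, even 256 cores buy less than a
# 10x speedup -- and shared memory bandwidth usually bites before that.
print(amdahl_speedup(0.9, 256))
```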
 
J

JosephKK

Does anyone know what the difference is between an Intel Dual-Core and
the Core 2 Duo? Is one 32bit and the other 64?

This here machine has a dual core and it really shows up as two separate
CPUs in the control panel.

Actually, it seems to be power dissipation.
 
J

Joerg

JosephKK said:
Does anyone know what the difference is between an Intel Dual-Core and
the Core 2 Duo? Is one 32bit and the other 64?

This here machine has a dual core and it really shows up as two separate
CPUs in the control panel.

Actually, it seems to be power dissipation.


In that respect I was positively surprised. Did a few >1/2h heavy duty
SPICE sims lately and the CPU fan barely picked up speed. It's very quiet.
 
T

Tom Del Rosso

John Doe said:
Not trying to decide which is better for everybody. Just interested
in the difference between multiple core CPUs and faster single core
CPUs.

Are there any mainstream applications that would benefit from one
and the other?

My wild guess. Continuous multitasking versus intermittent bursts
(if the bursts usually do not coincide). But I don't know of any
applications that exemplify that.

I have seen cases where a process takes 100% of both CPUs and you can't kill
the process. In those cases you can set the affinity to only one CPU, so
the other CPU is free to let you work on fixing the problem.
 
T

Tom Del Rosso

John Larkin said:
The real use for multiple cores will to be to assign one function per
core. One would be the OS kernal, and only that, and would be entirely
protected from other processes. Other cores could be assigned to be
specific device drivers, file managers, TCP/IP socket managers, things
like that. Then one core could be assigned to each application process
or thread. Only a few cores need floating-point horsepower, which
might be hardware shared. Power down idle cores. Voila, no context
switching, no memory corruption, and an OS that never crashes.

Microsoft won't like it at all.

John

They will find a way to crash it. The processes will need some transfer of
data and control.
 
T

Tom Del Rosso

John Larkin said:
What else are you going to do with 1024 CPU's on a chip?

Run increasingly bloated programs on severely hardware-protected processors,
whether you want to or not.
 
J

Joerg

Jim said:
1/2 hour is NOT "heavy-duty" ;-)

Ok, it's not like your chip designs. Just came across a puzzler. A
circuit works on the screen, puts out lots of energy but doesn't consume
any. I can set the voltage source series resistance to 1M or whatever
and nothing changes. Hey, I might really be on to something here,
solving global warming and all that.

Guess I'll waltz over to the lab and hit that red button on the Weller.
I do not trust sims all that much.
 
J

Joerg

Jim said:
[snip]
In that respect I was positively surprised. Did a few >1/2h heavy duty
SPICE sims lately and the CPU fan barely picked up speed. It's very quiet.
1/2 hour is NOT "heavy-duty" ;-)
Ok, it's not like your chip designs. Just came across a puzzler. A
circuit works on the screen, puts out lots of energy but doesn't consume
any. I can set the voltage source series resistance to 1M or whatever
and nothing changes. Hey, I might really be on to something here,
solving global warming and all that.

Guess I'll waltz over to the lab and hit that red button on the Weller.
I do not trust sims all that much.

Probably have an extra source in there somewhere, or your AC source is
providing the power. There are no such things as "puzzlers", just
cockpit errors ;-)

No other sources except one low voltage to the small stuff. It cannot
generate the levels coming out. Yeah, it might be a cockpit error but
there is nothing obvious and mostly it's faster to lash up something in
the lab.

The only time that's a pain is when it entails waiting for a Digikey
order to arrive. Then I tell the shepherd to "keep a good watch". She
immediately understands and moves over to a place where she can see the
road, then almost explodes when the Fedex truck comes around the corner.
This avoids a package sitting there at the entrance without me knowing
it has arrived.
 
J

Joerg

Jim said:
[snip]

Probably something modeled with a macro rather than devices?

It's all transistors, discretes, transformers and such. No chips.

My dog barks also, but FedEx and UPS always ring the doorbell here,
whether a signature is needed or not.

I wish they would always do that out here. UPS can be a real issue so
when I have a choice I prefer not to use them. Several times they
selected the backyard entrance (closer to the road ...) to drop off
packages and then it started to rain. Other times they claimed that our
address did not exist. Yet it has existed for almost 40 years.
 
J

Joerg

Jim said:
Jim said:
On Mon, 05 May 2008 09:53:42 -0700, Joerg

Jim Thompson wrote: [snip]
Probably something modeled with a macro rather than devices?
It's all transistors, discretes, transformers and such. No chips.

Round up the usual suspects... the transformer, driven by an ideal
source ??

It's all real parts around there. FETs, resistors, snubbers and such.
And they react the way I expect them to. Darn, I wish I could plug a fat
extension cord into the output of that sim and connect it at our breaker
panel. Anyway, in the lab the dream of free energy was shattered. Worked
but not quite there yet in efficiency. I initially thought the smell of
"amperage" came from the kitchen where my wife was baking the bread for
lunch. It didn't ...

WRT that bread, we successfully have baked bread in a regular 26" Weber
charcoal barbie. Nice thick and crunchy crust just like bread from a
German bakery (Steinofenbrot). Tonight I am going to bake another one in
there. Same as my wife baked in the oven just now. She split the dough
in half so we'd have an honest side-by-side comparison this time.

[...]
 
R

rickman

Jim said:
[snip]
Probably have an extra source in there somewhere, or you AC source is
providing the power. There are no such things as "puzzlers", just
cockpit errors ;-)

No other sources except one low voltage to the small stuff. It cannot
generate the levels coming out. Yeah, it might be a cockpit error but
there is nothing obvious and mostly it's faster to lash up something in
the lab.

The only time that's a pain is when it entails waiting for a Digikey
order to arrive. Then I tell the shepherd to "keep a good watch". She
immediately understands and moves over to a place where she can see the
road, then almost explodes when the Fedex truck comes around the corner.
This avoids a package sitting there at the entrance without me knowing
it has arrived.

I'm usually happy with FedEx ringing the doorbell...

Not that they always do it :^(
 
R

rickman





I wish they would always do that out here. UPS can be a real issue so
when I have a choice I prefer not to use them. Several times they
selected the backyard entrance (closer to the road ...) to drop off
packages and then it started to rain. Other times they claimed that our
address did not exist. Yet it has existed for almost 40 years.

Ha! The USPS regularly declared me as non-existent, until I gave them
a ration of shit in a letter-to-the-editor... then the postman came to
the door, "It wasn't me, it was a substitute"... sure ;-)

But I agree, only use UPS _and_ USPS when you can afford to have it
get lost ;-)

Just to be fair and give due when it is due... I ordered some parts
from Digikey last Friday. UPS and FedEx are both 4 days by ground at
about $5 for the first pound. I know because I misread the map and
thought it was 3 days only to have a package come a day later than I
wanted. Seems Digikey is up in that Northwest corner of Minn that is
in the 4 day delivery zone for me. Anyway, I wanted it sooner than 4
days and the faster shipping by UPS and FedEx gets expensive real
fast. So I asked for my last shipment to be sent Priority. Today I
asked about a tracking number and got the response, USPS don't give no
stinkin' tracking numbers! I was thinking I had made a mistake since
I realized that now it might take more than 4 days and I wouldn't have
any way to tell where it was. Just after that I went out to get the
mail and found the package in the mailbox! It came in one business
day!!!

So you can't say the PO is all bad.
 