Maker Pro

OT Dual core CPUs versus faster single core CPUs?


John Doe

Not trying to decide which is better for everybody. Just interested
in the difference between multiple core CPUs and faster single core
CPUs.

Are there any mainstream applications that would benefit from one
or the other?

My wild guess: continuous multitasking versus intermittent bursts
(if the bursts usually do not coincide). But I don't know of any
applications that exemplify that.

Thanks.
 

Rich Webb

Not trying to decide which is better for everybody. Just interested
in the difference between multiple core CPUs and faster single core
CPUs.

Are there any mainstream applications that would benefit from one
or the other?

I do run into cases where an app totally pegs one of the cores,
sometimes due to a crash, sometimes due to a not particularly well
written application that is a resource hog. Regardless, even with one
core completely occupied, the other is free to respond to user and O/S
events, whereas a single-core setup, even a very fast one, would
probably be quite unresponsive and might require hitting The Big Red
Switch (or the equivalent nowadays) to regain control.
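
A minimal way to see this for yourself, sketched here in Python purely
for illustration (the hog() worker is hypothetical and just stands in
for a runaway app):

    import multiprocessing

    def hog():
        # A runaway task: spins forever and pegs whichever core the
        # scheduler puts it on.
        while True:
            pass

    if __name__ == "__main__":
        multiprocessing.Process(target=hog, daemon=True).start()
        # On a dual-core box one core sits at 100% while the other still
        # services the OS and the UI, so this prompt stays responsive.
        input("One core is now pegged; press Enter to quit.")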
 

John Doe

My understanding is that these days, it's possible to get more
computing power per watt using a multicore approach. Going to
higher and higher speeds (per core) requires the use of a smaller
and smaller feature size on the chip, and this can increase static
power losses. Lower operating voltages are required to keep from
popping through the thinner insulating layers on the chip, and
this generally means that higher currents are required, which
means that I^2*R losses go up and overall power efficiency goes
down.

Using a somewhat lower-technology silicon process with lower
losses, and replicating it several times, can yield the same
amount of computing power ...

If you're talking about replicating the same speed, yes that's very
easy to believe.
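
For reference, the tradeoff described above boils down to the usual
first-order CMOS power relations. These are generic textbook formulas,
not something taken from the post:

    P_{dyn} \approx \alpha \, C \, V_{dd}^{2} \, f
    \qquad
    P_{static} \approx V_{dd} \, I_{leak}

At a fixed power budget P the supply current is I = P / V_{dd}, so
lowering V_{dd} raises I, and the I^2 R losses in the power
distribution network rise along with it.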
 

PeterD

Not trying to decide which is better for everybody. Just interested
in the difference between multiple core CPUs and faster single core
CPUs.

Are there any mainstream applications that would benefit from one
or the other?

My wild guess: continuous multitasking versus intermittent bursts
(if the bursts usually do not coincide). But I don't know of any
applications that exemplify that.

Thanks.

It is not a simple comparison; there are many different factors to
consider. But, overall, it is the total MIPS/MFLOPS that really count.
 

hrh1818

Not trying to decide which is better for everybody. Just interested
in the difference between multiple core CPUs and faster single core
CPUs.

Are there any mainstream applications that would benefit from one
or the other?

My wild guess: continuous multitasking versus intermittent bursts
(if the bursts usually do not coincide). But I don't know of any
applications that exemplify that.

Thanks.

Matlab can benefit from multiple cores, but with LTspice being
single-threaded you get the most bang for your buck by going to the
fastest CPU you can afford.

Howard
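
The distinction here is essentially parallelizable work versus an
inherently serial inner loop. A rough sketch of the difference, in
Python purely for illustration (the chunk sizes and the crunch()
function are made up, not anything from Matlab or LTspice):

    import time
    from multiprocessing import Pool

    def crunch(n):
        # Independent chunks (think rows of a big matrix) can be farmed
        # out to separate cores.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        chunks = [2_000_000] * 8

        t0 = time.perf_counter()
        serial = [crunch(c) for c in chunks]   # one core does everything
        t_serial = time.perf_counter() - t0

        t0 = time.perf_counter()
        with Pool() as pool:                   # work spread across cores
            parallel = pool.map(crunch, chunks)
        t_parallel = time.perf_counter() - t0

        print(f"serial {t_serial:.2f}s, parallel {t_parallel:.2f}s")

A transient circuit simulation, by contrast, is a chain where each time
step depends on the previous one, so extra cores do not help; only a
faster core does.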
 

Joerg

John said:
Dual core is just a hint of what's happening. There are already chips
with hundreds of cores.

Most people don't need huge number-crunching ability; they need
reliable, low-power computing. If current programming methods are
extended into multicore - parallelism, virtualization - we'll just
compound the mess that computing is today.

The real use for multiple cores will be to assign one function per
core. One would be the OS kernel, and only that, and would be entirely
protected from other processes. Other cores could be assigned to be
specific device drivers, file managers, TCP/IP socket managers, things
like that. Then one core could be assigned to each application process
or thread. Only a few cores need floating-point horsepower, which
might be hardware shared. Power down idle cores. Voila, no context
switching, no memory corruption, and an OS that never crashes.

Microsoft won't like it at all.

Does anyone know what the difference is between an Intel Dual-Core and
the Core 2 Duo? Is one 32-bit and the other 64-bit?

This here machine has a dual core and it really shows up as two separate
CPUs in the control panel.
 

John Doe

Joerg said:
Does anyone know what the difference is between an Intel Dual-Core
and the Core 2 Duo?

I've read but haven't researched that the Core 2 Duo is 2-4 times
faster than an equivalent Dual-Core. They said it manages CPU usage
much better. I look forward to seeing multiple CPU core usage graphs
in Performance Monitor, if Windows XP will do that.
 

Jamie

John said:
I've read but haven't researched that the Core 2 Duo is 2-4 times
faster than an equivalent Dual-Core. They said it manages CPU usage
much better. I look forward to seeing multiple CPU core usage graphs
in Performance Monitor, if Windows XP will do that.
MS has doomed itself. They have removed XP from the stores as of
today, unless they changed their mind since then.



http://webpages.charter.net/jamie_5
 
Does anyone know what the difference is between an Intel Dual-Core and
the Core 2 Duo? Is one 32-bit and the other 64-bit?

This here machine has a dual core and it really shows up as two separate
CPUs in the control panel.


You should boot up the new Knoppix DVD (finally, a new release 5.3.1).
It sees all cores too... and bus widths.


ftp://ftp.kernel.org/pub/dist/knoppix-dvd

ftp://ftp.kernel.org/pub/dist/knoppix-dvd/KNOPPIX_V5.3.1DVD-2008-03-26-EN.iso
 

John Doe

My understanding is that these days, it's possible to get more
computing power per watt using a multicore approach. Going to
higher and higher speeds (per core) requires the use of a smaller
and smaller feature size on the chip, and this can increase static
power losses. Lower operating voltages are required to keep from
popping through the thinner insulating layers on the chip, and
this generally means that higher currents are required, which
means that I^2*R losses go up and overall power efficiency goes
down.

I see, same reason for high-voltage power lines. Does the lower
voltage also have anything to do with the fact that electrons travel
shorter distances? Thanks.
 
I see, same reason for high-voltage power lines. Does the lower
voltage also have anything to do with the fact that electrons travel
shorter distances? Thanks.


The term for today is

SLEW RATE

Do you think we could get anywhere near the speeds we run at if we
were trying to swing up to the original TTL voltage thresholds?

The heat produced and the power needed would be far higher as well.
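
Put roughly (a generic first-order sketch, not anything from the post
above): a gate driving a load capacitance C with drive current I slews
at

    \frac{dV}{dt} \approx \frac{I}{C}
    \qquad\Rightarrow\qquad
    t_{swing} \approx \frac{C \, \Delta V}{I}

so shrinking the logic swing \Delta V shrinks the transition time in
proportion for the same drive current, which is why nobody tries to
slew modern loads through 5 V TTL-era thresholds anymore.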
 

Joerg

You should boot up the new Knoppix DVD (finally, a new release 5.3.1).
It sees all cores too... and bus widths.


ftp://ftp.kernel.org/pub/dist/knoppix-dvd

ftp://ftp.kernel.org/pub/dist/knoppix-dvd/KNOPPIX_V5.3.1DVD-2008-03-26-EN.iso


Thanks, that's a good idea.
 

Joerg

Jamie said:
MS has doomed itself. They have removed XP from the stores as of
today, unless they changed their mind since then.

If they really did it, that would be a major corporate blunder.
 

rickman

My understanding is that these days, it's possible to get more
computing power per watt using a multicore approach. Going to higher
and higher speeds (per core) requires the use of a smaller and smaller
feature size on the chip, and this can increase static power losses.
Lower operating voltages are required to keep from popping through the
thinner insulating layers on the chip, and this generally means that
higher currents are required, which means that I^2*R losses go up and
overall power efficiency goes down.

I'm not sure why you say the currents have to increase as feature
sizes decrease. The big speed improvement from smaller feature sizes
is the reduced capacitance. So the RC times (assuming the R is
constant) go down and the clock can run faster. Or the current can
decrease yielding much lower power consumption. This is what has been
fueling improvements in IC fabrication for the last several decades.
Where this breaks down is when the voltages get so low that the
transistors don't actually shut off entirely, resulting in much higher
quiescent currents... AND... the R in the RC starts to increase,
limiting speed improvements.
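
In rough first-order terms (a generic sketch, not quoting anyone here):

    t_{gate} \propto R \, C
    \qquad\Rightarrow\qquad
    f_{max} \propto \frac{1}{R \, C}

Shrinking features cuts C, so either the clock can rise or the current
and power can drop, until sub-threshold leakage and rising interconnect
R erode both gains, which is exactly the break point described above.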

Over the last four or five years processors have gained *nothing* in
clock speed. When I bought my last PC in 2002, the fastest chips from
Intel were clocking at 3 GHz+. The fastest clock speed today??? about
3 GHz! This is a little bit apples and oranges because the
architectures have changed a bit. The P4 in use 6 years ago was
optimized for raw clock speed at the expense of longer pipelines
causing more delays on branches. But the point is still that in 6
years the speed of CPUs due to clock increases is nearly zip.

With ever increasing density from the smaller process geometries, the
question becomes, if you can't speed up the clock, what can you do to
make the CPU faster? They have already added every optimization
possible to speed up a CPU so that just adding transistors won't
achieve much. So the only other way to get more speed is to add more
CPUs!

So that is why we have dual, triple and quad core CPUs now instead of
just making the CPUs run faster.

Using a somewhat lower-technology silicon process with lower losses,
and replicating it several times, can yield the same amount of
computing power at a lower level of energy consumption.

The goal is speed vs. cost. If we are talking about PC-type CPUs, it
is never an advantage to use older technology, as long as the newer
process is not so bleeding edge that you can't get decent yields.
Adding multiple CPUs is a significant step because the increase in die
area increases the cost. But as process improvements provide more
transistors on the same size die, the only useful way to take
advantage of the gates is to add more CPUs.

For desktop consumers this may not be all that significant an issue.
For server farms, where the electric bill can be a major portion of
the total expense over time, it can make a big difference. For laptop
owners, it may extend battery run-time or reduce battery weight
significantly.

Actually, they have turned the power curve around by using different
structures for the transistors. They are harder to make, but the
added cost is justified by the reduced power consumption. In 2002
CPUs were reaching the 100 W level. None were below about 60 W. Now
you can find CPUs that are only 35 or even 25 W without turning
performance back to the stone age like the Via chips.
 

rickman

The real use for multiple cores will be to assign one function per
core. One would be the OS kernel, and only that, and would be entirely
protected from other processes. Other cores could be assigned to be
specific device drivers, file managers, TCP/IP socket managers, things
like that. Then one core could be assigned to each application process
or thread. Only a few cores need floating-point horsepower, which
might be hardware shared. Power down idle cores. Voila, no context
switching, no memory corruption, and an OS that never crashes.

Microsoft won't like it at all.

John

Good luck on that "never crashes" thing. How do you think a given CPU
will get its program? How does it know it is a device driver rather
than an application? How does memory *not* get shared? Main memory
will never be on the CPU chip, at least not as long as they want to
keep increasing performance at a low cost.

Multiple CPUs are nothing new; it has been done for ages. In fact, one
of the very first computers was a multi-processor machine. The problem
is that it is very hard to use many CPUs efficiently. We are bumping
up against some real limitations in CPU speed improvements. So we
have to start using multiple CPUs. But these are also inefficient.
We are reaching an age where progress will be coming slower and with
more cost and pain.
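
As an aside, mainstream OSes already let you approximate the "one
function per core" idea by hand with CPU affinity. A minimal sketch,
assuming Linux (os.sched_setaffinity is Linux-specific, and the core
numbers and socket_manager() name here are arbitrary):

    import os
    import multiprocessing

    def socket_manager():
        # Pretend this is the dedicated "TCP/IP socket manager" core.
        os.sched_setaffinity(0, {1})   # pin this process to core 1
        # ... the actual service loop would go here ...

    if __name__ == "__main__":
        os.sched_setaffinity(0, {0})   # keep the main work on core 0
        worker = multiprocessing.Process(target=socket_manager)
        worker.start()
        worker.join()

Pinning work to cores does nothing about shared memory or privileges,
though, which is the point above: the isolation has to come from memory
management, not from the core count.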
 
You apparently haven't been reading how nanotubes are going to save us all!

(Ducking... :) )

Nanowires will put hard drive capacities onto a chip die. All with
actual magnetic domains too.

Getteth thy selfeth a clueeth.
 

rickman

The OS CPU will assign it a task, create its memory image, set up its
privileges, and kick it off. And snoop it regularly to make sure it's
behaving.

You are missing my point. The fact that tasks run on separate
hardware does not mean they don't share memory and they don't
communicate. You still have all the same issues that a single
processor system has. It is **very** infrequent on my system that it
hangs in a mode where I can't get control of the CPU. I am running
Win2k and I let it run for a month or more between reboots. Usually
the issue that makes me reboot is that the system just doesn't behave
correctly, not that it is stuck in a tight loop with no interrupts
enabled. So multiple processors will do nothing to improve my
reliability.

How does it know it is a device driver rather than an application?

See above.

How does memory *not* get shared? Main memory will never be on the
CPU chip, at least not as long as they want to keep increasing
performance at a low cost.

Hardware memory management can keep any given CPU from damaging
anything but its own application space. Intel has just begun to
recognize - duh - that it's not a good idea to execute data or stacks,
or to allow apps to punch holes in their own code. Allowing things
like buffer overflow exploits is simply criminal.

So this indicates that multiple processors don't fix the problem. The
proper use of hardware memory management fixes the problem. No?

But speed is no longer the issue for most users. Reliability is. We
need to get past worrying about using every transistor, or even every
CPU core, 100%, and start making systems manageable and reliable.
Since nobody seems able to build reliability or security into complex
software systems, and the mess isn't getting any better (anybody for
Vista?), we need to let hardware - the thing that *does* work - assume
more responsibility for system integrity.

What else are we going to do with 256 CPUs on one chip?

That is the big question. I like the idea of having at least two
processors. I remember some 6 years ago when I was building my
current computers that dual CPUs on a motherboard were available.
People who do the kind of work that I do said they could start an HDL
compile and still use the PC, since each task had a processor to
itself. I am tired of my CPU being sucked dry by my tools, or even by
Adobe Acrobat during a download, nearly halting all other tasks. Of
course, another solution is to ditch the Adobe tools. Next to
Microsoft, they are one of the very worst software makers.

Personally, I don't think we need to continue to increase processing
at this geometric rate. Since we can't, I guess I won't be
disappointed. I see the processor market as maturing in a way that
will result in price becoming dominant and "speed" being relegated to
the same category as horsepower. The numbers don't need to keep
increasing all the time; they just need to match the size of the car
(or the use, for PCs). The makers are not ready to give up the march
of progress just yet, but they have to listen to the market. Within
5 years, nobody will care about processor speed or how many CPUs
your computer has. It will be about the utility. At that point
the software will become the focus as the "bottleneck" in speed,
reliability and functionality. Why else does my 1.4 GHz CPU seem to
respond to my keystrokes at the same speed as (or slower than) my 12 MHz
286 from over 10 years ago? "It's the software, stupid!" :^)

who just rebooted a hung XP. Had to power cycle the stupid thing. But
I'm grateful that it, at least, came back up.

I am ready to buy a laptop and I am going to get a Dell because they
will sell an XP system rather than Vista. Everything I have heard is
that XP is at least as good as Win2K. No?
 
Logic in FPGAs can be incredibly complex, things like gigantic state
machines, filters, FFTs - and they run for millions of unit-years
without dropping a bit. Even Intel CPUs, which are absolute horrors,
are reliable. Big software systems are unreliable, as a power function
of the size of the program. The obvious path to reliability is to run
smaller programs on severely hardware-protected processors,

What else are you going to do with 1024 CPUs on a chip?

John


Actually, it is better not to have them all on the same chip. A bullet
or fragment passes through that PWB or chip, and the entire system is
down. Better to have computer "sites" placed throughout the airframe or
system in question, and run code that, in a worst-case failure, can "seek
out" another computer and get time slices on that machine to keep from
losing the process itself at any time.

Instead of mere redundancy, one would have a huge distributed computing
network where no process ever gets lost because a piece of physical gear
has gone down.

There are nine cores on a Cell CPU. IBM, however, sells "blades" that
start out with two Cells on each. So even the local redundancy is not on
the same chip. If each CPU had 1024 sub-processors in it, then one could
code an OS that ensures that any broken sub-processor's running code
would get passed off onto another working sub-segment.

I'd bet that IBM is going to play a bigger part in our future than we
might guess. You should examine how instructions work through the pipes
on the Cell. They are getting 10x the performance of a PC for some
things. Your FFTs, for example, would run on the Cell far faster than
on any FPGA. They could get processed as a part of the overall data
traffic handling. Essentially the same as having the FPGA do it, but
here it can be coded in software and changed far more easily, even
compared with burn-on-the-fly gate arrays. Chips cost money; software
changes are cheap.
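
A toy sketch of the "never lose the process" idea, purely illustrative
(the two worker pools below just stand in for separate compute sites;
nothing here is Cell-specific):

    from multiprocessing import Pool

    def step(x):
        # One unit of the running computation.
        return x * x

    def run_with_failover(work, sites):
        # Try each available "computer site" in turn; if one is down,
        # hand the same work to the next so the job itself is never lost.
        for site in sites:
            try:
                return site.map(step, work)
            except Exception:
                continue
        raise RuntimeError("all compute sites are down")

    if __name__ == "__main__":
        sites = [Pool(2), Pool(2)]     # two stand-in compute sites
        print(run_with_failover(range(8), sites))
        for site in sites:
            site.close()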
 