Jack said:
I just came across an article the other day which said that the speed and
latency of a processor would eventually stop improving once signal speeds
approach the speed of light. I forgot where I found it, but I think this is
why Intel is taking the Core approach rather than increasing its new
processors' frequencies. Any comments?
The point is valid, but even well before the speed of light limits what you
can do, you start having to make transistors so thin to further increase clock
rates that leakage currents become a significant problem; this translates
directly into wasted energy, which is annoying (big fans required) for desktop
PCs and even more debilitating for laptops.
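To put rough numbers on that trade-off, the standard first-order CMOS power
model (textbook stuff, not from Jack's article) has one term for each effect.
Here alpha is the switching activity, C the switched capacitance, V the
supply voltage, f the clock frequency, and I_leak the leakage current:

    P_total ~= alpha * C * V^2 * f    (dynamic: grows with clock rate)
             + V * I_leak             (static: grows as gates get thinner)

Worse, pushing f up usually requires raising V as well, so dynamic power
grows faster than linearly in f -- which is exactly why "more cores at a
moderate clock" wins on performance per watt.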
Another problem that Intel (and everyone else making CPUs) has is that the
only way to increase throughput once you've hit the fastest frequency your
process allows is to process multiple instructions in parallel (this is called
"instruction level parallelism" and has been going on for decades now), but
processors are good enough today that often there simply aren't any more
independent instructions within a single thread of execution to be had,
because they're all blocked waiting on, e.g., main memory access, which is
literally hundreds of times slower than cache memory access.
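To make the ILP point concrete, here's a hypothetical little C benchmark
(my sketch, nothing Intel-specific). Both loops do the same additions, but
the first is one long dependency chain, while the second hands the core four
independent chains it can keep in flight at once; on a superscalar CPU the
second version typically runs several times faster.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)                       /* 16M floats, ~64 MB */

int main(void)
{
    float *a = malloc(N * sizeof *a);
    for (long i = 0; i < N; i++) a[i] = 1.0f;

    /* One dependency chain: every add must wait for the previous one. */
    clock_t t0 = clock();
    float s = 0.0f;
    for (long i = 0; i < N; i++) s += a[i];
    double chain1 = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Four independent chains: the core (or the vectorizer) overlaps them.
       Floats keep the compiler from re-associating the sums on its own. */
    t0 = clock();
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (long i = 0; i < N; i += 4) {
        s0 += a[i];     s1 += a[i + 1];
        s2 += a[i + 2]; s3 += a[i + 3];
    }
    double chain4 = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("1 chain:  %.3f s (sum %.0f)\n", chain1, s);
    printf("4 chains: %.3f s (sum %.0f)\n", chain4, s0 + s1 + s2 + s3);
    free(a);
    return 0;
}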
(Entertaining exercise: Turn off all the caches in your CPU. It will
literally *crawl*, operating at least 10x slower than usual.)
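You can see a milder version of the same effect without touching any cache
control bits. Here's a hypothetical pointer-chasing sketch (mine, not from
any reference): walking a big array in sequential order keeps the prefetcher
and caches happy, while walking it in one giant random cycle makes nearly
every load pay the full round trip to main memory.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)       /* 16M entries, ~128 MB: far bigger than any cache */

static unsigned long long rng = 88172645463325252ULL;
static unsigned long long xorshift64(void)    /* tiny portable PRNG */
{
    rng ^= rng << 13; rng ^= rng >> 7; rng ^= rng << 17;
    return rng;
}

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    size_t p = 0;

    /* Sequential chain i -> i+1: cache lines and prefetching do their job. */
    for (size_t i = 0; i < N; i++) next[i] = (i + 1) % N;
    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++) p = next[p];
    double seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Sattolo shuffle: one big random cycle, so every load is a dependent
       access to an unpredictable address -- i.e., a cache miss. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = xorshift64() % i, t = next[i];
        next[i] = next[j]; next[j] = t;
    }
    t0 = clock();
    for (size_t i = 0; i < N; i++) p = next[p];
    double rnd = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("sequential chase: %.3f s\n", seq);
    printf("random chase:     %.3f s (%.1fx slower, sink %zu)\n",
           rnd, rnd / seq, p);
    free(next);
    return 0;
}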
So... where can you look for more instructions to execute simultaneously?
How about another thread? That's the ticket -- this gets you "thread level
parallelism," and it's exactly what Intel's dual-core CPUs do.
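As a toy illustration of thread-level parallelism (a hypothetical sketch
using POSIX threads, not anything Intel-specific): two threads each integrate
half of 4/(1+x^2) over [0,1], and the partial sums add up to pi. Each thread
is an independent instruction stream, so a dual-core chip can run both at
once and roughly halve the wall-clock time. Compile with something like
"gcc -O2 pi.c -lpthread".

#include <pthread.h>
#include <stdio.h>

#define STEPS 100000000    /* 100M rectangles */
#define NTHREADS 2

struct chunk { long lo, hi; double sum; };

/* Midpoint-rule integration of 4/(1+x^2) over this thread's sub-range. */
static void *integrate(void *arg)
{
    struct chunk *c = arg;
    double h = 1.0 / STEPS, s = 0.0;
    for (long i = c->lo; i < c->hi; i++) {
        double x = (i + 0.5) * h;
        s += 4.0 / (1.0 + x * x);
    }
    c->sum = s * h;
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct chunk c[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        c[t].lo = (long)STEPS * t / NTHREADS;
        c[t].hi = (long)STEPS * (t + 1) / NTHREADS;
        pthread_create(&tid[t], NULL, integrate, &c[t]);
    }
    double pi = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        pi += c[t].sum;
    }
    printf("pi ~= %.9f with %d threads\n", pi, NTHREADS);
    return 0;
}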
---Joel Kolstad