jmfbahciv said:
Let me see if I can explain better. When we increased the CPU speed,
the system became I/O-bound. When we increased the disk controller
speed, the same system became CPU-bound. When we increased the CPU
speed again, the same system became I/O-bound... The same things
happen in today's biz. Hardware developers concentrate on solving
the problem of today. So if the CPU needs to be sped up, they'll
work on speeding up the CPU. Then, when that's done, the performance
lags show that the I/O needs to be sped up. So the next project is to
produce a faster peripheral. This gets out to the field and, all of a
sudden, the CPU performance sucks. It's a cycle.
In the mid-70s I started making comments about I/O slowing down
significantly... part of being able to see this was possibly having
started (when I was an undergraduate in the 60s) doing dynamic adaptive
resource management... and something I called "scheduling to the
bottleneck". A recent reference (which also mentions "re-releasing" a
"resource management" product in the mid-70s, SHARE having called for
making the CP67 "wheeler scheduler" available for VM370):
http://www.garlic.com/~lynn/2009h.html#76
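The posts above don't spell out the mechanism, but as a rough sketch of
the "scheduling to the bottleneck" idea (names and structure here are
hypothetical, not the CP67/VM370 implementation): track utilization per
resource, and dispatch first the runnable work that puts the least
additional load on whatever resource is currently saturated.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        demand: dict   # estimated fraction of each resource this task uses

    def pick_bottleneck(utilization):
        # the resource currently closest to saturation
        return max(utilization, key=utilization.get)

    def schedule(tasks, utilization):
        # favor work that asks least of the saturated resource,
        # so the bottleneck is rationed rather than dog-piled
        bottleneck = pick_bottleneck(utilization)
        return sorted(tasks, key=lambda t: t.demand.get(bottleneck, 0.0))

    # with disk near saturation, CPU-heavy work dispatches first
    runnable = [Task("query",   {"cpu": 0.1, "disk": 0.8}),
                Task("compute", {"cpu": 0.7, "disk": 0.1})]
    print([t.name for t in schedule(runnable, {"cpu": 0.4, "disk": 0.95})])
    # -> ['compute', 'query']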
There is a reference to a comparison I did in the early 80s between a
"current" system and a nearly 15-year-earlier system doing essentially
the same type of workload. My comment was that the relative system
thruput of disks had declined by an order of magnitude over the period.
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
Some disk division executives took exception and assigned the
performance group to refute the claims... but after a couple of weeks
they came back and effectively said that I had slightly understated the
problem. The issue was that processor power had increased approx. 50
times... but disk thruput had increased only by 3-5 times (resulting in
a net relative system thruput decline by a factor of 10).
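The arithmetic behind that factor of 10 is just the ratio of the two
improvements:

    # back-of-envelope for the "factor of 10" relative decline
    cpu_speedup  = 50    # processor power increase, per the study
    disk_speedup = 5     # disk thruput increase (3-5x; taking the high end)
    print(cpu_speedup / disk_speedup)  # -> 10.0; with the 3x low end,
                                       # the decline is even steeper (~17x)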
The performance group turned the study into a SHARE report with disk
configuration recommendations for improving system thruput... references
to presentation B874 at SHARE 63, 8/18/84:
http://www.garlic.com/~lynn/2002i.html#18
http://www.garlic.com/~lynn/2006f.html#3
An extract from the abstract for the presentation:
http://www.garlic.com/~lynn/2006o.html#68
A little topic drift: a recent reference/post about getting to play disk
engineer:
http://www.garlic.com/~lynn/2009h.html#68
As I've mentioned regarding relational databases... the amount of real
storage started to increase dramatically in the late 70s... and systems
started to leverage the additional real memory for caching and other
techniques as a way of compensating for the disk thruput bottleneck.
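As a minimal sketch of that trade (hypothetical code, not any particular
product's cache): spend the now-plentiful real memory to keep recently
used disk blocks resident, so repeat accesses never pay the disk latency.

    from collections import OrderedDict

    class BlockCache:
        # keep recently used disk blocks in real memory,
        # evicting the least-recently-used block on overflow
        def __init__(self, capacity, read_from_disk):
            self.capacity = capacity
            self.read_from_disk = read_from_disk   # the slow path
            self.blocks = OrderedDict()

        def read(self, block_no):
            if block_no in self.blocks:            # hit: memory speed
                self.blocks.move_to_end(block_no)
                return self.blocks[block_no]
            data = self.read_from_disk(block_no)   # miss: pay disk latency
            self.blocks[block_no] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)    # evict LRU
            return data

    cache = BlockCache(capacity=2, read_from_disk=lambda n: "block %d" % n)
    cache.read(1); cache.read(2); cache.read(1)
    cache.read(3)   # evicts block 2, the least recently used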
In the 70s... there was a little contention between the 60s database
product group in STL (bldg 90) and the System/R (original
relational/SQL) group... misc. posts mentioning System/R:
http://www.garlic.com/~lynn/subtopic.html#systemr
The older-style database group claimed that the "implicit" index (used
to locate a record) in an RDBMS doubled the physical disk storage of a
typical database and significantly increased the number of disk I/Os
(reading through the index to find a record's location). The System/R
group pointed out that the physical record pointers carried in the data
significantly increased the manual management overhead of "60s" databases.
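To make the two complaints concrete, here is a toy accounting (the
numbers are illustrative only, not from either group): following a
60s-style physical record pointer is one I/O but leaves pointer
maintenance to people, while an RDBMS walks an index of some depth first
and carries the index's disk footprint on top of the data.

    # toy accounting for the two complaints (illustrative numbers only)
    def pointer_ios():
        return 1                 # direct seek via the physical record pointer

    def index_ios(index_depth):
        return index_depth + 1   # one read per index level, then the record

    data_mb = 100
    index_overhead = 1.0         # the "doubled disk storage" claim
    print(pointer_ios(), index_ios(index_depth=3))  # 1 vs 4 disk I/Os
    print(data_mb * (1 + index_overhead))           # 200.0 MB on disk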
Going into the 80s... disk capacity significantly increased and
price/bit significantly decreased (mitigating the RDBMS disk space
penalty), available real memory significantly increased (allowing RDBMS
indexes to be cached, significantly reducing the disk I/O penalty), and
skilled DBMS people became relatively scarce and their cost
significantly increased. All of this shifted the various trade-offs
between 60s DBMS and RDBMS.
Note, however, that there is still quite a bit of use of 60s DBMS
technology... especially in various large financial and/or
business-critical operations. A few recent references:
http://www.garlic.com/~lynn/2009g.html#15 Confessions of a Cobol programmer
http://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
http://www.garlic.com/~lynn/2009h.html#1 z/Journal Does it Again
http://www.garlic.com/~lynn/2009h.html#27 Natural keys vs Aritficial Keys
The above also mentions that when Jim left for Tandem, he handed a lot
of stuff off to me... including consulting with the STL 60s DBMS group
and talking to customers about System/R... a couple of old email
references:
http://www.garlic.com/~lynn/2007.html#email801006
http://www.garlic.com/~lynn/2007.html#email801016
Note: measuring access latency in processor cycles... the number of
cycles of latency to access real memory today is comparable to the 60s
number of cycles of latency to access disk... and today's caches are
larger than 60s total real memory.
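The comparison is just latency divided by cycle time; a one-liner for
redoing it with whatever era figures you prefer (the sample values below
are placeholders I picked, not numbers from the post, and how close the
two come out depends entirely on which machines and devices you plug in):

    def latency_in_cycles(access_time_s, clock_hz):
        # express an access latency as a count of processor cycles
        return access_time_s * clock_hz

    # placeholder figures -- substitute era-appropriate numbers:
    print(latency_in_cycles(25e-3, 1e6))    # a 60s processor reaching disk
    print(latency_in_cycles(100e-9, 3e9))   # a current processor reaching DRAM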