Maker Pro

My Vintage Dream PC


Roland Hutchinson

Walter said:
Close to my pet comparison, the modern workstation cannot print out 600 lpm,

It can't? Lessee... with a laser printer that does 10 pages per minute or
more continuously (can you still buy one slower than that?) at sixty lines
per page, I think we're covered.
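
(Checking the arithmetic: 10 pages per minute × 60 lines per page = 600 lines per minute, right at the claimed rate.)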

--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
.... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )
 

Martin Brown

The ring and call gate model can be made to work perfectly well. Just
because MickeySoft compromised it to make retro games run a bit faster
doesn't condemn all similar OSes to being necessarily weak and fragile.

We are at the point where trading a modest amount of speed for making a
small ring0 microkernel that is absolutely robust against all forms of
attack would be a reasonable trade. The silicon is fast enough to
support it now. In the past it wasn't considered an acceptable trade.

We are sort of stuck with the historic Wintel monopoly. There is not a
lot of software about written for the native Itanium instruction set.

The ship-it-and-be-damned model of software releases doesn't help
either. Vista remains resource-hungry and very unpopular with corporate
buyers - most I know are running XP on new boxes and holding out for
Win7 (which so far looks better behaved).
We have been over this ground before. Every computing problem looks to
him like a nail since his only tool is a hammer.
I've written enough RTOSes already. Future embedded systems will
probably go multicore... four sounds like a good number: one hard
realtime acquisition/control CPU, one TCP/IP stack, one DSP, one user
interface.
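
To make that partitioning concrete, here is a minimal sketch of pinning one thread per role on a quad-core part. It assumes Linux/glibc (pthread_attr_setaffinity_np is a GNU extension), and the four loop functions are placeholders invented for the example, not anything from this thread:

/* One core per subsystem, statically partitioned. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static void *rt_loop(void *arg)  { return NULL; }  /* hard-realtime acquisition/control */
static void *net_loop(void *arg) { return NULL; }  /* TCP/IP stack */
static void *dsp_loop(void *arg) { return NULL; }  /* signal processing */
static void *ui_loop(void *arg)  { return NULL; }  /* user interface */

static pthread_t spawn_on_core(void *(*fn)(void *), int core)
{
    cpu_set_t set;
    pthread_attr_t attr;
    pthread_t t;

    CPU_ZERO(&set);
    CPU_SET(core, &set);               /* this thread may use only 'core' */
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof set, &set);
    pthread_create(&t, &attr, fn, NULL);
    pthread_attr_destroy(&attr);
    return t;
}

int main(void)
{
    pthread_t t[4];
    t[0] = spawn_on_core(rt_loop, 0);
    t[1] = spawn_on_core(net_loop, 1);
    t[2] = spawn_on_core(dsp_loop, 2);
    t[3] = spawn_on_core(ui_loop, 3);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Whether a fully static split is the right trade is exactly what's being argued here; the sketch only shows that expressing it costs next to nothing.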

A general CPU will not be particularly good at performing fast I/O or
signal processing. It may still be fast enough to be useful, though.
No single mortal is going to "try out" a desktop OS on his own. It
takes thousands of people to put together Windows or Linux with all
the trimmings.

I've just been speculating on what people are going to do with all
that silicon. Clearly Microsoft and Intel won't decide that computers
are good enough as they are.

Actually they may if Moore's law starts to break down. The performance of
the current quad cores is more than enough for realtime video editing
and gaming (which are among the most power-hungry consumer apps today).
And until 3D viewing comes of age, the average user is not going to want
or need a significantly faster PC. The PC-making industry could get a
nasty surprise during this recession. Smaller portables are selling well.

Until there is a killer app that makes lots of consumers want to buy
supercomputer-class home boxes, they will always remain more expensive.
The lithography will continue, into the 15 nm sort of range, or better if
something new and fundamental pops up. So Intel et al. will probably
integrate all the CPU and peripheral functions into one low-power chip
for laptop and netbook machines, and go multicore for higher-end
desktop and server machines.

I was just speculating on what OSes might look like on multicore
processors. Almost everybody else here can only say "no."

The OS can manage it by placing threads into suitable sandboxes with
appropriate resources. The existing Pentium architecture can already
create pretty watertight virtual machine environments if you ask it to
do so.
If the CPUs share cache and memory, there's no data movement involved
in splitting functions. A file processor can DMA disk data directly
into a region of main memory that belongs to the caller. Request
handling could use a bit of hardware help, but that's not a big deal.

But the fastest multicore CPUs have their own private cache and memory -
going through shared cache and memory arbitration makes things a lot
slower. And it doesn't scale at all well to large numbers of CPUs.

Almost all the successful multiprocessor systems farm out tasks to CPUs
with the management task taking highest priority to keep the rest busy.
I suspect that even the current crop of multicores spend a lot of time
waiting for keyboard input and throttled back to low power mode.
Tight shared-memory systems don't have to pass messages any more than
a uniprocessor system does.
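
As an illustration of that farm-out model, here is a toy manager/worker sketch over shared memory. The queue and the doubling "work" are invented for the example; the point is only that workers touch shared data in place, and nothing but an index ever moves between CPUs:

#include <pthread.h>
#include <stdio.h>

#define N_WORKERS 4
#define N_TASKS   16

static int tasks[N_TASKS];        /* shared data: never copied, only touched */
static int next_task = 0;         /* index handed out by the "manager" side */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        int i = next_task < N_TASKS ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (i < 0)
            return NULL;          /* queue drained */
        tasks[i] *= 2;            /* "work": update shared memory in place */
    }
}

int main(void)
{
    pthread_t w[N_WORKERS];
    for (int i = 0; i < N_TASKS; i++)
        tasks[i] = i;
    for (int i = 0; i < N_WORKERS; i++)
        pthread_create(&w[i], NULL, worker, NULL);
    for (int i = 0; i < N_WORKERS; i++)
        pthread_join(w[i], NULL);
    printf("tasks[5] = %d\n", tasks[5]);   /* prints 10: result left in place */
    return 0;
}

On a message-passing cluster the same loop would have to ship each task's data out and back; with shared cache and memory the handoff is just the index.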

So if functions are not assigned to processors in, say, a 256-CPU,
1024-thread chip, do we just run hundreds of copies of the OS?

The OS need not be that heavyweight on the majority of the CPUs. Most
graphics rendering these days is handled by very dedicated external
parallel multicore hardware (which can itself be subverted for password
cracking). I expect to see similar integrated dedicated hardware
accelerators for peripherals moving closer to the core. RAID disk
support is becoming standard on some motherboards, for instance.

I expect to see higher level specialist I/O functions moved back onto
the core rather than huge numbers of CPUs. There are not enough real
world applications that need this sort of computing power.

Regards,
Martin Brown
 

Scott Lurndal

Joe Pfeiffer said:
See what she was responding to -- no, it isn't a heavy load, but it
demonstrates timesharing is alive and well. I suspect John meant
multi-user computers.

Which still exist as well. For example, we have a very powerful build
system that is shared by many developers. Obviously not Windows-based.

scott
 

Scott Lurndal

John Larkin said:
I was interested in discussing (this is a discussion group) what
future OSs might look like, and how multicore chips (which are
absolutely on the silicon peoples' roadmaps) would affect OS and
systems design.

Not many people seem to want to think about that.

I believe that this statement is not necessarily accurate. There are
many folks thinking quite a bit about how to leverage multiple-core
processors. With a single-socket Nehalem soon offering 8 cores (each
with 2 threads), that's 16 threads per socket, and you're looking at
32-thread dual-socket systems in the first part of next year.

Linux, of course, has scaled well on systems containing up to 512
cores (and higher at SGI). Dynix/PTX scaled to 32 processors in the
early '90s. Unisys had a 128-processor single-system image in 1996
running a microkernel-based implementation of SVR4/ESMP.

AMD's Istanbul processor has 6 cores per socket today.

Much of what is done in the OS field is predicated on the
capabilities of the hardware being used. As x86-32 and x86-64
are by far the most prevalent architectures, that's where the
bulk of the attention is being paid.

SVM mode on AMD (and VT-x mode on Intel) is quite exciting
for OS researchers, as it provides the ability to really do a
"microkernel" operating environment, albeit one now generally
referred to as a hypervisor.

My strong belief is that in the near future you'll see every x86 system
from the desktop to the largest servers using a hypervisor as the
"security kernel" for the hardware, running COTS operating systems
such as Windows, Linux, or OpenSolaris as guests - even to the point
where invoking an application will automatically create and execute a
new instance of the operating system, just for that application.

The problems around leveraging the capabilities of high core counts
currently lie in the application space, not the operating-system
space, with the bulk of current applications being of the single-thread
variety. Even legacy operating systems that don't support multiple
processing elements can still take advantage of large-core-count
systems by running multiple copies under a hypervisor.

Meanwhile, you'll see the off-CPU I/O hardware becoming smarter
and (hopefully) requiring less from the operating-system driver
(or even not requiring a driver at all). To a certain extent, the
EHCI, UHCI, and AHCI technologies are leading in this direction (at
the very least, a single driver can accommodate devices from multiple
suppliers, albeit still using a traditional memory-mapped I/O
model).

I'd like to see a true off-load I/O processor such as many mainframes
used to have. Perhaps something based around SCSI Command Descriptor
Blocks or the like (which have already been leveraged by USB, ATAPI,
et al.).
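
For concreteness, this is the shape of the self-describing request such an off-load processor would consume. It is a sketch, not a driver: the READ(10) field layout follows the SCSI spec (opcode 0x28, big-endian LBA and transfer length), while the builder function and example values are invented for illustration:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct read10_cdb {
    uint8_t opcode;      /* 0x28 = READ(10) */
    uint8_t flags;
    uint8_t lba[4];      /* logical block address, big-endian */
    uint8_t group;
    uint8_t length[2];   /* transfer length in blocks, big-endian */
    uint8_t control;
};

static void build_read10(struct read10_cdb *c, uint32_t lba, uint16_t blocks)
{
    memset(c, 0, sizeof *c);
    c->opcode    = 0x28;
    c->lba[0]    = lba >> 24;
    c->lba[1]    = lba >> 16;
    c->lba[2]    = lba >> 8;
    c->lba[3]    = lba;
    c->length[0] = blocks >> 8;
    c->length[1] = blocks;
}

int main(void)
{
    struct read10_cdb c;
    build_read10(&c, 4096, 8);   /* read 8 blocks starting at LBA 4096 */
    for (size_t i = 0; i < sizeof c; i++)
        printf("%02x ", ((uint8_t *)&c)[i]);
    printf("\n");                /* 28 00 00 00 10 00 00 00 08 00 */
    return 0;
}

A controller smart enough to accept blocks like this directly is exactly the kind of thing that needs little or nothing from a host-side driver.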

scott
 

Walter Bushell

Peter Flass said:
Nothing would stop them from giving you a bad reference, even by way of
strategic pauses in a conversation so you couldn't pin anything on them.

Perhaps you can just leave out a few "fine points". If they are
sufficiently obscure no one will trace it to you.
 

Walter Bushell

John Larkin said:
It's shocking how many companies can't find source files, or can no
longer assemble the tools to edit and regen products. I've shipped
over 3000 temperature controllers because a biggish British company
lost the business; they refused to modify some code, because they
couldn't modify the code. Well, their thermocouple front-end sucked,
too.

John

Proper file maintenance and backup won't (probably) affect *this*
quarter or the next, so it gets put under the VP of Blue Sky, which is a
position generated mainly to satisfy equal-opportunity laws.
 

Walter Bushell

Roland Hutchinson said:
It can't? Lessee... with a laser printer that does 10 pages per minute or
more continuously (can you still buy one slower than that?) at sixty lines
per page, I think we're covered.

I was thinking ink jet, and yes, I do believe there are slow lasers. But
I'll have to concede the point.
 

Rich Grise

John Larkin said:
I was interested in discussing (this is a discussion group) what
future OSs might look like, and how multicore chips (which are
absolutely on the silicon peoples' roadmaps) would affect OS and
systems design.

Not many people seem to want to think about that.

It sounds like you want to either speculate, or try to do soothsaying.
The ones that would know this stuff are the ones on the front lines,
the bleeding edge, so to speak.

Do the colleges and universities teach anything real any more, or just
self-esteem and political correctness? ;-)

Thanks,
Rich
 

Peter Flass

On Tue, 02 Jun 2009 18:18:16 -0400, Peter Flass wrote:

I already said most of that. :)

You and several others. Apparently I'm running a bit late.
 

Peter Flass

John said:
That seems to be the next step. The hypervisor is sort of my
microkernel... provided it can really keep processes/CPUs from
doing damage.

But it's such a waste. M$oft can't get away from writing cr@p, so the
best way to accommodate it is to isolate it? VM runs numerous copies of
Linux, but that's a little silly, too. Why not just write one OS that
performs well and is secure?
If CPUs are cheap, that's OK. It's sort of silly to run a full copy of
Windows or Linux on each CPU, but it's not bad as an evolutionary
step.

As long as it's evolving toward running full copies of Windows on no CPUs.
IBM started developing their channel-controller concept to reduce the
workload on the CPU, even for things like printers. That seesaw will
probably continue.

I'm really surprised this didn't happen long ago. You could dedicate
cores as I/O channels. You wouldn't even need to change the hardware:
just define a standard set of "CCWs" and have the "channel" execute the
appropriate commands for the device.
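
A toy sketch of that idea, with every name invented here: the host builds a small "channel program" of command words, and a core dedicated to I/O walks the list. printf stands in for actually touching a device:

#include <stdint.h>
#include <stdio.h>

enum ccw_op { CCW_READ, CCW_WRITE, CCW_NOP };

struct ccw {               /* loosely modeled on an IBM channel command word */
    enum ccw_op op;
    void       *buf;       /* data area in the caller's memory */
    uint32_t    count;     /* byte count */
    int         chain;     /* nonzero: fetch the next CCW after this one */
};

/* The loop a dedicated "channel" core would spin on. */
static void channel_execute(const struct ccw *prog)
{
    for (;;) {
        switch (prog->op) {
        case CCW_READ:
            printf("read  %u bytes into %p\n", (unsigned)prog->count, prog->buf);
            break;
        case CCW_WRITE:
            printf("write %u bytes from %p\n", (unsigned)prog->count, prog->buf);
            break;
        case CCW_NOP:
            break;
        }
        if (!prog->chain)
            break;         /* end of channel program */
        prog++;
    }
}

int main(void)
{
    char sector[512];
    struct ccw prog[] = {
        { CCW_READ,  sector, sizeof sector, 1 },   /* chained to next */
        { CCW_WRITE, sector, sizeof sector, 0 },   /* last command */
    };
    channel_execute(prog);
    return 0;
}

As the post says, no hardware change is needed: an ordinary core running this loop against a shared command list is already a channel in the IBM sense.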
 

Archimedes' Lever

Do you know what an instruction is?

/BAH

Do you know what assembler is?

Since I do, you *should* be able to draw your own conclusion. I
have doubts, however, as to your capacity to discern, given your absolute
Luddite CS demeanor.
 

Archimedes' Lever

No. You have not done the problem as requested.

/BAH

One instruction, zero the core. My one instruction, de-energize.

All reads zero. ONLY if you remove my instruction or add another, such
as re-energize, will you get a different result.

I did the problem exactly as it was stated.
 

Richard Cranium

On Wed, 03 Jun 2009 17:21:33 -0700, Archimedes' Lever wrote:

I did the problem exactly as it was stated.


No wonder you refuse to take the puzzle challenge. You only want to
have the answer before you try. No risk. No guts. No brains. All
full of it.
 

Richard Cranium

FOAD, Stalking retard.


Why not answer the questions, you phony? What about your celibacy?
C'mon, Archie. Show your true colors. Oops - don't take that as a
racially insensitive remark.
 

FatBytestard

They're not cheaper unless you reuse them,

There is no such thing as a write-once hard drive, idiot.

That stupidity is pretty damned pathetic coming from someone claiming
to be an engineer.

and that has the propagating-corruption problem as noted.

No, it does not. You are supposed to check your system for errors
BEFORE performing the backup, ass.

If you performed it correctly, NO files will be corrupt. If they were
written as valid files and the app put corrupt data in them, that is
another thing entirely, and that will still propagate onto any backup
media or methodology you are planning on using.

You are obviously clueless about it, and not caring to reduce the cost
of your belabored method of choice.

Also, if you had any brains at all, you would simply create a RAID
solution with bit-striped, hot-swap drives in place, and then you only
need to do an incremental off-site, over-the-net backup each day to
another HARD DRIVE device at the remote location. Your in-house data is
100% safe short of a fire or earthquake which causes something to take
out more than two drives in the array.

That is where things are right now. You go out and get an Ultra SCSI
SAS solution that incorporates a RAID array, and a chassis with hot-swap
drive bays and power-supply bays.

Once a file is (successfully) written, it is impossible for it to
become corrupted short of the entire bank of drives taking a dump all at
once. Even then, it is technically recoverable, just a bit more costly
to do so.

The data is more secure (absolutely secure, in fact), and it reads and
writes faster by way of both the interface as well as the array schema.

SAS is the way to go. You can even get 15,000 rpm mini drives and make
small arrays that you could slip in and out of a system as an entire
array. No need for drive bays with those little critters.
 

FatBytestard

Thank Goodness I don't have a real company.


Especially thankful that it is not "large or heavy" as well.

Buy some hard drives and a NAS chassis, John. It is cheaper, faster,
more reliable, and you can stop pissing and moaning about your engineers'
private directory sizes.
 