Maker Pro

My Vintage Dream PC

Archimedes' Lever

Just challenge him with a modestly difficult puzzle; he will run fast
enough.


You're an idiot. I find it funny that you claim to be an adult man in
a civil society, yet you expound baseless bullshit on a daily basis.
 
Archimedes' Lever

OK. Here is one. Zero all of core with one instruction, including
the ACs. On the PDP-10, one guy managed to get them all zeroed with
the exception of one bit. I can no longer remember which bit....
IIRC, bit 0 of word 0 but I'm not sure.

and I can't remember the guy's name. Gruen?

/BAH
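Since the thread never spells out the trick, here is a toy model of the sort of answer BAH is fishing for: an overlapping block transfer in the style of the PDP-10's BLT instruction, with destination one word above the source, ripples the contents of word 0 through everything above it, accumulators included. This is an assumption-laden sketch, not the real instruction semantics; the actual AC format, termination condition, and the famous one leftover bit are not modeled.

```python
# Toy model of a PDP-10-style overlapping block transfer (BLT).
# The real instruction's details (source,,dest packed in an AC,
# effective-address termination) are simplified away; this only
# illustrates the rippling-copy trick.

def blt(memory, src, dst, end):
    """Copy memory[src] -> memory[dst], incrementing both,
    one word at a time, until dst passes `end` (inclusive)."""
    while dst <= end:
        memory[dst] = memory[src]
        src += 1
        dst += 1

# The 16 "accumulators" live at addresses 0-15, followed by
# ordinary memory; fill everything with arbitrary nonzero junk.
memory = list(range(100, 132))
memory[0] = 0   # the seed word whose value will be propagated

# One overlapping transfer ripples word 0 through all of "core".
blt(memory, src=0, dst=1, end=len(memory) - 1)

assert all(word == 0 for word in memory)
```

Because each copy reads a word that the previous step already zeroed, a single pass propagates the seed everywhere, which is why one instruction (nearly) suffices.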

One instruction? - Pull Power Cord.

ALL ZEROS.

I win.
 
FatBytestard

Spend a little time in England and see how they really should be
pronounced.

Sorry, but you guys need to catch up with what the English language has
evolved into as well. Hint: It ain't Brit mode English.
 
Roland Hutchinson

Archimedes' Lever said:
One instruction? - Pull Power Cord.

ALL ZEROS.

You've never actually worked with actual core, have you, Grasshopper?

--
Roland Hutchinson

He calls himself "the Garden State's leading violist da gamba,"
.... comparable to being ruler of an exceptionally small duchy.
--Newark (NJ) Star Ledger ( http://tinyurl.com/RolandIsNJ )
 
On Mon, 01 Jun 2009 19:45:50 -0700, Archimedes' Lever

One instruction? - Pull Power Cord.

ALL ZEROS.

I win.
Nope.
True CORE memory holds its state across power failures.
And RAM memory may well come up in a random state.
 
Richard Cranium

On Mon, 01 Jun 2009 19:44:36 -0700, Archimedes' Lever

You're an idiot.


Pot ... Kettle ... Black - and don't take the black color remark as
anything more.
 
Richard Cranium

Sorry, but you guys need to catch up with what the English language has
evolved into as well. Hint: It ain't Brit mode English.


An old joke punchline is appropriate for you here, Archie. To wit:


"**** You!" ... Tennessee Williams
 
Patrick Scheible

Roland Hutchinson said:
You've never actually worked with actual core, have you, Grasshopper?

And even in semiconductor memory, I don't think zero voltage means a
logical state of zero.

-- Patrick
 
Kim Enkovaara

jmfbahciv said:
No. The context of the term core in this discussion has been the CPU.
These CPUs will be more sensitive than the old-fashioned memory
doughnuts.

Are you certain about that? Today we have the possibility of adding
millions of gates just to protect the chips from unexpected events,
radiation for example. Although the chips contain more and more
transistors, the FIT rates have not exploded. This has been achieved by
designing redundancy into the chips, by fine-tuning the silicon process,
and by better materials control (less alpha radiation).

The only catastrophic event from cosmic radiation is a latchup in the
cells, but that is a very rare event, or nonexistent, depending on the
process, wafer type and chip design.

And of course you could run two kernels in parallel, and fail over from
the active one to the passive one if the active one notices problems in
its environment. This has been done for decades in telecom equipment.
Often it is easier to notice the problem early enough and recover via an
active/passive switchover than to fix the problem in HW for the active
kernel.

--Kim
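The active/passive pairing Kim describes can be sketched in a few lines. This is an illustrative toy, not telecom code; the class names, the health-check model, and the injected failure are all invented for the example.

```python
# Sketch of an active/passive redundant pair: a health check on the
# active unit, and a role swap when it reports trouble. Detecting the
# fault and switching over is often simpler than repairing the active
# unit in place, which is the point being made in the post above.

class Unit:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def health_check(self):
        return self.healthy

class RedundantPair:
    def __init__(self, a, b):
        self.active, self.passive = a, b

    def tick(self):
        # On a failed health check, the passive unit takes over.
        if not self.active.health_check():
            self.active, self.passive = self.passive, self.active

pair = RedundantPair(Unit("kernel-A"), Unit("kernel-B"))
pair.active.healthy = False      # active unit notices a problem
pair.tick()                      # switchover
assert pair.active.name == "kernel-B"
```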
 
On Mon, 01 Jun 2009 22:00:14 -0500,
On Mon, 01 Jun 2009 19:45:50 -0700, Archimedes' Lever


Nope.
True CORE memory holds its state across power failures.
And RAM memory may well come up in a random state.
Should have said:
And SEMICONDUCTOR memory may well come up in a random state.


RAM is just a bit generic. :)
 
jmfbahciv

Dave said:
Of course. But I said "the average" user. Of the computers that I
see, not one in twenty has Photoshop. (If there is a high-power app,
it's likely to be a game.) Sure, some people do video editing or CAD,
but not that many (audio recording and editing doesn't require all
that much power, a 378M P3-500 is adequate for that, although some
transformation operations are opportunities to go get another cup of
coffee).

When enough people have enough compute power, the software they use
will begin to use that extra CPU resource.

<snip>

/BAH
 
jmfbahciv

John said:
Does C++ mean "faster than light"?
Nope. The extra pluses are to fool you into thinking it's faster...
than molasses in January in the Northern Hemisphere.

/BAH
 
jmfbahciv

John said:
Still waiting for Intel to start making a 36-bit Pentium?

Nope. You are still assuming I'm longing for the old gear
which I am not.

/BAH
 
jmfbahciv

John said:
Because this isn't the auld days, and because I'd design it that way.
I do design computer systems, hardware and software.

But not OSes which need to supply computing resources for general
timesharing.
Do you ever have ideas any more?

Lots of them..
Have you ever written an OS?
No.

Do you
think things won't change in, say, 20 years?
My observation is that the computing biz has cycles. What
I'm seeing now is what we saw in the 70s. The hardware
is too slow for customer needs so the software has to
compensate. Since hardware speeds were increased over
the last two decades to supply adequate computing power,
nobody had to write software well. Now that the hardware
is hitting a silicon ceiling, the focus is slowly, IMO,
going back to using software solutions to squeeze out
the extra performance.

/BAH


/BAH
 
jmfbahciv

John said:
John Larkin wrote:
Walter Bushell wrote:
On Mon, 25 May 2009 16:26:50 -0400, Peter Flass

John Larkin wrote:
The ultimate OS should maybe be hardware, fpga probably, or an
entirely separate processor that runs nothing but the os.

CDC-6600.
In a few years, when most any decent CPU has 64 or so cores, I suspect
we'll have one of them run just the OS. But Microsoft will f*** that
up, too.

John
Why only one? Surely the kernel will be multithreaded.
You meant to say reentrant.

/BAH
Well that too.
Not "too" but first.

/BAH
An os nanokernal needs to be neither.
Wrong; then you don't have an OS.


You are playing with words. An OS should be hierarchical, with the
top-most thing (what I call the kernal) being absolutely in charge of
the system.
But it cannot have that kind of control with more than [number picked
out of the air as a guesstimate] 8 threads or processes or services.

It could, in my opinion should, run on a dedicated CPU.
It is impossible to do this without allowing the other CPUs to be
able to make their own decisions about what they're processing.
It's times like this that I regret that "duh" has fallen into disuse.
But you are the one who insisted that the Boss CPU have control
of the whole system and what it does. You cannot have it
both ways.
Your boss controls what you do. No.

I sure hope she doesn't have to
constantly help you do it.
If the boss had your implementation, it would.


It
should be as small as practical, because as programs get bigger they
get less reliable. The kernal could and should be written by one
person. Its only function is to manage resources and other processors.
It should not share time with applications, drivers, GUIs, or anything
else. It might be multithreaded but really has no need to be, because
it won't be working very hard.

then it won't be in complete control of the system (if it's not
working very hard).


Other CPUs can, under tight resource control,
You just told me that resources won't be visible to the kernal. So
the Boss CPU cannot be in control of the whole system.
Of course the kernal has visibility to resources; it manages them. It
just doesn't share its cozy protected CPU with lower-level stuff like
device drivers, file systems, apps. Why is that such a hard thing to
understand?
It isn't hard for me to understand. What you don't know is
the problems of communicating between CPUs. If the Boss CPU
has to have complete control, then the other CPUs cannot
do any resource management; they have to gain permission
with the accompanying data from the Boss CPU before they
can resume execution.
With decent memory management and sensible request protocols,
DONE BY WHO?!!!! Some CPU has to do this.

The Boss could assign memory regions and privileges when it launches
programs into worker-bee CPUs. That's part of the responsibility of an
OS now. May as well, since it doesn't have much else to do. It could
also handle subsequent dynamic memory stuff, like extending pages or
allocating/deallocating buffers, although it could delegate that, too.

You are still stuck in master/slave thinking. Will you ever be able
to think in other terms?
But The Boss cpu doesn't run applications.

Not even the apps that are necessary to manage the system?
The disk i/o cpu is waiting for heads to move and platters to spin.
How is that any different from a disk driver in a uniprocessor system,
except that in this case the disk driver has an entire CPU to run it.

I don't know how to reexplain this anymore.
You keep making claims that multiprocessor systems will have fatal
overheads that uniprocessor systems don't. That makes no sense.

That's because you don't have experience thinking about this kind
of stuff.
The
multiprocessor systems don't even have context switching overhead.
HUHHHH!!!!

Why
would a dedicated-CPU file manager be slower than a file manager that
has to share cycles with everything else?

Because it is waiting for permission to do its own work.

Even TOPS-10 had disk drivers.

Yes. and in the master/slave configuration the slave CPU had
to wait for master CPU to do the bookkeeping and assignments.


Right. That is not a master/slave configuration.
All but one.

Now you've just slipped back into having one CPU control everything
else.
Like we have now?
No.


Experience is good if it opens up your mind to possibilities. It's
terrible if it closes it off. Which is why younger people tend to have
most of the new ideas.

Come on, thaw out and contribute some ideas.

I have been. You are not understanding what I'm writing about.
/BAH
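The bottleneck BAH keeps describing, every worker CPU blocking on one Boss for resource grants, can be made concrete with a toy model. Everything here (the class names, the page-granting protocol) is invented for illustration; neither poster specified an implementation.

```python
# Sketch of the disputed design: a single "Boss" CPU that must grant
# every resource request serializes all workers behind one queue.

from queue import Queue

class Boss:
    def __init__(self):
        self.requests = Queue()          # one queue for the whole system
        self.free_pages = set(range(8))  # resource pool the Boss owns

    def serve_one(self):
        # Requests are serviced strictly one at a time, in arrival
        # order -- the serialization point BAH is pointing at.
        worker, n = self.requests.get()
        grant = {self.free_pages.pop() for _ in range(n)}
        worker.pages |= grant

class Worker:
    def __init__(self, boss):
        self.boss = boss
        self.pages = set()

    def ask(self, n):
        # The worker cannot proceed until the Boss services this
        # request; with many workers, most sit in a wait state.
        self.boss.requests.put((self, n))

boss = Boss()
w1, w2 = Worker(boss), Worker(boss)
w1.ask(2)
w2.ask(1)
boss.serve_one()
boss.serve_one()
assert len(w1.pages) == 2 and len(w2.pages) == 1
```

Whether this protocol dominates the load depends entirely on how often workers must ask, which is exactly what the two sides disagree about.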
 
jmfbahciv

Joe said:
jmfbahciv said:
Joe said:
John Larkin wrote:
Take the huge GUI, the Taiwanese device drivers, the interrupt
handlers, all that dangerous stuff out of the OS kernal. A lot of that
can even be removed on uniprocessor systems.

But your idea is to have the Boss CPU have control of the whole
system; if it does, then the other CPUs cannot run the device drivers
without the Boss knowing about it. Having the control of the
system means that the scheduling for I/O and memory management
has to be done by the Boss, not the other CPUs. Thus, when
a slave CPU needs any resources, it has to ask the Boss for
it. This will cause the system to grind down to almost a halt
because the other CPUs will be in a constant wait state waiting
for the Boss to service their requests.
No, the part he's right about is that it really is possible to
distribute a lot of the functionality of a conventional kernel (even if
he can't spell it) among several services, so the microkernel doesn't
have to be involved with the details of the IO. The slave does have to
ask the boss for the resources as you say, but only once at system
startup and never again.
Nuts. [that was said politely :)] Consider a system that has an
uptime of years. Rebooting to acquire or redistribute resources is
not an option. Think about replacing or adding gear without
having to reload the system. And what about cores who have a need
to cooperate with each other? Then think about updating software
such as those DLLs (I think that's what you call them). Are you
going to cause a system or CPU reboot? If you do need a CPU to
reboot, how do you go about it without having to take the rest
of the CPUs or the whole system down?

These are all extremely rare events (i.e., not on the order of thousands
of times per second). So "never" again wasn't quite accurate, but the
point is that the requests for new resources don't happen enough to be
considered as part of the load (if you're going to have hot-swapping you
do need to make sure it all works -- but it isn't part of why this
organization doesn't make much sense).

I don't think they will be rare events. Think about just-in-time
executables.
The issues in microkernel vs. more conventional organizations have
nothing to do with small vs. large computer thinking. If anything, it's
easier to see a path to things like upgrading and restarting services as
necessary, and isolating bugs to ensure multi-year uptimes, with a
microkernel than with a modular OS. Not any easier to actually get
there, but easier to see the path.

the problem I'm seeing here (with the people who are posting) is that
they are limited to thinking in single-user, single-owner systems,
where single-user implies a single job or task and not multiple
tasks/human being.

/BAH
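Joe's counterpoint, granting a worker its resources once at startup so later operations never involve the kernel, can be sketched the same way. The capability model and all names here are illustrative assumptions, not anything specified in the thread.

```python
# Sketch of a one-time resource grant: the microkernel hands out a
# capability at boot, after which the holder uses the resource with
# no further kernel round trips.

class Capability:
    """A handle granted once; using it needs no kernel involvement."""
    def __init__(self, device):
        self.device = device

    def write(self, data):
        self.device.append(data)

def boot(kernel_grants):
    # The single request at system startup...
    return Capability(kernel_grants["disk"])

disk = []
cap = boot({"disk": disk})

# ...followed by arbitrarily many operations, kernel not involved.
for block in range(3):
    cap.write(block)

assert disk == [0, 1, 2]
```

BAH's objection, hot-swapped gear and just-in-time executables, amounts to saying the grant step is not rare; the sketch shows only the happy path where it is.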
 
jmfbahciv

Patrick said:
jmfbahciv said:
Charles said:
jmfbahciv wrote:
John Larkin wrote:
[snip...]

She said "really." She's being a timeshare snob just because I had a
PDP-11 and she had a VAX.

John


There is no reason to be insulting. You don't know what you're
talking about now.

/BAH
It should be spelled "VAXX", because we all *know* it is a "four letter
word". ;-)
nah. It was a 36-bit wannabe with 1/4 of its pecker chopped off.

I think you mean 1/9th?
No. 1/4. Think about it. :)

/BAH
 