jmfbahciv
Ahem said:Sounds like a fairly normal light workload for one of my BSD boxes.
Yep.
/BAH
jmfbahciv said:Do you consider cut and pasting from many files to one a "single edit"?
Patrick said:This is not a particularly heavy system load. Even a Windows box
should be able to do this, except that it would be only one user
editing at a time. On my Windows XP box at work I'm often editing
several different Word and Excel documents, have a couple of web
browsers, a very large and demanding special-purpose database
application, and Google Earth running at once.
John said:Well, the opposite of control is no control.
That's not timesharing, that's a single-user multitasking OS.
Timesharing is (was) using a central computer system to service
multiple users who used essentially dumb terminals. Now all the users
have monstrous amounts of local compute power and some-to-huge amounts
of local storage. They have their own printers, too, or networked
printers that don't go through a mainframe.
But if you want to use the word "timesharing system" for a single-user
PC, that's your definition.
I ran real timesharing systems, and hung around a lot more of them,
including PDP-10s.
Windows ain't one, and cloud computing isn't either.
I once planned a timesharing system that had a CPU per user, a rack
full of LSI-11 boards, one per modem, doing the simple stuff ahead of
the Big Iron. That would have been cool, but then PCs came along.
John said:I was interested in discussing (this is a discussion group) what
future OSs might look like, and how multicore chips (which are
absolutely on the silicon people's roadmaps) would affect OS and
systems design.
Not many people seem to want to think about that.
John said:I guess my point, coming full circle, is that a "big" OS has millions
of lines of code, written by hundreds of people, running at the "ring
0" level where any error crashes the whole system. A secure OS must be
a tiny chunk of code, absolutely insulated from damage by more complex,
less reliable stuff.
Kim said:X-rays use quite low-energy fields. I never had any problems with my mobile,
for example, while it has gone through X-ray.
In satellites the energy of particles is much higher, and they still
work just fine. It's just a question of design and compromises
between cost and reliability; nothing comes for free.
The EMC requirements are not defined by kiddies but by standards
organizations. And the tests must be done in accredited labs
to get the paperwork done and permission to sell the products.
Especially for telecoms equipment the limits are not always easy
to achieve.
The kiddie might have been someone whose only knowledge was how to
connect the stove, who read ready-made answers from the manual, and who
imagined something if the manual had no answer. You don't put real
engineers on calls with customers. They are shielded by many layers of
customer service and escalation protocols.
Was this the house where you had all kinds of wacky problems with
the electrical connections? Maybe the stove was just not correctly
connected (grounded, for example), and the filtering did not work.
Kim said:Single-user, single-owner systems are the most common ones, but they
still run multiple tasks in parallel. But this is an old thread; you just
don't believe that anything made during the last 20 years can support many
users or tasks.
Don't bring Windows 3.1 into the discussion like always;
it is ancient already...
I have used Unix machines that have 500-1000 users logged in at the same time,
each user with tens of tasks in parallel, etc. But maybe I have been
dreaming. My XP in front of me has had a few months of uptime and runs many
processes in parallel all the time. I have X emulation on the other
monitor that is connected to a big Linux cluster that runs, all the time,
hundreds of users doing parallel tasks, etc.
It's you who is still thinking that single-task is the norm; it is not.
Unfortunately, it's not quite that simple. The votes can only
Kim said:If two CPUs are used, internal diagnostics are needed for the
detection. If three CPUs are used, they can vote on who is incorrect.
In the most critical systems there are three parallel systems,
implemented on different platforms by multiple independent teams. The
results from the systems go through the voting process.
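A minimal sketch in C of the 2-out-of-3 vote described above; the names vote3 and vote_result are illustrative only, not taken from any particular safety-critical codebase:

#include <stdint.h>

/* Majority vote over three independently computed results.
 * With only two channels a mismatch can be detected but not attributed;
 * with three, the single disagreeing channel can be out-voted. */
typedef struct {
    uint32_t value;   /* the voted result, valid only when ok is nonzero */
    int      ok;      /* 0 = no two channels agree: treat as total failure */
    int      faulty;  /* index (0..2) of the out-voted channel, -1 if all agree */
} vote_result;

vote_result vote3(uint32_t a, uint32_t b, uint32_t c)
{
    vote_result r = { 0, 0, -1 };

    if (a == b && b == c)      { r.value = a; r.ok = 1; }
    else if (a == b)           { r.value = a; r.ok = 1; r.faulty = 2; }
    else if (a == c)           { r.value = a; r.ok = 1; r.faulty = 1; }
    else if (b == c)           { r.value = b; r.ok = 1; r.faulty = 0; }
    /* else: all three disagree -- no majority, r.ok stays 0 */

    return r;
}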
JosephKK said:High reliability = 2nd CPU is backup, not slave.
jmfbahciv said:Andrew Swallow wrote:
JosephKK wrote:
On Mon, 01 Jun 2009 19:29:38 +0100, Andrew Swallow
Andrew Swallow wrote:
<snip>
You can complete your memory cycle so the data and instruction buses
are clear. The normal action when the watchdog timer for the OS core
goes off is to restart all the cores. Some debugging information
may be kept, but that is it. [A sketch of this restart pattern follows this exchange.]
<snip>
Andrew Swallow
I will not ever let you design any life- and safety-critical stuff that
I have anything to do with.
If it is that safety-critical, you have at least two chips and the
second one takes over.
But the second one is the one that is doing the I/O required
to "save" the data. [I'm using John's proposal of master/slave.]
After the restart the chip may recover.
A watchdog timer firing on the entire chip means a total crash, an infinite loop,
or too small a timeout.
/BAH
Andrew Swallow
And just exactly how do you decide which CPU is faulty?
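A rough C sketch of the watchdog behaviour described above: the OS core kicks the timer periodically, and on timeout the handler saves a little debug state and restarts all the cores. The register addresses, the GCC-style ".noinit" section trick, and every name here are invented placeholders, not any real part's interface.

#include <stdint.h>

/* Hypothetical memory-mapped registers; the addresses are placeholders. */
#define WDT_KICK   (*(volatile uint32_t *)0x40001000u)
#define SYS_RESET  (*(volatile uint32_t *)0x40001004u)

/* Small RAM area excluded from the normal startup clear (GCC-style
 * ".noinit" section), so "some debugging information may be kept"
 * across the restart. */
struct crash_info {
    uint32_t magic;     /* set to CRASH_MAGIC so boot code trusts the data */
    uint32_t pc;        /* program counter captured in the timeout handler */
    uint32_t core_id;   /* which core the OS monitor was running on */
};
#define CRASH_MAGIC 0xDEADC0DEu
static struct crash_info crash __attribute__((section(".noinit")));

/* Called periodically from the OS core's housekeeping path; if that
 * core hangs, the kicks stop and the timer fires. */
void watchdog_kick(void) { WDT_KICK = 1; }

/* Timeout handler: record a little state, then pull the whole-chip
 * reset so every core restarts. */
void watchdog_timeout_isr(uint32_t saved_pc, uint32_t core_id)
{
    crash.magic   = CRASH_MAGIC;
    crash.pc      = saved_pc;
    crash.core_id = core_id;
    SYS_RESET = 1;      /* restart all the cores */
    for (;;) ;          /* never returns */
}

Note that a plain dual-CPU arrangement can detect that the two disagree but, as asked above, it cannot tell which one is wrong without a third opinion or good self-diagnostics.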
JosephKK said:JosephKK wrote:
JosephKK wrote:
JosephKK wrote:
Peter Flass wrote:
Scott Lurndal wrote:
What you will see going forward is that the operating system(s) never
really touch
the real hardware anymore and a VMM of some sort manages and coordinates
the hardware resources amongst the "operating system(s)", while the
operating systems are blissfully unaware and run applications as they
would
normally.
We've seen this since CP-67 in, what, 1968? BTDT.
If the OS doesn't touch the hardware, then it's not the monitor,
but an app.
I think this one is currently an open ended argument. What do you
call an application that hosts hundreds of dynamically loaded user
applications?
A daemon.
That is way far from the usual definition of daemon. Check your
dictionaries.
Since we implemented a few of them, I know what the functionality
of our daemons were. You asked me what I would have called them.
I told you.
Yes you have. I basically come from the nuxi model.
Particularly when that application used to be an OS in
its own right?
Which one are you talking about? The emulators are running as
an app.
You are missing the boat here; in the current world there are several
cases of things like VirtualBox, which run things like BSD, Solaris,
MSwin XP, FreeDOS (as applications) and all their (sub)applications
"simultaneously" (time sharing, and supporting multiple CPU cores).
This would place it at the monitor level you have referenced.
No. Those are running as apps w.r.t. the computer system they are
executing on. Those apps will never (or should never) be running
at the exec level (what the hell does Unix call "exec level"?)
of the computer system. That is exclusively the address space
and instruction execution of the monitor (or kernel) running
on that system.
It is kernel space in the *nix world.
In the olden Unix world. I'm beginning to have some doubts based
on what's been written here. It looks like a lot of things
get put into the kernel which shouldn't be there (if I believe
everything I've been told).
Terminology is failing here.
It's not a confusion of terminology. It's more a confusion of
the software level a piece of code is executing. I run into
this confusion all the time. I think it's caused by people
assuming that Windows is the monitor. It never was.
MSwin never was much of a proper OS. Just remember that there are
more things claiming to be an OS besides Multics, RSTS, TOPS-10, VMS,
MVS, VM-CMS, Unix(es), and MSwin.
MS got Cutler's flavor of VMS and called it NT. They started out with
a somewhat [emoticon's bias alert here] proper monitor but spoiled
it when Windows' developers had to have direct access to the
nether parts of the monitor.
/BAH
Yep, just like they ruined Win 3.1 by insisting on embedding the 32-bit
mode within the GUI, and insisting on internals access.
Yet more of the Tiny BASIC mentality.
Nah. It got started a long time ago, when the founders of MS discovered
a listing of DAEMON.MAC (of TOPS-10) in a dumpster and believed they
had a listing of the TOPS-10 monitor when they read the comment
"Swappable part of TOPS-10 monitor". It was a user mode program
that ran with privs.
/BAH
Wishful thinking. They were not smart enough to recognize the value
of such a document, let alone understand it, even if they did find
such.
You are wrong. They were clever enough; they simply didn't take
enough time learning about what they were using. I guesstimate
that one more month of study and they would have learned about
how buffer-mode I/O should work.
Are you so very sure?
They used DMA to read/write to the floppy in
the original PC. Used DMA again for the XT fixed disk as well.
Which history do you know? I have been watching computers since the
core days and even worked with core computers.
Unless I am mistaken, the DEC-10/PDP-10 used only silicon RAM.
jmfbahciv said:It doesn't matter how many humans are connected. When I was working,
I'd have 5-10 jobs running; in certain cases, a lot more. I was
a single user but I had the machine humming doing 10 "users'" worth
of work.
Anne said:one of the traditional requirements for time-sharing system ... besides
juggling multiple different tasks/jobs concurrently ... was sufficient
security to keep the different "users" protected from each other (and
external forces).
some of the current desktop machines had design point for stand-alone
kitchen table operation ... where many of the applications took-over
complete control of the whole machine ... environment lacked any
defenses against hostile operation. some number of these evolved into
multi-machine networked environment ... but again with the orientation
that they were operating in a non-hostile environment. a recent post on
the subject:
http://www.garlic.com/~lynn/2009h.html#28 Computer virus strikes US Marshals, FBI affected
John said:JosephKK said:JosephKK wrote:
JosephKK wrote:
JosephKK wrote:
JosephKK wrote:
Peter Flass wrote:
John Larkin wrote:
On Mon, 25 May 2009 16:26:50 -0400, Peter Flass
<snip>
Not very efficient, it would be a return to the bad old days
of master-slave
Which will cause every system to grind to a halt. Think about
networking.
or asymmetric multiprocessing.
Which will still require one CPU to be "boss".
The problem usually isn't
the CPU anyhow, it's memory corruption, and multiple cores
don't solve this problem.
The problem is scheduling. Memory corruption would be isolated
to an app address space, not the monitor's.
IFF the monitor is well designed and written. If MS develops it
the corruption will be coming from the monitor.
MS DOESN'T KNOW HOW TO DEVELOP!! That's the point I've been trying
to make. It's a distribution business and that is rooted deep in
its folklore.
/BAH
I see your point. Get some crap going just well enough to be
useful and pretend it is the second coming.
No. You don't see my point. Distribution has a completely different
set of problems that need to be solved. I worked long and hard
to try to solve the simplest of them (this implies that the simplest
were extremely complicated). When your business is distribution,
then the tradeoffs of all design decisions will be made in favor
of distribution and not anything else.
On-line distribution and support means that backdoors have to be
wide open.
<snip snot>
/BAH
I suspect you and I are talking past each other. Expand some more on
the distribution thing. Especially, clarify how it is different from
the marketeering thing. I sure do not follow you yet.
If I'm doing OS development and I need a UUO which gives me access to
the nether regions of all customer systems so I can distribute a file
FOO.EXE, my proposal would be turned down by TOPS-10 but accepted by
NT. Assume that FOO.EXE is an executable that is required to run on
all systems.
NT has to make the tradeoff decision because their business is to
distribute FOO.EXE while TOPS-10's is to provide general timesharing to
users without allowing disasters to happen. Allowing someone outside
the customer site to copy any file at any time to any place was
considered a security risk.
/BAH
A bit clearer. Make that 2 bits.
[relieved emoticon here] Good.
The anti-security consequences of
M$ design decisions are quite well documented. This gets closer to the
why.
If you look at them with the assumption of their primary business, it
gets even clearer.
And still they (M$) wonder why there are the likes of Wine. And the
growing willingness to lock them up in a VM.
None of their work, AFAICT, has enough paranoia. Putting pieces
of an app's context directly into the kernel would have given
us the willies.
/BAH
Not to mention their persistent willingness to let almost any
application mess with system internals. Makes my skin crawl.
Yep. If one can do it, any old virus code can do it.
/BAH
Or any device driver that comes with a $9 Chinese serial interface
board.
jmfbahciv said:Do you consider cut and pasting from many files to one a "single
edit"? I don't. Do you consider having more than one file
getting displayed (like you guys describe when looking at
listings) a single edit? I don't.
John said:The hardware is exactly identical. HP ProLiant servers, hot-plug, ECC
RAM, redundant BIOS, redundant power supplies and fans, hot-plug RAID.
I bought 16 of them so all of our machines are the same. With about 20
machine-years so far, absolutely nothing has gone wrong.
16 machines (all bought at the same time?) & 20 machine-years so far
John said:I guess my point, coming full circle, is that a "big" OS has millions
of lines of code, written by hundreds of people, running at the "ring
0" level where any error crashes the whole system. A secure OS must be
a tiny chunk of code, absolutely insulated from damage by more complex,
less reliable stuff.
(I should tell today's story, the incredible horror that Kontron BIOSs
turned out to be...)
We can only hope. In the long term, software just isn't worth what
Microsoft is charging for it. If two or three people can write a
good-enough word processor, it needn't be a billion-dollar-a-year
franchise.
Oh, the IBM thing happened half a century ago.
Yup. If some cores were device controllers, and only they could do,
say, DMA or TCP/IP, some things like system-takeover buffer overflow
exploits might be made impossible.
John
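A toy C sketch of that partitioning, just to make the shape concrete: application cores can only describe an I/O request, and the one core that owns the DMA engine checks the request against fixed per-core memory windows before touching hardware. Every name, address, and the ring-buffer layout here are invented for illustration; a real design would need atomic queue indices, interrupts instead of spin-waits, and error reporting.

#include <stdint.h>
#include <stddef.h>

#define MAX_REQS 32

/* An application core may only *describe* an I/O request. */
struct io_req {
    uint32_t  app_core;  /* requesting core (0..3 in this sketch) */
    uintptr_t buf;       /* buffer address inside that core's window */
    size_t    len;
    int       write;     /* 0 = device-to-memory, 1 = memory-to-device */
};

/* Single-producer/single-consumer ring between an app core and the
 * I/O core (indices assumed to be updated atomically). */
static struct io_req ring[MAX_REQS];
static volatile unsigned head, tail;

/* Per-core memory windows the I/O core is willing to DMA into.
 * Anything outside these bounds is rejected, so a compromised app
 * cannot aim the DMA engine at monitor or kernel memory. */
struct region { uintptr_t base; size_t size; };
static const struct region app_region[4] = {
    { 0x20000000u, 0x00100000u },
    { 0x20100000u, 0x00100000u },
    { 0x20200000u, 0x00100000u },
    { 0x20300000u, 0x00100000u },
};

static int req_is_sane(const struct io_req *r)
{
    if (r->app_core >= 4)
        return 0;
    const struct region *reg = &app_region[r->app_core];
    if (r->buf < reg->base || r->buf - reg->base > reg->size)
        return 0;
    return r->len <= reg->size - (r->buf - reg->base);
}

/* Hypothetical driver entry point; only the I/O core ever calls it. */
extern void dma_start(uintptr_t buf, size_t len, int write);

/* Main loop of the dedicated I/O core. */
void io_core_main(void)
{
    for (;;) {
        while (head == tail)
            ;                            /* wait for a request */
        struct io_req r = ring[tail % MAX_REQS];
        tail++;
        if (req_is_sane(&r))
            dma_start(r.buf, r.len, r.write);
        /* invalid requests are dropped in this sketch; a real system
         * would signal an error back to the requesting core */
    }
}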
John said:OK, explain it.
You keep saying that. You don't explain why.
I read the entire RSTS/E listing, understood it all (except the Basic+
compiler, which DEC didn't write) and added my own disk driver. Did
you do anything like that?
I wrote three RTOSs from the ground up. Ditto?