jmfbahciv
John said:An OS nanokernel needs to be neither. Both are tricks for the sake of tricks.
Wrong; then you don't have an OS. You have obviously never been intimate with a timesharing OS at the
/BAH
John said:Why? Why not run the ultimate kernel on one dedicated processor?
Processors are getting cheaper every day.
The nanokernel could run on one CPU, and needn't be multithreaded,
because it only does one thing: manage the rest of the CPUs. Device
drivers and file systems certainly should not be part of the kernel...
should not even run on the same processor. Running lots of flaky
stuff in supervisor space creates the messes we have in Windows and
Linux and other "big kernel" OSs, where any one of hundreds of modules
and drivers and GUI interfaces can take down the entire system or
compromise its security.
Sure, multithread things that need it, like stacks and GUIs. But never
allow those things to crash the core OS.
Simple: use one CPU out of 256 as the ultimate manager, and waste as
many of its cycles as you like. CPUs will soon be free.
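For what it's worth, the manager-CPU idea John describes can be sketched in a few lines of C. Everything below is hypothetical and invented for illustration (the mailbox layout, NCORES, the helper names); it is only meant to show how little the dedicated core would have to do in this picture: walk per-core mailboxes, hand out work, and reset any core that reports a fault.

/* Hypothetical sketch of the "one manager CPU" idea: one core owns the
 * manager role and never runs drivers or applications.  Every name here
 * (mailbox_t, NCORES, reset_core, ...) is made up; this is not any real
 * kernel's API. */

#include <stdint.h>

#define NCORES 256

typedef enum { REQ_NONE, REQ_TASK_EXITED, REQ_CORE_FAULT } req_type;

typedef struct {
    volatile req_type type;  /* written by a worker core, cleared by the manager */
    uintptr_t arg;           /* request-specific detail, unused in this sketch   */
} mailbox_t;

static mailbox_t mbox[NCORES];  /* one shared-memory mailbox per worker core */

/* Platform-specific pieces, stubbed out here. */
static uintptr_t run_queue_pop(void)           { return 0; }             /* next runnable task */
static void assign_task(int core, uintptr_t e) { (void)core; (void)e; }  /* start it           */
static void reset_core(int core)               { (void)core; }           /* fault isolation    */

/* The manager core runs only this loop, forever.  It never context
 * switches and never executes driver, stack, or application code; a
 * crashed worker costs one core, not the system. */
void supervisor_main(void)
{
    for (;;) {
        for (int c = 1; c < NCORES; c++) {
            if (mbox[c].type == REQ_TASK_EXITED) {
                uintptr_t next = run_queue_pop();
                if (next)
                    assign_task(c, next);
                mbox[c].type = REQ_NONE;
            } else if (mbox[c].type == REQ_CORE_FAULT) {
                reset_core(c);
                mbox[c].type = REQ_NONE;
            }
        }
    }
}

Whether that loop stays small once it also has to own memory management, I/O, and the interrupt system is exactly the point argued later in the thread.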
FatBytestard said:Nice unsubstantiated, peanut gallery mentality comment.
WHAT have you heard?
Mine runs fine. Vista has run fine for over three years, and W7 has
been running fine for several months now. You nay sayer retards are
idiots.
I love it how folks that have ZERO actual experience with things
expound on them like they actually know what is going on.
You do not.
FatBytestard said:Likely asking you if you know what a modem is since it came right after
your cryptic description.
Since you used the term (modem) in your reply, it raises even more
questions. You really are a dufus.
TheQuickBrownFox said:But memory caches, buffers, etc. HAVE changed, and your analysis (and
training) is about three decades OLD, minimum.
Speed scales over time. The number of transistors that can be integrated
into a given die area scales over time.
We all already know that. Your reply is meaningless.
The paradigm by which we utilize the hardware can and has changed, and
will continue to change. You claiming it is all the same is a sad
hilarity.
Your mind set is what has stagnated.
Do you even know what current mode logic is, for example?
TheQuickBrownFox said:Exactly. "Multiple processes" on a single CPU is only one thread, in
the final analysis, even if it has little execution-ordering functions, etc.,
helping things out.
Lawrence said:You seem to be deluded by the belief that BAH will listen to reality.
Only those systems that were designed in the 60s to run on hardware of
the 60s are acceptable to her.
John said:Not really. The proportions of speed seem to remain the same.
John said:Ahem A Rivet's Shot wrote:
On Tue, 26 May 2009 09:13:45 -0700
If
every task in turn has its own CPU, there would be no context
switching in the entire system, so there's even less for the
supervisor CPU to do.
Wheee it's MP/M all over again.
He's reinventing what didn't work well at all.
Times have changed, guys.
When cpu chips have 64 cores each running 4
threads, scattering bits of the OS and bits of applications all over
the place dynamically, and virtualizing a dozen OS copies on top of
that mess... is that going to make things more reliable?
First of all, the virtual OSes will merely be apps and be treated
that way. Why in the world would you equate 4 threads/core?
Furthermore, it is extremely insecure to insist the computer system have
a single-point failure which includes the entire running monitor.
Fewer points of failure must be better than many points.
You need to think some more. If the single-point failure is
the monitor, you have no security at all.
Few security vuls must be better than many, many. The "entire running monitor"
could be tiny, and could run on a fully protected, conservatively
designed and clocked CPU core.
It isn't going to be tiny. You are now thinking about size of the
code. You have to include its data base and interrupt system.
The scheduler and memory handler alone will be huge to handle I/O.
Its MTBF, hardware and software, could be a million hours.
That doesn't matter at all if the monitor is responsible for the
world power grid.
Hardware basically doesn't break any more; software does.
That is a very bad assumption. You need soft failovers.
Hardware can't take water nor falling into a fault caused
by an earthquake or a bomb or a United Nations quarantine
[can't think of the word where a nation or group are
declared undesirable].
The virtualizing trend is a way to have a single, relatively simple
kernel manage multiple unreliable OSs, and kill/restart them as they
fail. So why not cut out the middlemen?
Honey, you still need an OS to deal with the virtuals. Virtuals are
applications.
/BAH
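The kill-and-restart supervision described above ("manage multiple unreliable OSs, and kill/restart them as they fail") has a familiar user-space analogue: a parent process that launches several children and relaunches whichever one dies. The sketch below is plain POSIX C and only an analogy, not a hypervisor; guest_os_main() is a hypothetical stand-in for an entire guest image.

/* User-space analogy (not a real hypervisor): a parent launches N
 * "guests" and restarts any that die, the way the post above describes
 * a virtualizing kernel killing and restarting failed OS instances. */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define NGUESTS 4

static void guest_os_main(int id)
{
    /* Placeholder for "an entire unreliable OS".  A real guest would run
     * until it crashes; this one just exits after a while. */
    sleep(5 + id);
    exit(1);
}

static pid_t launch(int id)
{
    pid_t pid = fork();
    if (pid == 0) {              /* child: becomes the "guest" */
        guest_os_main(id);
        _exit(0);
    }
    return pid;                  /* parent: remembers who is running */
}

int main(void)
{
    pid_t guest[NGUESTS];
    for (int i = 0; i < NGUESTS; i++)
        guest[i] = launch(i);

    for (;;) {
        int status;
        pid_t dead = wait(&status);     /* block until some guest dies */
        if (dead < 0)
            break;
        for (int i = 0; i < NGUESTS; i++) {
            if (guest[i] == dead) {     /* find it, log it, relaunch it */
                fprintf(stderr, "guest %d died (status %d), restarting\n", i, status);
                guest[i] = launch(i);
                break;
            }
        }
    }
    return 0;
}

Note that the restart loop is itself doing monitor-style bookkeeping, which is the objection being raised here: something underneath still has to behave like an OS.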
Yes. Applications are million-line chunks written by applications
programmers who will make mistakes. Their stuff will occasionally
crash. And the people who write device drivers and file systems and
comm stacks, while presumably better, make mistakes too. So get all
that stuff out of the OS space. Hell, get it out of the OS CPU.
How can an OS ever be reliable when twelve zillion Chinese video card
manufacturers are hacking device drivers that run in OS space?
The top-level OS should be small, simple, absolutely in charge of the
entire system, totally protected, and never crash.
Why not?
If it's absolutely in charge of the entire system, then it has to
Charles said:jmfbahciv said:JosephKK said:[snip...] [snip...] [snip...]
Inside the OS there are usually a switcher, a scheduler, a process
manager, a file manager, an IO manager, and other basic parts.
Optional parts relate to hardware configuration and possibly dynamic
hardware configuration and temporary file systems.
Now how do UUOs and CALLIs relate to the above mentioned
interfaces? (If at all)
My user mode code has some buffers I want to be written to the
disk. I do a series of UUOs and CALLIs to convey to the
monitor's file system handler, memory handler, device routines,
and controller device routines that I want my bits to be copied
from here to there and labelled XYZ. I ask the monitor what
date/time it thinks it is by doing a CALLI. I set up certain
rules about subsequent usage of the computer system by telling
the monitor through CALLIs and UUOs. These are the only
ways the monitor and I, the user, communicate.
How is that for a start?
/BAH
Translating the DEC'isms, UUO's and CALLI's are what is more generically
known as "system calls". IBM "big iron" would call them "Supervisor
Calls", but then that's IBM.
These calls provide system-wide information (like TIME and DATE), and
protect the OS and other users by *preventing* the normal user from
directly programming dangerous and potentially system-wide damaging code.
These calls also provide the communication mechanisms that allow a
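For readers who know Unix-style systems, the sequence BAH describes maps onto ordinary system calls, as Charles says: UUOs and CALLIs were (roughly) reserved operations that trapped into the monitor, which then did the dangerous work on the user's behalf. Below is a rough present-day analogue in C, with an invented file name and buffer; it shows only the equivalent requests, not the actual trap mechanism.

/* Rough modern analogue of the sequence described above, using POSIX
 * system calls in place of UUOs/CALLIs.  The shape is the same: user
 * code never touches the device; it only asks the monitor/kernel to do
 * the work.  The file name and buffer contents are made up. */

#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char buf[] = "my bits, copied from here to there\n";

    /* "Label the file XYZ and copy my buffer out" -> open/write/close,
     * each a trap into the kernel, like a chain of calls to the
     * monitor's file and device handlers. */
    int fd = open("XYZ", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, buf, strlen(buf)) < 0) { perror("write"); return 1; }
    close(fd);

    /* "Ask the monitor what date/time it thinks it is" -> where she
     * would issue a CALLI, this is a single call here. */
    time_t now = time(NULL);
    printf("monitor says it is %s", ctime(&now));
    return 0;
}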
Roland said:Indeed, Windows 7 (of which you can download the final beta and run it for
free for the next year or so) is widely held to be, as advertised, the most
secure Microsoft operating system ever.
Just remember that damnation with faint praise is still damnation.
JosephKK said:I see your point. Get some crap going just well enough to be useful
and pretend it is the second coming.
JosephKK said:Not so much forgotten as (inappropriately) devalued, and thus
untaught.
I must agree, i found out the hard way. Dealing more deeply with IBM
MVS in the early 1990s taught me a lot. Not even a misbehaving
"system" program could bring the system down. It got trapped,
blocked, and killed with rather thorough diagnostic logs available. I
know, i used them a few times.
JosephKK said:Again i must agree, a non-reentrant kernel is a time bomb.
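A tiny C illustration of why that is (the function names are invented): a routine that keeps its result in a static buffer works fine until it is entered again, say from an interrupt or another processor, while the first caller is still using the result, at which point state is silently corrupted. The reentrant version takes the caller's storage instead.

/* Toy illustration of "a non-reentrant kernel is a time bomb".
 * Both functions are hypothetical; only the pattern matters. */

#include <stdio.h>
#include <stddef.h>

/* Non-reentrant: one static buffer shared by every caller.  If this is
 * re-entered while the previous result is still in use, the buffer is
 * silently overwritten. */
static char *format_id_unsafe(unsigned id)
{
    static char buf[16];
    snprintf(buf, sizeof buf, "task-%u", id);
    return buf;
}

/* Reentrant: the caller supplies the storage, so nested or concurrent
 * calls cannot trample each other. */
static char *format_id_safe(unsigned id, char *buf, size_t len)
{
    snprintf(buf, len, "task-%u", id);
    return buf;
}

int main(void)
{
    char a[16], b[16];
    const char *x = format_id_unsafe(1);
    const char *y = format_id_unsafe(2);   /* x now also reads "task-2" */
    printf("unsafe: %s %s\n", x, y);
    printf("safe:   %s %s\n",
           format_id_safe(1, a, sizeof a),
           format_id_safe(2, b, sizeof b));
    return 0;
}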
JosephKK said:You clearly do not have a clue as to what you are talking about.
Please leave or self-destruct.
I don't want these people to leave. The only way for old knowledge
JosephKK said:Actually helpful to me, can't speak for anyone else.
Does the user directly specify the UUOs and CALLIs when using say a
text editing program?
This is programmer territory, isn't it? This
is OK, it establishes context.
Now UUO and CALLI seem to be acronyms or abbreviations.
Yes.
An expansion
of each in this modified context seems to be really to the point.
Please provide these. Perhaps even discuss an example or three of
each. Posting links is quite acceptable, as i expect the explanations
to be more than a few paragraphs.
JosephKK said:The insight i am looking for is rather deeper than that. Things like
what is the difference between a UUO and a CALLI? And why
Kim said:No problem. If core in this context means memory for you,
Christopher said:I still have a bunch of thermal terminal printouts from around 1974
that are holding up OK, despite not being very careful with them.
It's the ASR-33 teletype printouts from back then that are the most degraded.
Greegor said:Grace Hopper USN predicted that 30 years ago.
She also predicted that each pixel on a computer
screen would at some point have its own processor.
http://www.waterholes.com/~dennette/1996/hopper/grace86.gif
Either approach can work if properly executed.
Wouldn't it already be difficult to find a new
PC that isn't a dual or quad processor?
The genie's already out of the bottle.