Maker Pro

How to develop a random number generation device


JosephKK

MooseFET [email protected] posted to sci.electronics.design:
I disagree with what you may not have meant to say above. In the
microprocessor area, you are largely correct but in other machines,
there were many hardware systems that could protect against buffer
overflows getting evil code to run. Some of them used a different
stack for call and return than for the data. Some such as the
IBM-360 didn't have a stack and required each routine to handle its
"save area".

Some of the more DSPish machines would also make it hard for a buffer
overflow to do anything evil. They are far from general-purpose
machines, so although they may only show that it could have been done,
we can say that a general-purpose PC that was well defended could have
been made.

OK. Thanks for the addition to my knowledge.
 

Richard Henry

You seem to be confusing whether it is possible to address an issue with
whether a particular statement actually addresses the issue.

Please read my actual question, quoted above, and removed from any
irrelevant context which might confuse the issue.

FWIW, I have no problem with either "An OS can surely make it impossible
to write safe code" or "a real OS is required to make safe code possible".
However, they don't appear to address the question which was actually
being asked.

If it helps, that question can be rephrased as whether an OS (any OS)
can "make unsafe code impossible", which is a different property to either
of those given.

AFAICT, you cannot do this without sacrificing the ability to run
arbitrary chunks of machine code, which appears to be a "must have"
feature for any OS (if there are OSes which don't allow this, they have
yet to escape from the lab).

Actually, even if you do sacrifice that ability, you can't truly
eliminate buffer overruns. If the OS only allows you to run e.g. Java
bytecode, you can write an x86 emulator in Java then feed it x86 code
which contains buffer-overrun bugs. Requiring the use of a higher-level
language simply means that a programmer has to make some effort to get
buffer overruns.

All things considered, eliminating buffer overruns is something which
should be the responsibility of the language. If you don't allow unbounded
arrays (i.e. referring to an array by its start address and relying upon
the programmer to keep track of where it ends), buffer overruns aren't an
issue. Once the program has been compiled into machine code, the
information which is required has been lost.
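
To make that concrete, here is a small hypothetical C fragment (invented
for illustration, not from the thread). The array is passed as nothing
more than a start address, so the compiled code has no way to notice
that the copy runs past the end:

    #include <string.h>

    /* Hypothetical example: dst is just an address; its size is purely
     * the programmer's convention and is gone from the machine code. */
    void copy_name(char *dst, const char *src)
    {
        strcpy(dst, src);   /* copies until the NUL, wherever that lands */
    }

    int main(void)
    {
        char name[8];
        /* The source is far longer than eight bytes, so the copy simply
         * runs past name[7] into whatever happens to sit next to it. */
        copy_name(name, "this string is much longer than eight bytes");
        return 0;
    }

A language that carries the array's length with it and checks every
access would reject this write at run time; that is precisely the
information which no longer exists once you are down to machine code.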

You have a bad case of Windows-user denial.
 

JosephKK

Nobody [email protected] posted to sci.electronics.design:
That still doesn't address the question of how you decide that a
write operation has overrun its buffer; the details of where one
buffer starts and another ends are unknown to the OS.

You might be able to catch specific cases (e.g. overwriting a return
address), if you're willing to take a massive performance hit (i.e.
a context switch on all writes to the stack). Even then, that isn't
the only type of buffer overrun which can be exploited.

Agreed, mostly; a lot more hardware help is needed: protection from
user programs altering MMU data or the stack pointer itself, making I/O
instructions privileged, and probably much more, all with careful OS
support. So the answer is: possible, yes, but not without serious
hardware support.
 

JosephKK

Nobody [email protected] posted to sci.electronics.design:
I don't think you understand what a buffer overrun is. FWIW, it
isn't related to process isolation (preventing one process from
trashing another process' memory). That's a non-issue with modern
OSes and modern CPUs (for x86, that means 80286 and later).

A buffer overrun is where a process trashes its own memory. The
memory which is written is supposed to be written by that process,
but the wrong part of the program writes the wrong data to it (e.g.
writing a portion of a string to memory which is supposed to hold an
integer or pointer).

The reason why the OS cannot do anything about this is because it
lacks the detailed knowledge regarding which portions of memory are
used for what purpose. That information is normally discarded during
compilation (unless you compile with debug information). By the time
you get to running a binary executable, you're at a level of "code
writes data", with no details about which parts of memory belong to
specific variables.
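
A hypothetical sketch of exactly that situation (the struct and values
are invented for the example). Every byte written is inside memory the
process legitimately owns, so the MMU and the OS see nothing wrong; only
the intended use of the bytes is violated:

    #include <stdio.h>
    #include <string.h>

    /* On a typical layout, name[] is followed directly by balance, and
     * both are in memory this process is allowed to write. */
    struct record {
        char name[8];   /* meant to hold a short string */
        int  balance;   /* meant to hold an integer     */
    };

    int main(void)
    {
        struct record r;
        r.balance = 1000;
        /* Eleven characters plus the terminating NUL need 12 bytes, so
         * on the usual layout the last four land on top of balance. */
        strcpy(r.name, "ABCDEFGHIJK");
        printf("balance is now %d, not 1000\n", r.balance);
        return 0;
    }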

I went and checked the Wikipedia definition; it is basically correct.
The links to the extended explanatory material seem to be good as well.
Your explanation does not match.

Use your own salt.
 

krw

Please explain how "An OS can surely make it impossible to write safe
code and a real OS is required to make safe code possible" addresses the
question of whether the OS can prevent buffer overruns.

The OS is a necessary, but insufficient, part of the solution. The API
is certainly part of the solution. Compilers too. Saying that the
"OS can't" do something is letting it completely off the hook.
Windows, or more accurately M$, *is* the problem.
 

John E. Perry

krw said:
...
The OS is a necessary, but insufficient, part of the solution. The API
is certainly part of the solution. Compilers too. Saying that the
"OS can't" do something is letting it completely off the hook.
Windows, or more accurately M$, *is* the problem.

Yes, the OS is part of the problem/solution, but it needs hardware help.
Actually, hardware/software combinations have existed at least since
the late '70s. One I'm personally familiar with is the Motorola MC6809
(what a sweet chip!) running Microware's OS-9.

The 6809 had a software interrupt that could be programmed (as could all
the other interrupts) to switch memory maps. A non-privileged user
running under OS-9 had no access at all to the system space; the user
could do any stupid thing imaginable and affect only himself. To get to
system resources he had to load a register with a code and issue a SWI.

I believe a few other microprocessors had similar features (didn't the
68K?) -- I'd be very surprised if they didn't have corresponding OS's.

And, notwithstanding the empty-headed MS worshiper who keeps calling
more knowledgeable people idiots, Microsoft still doesn't make use of
even what Intel provides.

---
...Microsoft's approach to multicore is incompatible with this
architecture. In a few years we'll have, say, 1024 processors on a
chip, and something new will be required to manage them. It will be a
thousand times simpler and more reliable than Windows.

But, John, we already have it. Linux is running right now on hundreds
of processors -- I don't remember offhand how many cores per chip, but
it's one of the later PowerPC processors. I think it's at Livermore.

...Yes, here it is. The whole list at Livermore, with operating systems
and hardware summaries.

http://www.llnl.gov/computing/tutorials/lc_resources/

John Perry
 

David Brown

John said:
For a while, merely receiving an email, without even opening it, could
infect a Windows machine!

Microsoft's design philosophy seems to be "when in doubt, execute it."
So "data" files, like emails and Word docs, can contain embedded
executables, and guess what Windows likes to do with them?

The other trick Microsoft does is to ignore filename extensions,
examine the file headers themselves, and take action based on the
content of the files, without bothering to point out to the users that
something is fishy.

I'm not sure about this one - it is sometimes a problem with Windows
that it is so dependent on filename extensions rather than actually
looking at what is in the file (compare with *nix systems, which
traditionally do not use filename extensions in this way). It is
precisely because the dependence on filename extensions is so ingrained
in the Windows system and the users' understanding that it is so easy
to get people to run malware with names like "jokes.txt.exe". The
default setting of "hide file extensions for known types" is one of
MS's greatest gifts to malware writers.
But AlwaysWrong, He Of Many Nyms, loves Windows, and gets huffy if
anybody points out defects.


MS brought in the guy who wrote VMS, since even they knew they weren't
competent to do NT themselves. But the combination of legacy structure
and corporate culture limited what could be done.

It's also noticeable that the early versions of NT, up to NT 3.51 (the
version numbering itself a brilliant marketing ploy), were more solid
than any version since then - because the GUI and non-essential device
drivers (like the graphics drivers) were kept out of kernel mode. But
MS couldn't make such a system work fast enough for good graphics, so
with NT 4.0 the graphics system was moved into the kernel as in Win95,
with a corresponding drop in stability.
 

David Brown

John said:
Right. The first thing an OS should do is protect itself from a
pathological process.

This is essential for protection against other types of malware or
attacks, such as a trojan. If a system has proper separation of access
levels for programs, then a rogue program (either intentionally rogue
for a trojan, or accidentally for a normal program suffering a buffer
overflow, or just a good-old-fashioned bug) is limited in what it can
do. Thus on a *nix system, if apache is compromised, it cannot be used
to corrupt the rest of the system as it runs as user "nobody" (or
something similar - details vary) and cannot write to many areas of the
file system. On Windows, the user control is so badly done that most
people run as "administrator", so that the pathological process has
full control.
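
A minimal sketch of that privilege-drop pattern, assuming a POSIX
system; the account "nobody" and the helper name drop_to_nobody() are
illustrative, not taken from apache's actual source:

    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Give up root for good before touching any untrusted input. */
    static int drop_to_nobody(void)
    {
        struct passwd *pw = getpwnam("nobody");
        if (pw == NULL)
            return -1;
        if (setgid(pw->pw_gid) != 0)   /* drop the group first...        */
            return -1;
        if (setuid(pw->pw_uid) != 0)   /* ...then the user; order matters */
            return -1;
        return 0;
    }

    int main(void)
    {
        /* ... privileged setup (e.g. binding port 80) would go here ... */
        if (drop_to_nobody() != 0) {
            perror("drop_to_nobody");
            return EXIT_FAILURE;
        }
        /* ... handle untrusted requests with minimal rights from here on ... */
        return EXIT_SUCCESS;
    }

The group has to be dropped before the user, because once setuid()
succeeds the process no longer has the privilege to change its own
group.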
But it should also manage data, code, and stack
spaces such as to make it very, very difficult for anything internal
or external to corrupt user-level processes.

It should do that too - although protection against bad processes is
more important (Windows should learn to walk before trying to run). A
key point of protection against running data and stack segments is to
stop user-level rogue processes from destroying data for that user.
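
One concrete form of "not running data and stack segments" is to make
sure writable buffers are never executable. A rough POSIX sketch (it
assumes mmap() with MAP_ANONYMOUS, as found on Linux and the BSDs):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Ask for a page that is readable and writable but not
         * executable; jumping into it later faults, so any code
         * injected through an overrun cannot be run from it. */
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* ... use buf as an ordinary data buffer ... */
        munmap(buf, 4096);
        return 0;
    }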
 

MooseFET

On Sep 15, 10:59 am, John Larkin
Windows is a "big OS" in that thousands of modules make up the actual
priviliged runtime mess. A "small OS" would have a very tight, maybe a
few thousand lines of code, kernal that was in charge of memory
management and scheduling, and absolutely controlled the priviliges of
lower-level tasks, including the visible user interface, drivers, and
the file systems.

I'm not sure that I would call "file systems" controlling part of the
kernel's job. I would step that out one layer. It really would be a
task that is given its time and access privileges by the kernel. By
splitting the two, you would make it easier to design and debug both.
Once you have one file-system-controlling piece of software going, you
could run a second one, still being debugged, controlling different
media.
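
A toy sketch of that split, with invented names: the "kernel" does
nothing but pass a request message to whichever file-system server owns
the media, so a stable server and one still being debugged can run side
by side behind the same small interface:

    #include <stdio.h>

    /* Invented message format: the kernel only routes these, it never
     * interprets them. */
    struct fs_request {
        const char *path;
        char        reply[64];
    };

    typedef void (*fs_server)(struct fs_request *);

    static void stable_fs(struct fs_request *rq)        /* proven server */
    {
        snprintf(rq->reply, sizeof rq->reply, "stable fs opened %s", rq->path);
    }

    static void experimental_fs(struct fs_request *rq)  /* under debug   */
    {
        snprintf(rq->reply, sizeof rq->reply, "new fs opened %s", rq->path);
    }

    int main(void)
    {
        struct fs_request rq = { "/etc/motd", "" };
        fs_server servers[] = { stable_fs, experimental_fs };

        for (int i = 0; i < 2; i++) {   /* the "kernel": route and return */
            servers[i](&rq);
            puts(rq.reply);
        }
        return 0;
    }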



This was common practice in the 1970's, and even
 

MooseFET

Even a von Neumann machine with memory management is in effect a
Harvard machine. There's no excuse for executing data or stack spaces.

Yes, we could call that a pseudo-Harvard because it is running on the
same bus and perhaps in the same chips.

Even the Z80 could be made to work as a pseudo-Harvard machine. You
could tell the difference between an instruction fetch and a memory
read. Because of this, just a little extra hardware let you connect
more total memory space than the 64K limit.
 

Rich Grise

Microsoft's design philosophy seems to be "when in doubt, execute it."
So "data" files, like emails and Word docs, can contain embedded
executables, and guess what Windows likes to do with them?

I think Bill Gates's dream was of a world where everybody's friendly,
there's no aggression or hostility, sort of like a Disneyland of computing,
where everybody shares all of their data with everyone, anyone can execute
anything on anybody's computer anywhere - sort of like the Garden of Eden
with interconnected computers.

Reality seems to have not turned out that way - there really are bad
people out there! =:-O

Thanks,
Rich
 

MooseFET

On Sep 15, 11:09 am, John Larkin
architecture. In a few years we'll have, say, 1024 processors on a
chip, and something new will be required to manage them. It will be a
thousand times simpler and more reliable than Windows.

I think that the number of virtual cores will grow faster than the
number of real cores. With extra register banks and a bit of clever
design, a single ALU can look like two slightly slower ones.

I expect to see multicore machines with fewer actual floating-point
ALUs than actual integer ALUs.
 

John Larkin

On Sep 15, 10:59 am, John Larkin


I'm not sure that I would call "file systems" controlling part of the
kernel's job. I would step that out one layer. It really would be a
task that is given its time and access privileges by the kernel. By
splitting the two, you would make it easier to design and debug both.
Once you have one file-system-controlling piece of software going, you
could run a second one, still being debugged, controlling different
media.

That's what I meant; the file system, and user GUIs and such, are just
more jobs, not part of the kernel at all. VMS worked that way. File
systems were loadable tasks, not part of the OS proper. RSTS had
multiple "runtime systems", API sets essentially, which is what the
user tasks saw; some emulated other OS's.

Drivers are an intermediate case. They can be dynamically loadable,
but must have hardware access and, directly or via DMA, can access all
of memory.

John
 

John Larkin

I think Bill Gates's dream was of a world where everybody's friendly,
there's no aggression or hostility, sort of like a Disneyland of computing,
where everybody shares all of their data with everyone, anyone can execute
anything on anybody's computer anywhere - sort of like the Garden of Eden
with interconnected computers.

Except that he, and Ballmer, were the most vicious and predatory SOBs
in computer industry history.

John
 

Rich Grise

If it helps, that question can be rephrased as whether an OS (any OS)
can "make unsafe code impossible", which is a different property to either
of those given.

The answer is "Yes, with proper hardware support".

Why is it that for three days now, you've been resisting accepting the
right answer?

Windowholic? ;-)

Thanks,
Rich
 

John Larkin

On Sep 15, 11:09 am, John Larkin


I think that the number of virtual cores will grow faster than the
number of real cores. With extra register banks and a bit of clever
design, a single ALU can look like two slightly slower ones.

I expect to see multicore machines with fewer actual floating-point
ALUs than actual integer ALUs.

Yup. Low-horsepower tasks can just be a thread on a multithread core,
and many little tasks don't need a dedicated floating-point unit.

My point/fantasy is that OS design should change radically if many,
many real or virtual CPUs are available. One CPU would be the manager,
and every task, process, or driver could have its own, totally
confined and protected, CPU, and there would be no context switching
ever, and few interrupts in fact.

John
 

David Brown

Sounds sort of like Sun's Niagara chips, which have (IIRC) 8 cores,
each with 4 threads, but only a few floating-point units. For things
like web serving, it's ideal.
Yup. Low-horsepower tasks can just be a thread on a multithread core,
and many little tasks don't need a dedicated floating-point unit.

My point/fantasy is that OS design should change radically if many,
many real or virtual CPUs are available. One CPU would be the manager,
and every task, process, or driver could have its own, totally
confined and protected, CPU, and there would be no context switching
ever, and few interrupts in fact.

That's not going to work for Linux, anyway - there is a utility thread
spawned per cpu at the moment (work is underway to avoid this, because
it is a bit of a pain when you have thousands of cpus in one box).

However, there is no point in having a cpu (or even a virtual cpu)
dedicated to each task. Many sorts of tasks spend a lot of time
sleeping while waiting for other events - a cpu in this state is a waste
of resources. Multiple cpus are a good thing, and faster context
switching would be a good thing too (multiple virtual cpus per real cpu
is part of this), but there is little to be gained by going overboard.
There comes a point when the die space used for all these extra cpus
would be better spent on cache or other sorts of buffers.
 

John Larkin

An idiot that applies patch "every Tuesday" is just as retarded as one
that runs "defrag" more than once or twice a year when they have no apps
that cause severe fragmentation, like a database app or such.

Patch Tuesday is Microsoft's practice of accumulating a bunch of
patches and releasing them on the 2nd Tuesday of each month.

http://en.wikipedia.org/wiki/Patch_Tuesday


I certainly don't update that often. I like to let the patches mellow
for a month or so, because Microsoft's patches are so stupid they
often break more than they fix.

"The second problem affected large deployments of Windows, such as can
be found at large companies. Such large deployments found it
increasingly difficult to make sure all systems across the company
were all up to date. The problem was made worse by the fact that,
occasionally, a patch issued by Microsoft would break existing
functionality, and would have to be uninstalled."

Damn, you actually *don't* know much about this stuff.

John
 

John Larkin

John said:
On Sep 15, 11:09 am, John Larkin
[....]
architecture. In a few years we'll have, say, 1024 processors on a
chip, and something new will be required to manage them. It will be a
thousand times simpler and more reliable than Windows.
I think that the number of virtual cores will grow faster than the
number of real cores. With extra register banks and a bit of clever
design, a single ALU can look like two slightly slower ones.

I expect to see multicore machines with fewer actual floating-point
ALUs than actual integer ALUs.

Sounds sort of like Sun's Niagara chips, which have (IIRC) 8 cores,
each with 4 threads, but only a few floating-point units. For things
like web serving, it's ideal.
Yup. Low-horsepower tasks can just be a thread on a multithread core,
and many little tasks don't need a dedicated floating-point unit.

My point/fantasy is that OS design should change radically if many,
many real or virtual CPUs are available. One CPU would be the manager,
and every task, process, or driver could have its own, totally
confined and protected, CPU, and there would be no context switching
ever, and few interrupts in fact.

That's not going to work for Linux, anyway - there is a utility thread
spawned per cpu at the moment (work is underway to avoid this, because
it is a bit of a pain when you have thousands of cpus in one box).

However, there is no point in having a cpu (or even a virtual cpu)
dedicated to each task. Many sorts of tasks spend a lot of time
sleeping while waiting for other events - a cpu in this state is a waste
of resources.

Only if you think of a CPU as a valuable resource. As silicon shrinks,
a CPU becomes a minor bit of real estate. It makes sense to use it
when there's something to do, and put it to sleep when there's not.
Lots of power gets saved by not doing context switches.

My point is that large numbers of CPU cores *will* become common and
cheap, and we need a new type of OS to take advantage of this new
reality. Done right, it could be simple and astoundingly secure and
reliable.

I'd be happy to waste a little silicon if I could have an OS that
doesn't crash and that doesn't go to sleep for seconds at a time for
no obvious reason.

John
 