Maker Pro

How to develop a random number generation device


MooseFET

I don't think you understand what a buffer overrun is. FWIW, it isn't
related to process isolation (preventing one process from trashing another
process' memory). That's a non-issue with modern OSes and modern CPUs (for
x86, that means 80286 and later).

A buffer overrun is where a process trashes its own memory. The memory
which is written is supposed to be written by that process, but the wrong
part of the program writes the wrong data to it (e.g. writing a portion of
a string to memory which is supposed to hold an integer or pointer).

No, a buffer overrun is overrunning the buffer. It doesn't matter
what is in the memory you've overrun into.

The exploit that takes advantage of the buffer overrun causes an
overrun onto the return address or some other data that shouldn't be
writable by this task.

If an application trashes its own variables via a buffer overrun, only
that application is hurt in the process. This is exactly what Mr.
Larkin said was the case, and he is correct in that.
 

Michael A. Terrell

TheKrakWhore said:
I'm an idiot.


--
Service to my country? Been there, Done that, and I've got my DD214 to
prove it.
Member of DAV #85.

Michael A. Terrell
Central Florida
 

Nobody

Well, Windows is not a modern OS, and x86 is not a modern processor.

The 80286 and later have a built-in MMU, and Windows 3.1 and later make
use of it (although the 95/98/ME branch doesn't make quite as much use of
it as it should). With NT/2K/XP, processes don't trash each other's memory
(one process might "persuade" another to trash its *own* memory, but
that's a separate issue).
Given decent hardware tools, an OS should abort a process that tries
to execute data.

That feature is available at least on current Linux and BSD systems; I'm
not sure about Windows. It can break certain programs, e.g. those written
in languages which make extensive use of thunks (trampolines), as well as
some emulators and languages which use just-in-time compilation.
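
Purely as an illustration (none of this is from the thread): a JIT or
trampoline-generating runtime that wants to keep working under such a
policy has to ask the OS explicitly to flip a page to executable, along
these lines on a POSIX system.

/* Hypothetical sketch: generate code into a writable page, then make it
 * read+execute before jumping into it. Under an NX-enforcing OS, skipping
 * the mprotect() step would get the process killed at the first call. */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

static const uint8_t generated_code[] = { 0xC3 };   /* x86 "ret" as stand-in JIT output */

void (*make_executable_stub(void))(void)
{
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 0;
    memcpy(page, generated_code, sizeof generated_code);

    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0)
        return 0;

    return (void (*)(void))page;    /* POSIX-style cast to a callable stub */
}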

That doesn't *prevent* buffer overflows, but it prevents exploitation via
the "classic" mechanism (write "shellcode" into a stack variable and
overwrite the return address to point into the shellcode).
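
A minimal sketch of the vulnerable pattern that mechanism relies on (the
function and names here are invented for illustration): a fixed-size stack
buffer is filled with no length check, in the same stack frame as the
saved return address.

#include <string.h>

void handle_request(const char *input)
{
    char buf[64];          /* fixed-size buffer on the stack */
    strcpy(buf, input);    /* no bound: if strlen(input) > 63, the copy runs
                              past buf and on towards the saved return address */
}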

However, there are other ways to exploit buffer overflows, e.g.
overwriting function pointers, or data which affects control flow. If you
can't inject your own code, you're limited to whatever code is already
part of the process, but that may be more than enough (system() will
almost certainly be available; if you're lucky, there may be a complete
interpreted language available to use).
There was a joke, in the DOS days, that a certain jpeg file contained
a virus. It was a joke because it was obviously impossible. A few
years later, Windows managed to make it happen.

There's no technical reason why it couldn't have happened under DOS.
And XP occasionally crashes when a user-level process screws up. Not
as often as '98 type systems, but it still happens. And Patch Tuesday
has become a ritual.

But that almost certainly isn't due to the user-level process directly
trashing OS memory. It's far more likely that the user process passes
"bad" data to the OS and the OS trashes its own memory.
If the code segment was write protected and execute-only, and data
segments were not executable,

That will prevent code injection, limiting an attack to whatever code is
already available in the process' address space.
and if there were separate data and return-address stack pointers,

That would eliminate the return address as a vector. There are still
function pointers; C++ uses these extensively (virtual methods), COM
even more so (all COM methods are virtual).

And there's still the case of overwriting data which affects control flow
(the most extreme case is data which is "code" for a feature-rich
interpreter).
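
As a hedged illustration of the function-pointer vector (the struct and
names are invented for this sketch, not taken from the thread): no code is
injected, but overrunning one field clobbers an adjacent pointer that the
program will later call through, so control goes wherever the attacker's
data says. A C++ vtable slot or a COM method pointer plays the same role.

#include <string.h>

struct handler {
    char name[32];
    void (*on_event)(void);   /* called later by the program */
};

void set_name(struct handler *h, const char *src)
{
    strcpy(h->name, src);     /* an overrun of name spills into on_event */
}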
the OS *would* know when a process does something dangerous.

Only sometimes.

In any case, none of this deals with *preventing* buffer overflows, but
with mitigating the consequences.
 

TheKraken

No, he's right. It is not hard to make data segments non-executable,
and even windows can do it to some extent (blocking execution on the
stack segment). And he's also right that windows is not a modern OS -
from the day DOS 1.0 came out to the day Vista arrived, the MS OS's have
never been "modern" in terms of how they are designed and implemented.
They have had plenty of modern features, and the age of a particular
feature or idea is no indication of quality, of course.

My remark was about his first line. "Windows is not a modern OS".

Utter horseshit.

The remainder of that line, "and x86 is not a modern processor"

The guy is a fucking loon.

Oh, and your assessment is flawed as well.

"Modern and quality" are two different elements.

When I was a kid, my father worked at Cinti. Milacron. I have seen
"Grade six bolts" that really were such things. In our "modern age", I
have seen bolts that are claimed to be grade six that have heads that
twist off like taffy.

A REAL grade six bolt does NOT EVER have a head that would twist off
like taffy.

Yet in today's ass backward society, we have entire industries that
utilize the cheapest components they can find, and only make a move to
change something if they are faced with their indiscretions in a blatant
manner, not unlike I face you people when you spew horseshit.

I have seen numerous OSes, and they all have their places. A PC was
not EVER meant to run world class, mission critical OSes that have zero
failure modes. It is a consumer product, and such things would have made
it an entirely unapproachable realm for the consumer, had they been
utilized.

As far as vulnerabilities go, every digital system has them.

As for the x86 not being "modern": I am quite sure that, without the
personal computer industry as a whole and the innovations put forth by
such companies as Intel, IBM, and the like, the world would not be
anywhere close to the degree of integration we currently enjoy.

We would be nowhere close to 125M transistors on a die, like we are
today.

Credit, Intel and the consumer demand for personal computing,
regardless of the fucking OS.
 

TheKraken

No, that sounds like a pretty good summary. There's no doubt that the
NT line of windows is far more solid than the DOS line, but it is far
from solid enough. I have a W2K machine that stops dead when Opera is
showing certain web pages - there may be a bug in Opera, but it is
windows' fault that it stops the whole machine.


"Patch Tuesday" is what he does to himself after he makes a stupid
remark, and finds out he was full of shit. His brain has more scar
tissue on it than Sammy Davis Jr.'s liver did, and it was the size of a
basketball.

You guys are the same dopey crowd that spews horseshit about Vista.
 

ChairmanOfTheBored

So you didn't put in the patches LOL!


An idiot that applies patches "every Tuesday" is just as retarded as one
that runs "defrag" more than once or twice a year when they have no apps
that cause severe fragmentation, like a database app or such.
 

Rich Grise

That still doesn't address the question of how you decide that a write
operation has overrun its buffer; the details of where one buffer starts
and another ends are unknown to the OS.


But it knows what chunks of memory it has allocated to a particular
process. As long as it's in your own memory space, who cares if you
overwrite/overrun your own buffers?

You might be able to catch specific cases (e.g. overwriting a return
address), if you're willing to take a massive performance hit (i.e. a
context switch on all writes to the stack). Even then, that isn't the only
type of buffer overrun which can be exploited.

That's that sort of catchall "software that can catch the exception" part
of my answer. :)

Cheers!
Rich
 

Rich Grise

The reason why the OS cannot do anything about this is because it lacks
the detailed knowledge regarding which portions of memory are used for
what purpose. That information is normally discarded during compilation
(unless you compile with debug information). By the time you get to
running a binary executable, you're at a level of "code writes data",
with no details about which parts of memory belong to specific variables.

Evidently, you haven't done much compiling or linking or memory management.

Good Luck!
Rich
 

Nobody

No, a buffer overrun is overrunning the buffer. It doesn't matter
what is in the memory you've overrun into.

Yes. Well, it matters in terms of what happens next, but not in terms of
whether or not a "buffer overrun" has occurred.
The exploit that takes advantage of the buffer overrun causes an
overrun onto the return address or some other data that shouldn't be
writable by this task.

No.

Leaving aside whether the return address "should" be writable (that's
how the code which most compilers generate normally works; whether or not
that's a good idea is a different matter), the term "buffer overrun" is
normally used where a process overruns one variable or field (which is
part of the memory which the process is allowed to modify) and corrupts
another variable or field (which is also part of the memory which the
process is allowed to modify).

E.g.:

char name[256];
int x;

x = foo();
strcpy(name, str);
bar(x);

If strlen(str)>255, and "x" follows "name" in memory, then the strcpy()
will corrupt the contents of x. That is a buffer overrun; the buffer is
"name", and the strcpy() overruns it.
If an application trashes its own variables via a buffer overrun, only
that application is hurt in the process.

Yes and no. Only that application is directly affected, but that
application can then do anything which its owner can do, e.g. email
sensitive files, connect to the owner's bank's website and request the
transfer of funds, etc. If the owner has administrative privileges, it can
install new software (e.g. a rootkit).
This is exactly what Mr. Larkin said was the case and he is correct in
that.

Mr. Larkin appeared to be talking about a different issue (process
isolation).

The "problem" with buffer overruns is when they hijack the process which
owns the buffer.

If the overrun tries to modify memory which doesn't belong to that
process, the process will normally be terminated. That has been a "solved"
problem ever since CPUs started having MMUs built in and OSes started to
make use of them. For "wintel", that's 80286 and Windows 3.1; mainframes
and minicomputers had this much earlier.

When the overrun corrupts the process' own memory, the process continues
to run as if nothing untoward has happened, but it is now doing the
bidding of the attacker.
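
A deliberately buggy sketch (illustrative only, not from the thread) of the
contrast drawn above: an overrun that stays inside the process's own memory
goes unnoticed and may quietly corrupt a neighbouring variable, while a
write to memory the process doesn't own is stopped by the MMU and the OS
terminates the process (a segmentation fault on Unix-like systems).

#include <stdio.h>

int main(void)
{
    char buf[16];
    int sentinel = 42;

    buf[20] = 'X';              /* overrun within our own memory: nothing
                                   stops it; some nearby variable may now
                                   be corrupted, but the program runs on */
    printf("still running; sentinel = %d\n", sentinel);

    *(volatile char *)0 = 'X';  /* write to memory we don't own: the MMU
                                   faults and the OS kills the process   */
    return 0;
}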
 

Nobody

You seem to be confusing "Windows" and an "OS".

You seem to be confusing whether it is possible to address an issue with
whether a particular statement actually addresses the issue.

Please read my actual question, quoted above, and removed from any
irrelevant context which might confuse the issue.

FWIW, I have no problem with either "An OS can surely make it impossible
to write safe code" or "a real OS is required to make safe code possible".
However, they don't appear to address the question which was actually
being asked.

If it helps, that question can be rephrased as whether an OS (any OS)
can "make unsafe code impossible", which is a different property to either
of those given.

AFAICT, you cannot do this without sacrificing the ability to run
arbitrary chunks of machine code, which appears to be a "must have"
feature for any OS (if there are OSes which don't allow this, they have
yet to escape from the lab).

Actually, even if you do sacrifice that ability, you can't truly
eliminate buffer overruns. If the OS only allows you to run e.g. Java
bytecode, you can write an x86 emulator in Java then feed it x86 code
which contains buffer-overrun bugs. Requiring the use of a higher-level
language simply means that a programmer has to make some effort to get
buffer overruns.

All things considered, eliminating buffer overruns is something which
should be the responsibility of the language. If you don't allow unbounded
arrays (i.e. referring to an array by its start address and relying upon
the programmer to keep track of where it ends), buffer overruns aren't an
issue. Once the program has been compiled into machine code, the
information which is required has been lost.
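
For what it's worth, here is a hedged sketch (invented for illustration) of
what "no unbounded arrays" means in practice: the buffer always travels with
its length, and every access is checked against it, which is exactly what a
bounds-checking language does for you implicitly.

#include <stdbool.h>
#include <stddef.h>

struct slice {
    char  *data;
    size_t len;
};

static bool slice_put(struct slice s, size_t i, char c)
{
    if (i >= s.len)
        return false;      /* out of bounds: refuse instead of overrunning */
    s.data[i] = c;
    return true;
}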
 

MooseFET

Yes. Well, it matters in terms of what happens next, but not in terms of
whether or not a "buffer overrun" has occurred.

So you agree at this point.
Yes, go back and re-read it carefully.

Leaving aside whether the return address "should" be writable (that's
how the code which most compilers generate normally works; whether or not
that's a good idea is a different matter), the term "buffer overrun" is
normally used where a process overruns one variable or field (which is
part of the memory which the process is allowed to modify) and corrupts
another variable or field (which is also part of the memory which the
process is allowed to modify).

You seem to be confused about what we are talking about. We are
talking about making an OS safe. If an application task commits an
overrun that causes that task to fail, it is quite a different matter
than talking about a buffer-overrun-based exploit.




[...]
Mr. Larkin appeared to be talking about a different issue (process
isolation).

He is talking about process isolation and it not being violated by a
buffer overrun if the OS is well written. He is correct in what he
said.

When the overrun corrupts the process' own memory, the process continues
to run as if nothing untoward has happened, but it is now doing the
bidding of the attacker.

You have assumed that by causing the overrun the attacker has gained
control. As I explained earlier, this need not be the case.
 

John Larkin

No, he's right. It is not hard to make data segments non-executable,
and even windows can do it to some extent (blocking execution on the
stack segment). And he's also right that windows is not a modern OS -
from the day DOS 1.0 came out to the day Vista arrived, the MS OS's have
never been "modern" in terms of how they are designed and implemented.
They have had plenty of modern features, and the age of a particular
feature or idea is no indication of quality, of course.


No, he's right - I remember such hoax viruses (from early windows days
rather than DOS days - IIRC jpg did not exist so early), and Windows did
have a vulnerability allowing malware to spread by jpgs:
http://news.techwhack.com/430/microsoft-jpeg-exploit-might-result-into-a-worm-soon/

I also remember hoaxes about emails that would infect you just by
reading them - and then MS managed to turn that into reality too
(especially for those poor sods that used Outlook and had MS Office
installed).

For a while, merely receiving an email, without even opening it, could
infect a Windows machine!

Microsoft's design philosophy seems to be "when in doubt, execute it."
So "data" files, like emails and Word docs, can contain embedded
executables, and guess what Windows likes to do with them?

The other trick Microsoft does is to ignore filename extensions,
examine the file headers themselves, and take action based on the
content of the files, without bothering to point out to the users that
something is fishy.

But AlwaysWrong, He Of Many Nyms, loves Windows, and gets huffy if
anybody points out defects.
No, that sounds like a pretty good summary. There's no doubt that the
NT line of windows is far more solid than the DOS line, but it is far
from solid enough.

MS brought in the guy who wrote VMS, since even they knew they weren't
competent to do NT themselves. But the combination of legacy structure
and corporate culture limited what could be done.
I have a W2K machine that stops dead when Opera is
showing certain web pages - there may be a bug in Opera, but it is
windows' fault that it stops the whole machine.

Windows is a "big OS" in that thousands of modules make up the actual
priviliged runtime mess. A "small OS" would have a very tight, maybe a
few thousand lines of code, kernal that was in charge of memory
management and scheduling, and absolutely controlled the priviliges of
lower-level tasks, including the visible user interface, drivers, and
the file systems. This was common practice in the 1970's, and even
then a decent multiuser, time-share OS would run for months between
power failures. You can totally debug a few thousand lines of code,
authored by one or two programmers; you will never debug a hundred
million lines of code that has 2000 authors.


John
 

John Larkin

No, he is right. The Harvard architecture may be an exception because
"trying to execute data" may be meaningless in that sort of processor.

Even a von Neumann machine with memory management is in effect a
Harvard machine. There's no excuse for executing data or stack spaces.

John
 

John Larkin

You seem to be confusing "Windows" and an "OS".

Clearly the design of Windows can never be fixed; it was bungled from
Day 1. I wonder what will be next?

I like the idea of a multicore CPU that has a processor per task, with
no context switching at all. One CPU would do nothing but manage the
system; it would be the "OS". Other CPUs would run known-secure device
drivers and file systems. Finally, some mix of low-power and
high-performance CPUs would be assigned to user tasks.

Microsoft's approach to multicore is incompatible with this
architecture. In a few years we'll have, say, 1024 processors on a
chip, and something new will be required to manage them. It will be a
thousand times simpler and more reliable than Windows.

John
 

John Larkin

My remark was about his first line. "Windows is not a modern OS".

Utter horseshit.


Windows evolved incrementally from DOS. Windows 1..3 were hurry-up,
non-preemptive messes kluged on top of DOS in order to catch up with
the Macintosh. The idiotic APIs were frozen for all time, and had to be
supported in 95/98/NT/XP, which perverted their architectures, if they
can be said to have anything as elegant as an "architecture."

The remainder of that line, "and x86 is not a modern processor"

The guy is a fucking loon.

It's an 8008, kluged again and again. Register-poor, with appallingly
klutzy memory management. What made it successful was IBM's horrible
decision to use the 8088, and Intel's silicon superiority.

John
 

John Larkin

When the overrun corrupts the process' own memory, the process continues
to run as if nothing untoward has happened, but it is now doing the
bidding of the attacker.

Right. The first thing an OS should do is protect itself from a
pathological process. But it should also manage data, code, and stack
spaces such as to make it very, very difficult for anything internal
or external to corrupt user-level processes.

John
 

John Larkin

You seem to be confusing whether it is possible to address an issue with
whether a particular statement actually addresses the issue.

Please read my actual question, quoted above, and removed from any
irrelevant context which might confuse the issue.

FWIW, I have no problem with either "An OS can surely make it impossible
to write safe code" or "a real OS is required to make safe code possible".
However, they don't appear to address the question which was actually
being asked.

If it helps, that question can be rephrased as whether an OS (any OS)
can "make unsafe code impossible", which is a different property to either
of those given.

AFAICT, you cannot do this without sacrificing the ability to run
arbitrary chunks of machine code, which appears to be a "must have"
feature for any OS (if there are OSes which don't allow this, they have
yet to escape from the lab).

I used to run a PDP-11 timeshare system, under the RSTS/E OS. It would
run, typically, a dozen or so local or remote users and a few more
background processes, system management and print spooling and such.
Each user could dynamically select a shell, a virtual OS, to run
under, and could program in a number of languages, including assembly,
and run and debug the resulting machine code. You could also run ODT
(like "debug"), poke in machine instructions, and execute them.
Non-privileged users could do all this, and crash their own jobs, but
they absolutely could not damage the OS or other user tasks. In a
hostile environment (we sold time to four rival high schools, who kept
trying to hack one another) the system would run for months,
essentially between power failures.

This OS, and RSX-11, and TOPS-10, and VMS, and UNIX, and no doubt many
more, from other vendors, *did* escape from the lab.

John
 

David Brown

TheKraken said:
My remark was about his first line. "Windows is not a modern OS".

Utter horseshit.

In a modern OS, you have a carefully separated and modular design. You
don't put dangerous components (in terms of crashing or corrupting other
parts), like the graphics drivers, in the kernel. You certainly don't have
a GUI as an essential part of the operating system. Windows has a totally
jumbled mess rather than a proper design - the company even stood up in
court and swore that it was impossible to separate the web browser from
the operating system!
The remainder of that line, "and x86 is not a modern processor"

The x86 architecture was outdated and outclassed when the first 8086
chip came on the market - everyone, including Intel, knew that. The
only reason it turned up in the first PC was that a PHB at IBM thought it
would save a few dollars over the engineers' choice (a 68000), and since
the PC was seen as a low-volume, dead-end experimental product, it would
not matter.

Current x86 devices are excellent implementations of a terrible
architecture that was outdated 20 years ago.
The guy is a fucking loon.

I've had my disagreements with John, but I don't think that's how I'd
describe him!
Oh, and your assessment is flawed as well.

"Modern and quality" are two different elements.

As I said, if you'd noticed. The basic Unix design is still a good way
to build an operating system after over 30 years.
When I was a kid, my father worked at Cinti. Milacron. I have seen
"Grade six bolts" that really were such things. In our "modern age", I
have seen bolts that are claimed to be grade six that have heads that
twist off like taffy.

A REAL grade six bolt does NOT EVER have a head that would twist off
like taffy.

Yet in today's ass backward society, we have entire industries that
utilize the cheapest components they can find, and only make a move to
change something if they are faced with their indiscretions in a blatant
manner, not unlike I face you people when you spew horseshit.

I agree - lots of "modern" changes are very much a step backwards. But
that does not make windows a "modern OS", no matter how many backwards
steps MS have taken since they conned SCP out of their "quick and dirty
operating system".
I have seen numerous OSes, and they all have their places. A PC was
not EVER meant to run world class, mission critical OSes that have zero
failure modes. It is a consumer product, and such things would have made
it an entirely unapproachable realm for the consumer, had they been
utilized.

That's also true, and the failures in windows do not imply strengths
in other OSes. I use windows in many systems, because it is the best
choice for the job - that does not make it a modern OS, nor does it make
someone an "idiot" for pointing that out.
As far as vulnerabilities go, every digital system has them.

Windows has far more than its fair share, for many reasons.
As for the x86 not being "modern": I am quite sure that, without the
personal computer industry as a whole and the innovations put forth by
such companies as Intel, IBM, and the like, the world would not be
anywhere close to the degree of integration we currently enjoy.

That is a total non sequitur. The x86 was not a modern design - when it
was made, it was considered old-fashioned and badly designed by the
standards of the day (compare it to the 68000, for example, or the Z80).
The fact that PCs revolutionised computing has nothing to do with
the x86 being "modern" or not.
We would be nowhere close to 125M transistors on a die, like we are
today.

So what?
Credit, Intel and the consumer demand for personal computing,
regardless of the fucking OS.

Lots of companies, Intel included, had their part to play in the history
of personal computing (many by sheer luck, rather than hard work). But
none of that changes *anything* in this thread - neither windows nor the
x86 have ever been "modern" designs, nor is John an idiot for saying so.
 

David Brown

ChairmanOfTheBored said:
An idiot that applies patches "every Tuesday" is just as retarded as one
that runs "defrag" more than once or twice a year when they have no apps
that cause severe fragmentation, like a database app or such.

I don't think databases are common causes of fragmentation - serious
databases often do their own file handling at a lower level precisely to
avoid that sort of thing.

But other than that, I agree - blindly patching windows is not a good
idea, and defragmenting does not last long enough to make it worth the
effort - it's more effective to invest in more RAM so your file caches
are bigger.
 

JosephKK

David Brown [email protected] posted to
sci.electronics.design:
MooseFET said:
MooseFET wrote:
On Sep 11, 4:58 pm, John Larkin
[... buffer overflow ...]
It sounds to me like C compilers/linkers tend to allocate memory
to code, buffers, and stack sort of anywhere they like.
It's up to the linker to build the segments, and the run-time
link-loader picks the addresses - the compiler is not involved in
the process.

I included the linker in the "compiler/linker" because in many cases
they are the same program. In many environments the segments end up in
memory in the same order as they were in the file.

The linker is not the same program (for C) in any environment I have
ever heard of - but it is generally *called* by the compiler
automatically, so it just looks like it is part of the compiler.
The point is, any linking issues are handled by linking directives
and not by anything you give to the compiler (i.e., the source
code).

The link-loader is a different animal altogether - it is what the
operating system uses to actually load and run a program. It handles
the final linking of the binary with any required run-time libraries,
it allocates space and addresses for the different segments of the
program, and it links the parts together. It is at that stage that
the addresses are finalised. In particular, if you are using a system
with randomised addresses, each time a program is loaded it is linked
to a different random address.


Yes, and with the addition of dynamically linked modules there are now
at least three pieces to the linker issue.

1st, a semi-static linker that ties together the loadable base modules of
a single, possibly transitory, application program.

2nd, a link-loader that resolves system and other resource calls
external to the application at load time.

3rd, what are called dynamically linked libraries, which provide various
less commonly used capabilities for the main application and provide
for an extensibility interface (API). The notable difference is that
these libraries can be dynamically loaded and unloaded to make more
room for other uses of memory.
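
As a rough illustration of that third stage (the library name and symbol
below are hypothetical; dlopen()/dlsym()/dlclose() are the usual POSIX
interface for it): a program can pull a shared library in at run time, call
into it, and drop it again when the memory is better used for something
else.

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *lib = dlopen("libplugin.so", RTLD_NOW);    /* load on demand */
    if (!lib) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    void (*init)(void) = (void (*)(void))dlsym(lib, "plugin_init");
    if (init)
        init();                                      /* call into the library */

    dlclose(lib);                                    /* unload it again */
    return 0;
}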
 