Maker Pro

How to develop a random number generation device

Nobody

Evidently, you haven't done much compiling or linking or memory management.

I've done lots of all of those.

The linker only gets to "see" exported variables, i.e. global variables
which aren't declared "static". The most common form of buffer overrun
involves automatic (stack-based) variables, which don't exist outside of
the compiler (i.e. they don't appear in the symbol table of an object
file, library, or executable).
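
To make that concrete, here is a minimal sketch (file name and identifiers invented for illustration) showing which of these three kinds of variable ever gets a symbol the linker can see:

/* symbols.c -- compile with e.g. "gcc -c symbols.c" and run "nm symbols.o".
 * (Hypothetical example; the exact nm letters vary by toolchain.) */

int exported_counter;        /* external linkage: appears in the symbol table,
                                visible to the linker and to other objects */

static int private_counter;  /* internal linkage: at most a local symbol in
                                this object file; nothing can link against it */

void touch(void)
{
    char scratch[64];        /* automatic (stack) variable: no symbol at all,
                                just an offset from the stack pointer in the
                                generated code */
    scratch[0] = (char)exported_counter;
    private_counter += scratch[0];
}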

Depending upon the platform, the executables and DLLs may contain
information on inter-object symbols (i.e. those exported by one object
file and imported by another), or these may be eliminated during linking,
leaving only those symbols which are required by the loader.

The former is the case on Linux ("nm -D" will list the symbol table), the
latter on Windows.

As for memory management:

The process maintains a "heap" built from large chunks of memory, obtained
from the OS via e.g. brk() or mmap(..., MAP_ANONYMOUS). The process then
satisfies individual requests (malloc etc) from the heap. Memory which
is released by e.g. free() is returned to the heap for later re-use;
memory is seldom returned to the OS.

The OS only gets to see the bigger picture, i.e. brk(), mmap(), maybe
munmap(), not all of the individual malloc() and free() calls.
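
You can watch this happen with a little Linux/glibc sketch (behaviour is allocator-dependent, so treat it as illustrative, not guaranteed):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *before = sbrk(0);          /* current program break */

    char *p = malloc(64 * 1024);     /* small enough that glibc typically
                                        extends the break rather than mmap()ing */
    void *grown = sbrk(0);

    free(p);                         /* goes back on the allocator's free lists,
                                        usually NOT back to the OS */
    void *after_free = sbrk(0);

    printf("break: %p -> %p -> %p\n", before, grown, after_free);
    /* Typically grown == after_free: the process keeps the memory for the
       next malloc().  Very large requests are usually mmap()ed instead, and
       those really are handed back to the OS on free(). */
    return 0;
}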
 
Rich Grise

Doing so is the essence of a "buffer overrun exploit", one of the most
common types of security vulnerability for code written in C/C++.

It allows a malicious user to make a program do something that it isn't
supposed to do.

E.g. consider a program being run on a web server to process form
input from a web page. If the program suffers from a buffer overrun flaw,
simply sending the right data in a POST request can allow the attacker to
execute arbitrary code on the web server.
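
The pattern behind most of those reports is depressingly small. A sketch of the vulnerable shape (names invented; don't write code like this):

#include <string.h>

/* Hypothetical handler for one field of a POST request. */
void save_field(const char *value)
{
    char name[64];
    strcpy(name, value);   /* no length check: anything past 63 bytes
                              overwrites whatever sits above `name` on the
                              stack -- with crafted input, the saved return
                              address */
    /* ... use name ... */
}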

My God! You've got to quit using MICRO$~1 web servers!

Good Luck!
Rich
 
Rich Grise

True, but if you can manage to create a buffer overflow in a kernel process
(the TCP/IP stack being a common target here, often implemented as a
kernel-level driver), you have the keys to the kingdom.

I suppose neither of you has ever heard of a "chroot jail".

Cheers!
Rich
 
Nobody

I used to run a PDP-11 timeshare system, under the RSTS/E OS. It would
run, typically, a dozen or so local or remote users and a few more
background processes, system management and print spooling and such.
Each user could dynamically select a shell, a virtual OS, to run
under, and could program in a number of languages, including assembly,
and run and debug the resulting machine code. You could also run ODT
(like "debug"), poke in machine instructions, and execute them.
Non-privileged users could do all this, and crash their own jobs, but
they absolutely could not damage the OS or other user tasks. In a
hostile environment (we sold time to four rival high schools, who kept
trying to hack one another) the system would run for months,
essentially between power failures.

This OS, and RSX-11, and TOPS-10, and VMS, and UNIX, and no doubt many
more, from other vendors, *did* escape from the lab.

You and I are talking about different issues. Process isolation has been
solved, even on mainstream OSes.

I'm not talking about process isolation. I'm talking about the ability to
make a program behave other than how its author intended by overrunning a
buffer (e.g. by making some portion of its input larger than the buffer
in which it will be stored).
 
Nobody

You have a bad case of Windows-user denial.

You might want to check my User-Agent header before you assume that I'm a
Windows user.

You might also want to check that you are actually correct before you
start making ad hominem attacks against anyone who contradicts your
viewpoint.
 
Vladimir Vassilevsky

Joel said:
True, but if you can manage to create a buffer overflow in a kernel process
(the TCP/IP stack being a common target here, often implemented as a
kernel-level driver), you have the keys to the kingdom.

A messed-up data segment is still the data segment. It shouldn't be
possible to execute it as code.

Since the 286 there have been goodies like four privilege levels, separate
LDTs for every process, and different segment rights for code, data, and
stack. In theory, that should allow for pretty solid protection; in
practice it was (and still is!) left unused for reasons of simplicity,
software compatibility, and performance.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
 
Vladimir Vassilevsky

Nobody wrote:

I'm not talking about process isolation. I'm talking about the ability to
make a program behave other than how its author intended by overrunning a
buffer (e.g. by making some portion of its input larger than the buffer
in which it will be stored).

It is possible to declare every data object in a program as a separate
segment. That is what the LDT was intended for. Of course, there will be
a lot of overhead, and compatibility issues too.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
 
Nobody

The answer is "Yes, with proper hardware support".

The answer is "No, regardless of hardware support".
Why is it that for three days now, you've been resisting accepting the
right answer?

I should ask the same of you.

Or, avoiding loaded terms like "right answer" ...

Why is it that for three days now, you've been resisting accepting that
the problem with "buffer overruns" isn't about segfaults (they aren't a
problem; process misbehaves, process gets SIGSEGV, process dies; good
riddance), but about intra-process overruns?

Look at the Wikipedia article for "buffer overrun". Or search the BugTraq
archives for that term. This isn't about process isolation, it's about a
process trashing its own memory in response to "bad" input.
Windowholic? ;-)

Checked my User-Agent headers? ;-)

FWIW, I use both Linux and XP; Linux if I can, XP if I have to.

I'm by no means enamoured of Windows, but suggesting that it lacks the
process isolation found in Linux or MacOSX is incorrect (95/98/ME were
partially lacking in this regard, but not the NT/2K/XP branch). Windows
has plenty of problems, but lack of memory protection isn't one of them.

If you think that the "buffer overrun" problem is somehow specific to
Windows, try searching for "buffer overrun" and "buffer overflow" along
with "linux". Or take a glance at the GLSA list:

http://www.xatrix.org/advisories.php?s=Gentoo
 
John Larkin

A shared memory interface for 1024 cpus? That's going to be absolutely
vast, or have terrible latency.

Why would a *system* care about the latency of one processor accessing
memory? The system only cares about net performance. As it stands now,
only one *process* can access memory at a time (because all processes
share a single CPU) and they all suffer from context switching
overhead. Multiple CPUs never context switch, so *must* be overall
faster.
I still don't understand why you think that interrupts or context
switches are a reliability issue - processors don't have problems with them.

It's the OS's that we have problems with. Hardware is cheap and
reliable; software is expensive and buggy. So we should shift more of
the burden to hardware.

And I'd love to hear you explain to customers that while their web
server has a load average of a couple of percent, they need to buy a
second processor chip just to run an extra cron job. A single cpu per
process will *never* be realistic.

The IBM Cell structure is a hint of the future.
"Just share a central cache?" It might sound easy to you, but I suspect
it would be *slightly* more challenging to implement.

Sure, but Moore's Law keeps going, in spite of a pattern of people
refusing to accept its implications.

You are too used to solid, reliable, *simple* cores like the cpu32.
Complex hardware is like complex software - it *is* complex software,
written in design languages then "compiled" to silicon. Like software,
big and complex hardware has bugs.

Far fewer than software bugs. Hardware design, parallel execution of
state machines, is mature, solid stuff. Software just keeps bloating.

Yes, many RISC machines have substantial errata. The more complex you
make the design, the more bugs you get.

So let's make it simple, waste some billions of CMOS devices, and get
computers that always work. We'll save a fortune.



What you seem to be missing is that although the cores on your 1K cpu
chip are simple (and can therefore be expected to be reliable, if
designed well), they don't exist alone. If you want them to support
general purpose computing tasks, rather than a massive SIMD system, then
you have a huge infrastructure around them to feed them with instruction
streams and data, and enormous complications trying to keep memory
consistent.


My desktop machine might well run more than 256 processes. How does
that fit in your device? But most of the time, there are only 2 or 3
processes doing much work - often there will be 1 process which should
run as fast as possible, as single-thread performance is the main
bottleneck for desktop cpus.

My XP is currently running about 30 processes, with a browser, mail
client, a PDF datasheet open, and Agent running. A number of them are
not really necessary.

How many on yours?

John
 
John Larkin

You and I are talking about different issues. Process isolation has been
solved, even on mainstream OSes.

A decent OS, using decent hardware, should enforce isolation of code,
stack, and data, in itself and in all lower-priority processes. It
should be impossible for data to ever be executed, anywhere, or for
code segments to be altered, and buffer overflow exploits should be
impossible. This ain't even hard, except for the massive legacy
issues.
I'm not talking about process isolation. I'm talking about the ability to
make a program behave other than how its author intended by overrunning a
buffer (e.g. by making some portion of its input larger than the buffer
in which it will be stored).

Right. A good hardware+OS architecture should prevent this, too. Bad
code should crash, terminated by the OS, not take over the world, or
send all your email contacts to some guy in Poland.
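
Page-level no-execute protection (a different mechanism from the segment rights discussed above, but the same idea) is roughly how that is done today. A POSIX sketch, purely illustrative:

#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* A page the program may read and write, but never execute. */
    unsigned char *data = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (data == MAP_FAILED)
        return 1;

    memset(data, 0xC3, 4096);      /* pretend these bytes are injected code */

    /* ((void (*)(void))data)();   -- on a CPU with an NX/XD bit, taking this
                                      jump traps with SIGSEGV: the bad code
                                      crashes instead of taking over */

    munmap(data, 4096);
    return 0;
}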

John
 
krw

There's not a lot I can do about Windows (I have to run it for some of
the apps I use) but it's certainly worth $3K to have reliable hardware
and drives. Every time a Dell dies, it costs me or one of my people a
week or two to get everything back to where it was, and we're surely
worth more than $3K a week.

The cool thing about raid hot-plug is that I can occasionally plug in
a blank drive, and my C: drive gets cloned, OS and all. I stash the
clone in a baggie. If my machine dies for any reason, I grab a spare
box from down the hall, plug in the copy of C:, and I'm back online in
5 minutes.

And, once a year maybe, I plug a brand-new drive into one of the RAID
slots, so my drives never die from sheer wear-out.

You're probably better off going quite a bit longer than that between
replacements. Believe me, the bathtub curve has two ends. They
gave me a new Dell (hmm, maybe there is a common thread here) on my
first day. The disk drive died that afternoon and I lost a couple of
days work (couldn't get it replaced right away). It could have been
far worse though.
 
krw

Where I work, we just did another install of SUSE Linux. We have a
huge investment in DOS based code in the test department. Running
"dosemu" under SUSE they work just fine. Under XP there were lots of
problems. Under Vista there was no hope at all.

I'm not at all happy with SuSE. I guess it's mutual because it
doesn't like my hardware, even though it is rather vanilla. I have
Ubuntu on the to-do list for a rainy day. I bought a new drive for
my laptop so I'll try it on this system too.
XP has a character-dropping rate on RS-232 of about 1 in 10^5 to
10^7. This is much worse than the "it's broken" limit on what is being
tested.

Ick! I know XP has "issues" but I didn't think it was within a few
orders of magnitude of that!
XP also doesn't let DOS talk to USB-to-RS-232 converters. Under SUSE,
it works just fine.
Understandable.

One of the machines is not on a network. XP seems to get unhappy if
it is not allowed to phone home every now and then.

I don't believe 2K does. I rather liked Win2K. It was the only O$ I
was willing to drop OS/2 for. I'd planned to move to Linux by now,
but there are too many hardware issues.
 
ChairmanOfTheBored

The PPC-970 had two FPUs, two FXUs, a VMX, and separate ALUs in the
Load/Store units. The dual core was still in the 200 sq. mm class.
Most of that area was in arrays.


The Cell BE processor beats it by a factor of at least three.
 
Michael A. Terrell

krw said:
I see you've been in mommy's hamper again, Dimbulb.


The dirty panties on his head were a dead giveaway.


--
Service to my country? Been there, Done that, and I've got my DD214 to
prove it.
Member of DAV #85.

Michael A. Terrell
Central Florida
 
Michael A. Terrell

krw said:
Another of Dimbulb's socks escapes from his mommy's hamper.


And he still can't spell 'Crack Whore' properly. :)


--
Service to my country? Been there, Done that, and I've got my DD214 to
prove it.
Member of DAV #85.

Michael A. Terrell
Central Florida
 
ChairmanOfTheBored

Where I work, we just did another install of SUSE Linux. We have a
huge investment in DOS based code in the test department. Running
"dosemu" under SUSE they work just fine.

DOSBox is better. My 640x480 legacy apps are at 1280x1024 now.
Beautiful upscaling capability. Tango PCB and OrCAD are great under
DOSBox.
Under XP there were lots of
problems. Under Vista there was no hope at all.

You should try the emu apps within windows then as well, especially
DOSBox.

Vista's VDM is far better than XP's. Ever heard of the NET USE
command, and its brethren?
XP has a character-dropping rate on RS-232 of about 1 in 10^5 to
10^7. This is much worse than the "it's broken" limit on what is being
tested.

Learn how to change the settings on the ports in device mangler then.
XP also doesn't let DOS talk to USB-to-RS-232 converters. Under SUSE,
it works just fine.

Sounds like more operator error to me.
One of the machines is not on a network. XP seems to get unhappy if
it is not allowed to phone home every now and then.

XP works just fine on a machine that has no network capacity.
 
ChairmanOfTheBored

CPUs *are* a valuable resource - modern cpu cores take up a lot of
space, even when you exclude things like the cache (which take more
space, but cost less per mm^2 since you can design in a bit of
redundancy and thus tolerate some faults).


There are good CPUs that are less than a cm square. And the one I
refer to is scalable too.
 
Nobody

The OS is a necessary, but insufficient, part of the solution.

It's certainly insufficient. I'm not sure that it's even necessary. There
are mechanisms which the OS could provide and which a language could use,
but there are also mechanisms which don't require support from the OS.
The API is certainly part of the solution.

Agreed. The fact that ANSI C provides e.g. strcpy() but doesn't provide a
safe alternative (strncpy() won't NUL-terminate the string if it is
truncated) has been responsible for innumerable buffer overrun bugs.
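
Which is why so many codebases end up carrying a helper like the following (a hand-rolled strlcpy-style function; the name and exact behaviour here are just one reasonable choice, not anything standard):

#include <stddef.h>
#include <string.h>

/* Copy src into dst (dstsize bytes), ALWAYS NUL-terminating.  Returns
 * strlen(src); a return value >= dstsize means the copy was truncated. */
size_t copy_string(char *dst, size_t dstsize, const char *src)
{
    size_t srclen = strlen(src);

    if (dstsize != 0) {
        size_t n = (srclen < dstsize) ? srclen : dstsize - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';   /* the terminator strncpy() forgets on truncation */
    }
    return srclen;
}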
Compilers too.

Compilers are constrained by the language. Not only does C support
treating any pointer as an array, but arrays are automatically converted
to pointers when used in an expression or as a function argument.
Keeping track of the end (and ensuring that it isn't overrun) is the
programmer's responsibility.

[OTOH, C doesn't require that negative indices are supported, yet every
compiler which I've ever used allows this.]

If arrays were a first-class data type, containing both their start
address and length, many of the problems would go away.
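
You can bolt an approximation onto C by hand, and safer languages effectively do this in the compiler. A rough sketch of the idea (the struct and helper names are invented here):

#include <assert.h>
#include <stddef.h>

/* An "array that knows its own length": pointer plus element count. */
struct slice {
    int    *data;
    size_t  len;
};

int slice_get(struct slice s, size_t i)
{
    assert(i < s.len);   /* out-of-range access fails loudly instead of
                            silently reading a neighbouring object */
    return s.data[i];
}

void slice_set(struct slice s, size_t i, int value)
{
    assert(i < s.len);
    s.data[i] = value;
}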
Saying that the
"OS can't" do something is letting it completely off the hook.

I didn't say that it can't do "something", although I'm not sure that it
actually matters all that much.
Windows, or more accurately M$, *is* the problem.

Windows is no worse than any other OS when it comes to buffer overruns.

[And the NT/2K/XP branch is no worse when it comes to process isolation.
That's a separate issue to what I was talking about, but there seems to be
some confusion.]

The real problem is the widespread use of C/C++ as an application
programming language.

C was designed as a systems programming language, one step up from
assembler. In that area, you often need the flexibility of a language
which will let programmers do whatever they want, including shooting
themselves in the foot. You may also need the efficiency.

This isn't true for applications such as a word processor or web browser,
where the additional overhead and reduced flexibility of a higher-level
language wouldn't be a problem.

IMHO, Windows' weak point is its extreme complexity (sometimes I'm
convinced that Rube Goldberg is alive and well and working as a systems
programmer at Microsoft).

Windows doesn't crash because applications are allowed to trash system
memory. Windows crashes because Windows trashes system memory, because of
occasional programming errors multiplied by the massive size of the code
base.

[In this context, "system" memory doesn't have to be "kernel" memory.
Windows has lots of auxiliary "services", which run as normal
applications but without which the OS will effectively cease to function.]
 