Maker Pro

How to develop a random number generation device


John Larkin

---
I wasn't referring to the nitty-gritty of it, of which your clients
would certainly not be aware unless they asked for your design
documentation, I was referring to getting the widgets to do what
they're supposed to do within your contractual cost constraints.

For example, let's say that for $1M a hit you could guarantee that
one tenth of the ten million dollar fighters on which your equipment
was mounted wouldn't be shot down because of failures in your stuff,
but for $2M a hit you could guarantee ten times better than that.

You present your figures to your customer and let _them_ make the
choice.

We sometimes do present a customer with a cost/performance tradeoff,
although once a gadget becomes a catalog item there's not much wiggle
room. We have never, so far, offered a customer a price:reliability
tradeoff. The image we'd like to convey is that we make things as
reliable as we can.

In some cases, it's like never telling your mother that you have a
term paper due; if you did, she'd nag you about it. Similarly, if we
have an MTBF tradeoff of some sort, it's probably better to not let
the customer get involved.

John
 

Herbert John "Jackie" Gleason

Ancient computers, PDP-11 and VAX certainly, had memory management
hardware that separated I and D space, where I space was read-only,
and D space could not be executed. And the OS's enforced those rules.
It was common to have many users running the exact same code, but
mapped into different data spaces.

Problem is, neither Intel nor Microsoft was in the mainstream of
computing when they kluged up x86 and Windows.

John

They kludged up x86 and DOS. Windows came years later.

Also, once they hit, they WERE mainstream items, as it cost a hell of
a lot less to have a viable CAD workstation on a PC than on any other
platform at the time.

Took us off the drafting boards with 4X and onto computers and
CAD-based layout.
 

John Larkin

On Sat, 08 Sep 2007 08:21:15 -0500, John Fields

On Fri, 07 Sep 2007 13:44:15 -0700, John Larkin

On Fri, 07 Sep 2007 15:06:31 -0400, Spehro Pefhany

On Fri, 07 Sep 2007 10:21:50 -0500, Vladimir Vassilevsky



John Larkin wrote:


Hmmm, seems like all 0's must be the lockup for an all XOR feedback.

OK, but you missed my point, which was that it's possible to
eliminate the lockup state by forcing it to be part of the sequence.


I don't follow that. One state, all 0's usually, is the lockup. How do
you force that to be part of the sequence?

You can add a lockup state detector, a big OR gate or something, and
jam in a "1" if the whole register ever gets to the all-0's state, but
then the all-0's state is not part of the sequence, because it never
happens again.
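The "big OR gate" lockup detector is easy to sketch in software. A minimal illustration (my own choice of register, not anyone's actual hardware): a 4-bit Galois LFSR with the maximal polynomial x^4 + x^3 + 1, plus a detector that jams in a "1" whenever the register finds itself all zeros.

```python
def lfsr_step(state):
    """One step of a 4-bit Galois LFSR, polynomial x^4 + x^3 + 1
    (maximal length: period 15 over the nonzero states)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0b1100          # tap mask for x^4 + x^3 + 1
    return state

def lfsr_step_protected(state):
    """Same LFSR plus a lockup detector: if the register is ever all
    zeros (power glitch, bad reset), jam in a '1' -- the software
    analogue of the big OR/NOR gate described above."""
    return 1 if state == 0 else lfsr_step(state)

# The plain LFSR walks all 15 nonzero states and returns to the start:
s, seen = 1, set()
for _ in range(15):
    seen.add(s)
    s = lfsr_step(s)
print(len(seen), s)                          # 15 1

# But it locks up at zero, while the protected version recovers:
print(lfsr_step(0), lfsr_step_protected(0))  # 0 1
```

Note that, as John says, the all-0's state is still not part of the 15-state sequence; the detector only guarantees the machine escapes it if it ever lands there.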

It can happen at startup, though. You have to ensure a nonzero
initial state.

VLV

If you care about fault tolerance you will ensure recovery from a zero
state which occurs at any time.

Best regards,
Spehro Pefhany

That gets into the philosophical issue: should we attempt to detect
and correct for transient hardware errors in digital systems? That can
apply to config bits in FPGAs (do we check them on a regular basis?),
registers in uPs (including PC, SP, etc), values in counters,
whatever.

We generally assume that if it's broke, it's broke.

---
The point here, though, is that the machine will get itself unbroke
if it ever accidentally gets into what would normally have been the
lock-up state.



And my point is that it shouldn't "accidentally" get into a broken
state, any more than the program counter of a CPU should accidentally
find itself in never-never land.

---
It shouldn't, but it can [get into a "broken" state] if that broken
state is allowed to exist. For instance, a glitch on a power supply
rail can cause any number of problems, including putting a shift
register in a prohibited state and causing a circuit to hang.

Without trying to get into the middle of this harangue...

If you look at state transition diagrams of incompletely described
state machines, illegal "islands" can be formed that once entered can
never be exited. These "islands" can be single states feeding back
on themselves. To guarantee that there are no such "islands" in a
design, synthesis tools have "safe" options that will cause illegal
states to be detected and control transferred to valid states.
My circuit (not "mine" in the sense that I invented it; I didn't)
side-steps the problem by forcing the potentially problematic,
normally prohibited state to be part of the sequence.

Yes, all possible states should be accounted for to make sure this
doesn't happen. "Invalid" states should be decoded also.

<snip>

As noted, we have decided that there are far too many system states
that could hang us up, not the few in an obvious, official "state
machine." So we try to make sure that the logic is solid enough that
invalid states are never entered, without worrying much about what
happens if they are. Even if the design provides for exiting bad
states, just getting there is a system failure.

I'd rather have a system hang in a bad state, because that would make
us aware of a problem, and fix it. If it "fixes" itself, then it
becomes a mysterious, occasional, transient malfunction, the worst
kind of error.


John
 

SomeKindOfWonderful

Without trying to get into the middle of this harangue...

If you look at state transition diagrams of incompletely described
state machines, illegal "islands" can be formed that once entered can
never be exited. These "islands" can be single states feeding back
on themselves. To guarantee that there are no such "islands" in a
design, synthesis tools have "safe" options that will cause illegal
states to be detected and control transferred to valid states.


A recursive loop.

Like you giving folks you don't like pet names, and me calling you the
retard you are for doing it, and for various other reasons.
 

ChairmanOfTheBored

Yes, all possible states should be accounted for to make sure this
doesn't happen. "Invalid" states should be decoded also.


Yes. That is exactly what you decode as. An invalid state.
 

Vladimir Vassilevsky

John Larkin wrote:

It's not weak if a randomizing process, applied *internally* to the
register (ie, xor some noise into its guts) causes it to hop all over
its state space randomly.

Not a good idea. It can easily make things much worse than with the
pure LFSR. Any statistical dependencies in the noise will kill the
randomness.
In that case, there will be no un-boogered
sequences long enough to identify the nature of the register.

Due to the linear nature of the operation, the statistical dependencies
will go through directly.

VLV
 

krw

When it stops letting you run arbitrary machine code.

Many have said that Windows is useless. Your proposal would seal
that.
Nothing the OS does can prevent machine code from overrunning a buffer.

Absolute nonsense. Perhaps buffer overruns can't be prevented using
C++, but they *can* be prevented.
The only thing that the OS can do in this regard is to either restrict
what machine code you can run (e.g. cryptographic signing), and/or run
untrusted code in a heavily-restricted environment so that a buffer
overrun cannot be "exploited".

...or do what decent OSs do: protect tasks from each other.
Given that:

a) this would make Windows totally incompatible with most existing
software, and

No change.
b) about the only thing that Windows has in its favour is the mass of
existing Windows software,

M$ broke that with Vista and still didn't fix buffer overruns.
I don't expect this to happen any time soon.

I don't either. Quality and security aren't priorities.
 

krw

A recursive loop.

No recursion, Dimbulb. You're the cursing loop.
Like you giving folks you don't like pet names, and me calling you the
retard you are for doing it, and for various other reasons.

You are dumber than a stump and proud of it, Dimmie. We can't help
you.
 

krw

On Sat, 08 Sep 2007 15:16:12 -0700, John Larkin

<snip>
As noted, we have decided that there are far too many system states
that could hang us up, not the few in an obvious, official "state
machine." So we try to make sure that the logic is solid enough that
invalid states are never entered, without worrying much about what
happens if they are. Even if the design provides for exiting bad
states, just getting there is a system failure.

For the LFSR case, one should pick a maximal-length polynomial so
there is only one "illegal" state, and that one is trivially tested
for. It's also fairly simple to add illegal-state decoding to other
state machines. One-hot state machines are often made with shift
registers, which keeps the number of possible illegal states small.
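The illegal-state decode above can be modelled directly. A sketch, using a hypothetical 4-state one-hot ring counter: any state with other than exactly one hot bit is decoded as illegal and forced back to reset, roughly what a synthesis tool's "safe" option emits.

```python
def is_illegal(state):
    """A one-hot register is legal iff exactly one bit is set."""
    return bin(state).count("1") != 1

def next_state(state):
    """Hypothetical 4-state one-hot ring counter with 'safe' recovery:
    any illegal state (zero or multiple hot bits) is decoded and the
    machine is forced back to its reset state."""
    if is_illegal(state):
        return 0b0001                     # recover to reset
    # normal operation: rotate the hot bit left one place
    return ((state << 1) | (state >> 3)) & 0b1111

s = 0b0001
for _ in range(4):
    s = next_state(s)
print(bin(s))                    # 0b1: one full trip around the ring

print(bin(next_state(0b0110)))   # 0b1: two hot bits -> forced to reset
```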
I'd rather have a system hang in a bad state, because that would make
us aware of a problem, and fix it. If it "fixes" itself, then it
becomes a mysterious, occasional, transient malfunction, the worst
kind of error.

There is something to be said for that decision. The original IBM PC
hung on a parity error, under the assumption that bad data was worse
than a hung system. Many customers didn't agree, so others skipped
parity checking altogether (the only two real choices at the time).
 

MooseFET

On Sep 11, 4:58 pm, John Larkin
It sounds to me like C compilers/linkers tend to allocate memory to
code, buffers, and stack sort of anywhere they like.

No, the problem isn't really with code mixed with data. It is data
mixed with data. The return addresses etc. are on the stack along
with the local arrays. This means that a routine can overwrite the
return address by walking off the end of an array. Once that happens,
the return instruction jumps you to the bad code.
Why can't at least the compilers be fixed so
that they put all the stacks first, then the code, then all the
buffers?

With an x86's MMU, you can make segments for code and stack and the
like that have limits on their sizes. The problems can be partly
overcome by this.

On most programs, the stack is just above the data segment in physical
memory and the malloc() obtained memory is beyond that.
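The data-clobbers-data point can be shown with a toy model of a stack frame (pure illustration; a real stack holds machine words and the write is done by compiled code, not Python): the local buffer sits directly next to the saved return address, so one unchecked write past the end of the array lands on it.

```python
# Toy stack frame: four array slots, then the saved return address.
RET_SLOT = 4
stack = [0, 0, 0, 0, 0x4000]   # buf[0..3], then return address 0x4000

def unchecked_write(stack, i, value):
    """C-style store into buf[i] with no bounds test; i == 4 walks
    off the end of the array onto the return-address slot."""
    stack[i] = value

unchecked_write(stack, 4, 0xBAD)   # one element past the end
print(hex(stack[RET_SLOT]))        # 0xbad: the 'return' is hijacked
```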
 

MooseFET

Many have said that Windows is useless. Your proposal would seal
that.


Absolute nonsense. Perhaps buffer overruns can't be prevented using
C++, but they *can* be prevented.

A C++ compiler could be created that inserted checking code in every
operation that may overrun. Every buffer would have to have its
length recorded somewhere.

The OS can let your program single step and check what every
instruction does.

The OS can always leave a dead page after every malloc() block so you
get a segmentation fault on stepping off the end.
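The "length recorded somewhere" scheme can be sketched as a checked buffer: every store is tested against the recorded length, which is roughly the code a bounds-checking compiler would insert around each operation that may overrun (class and names here are made up for illustration).

```python
class CheckedBuffer:
    """Every buffer carries its length, and every store is tested
    against it -- roughly the check a bounds-checking compiler would
    emit around each operation that may overrun."""
    def __init__(self, length):
        self._data = bytearray(length)

    def __setitem__(self, i, value):
        if not 0 <= i < len(self._data):
            raise IndexError(
                f"write at {i} outside buffer of length {len(self._data)}")
        self._data[i] = value

buf = CheckedBuffer(4)
buf[3] = 0xFF                  # in bounds: fine
try:
    buf[4] = 0xFF              # overrun: trapped instead of exploited
except IndexError as e:
    print("trapped:", e)
```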
 

MooseFET

If you do it wrong, you could end up reducing the randomness rather than
increasing it. The result can easily be affected by the behaviour of the
chip (e.g. EMI, or the pull-down impedance being affected by the current
draw of nearby circuitry), causing a feedback loop which biases the RNG
towards certain states.


Doing it right isn't all that hard to ensure. XOR gates don't cost
much. If you XOR a pseudo-random bit string with a slightly random
bit string, you get a more random bit string.

If you clock everything with a good crystal, you can avoid having the
clock rate vary depending on the state of the output. The ring
oscillators can be sampled at the clock edge to make a slightly random
bit stream.

For a bit of extra fun, you could make the Vcc with a hit and miss
regulator.
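The XOR-whitening claim can be checked numerically. A sketch, modelling the sampled ring oscillator as a 60/40-biased bit source and assuming the two streams are statistically independent (the assumption that fails if there's a feedback path from the output back into the noise source):

```python
import random

n = 100_000

# 'Slightly random' source: a sampled ring oscillator modelled as a
# biased coin, 60% ones (the bias stands in for real-world imperfection).
src = random.Random(42)
noisy = [1 if src.random() < 0.6 else 0 for _ in range(n)]

# Pseudo-random bit string: here just a seeded software PRNG standing
# in for a maximal-length LFSR.
prng = random.Random(1)
pseudo = [prng.getrandbits(1) for _ in range(n)]

# XOR the two streams together.
mixed = [a ^ b for a, b in zip(noisy, pseudo)]

print(round(sum(noisy) / n, 3))   # near 0.6: visibly biased
print(round(sum(mixed) / n, 3))   # near 0.5: bias largely removed
```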
 

No Spam

Nobody said:
Nothing the OS does can prevent machine code from overrunning a buffer.

That's not true. Many operating systems are, by design, immune to
buffer overruns modifying unrelated code.
Given that:

a) this would make Windows totally incompatible with most existing
software, and

Did you mean to write "nothing the *Windows* OS does can prevent
machine code from overrunning a buffer"?
 

Vladimir Vassilevsky

Doing it right isn't all that hard to ensure. XOR gates don't cost
much. If you XOR a pseudo-random bit string with a slightly random
bit string, you get a more random bit string.

A very dangerous assumption. XOR is a linear operation:

LFSR(register ^ random) = LFSR(register) ^ LFSR(random)

Adding a 'salt' to a pseudo-random number generator is a fairly
complicated subject.
If you clock everything with a good crystal, you can avoid having the
clock rate vary depending on the state of the output. The ring
oscillators can be sampled at the clock edge to make a slightly random
bit stream.

For a bit of extra fun, you could make the Vcc with a hit and miss
regulator.

Those things can exhibit periodic behavior.


Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
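Vladimir's linearity identity can be verified exhaustively for a small register. A sketch with a 4-bit Galois LFSR step (my own toy polynomial, x^4 + x^3 + 1, chosen for illustration):

```python
def lfsr_step(state):
    """One step of a 4-bit Galois LFSR (toy polynomial x^4 + x^3 + 1)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0b1100
    return state

# Shift-and-conditional-XOR is linear over GF(2), so XOR commutes with
# the update for every pair of states -- which is why statistical
# structure in XORed-in noise passes straight through:
ok = all(lfsr_step(a ^ b) == lfsr_step(a) ^ lfsr_step(b)
         for a in range(16) for b in range(16))
print(ok)   # True
```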
 

Nobody

On most programs, the stack is just above the data segment in physical
memory and the malloc() obtained memory is beyond that.

At least on Linux/x86, the primary stack is at the top of memory (the 3GiB
mark; the last GiB is reserved for the kernel, so it can combine its own
address space and the process' address space into a single 32-bit space).

Everything which is mapped from files at start-up (i.e. the code, data,
rodata and BSS segments of the executable and shared libraries) is at the
bottom.

The heap (memory allocated by brk()) goes immediately above that.
[brk() is used by traditional malloc() implementations; modern malloc()s
use a combination of brk() and mmap(..., MAP_ANONYMOUS).]

Regions which are created dynamically (mmap(), shmat(), stacks for
additional threads, etc) go somewhere in the middle.

Before life got complicated with mmap() etc, you typically had the heap at
the bottom and the stack at the top, and "out of memory" occurred when one
collided with the other; IOW, a combined limit for the two rather than a
separate limit for each.
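On Linux you can inspect this layout directly in /proc/self/maps (Linux-only; exact addresses and regions vary with ASLR, libc, and kernel version):

```python
# Read this process's own memory map (sorted by address, low to high)
# and pick out the named regions from the description above.
with open("/proc/self/maps") as f:
    for line in f:
        parts = line.split()
        name = parts[5] if len(parts) >= 6 else ""
        if name in ("[heap]", "[stack]", "[vdso]"):
            print(parts[0], name)   # [heap] prints before [stack]
```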
 

Nobody

Not by the *OS*.

[Using the traditional definition (the kernel, i.e. the part that the
process can't bypass), not Microsoft's definition (the OS plus a bunch of
user-level applications which are bundled in order to kill the market for
competing applications).]
A C++ compiler could be created that inserted checking code in every
operation that may overrun. Every buffer would have to have its
length recorded somewhere.

The language can prevent overruns; the OS just sees machine code.
The OS can let your program single step and check what every
instruction does.

The OS doesn't know where one "buffer" ends and the next one begins; it's
all just memory. And single-stepping programs would be out of the question
from an efficiency perspective.
The OS can always leave a dead page after every malloc() block so you
get a segment fault on stepping off the end.

That prevents overflowing malloc()ed buffers, but it doesn't work for
those on the stack ("auto" local variables) or in the data segment (global
or "static" local variables).

Historically, stack overflows have caused the most problems, due to the
ability to overwrite the function's return address.

Also, malloc() is part of the library, not the OS (kernel). Requesting
individual blocks directly from the kernel would cause a substantial
performance hit (not as bad as single-stepping, but still too high to be
practical).

Even if you consider malloc() (and C++'s "new", etc) to be part of the OS,
it's not uncommon for programs to use custom allocators (C++ allows
"operator new()" to be overridden for this specific purpose).

As I said:

There are lots of things which can be done to prevent overruns (and other
things which can be done to prevent overruns from being exploitable), but
most of them need to be done at a higher level than the OS.
 

Nobody

That's not true. Many operating systems are, by design, immune to
buffer overruns modifying unrelated code.

The issue isn't about modifying code, related or otherwise. It's about
either injecting new code or executing existing code with
attacker-supplied data.

This isn't about protecting one process from another, but about protecting
a process from itself. Most of the existing mechanisms for mitigating
buffer overruns are implemented in either the compiler or libraries. The
only OS-level mechanisms (things that work on any executable, however it
was built) involve making it harder to exploit an overrun (e.g.
randomising memory locations) rather than actually preventing the overrun.
Did you mean to write "nothing the *Windows* OS does can prevent
machine code from overrunning a buffer"?

No, the issues apply to any OS. But binary compatibility is much more
important for Windows (and Mac) than for Linux.

If you try to run a 5-year old Linux binary on a current distribution,
you'll probably find that a lot of the interfaces on which it depends have
either disappeared or have changed in an incompatible manner. Lack of a
stable ABI is a simple fact of life on Linux.
 

Guy Macon

John said:
We sometimes do present a customer with a cost/performance tradeoff,
although once a gadget becomes a catalog item there's not much wiggle
room. We have never, so far, offered a customer a price:reliability
tradeoff. The image we'd like to convey is that we make things as
reliable as we can.

While at the same time conveying the image of not increasing
reliability by 1% at the cost of 100X higher costs and a huge
slip in delivery date...
In some cases, it's like never telling your mother that you have a
term paper due; if you did, she'd nag you about it. Similarly, if we
have an MTBF tradeoff of some sort, it's probably better to not let
the customer get involved.

That's usually the case. There are some customers who prefer to be
kept in the loop on such issues[1], but most of them correctly assume
that the person doing the work is in a better position to make such
choices.

In the toy industry, the assumptions are just the opposite: the
image you like to convey is that you make things as cheaply as you
can, as long as the number of dead-on-arrival toys is under a percent
or so. This, of course, applies to the real customers: the toy buyers
for Walmart and Toys-R-Us. The image they like to convey to their
customers (parents, not kids) is quite different.


Note [1]
This reminds me of the classic repair-shop sign:

LABOR RATES

If you let us do our job: $100/hour

If you want to watch: $120/hour

If you want to talk: $150/hour

If you want to help: $200/hour

If you worked on it yourself: $500/hour

If you worked on it yourself
and still think you know how
to fix it better than we do: $1000/hour
 

Guy Macon

Vladimir said:
A very dangerous assumption. XOR is a linear operation.

Assuming no correlation between the two sources of data, XOR cannot
decrease the randomness of the output stream below that of either
input stream. If one of the streams is 100% true random, the output
of the XOR will be 100% true random, the only exception being the
case where the second input is derived from the first.
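This follows from the arithmetic of combining biases. A short check (the 2·eps·eps relation is the piling-up lemma, stated here for independent inputs):

```python
def xor_bias(p, q):
    """P(a ^ b = 1) for independent bits with P(a=1)=p and P(b=1)=q."""
    return p * (1 - q) + q * (1 - p)

# A perfectly random input pins the output at 1/2 regardless of the
# other input's bias:
print(round(xor_bias(0.5, 0.9), 10))   # 0.5

# Two merely 'slightly random' inputs: biases multiply down.
# With eps = P(1) - 1/2, the output bias magnitude is 2*|eps_a|*|eps_b|
# (the piling-up lemma), so 0.6 XOR 0.6 gives 0.48:
print(round(xor_bias(0.6, 0.6), 10))   # 0.48
```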
Adding a 'salt' to a pseudo-random number generator is a fairly
complicated subject.

That's not the usual definition of cryptographic "salt."
 