Maker Pro

How to develop a random number generation device


John Larkin

I have seen engineers get into trouble that would have been
avoided had they only followed the above advice.

Another problem-hider is filling unused ROM with jumps to the
reset vector. I only do that on the production units; for
the prototype (and sometimes for the pilot run) I like to fill
unused ROM with stop instructions.

A technique that I sometimes use when designing toys is to
have the button / switch that tells the toy to start moving
and making noise cause a hardware reset, and the timeout at
the end of play that tells the toy to stop moving and conserve
power to invoke the deepest available sleep mode -- usually
with the clock stopped entirely -- to be woken up by the next
hardware reset. In industrial control applications you
sometimes see the same sort of thing but with a counter causing
the resets to occur every N seconds. This technique isn't
always applicable (check to see how fast the oscillator can
come up, for example; some are annoyingly slow) but in some
limited cases it works well.

Some things have to run continuously, with microsecond response to
inputs, so can't be periodically reset. Most of my products have long
startup times, too, as long as 5 seconds, so a true system reset is
pretty traumatic.

The reliability of a good digital system should be dominated by
classic hardware MTBF, not by bugs or glitches. It should have *no*
bugs or glitches.

John
 

John Larkin

And my point is that it shouldn't "accidentally" get into a broken
state, any more than the program counter of a CPU should accidentally
find itself in never-never land.

---
It shouldn't, but it can [get into a "broken" state] if that broken
state is allowed to exist. For instance, a glitch on a power supply
rail can cause any number of problems, including putting a shift
register in a prohibited state and causing a circuit to hang.

Power supply rails shouldn't glitch, and a decent system will either
work correctly through a brownout, or reset/restart properly if it
can't.

My circuit (Not "mine" in the sense that I invented it; I didn't.)
side-steps the problem by forcing the potentially problematical
normally prohibited state to be part of the sequence.
---


---
I agree, and my circuit is just one way to make the circuit more
reliable by totally eliminating the lock-up state.
---


---
It's hardly a kluge, and I have trouble understanding why someone as
ostensibly intelligent as you profess to be can't see that the
circuit eliminates a potential problem. Either that or you're
miffed about something.

We (me and my guys) have had this discussion, about whether we should
try to anticipate/sense/fix states that "can't happen", in the sense
that there's no logical path to the bad state. Our consensus is that
such sensing/fixing is fruitless, since any non-trivial system has
astronomically more hazardous states (like the enormous state space of
a uP program and its variables, or the megabits of configuration ram
in an FPGA) than can even be analyzed, much less repaired. If a
counter botches its state sequence, find out why and fix it. If a uP
program crashes, ditto. And don't design asynchronous state machines
that have small, lurking probabilities of screwing up. Things like
watchdog timers hide the design errors, so you neither fix things nor
learn.


Because it may fix a system hangup caused by who-knows-what, things
that analysis and a few months of running didn't reveal, huge ESD
shots or something.
It is, after all, just one more thing that can go wrong.

I've never seen a watchdog timer cause a problem by malfunctioning on
its own. I did recently code a diagnostic routine that, given a
request to take and average zero ADC samples, would divide by zero,
hang, and trip the watchdog. I found the bug after some units had been
shipped, by reading the code, and as far as I know the bug was never
tripped. Cases like that justify enabling a watchdog *after*
considerable testing has not found provokable bugs that might trip it.

When my opinions differ from yours, you like to explain it by
conjecturing some sort of emotional crisis on my part. Do you treat
everybody that way?

John
 

John Fields

On Sat, 08 Sep 2007 08:21:15 -0500, John Fields

On Fri, 07 Sep 2007 13:44:15 -0700, John Larkin

On Fri, 07 Sep 2007 15:06:31 -0400, Spehro Pefhany

On Fri, 07 Sep 2007 10:21:50 -0500, Vladimir Vassilevsky



John Larkin wrote:


Hmmm, seems like all 0's must be the lockup for an all XOR feedback.

OK, but you missed my point, which was that it's possible to
eliminate the lockup state by forcing it to be part of the sequence.


I don't follow that. One state, all 0's usually, is the lockup. How do
you force that to be part of the sequence?

You can add a lockup state detector, a big OR gate or something, and
jam in a "1" if the whole register ever gets to the all-0's state, but
then the all-0's state is not part of the sequence, because it never
happens again.

It can happen at startup, though. You have to ensure a nonzero
initial state.

VLV

If you care about fault tolerance you will ensure recovery from a zero
state which occurs at any time.

Best regards,
Spehro Pefhany

That gets into the philosophical issue: should we attempt to detect
and correct for transient hardware errors in digital systems? That can
apply to config bits in FPGAs (do we check them on a regular basis?),
registers in uPs (including PC, SP, etc), values in counters,
whatever.

We generally assume that if it's broke, it's broke.

---
The point here, though, is that the machine will get itself unbroke
if it ever accidentally gets into what would normally have been the
lock-up state.



And my point is that it shouldn't "accidentally" get into a broken
state, any more than the program counter of a CPU should accidentally
find itself in never-never land.

---
It shouldn't, but it can [get into a "broken" state] if that broken
state is allowed to exist. For instance, a glitch on a power supply
rail can cause any number of problems, including putting a shift
register in a prohibited state and causing a circuit to hang.

Power supply rails shouldn't glitch, and a decent system will either
work correctly through a brownout, or reset/restart properly if it
can't.

My circuit (Not "mine" in the sense that I invented it; I didn't.)
side-steps the problem by forcing the potentially problematical
normally prohibited state to be part of the sequence.
---


---
I agree, and my circuit is just one way to make the circuit more
reliable by totally eliminating the lock-up state.
---


---
It's hardly a kluge, and I have trouble understanding why someone as
ostensibly intelligent as you profess to be can't see that the
circuit eliminates a potential problem. Either that or you're
miffed about something.

We (me and my guys) have had this discussion, about whether we should
try to anticipate/sense/fix states that "can't happen", in the sense
that there's no logical path to the bad state. Our consensus is that
such sensing/fixing is fruitless, since any non-trivial system has
astronomically more hazardous states (like the enormous state space of
a uP program and its variables, or the megabits of configuration ram
in an FPGA) than can even be analyzed, much less repaired. If a
counter botches its state sequence, find out why and fix it. If a uP
program crashes, ditto. And don't design asynchronous state machines
that have small, lurking probabilities of screwing up. Things like
watchdog timers hide the design errors, so you neither fix things nor
learn.


Because it may fix a system hangup caused by who-knows-what, things
that analysis and a few months of running didn't reveal, huge ESD
shots or something.
It is, after all, just one more thing that can go wrong.

I've never seen a watchdog timer cause a problem by malfunctioning on
its own. I did recently code a diagnostic routine that, given a
request to take and average zero ADC samples, would divide by zero,
hang, and trip the watchdog. I found the bug after some units had been
shipped, by reading the code, and as far as I know the bug was never
tripped. Cases like that justify enabling a watchdog *after*
considerable testing has not found provokable bugs that might trip it.

When my opinions differ from yours, you like to explain it by
conjecturing some sort of emotional crisis on my part. Do you treat
everybody that way?

---
You read my stuff, don't you? Answer your own question.

Being miffed is hardly an emotional crisis, and it seems whenever
you get your feathers ruffled you like to start using vaguely
derogative words like 'kluge' in order to secure what appears to be
a technical advantage in order to keep from having to admit that you
made a technical blunder.
 

John Larkin

On Sat, 08 Sep 2007 15:16:12 -0700, John Larkin

On Sat, 08 Sep 2007 08:21:15 -0500, John Fields

On Fri, 07 Sep 2007 13:44:15 -0700, John Larkin

On Fri, 07 Sep 2007 15:06:31 -0400, Spehro Pefhany

On Fri, 07 Sep 2007 10:21:50 -0500, Vladimir Vassilevsky



John Larkin wrote:


Hmmm, seems like all 0's must be the lockup for an all XOR feedback.

OK, but you missed my point, which was that it's possible to
eliminate the lockup state by forcing it to be part of the sequence.


I don't follow that. One state, all 0's usually, is the lockup. How do
you force that to be part of the sequence?

You can add a lockup state detector, a big OR gate or something, and
jam in a "1" if the whole register ever gets to the all-0's state, but
then the all-0's state is not part of the sequence, because it never
happens again.

It can happen at startup, though. You have to ensure a nonzero
initial state.

VLV

If you care about fault tolerance you will ensure recovery from a zero
state which occurs at any time.

Best regards,
Spehro Pefhany

That gets into the philosophical issue: should we attempt to detect
and correct for transient hardware errors in digital systems? That can
apply to config bits in FPGAs (do we check them on a regular basis?),
registers in uPs (including PC, SP, etc), values in counters,
whatever.

We generally assume that if it's broke, it's broke.

---
The point here, though, is that the machine will get itself unbroke
if it ever accidentally gets into what would normally have been the
lock-up state.



And my point is that it shouldn't "accidentally" get into a broken
state, any more than the program counter of a CPU should accidentally
find itself in never-never land.

---
It shouldn't, but it can [get into a "broken" state] if that broken
state is allowed to exist. For instance, a glitch on a power supply
rail can cause any number of problems, including putting a shift
register in a prohibited state and causing a circuit to hang.

Power supply rails shouldn't glitch, and a decent system will either
work correctly through a brownout, or reset/restart properly if it
can't.

My circuit (Not "mine" in the sense that I invented it; I didn't.)
side-steps the problem by forcing the potentially problematical
normally prohibited state to be part of the sequence.
---

If a digital system is unreliable, the cause should be found and
fixed.

---
I agree, and my circuit is just one way to make the circuit more
reliable by totally eliminating the lock-up state.
---

The problem with kluges like this is the same problem with
watchdog timers: they hide the real problem, so keep it from getting
fixed.

---
It's hardly a kluge, and I have trouble understanding why someone as
ostensibly intelligent as you profess to be can't see that the
circuit eliminates a potential problem. Either that or you're
miffed about something.

We (me and my guys) have had this discussion, about whether we should
try to anticipate/sense/fix states that "can't happen", in the sense
that there's no logical path to the bad state. Our consensus is that
such sensing/fixing is fruitless, since any non-trivial system has
astronomically more hazardous states (like the enormous state space of
a uP program and its variables, or the megabits of configuration ram
in an FPGA) than can even be analyzed, much less repaired. If a
counter botches its state sequence, find out why and fix it. If a uP
program crashes, ditto. And don't design asynchronous state machines
that have small, lurking probabilities of screwing up. Things like
watchdog timers hide the design errors, so you neither fix things nor
learn.

---

I always turn off the watchdog timer on test units, and protos
delivered to customers. I only enable it after we're sure we don't
need it.

Because it may fix a system hangup caused by who-knows-what, things
that analysis and a few months of running didn't reveal, huge ESD
shots or something.
It is, after all, just one more thing that can go wrong.

I've never seen a watchdog timer cause a problem by malfunctioning on
its own. I did recently code a diagnostic routine that, given a
request to take and average zero ADC samples, would divide by zero,
hang, and trip the watchdog. I found the bug after some units had been
shipped, by reading the code, and as far as I know the bug was never
tripped. Cases like that justify enabling a watchdog *after*
considerable testing has not found provokable bugs that might trip it.

When my opinions differ from yours, you like to explain it by
conjecturing some sort of emotional crisis on my part. Do you treat
everybody that way?

---
You read my stuff, don't you? Answer your own question.

Being miffed is hardly an emotional crisis, and it seems whenever
you get your feathers ruffled you like to start using vaguely
derogative words like 'kluge' in order to secure what appears to be
a technical advantage in order to keep from having to admit that you
made a technical blunder.

Disagreeing with you on a design point is hardly "miffed." And, not
being a chicken, I don't have feathers.

Adding a circuit to find and fix a presumably impossible state is
worthy of "kluge" if anything is. Your circuit brought up the
interesting and, I think, important question of whether we should
incorporate such catcher/fixer things into our hardware and software.
Pardon me for caring about ideas like that.

Technical blunder? In a newsgroup? A technical blunder is a mistake
that fries thirty NMR probes, or blows up jet engines, or makes us
recall a hundred instruments. Newsgroups don't matter.

John
 

Guy Macon



John said:
The reliability of a good digital system should be dominated by
classic hardware MTBF, not by bugs or glitches. It should have *no*
bugs or glitches.

Certainly none caused by the hardware itself, and things
like ESD and EMI can be shielded against, but what about
the occasional cosmic ray flipping a bit?

From _Soft Errors in Electronic Memory – A White Paper_
[ http://www.tezzaron.com/about/papers/soft_errors_1_1_secure.pdf ]:

"Even using a relatively conservative error rate (500 FIT/Mbit),
a system with 1 GByte of RAM can expect an error every two weeks;
a hypothetical Terabyte system would experience a soft error every
few minutes."
 

John Larkin

Watchdogs don't always recover the system from a glitch. If you are
storing data in battery-backed RAM or flash, you need to be sure that
wrong values don't cause things to hang in some non-recoverable way.

One of my programming rules is that data should never be able to crash
code.
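As a sketch of that rule (the names and the zero fallback are mine, not from any shipped code), an averaging routine that a zero-sample request can't crash might look like:

```c
#include <stdint.h>

/* Hypothetical sketch: an ADC-averaging helper that bad data cannot
   crash.  The n == 0 guard is the whole point; without it, a request
   to average zero samples divides by zero and trips the watchdog. */
static int32_t average_samples(const int32_t *buf, uint32_t n)
{
    if (n == 0)
        return 0;               /* degenerate request: no trap, no hang */
    int64_t sum = 0;            /* 64-bit accumulator avoids overflow   */
    for (uint32_t i = 0; i < n; i++)
        sum += buf[i];
    return (int32_t)(sum / (int64_t)n);
}
```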

John
 

John Larkin



John said:
The reliability of a good digital system should be dominated by
classic hardware MTBF, not by bugs or glitches. It should have *no*
bugs or glitches.

Certainly none caused by the hardware itself, and things
like ESD and EMI can be shielded against, but what about
the occasional cosmic ray flipping a bit?

From _Soft Errors in Electronic Memory – A White Paper_
[ http://www.tezzaron.com/about/papers/soft_errors_1_1_secure.pdf ]:

"Even using a relatively conservative error rate (500 FIT/Mbit),
a system with 1 GByte of RAM can expect an error every two weeks;
a hypothetical Terabyte system would experience a soft error every
few minutes."

If you're using deep sub-micron technology, like modern dram, whose
soft errors compromise reliability, then something like ECC becomes
mandatory. What scares me is stuff I can't do anything about, like
FPGA config ram.

Things like using redundant data storage, with background checks and
checksums, can be done, but that seems like more trouble than the
device density is worth.

John
 

MooseFET

On Sep 9, 1:48 pm, John Larkin
One of my programming rules is that data should never be able to crash
code.


The assignment statement is more dangerous than a GOTO.
 

John Fields

On Sun, 09 Sep 2007 14:10:27 -0500, John Fields


Disagreeing with you on a design point is hardly "miffed." And, not
being a chicken, I don't have feathers.

---
Turkeys do, though ;), and it wasn't a disagreement on a "design
point" that got you miffed, your attitude seemed to change when your
rather cavalier assessment of the operation of the circuit was
pointed out as flawed and the proper sequence of events explained to
you.

Kind of a "David and Goliath" thing, you being the mighty Goliath,
of course.
---
Adding a circuit to find and fix a presumably impossible state is
worthy of "kluge" if anything is.

---
You still don't get it. While the lock-up state isn't likely to be
entered normally, it _is_ possible for it to be entered
accidentally, so what my circuit (Since I didn't invent it I feel
uncomfortable calling it "my circuit", so let's call it a 'pulse
stuffer' instead, OK?) does, among other things, is to force the
shift register into and out of what would be the lock-up state if
the pulse stuffer wasn't there. Doing that is what makes it
impossible for the shift register to hang up, and also changes the
length of the count sequence from (2^n)-1 to 2^n, which is/can be
handy.
---
Your circuit brought up the interesting and, I think, important
question of whether we should incorporate such catcher/fixer
things into our hardware and software.
Pardon me for caring about ideas like that.

---
Do I detect a little facetiousness there?

I'm not denigrating you for caring about ideas like that. Far from
it, my comments relate to your, on the one hand, rah-rah-rah-ing
about measures designed to increase reliability of a circuit while,
on the other hand, poo-poo-ing methods which do that but aren't
quite to your liking since they weren't yours.
---
Technical blunder? In a newsgroup? A technical blunder is a mistake
that fries thirty NMR probes, or blows up jet engines, or makes us
recall a hundred instruments. Newsgroups don't matter.

---
Wrong.

USENET allows us to fearlessly exchange ideas, share strategies and
correct each other in ways which would be impossible otherwise.

Kind of like old time HAM radio, but with the politeness
restrictions lifted.

If you think newsgroups don't matter then why are you wasting your
time on this one?

Looking for sales?
 

John Fields

If you're using deep sub-micron technology, like modern dram, whose
soft errors compromise reliability, then something like ECC becomes
mandatory. What scares me is stuff I can't do anything about, like
FPGA config ram.

Things like using redundant data storage, with background checks and
checksums, can be done, but that seems like more trouble than the
device density is worth.
 

John Larkin

On Sep 9, 1:48 pm, John Larkin



The assignment statement is more dangerous than a GOTO.

Assignments don't crash in assembly, since assembly is untyped. The
only math error possible is a divide-by-zero trap, or a stupid
pointer. What's important is that program flow doesn't bomb just
because some cal table is trashed.

There's nothing wrong with GOTO; hell, Dijkstra didn't even have
regular access to a computer, and didn't actually program much. I
think in state machines, so GOTO is perfectly logical. In assembly, a
conditional branch, or a computed/table driven jump, are the primary
control structures.

Nested curly brackets are just as dangerous, or more so.

John
 

John Larkin

I have many customers. Sometimes I confer with them as regards
product performance or interface; I almost never discuss internal
design issues with them. Few of them ever see schematics or source
code.

John
 

MooseFET

Assignments don't crash in assembly, since assembly is untyped. The
only math error possible is a divide-by-zero trap, or a stupid
pointer. What's important is that program flow doesn't bomb just
because some cal table is trashed.

I think you missed the point. What I mean is that an assignment can
change all of the future operation of the program so it can (if you
don't code carefully) lead to hang up states and the like.

There's nothing wrong with GOTO; hell, Dijkstra didn't even have
regular access to a computer, and didn't actually program much. I
think in state machines, so GOTO is perfectly logical. In assembly, a
conditional branch, or a computed/table driven jump, are the primary
control structures.

I certainly agree. Quite a lot of my coding contains GOTOs in the
form of jump instructions. When you write on an 8051 you often do
things like this:

SqrtFastOut:
RET

Sqrt:
MOV A,Work1
ORL A,Work1+1
ORL A,Work1+2
ORL A,Work1+3
JZ SqrtFastOut

Zero is a special case that my sqrt routine would get wrong otherwise.

Nested curly brackets are just as dangerous, or moreso.

As are the declares of any sort of array.
 

John Larkin

On Sep 9, 1:48 pm, John Larkin
[....]
Watchdogs don't always recover the system from a glitch. If you are
storing data in battery-backed RAM or flash, you need to be sure that
wrong values don't cause things to hang in some non-recoverable way.
One of my programming rules is that data should never be able to crash
code.
The assignment statement is more dangerous than a GOTO.

Assignments don't crash in assembly, since assembly is untyped. The
only math error possible is a divide-by-zero trap, or a stupid
pointer. What's important is that program flow doesn't bomb just
because some cal table is trashed.

I think you missed the point. What I mean is that an assignment can
change all of the future operation of the program so it can (if you
don't code carefully) lead to hang up states and the like.

Well, that's bad code. The point of coding in state machines, or at
least thinking in them, is that all possible cases are accounted for.

I certainly agree. Quite a lot of my coding contains GOTOs in the
form of jump instructions. When you write on an 8051 you often do
things like this:

SqrtFastOut:
RET

Sqrt:
MOV A,Work1
ORL A,Work1+1
ORL A,Work1+2
ORL A,Work1+3
JZ SqrtFastOut

Zero is a special case that my sqrt routine would get wrong otherwise.

C doesn't encourage subroutines that have multiple entry and multiple
exit points. Pity.
As are the declares of any sort of array.

It's occurred to me, more than once, that C was invented to run on a
PDP-11, and that a $400 Dell has 10,000 times the memory and a
thousand times the speed of an 11. So why doesn't somebody invent a
language (and a methodology) that produces/forces reliable code and
requires less debugging, and does rudimentary stuff like data and code
separation, at some expense in runtime resources?

John
 

John Larkin

---
Turkeys do, though ;), and it wasn't a disagreement on a "design
point" that got you miffed, your attitude seemed to change when your
rather cavalier assessment of the operation of the circuit was
pointed out as flawed and the proper sequence of events explained to
you.

What in the world are you talking about?


Kind of a "David and Goliath" thing, you being the mighty Goliath,
of course.

Oh. I don't like circuits that do accidental things.

so what my circuit (Since I didn't invent it I feel
uncomfortable calling it "my circuit", so let's call it a 'pulse
stuffer' instead, OK?) does, among other things, is to force the
shift register into and out of what would be the lock-up state if
the pulse stuffer wasn't there. Doing that is what makes it
impossible for the shift register to hang up, and also changes the
length of the count sequence from (2^n)-1 to 2^n, which is/can be
handy.

None at all. The issue is a serious one.
I'm not denigrating you for caring about ideas like that. Far from
it, my comments relate to your, on the one hand, rah-rah-rah-ing
about measures designed to increase reliability of a circuit while,
on the other hand, poo-poo-ing methods which do that but aren't
quite to your liking since they weren't yours.
---


---
Wrong.

USENET allows us to fearlessly exchange ideas, share strategies and
correct each other in ways which would be impossible otherwise.

You might want to work on the "fearless" part.
Kind of like old time HAM radio, but with the politeness
restrictions lifted.

If you think newsgroups don't matter then why are you wasting your
time on this one?

Sometimes it's interesting, and I don't watch TV.
Looking for sales?

No. But I might be looking for people.

John
 

John Devereux

[...]
C doesn't encourage subroutines that have multiple entry and multiple
exit points. Pity.

Multiple exit points are no problem - you can easily

return;

from a function wherever you like.

Multiple entry points to a *function* are a bit more awkward - but you
can enter other types of block at multiple points. If you know C you
might enjoy this:

It's occurred to me, more than once, that C was invented to run on a
PDP-11, and that a $400 Dell has 10,000 times the memory and a
thousand times the speed of an 11. So why doesn't somebody invent a
language (and a methodology) that produces/forces reliable code and
requires less debugging, and does rudimentary stuff like data and code
separation, at some expense in runtime resources?

They have, of course - not many people use C to write applications for
desktop machines any more. They use C#, java, all sorts of other
languages and frameworks that do most or all of what you are asking.

However, to take up your analogy, todays microcontroller *is* similar
in power to a PDP-11, so C is now in fact quite a good fit!
 

MooseFET

Well, that's bad code. The point of coding in state machines, or at
least thinking in them, is that all possible cases are accounted for.

I'm not just thinking of the indexes of state machines, and I'm
certainly not defending code that has this sort of problem. What I am
pointing out is that this is an area where people must be careful.
Just coding without GOTOs is no insurance against any problem that
GOTOless code is supposed to prevent.


C doesn't encourage subroutines that have multiple entry and multiple
exit points. Pity.

There are lots of limitations you accept when you code in C.

This is hard to do in C:

IntRoutine:
 

MooseFET

Oops, here's the rest of what I was saying:

On Sep 10, 8:12 am, John Larkin
C doesn't encourage subroutines that have multiple entry and multiple
exit points. Pity.

In C it is hard to do things like this:

AnotherReti:
RETI


Interrupt0:
... stuff ...
CALL AnotherReti

; Code runs as non-interrupt code
... stuff ...
RET

This is handy because it lets you deal with a process that may take
longer than the shortest time between interrupts. You have to keep
track of how many interrupt pulses there have been and react correctly
but it isn't too tough.

I also write a lot of multi entry routines that look a little like
this:

DoSomethingWith17:
MOV A,#17
DoSomething: ; Process the number in A
... stuff ...
RET

I put the comment:
;******* Fall Through ***********************************
in to make it really obvious what I'm doing.
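For comparison, the closest idiomatic C to that fall-through pair is a thin wrapper, which costs a call instead of a fall-through (the function names mirror the labels above; the body is a placeholder of mine):

```c
/* The multi-entry fall-through above, re-expressed the way C pushes
   you to write it: the "DoSomethingWith17" entry becomes a wrapper
   that calls the general routine instead of falling through to it. */
static int do_something(int a)          /* DoSomething: process "A"   */
{
    return a + 1;                       /* placeholder for ... stuff ... */
}

static int do_something_with_17(void)   /* DoSomethingWith17           */
{
    return do_something(17);            /* call replaces fall-through  */
}
```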


[....]
It's occurred to me, more than once, that C was invented to run on a
PDP-11, and that a $400 Dell has 10,000 times the memory and a
thousand times the speed of an 11. So why doesn't somebody invent a
language (and a methodology) that produces/forces reliable code and
requires less debugging, and does rudimentary stuff like data and code
separation, at some expense in runtime resources?

They did, partly. It is called Pascal, specifically the Borland
flavor of Pascal.
Pascal has run-time checking and very strict type checking. This
helps a lot to prevent the common type-related errors such as losing
the signedness of a number.

Pascal generally doesn't allow you to express anything that requires
mixing of code and data spaces.

In the OO version of Borland Pascal, they check an object's VMT for
being likely to be valid before they dispatch a virtual function.

This seems to be a good start. One place where it falls short is in
working with floating-point values. For ordinals, you can specify the
valid range of the number; for floats you can't. Also, since the
exponent of a float can change, there can be problems with lost
bits.

I think it would be better if the accuracy and range were stated and
the compiler made the choice about the right type (float, double, or
extended) to be used.
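C has no subrange types at all, but the check a Pascal subrange buys can be sketched by hand (a hypothetical 12-bit ADC range, purely for illustration):

```c
#include <assert.h>

/* Sketch of what a Pascal subrange like 0..4095 buys you: every
   assignment goes through a checked setter.  In Pascal the compiler
   emits this check automatically; in C you write it yourself. */
typedef int adc_count;                 /* conceptually 0..4095 */

static adc_count checked_adc(long v)
{
    assert(v >= 0 && v <= 4095);       /* range check, as Pascal would */
    return (adc_count)v;
}
```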

I think it was Gödel who proved that no compiler could outsmart a
stupid programmer, but at least it could check for the obvious stuff.
 

MooseFET

On Sep 10, 10:55 am, John Devereux <[email protected]>
wrote:
[....]
They have, of course - not many people use C to write applications for
desktop machines any more. They use C#, java, all sorts of other
languages and frameworks that do most or all of what you are asking.

I disagree. Many programs today are written using MFC, which ensures
that it is a hopeless morass of bugs.
 

Robert Baer

<OT>
Good thing I kept my backup HD as well as my OithRink subscription.
Copper.net plainly "offers" newsgroups, but more plainly DOES NOT
support them.
</OT>
It is *NOT* possible to calculate a random number sequence; any such
foolish attempt will result in a (perhaps long) repeating sequence.
Even a "short" (say 1-10%) sample of a loooong sequence will fail at
least one test that a truly random sequence of the same length would pass.
If you want random, start with (or re-seed from) a table of random
numbers, similar to what was used by the military ages ago.
Or..take a tested random process (e.g. nuclear decay, Brownian
motion) and digitize that for either random values or a "lookup table".
Oh...you were asking about one of those tests?
Any given value can and will repeat - - - randomly!
 