Maker Pro

How to develop a random number generation device


krw

[email protected] says... [...]
The way I had imagined it was that the registers of the virtual CPUs
that are not currently running would live in a different place than the
ones actually in use. My concern was to avoid increasing the fan-in and
fan-out of the buses on the ALU, so that there would be no increase in
loading and hence no added delay in those circuits.

If they're "somewhere" else, they have to be un/re/loaded. That
takes substantial time.

Yes, it may take a clock cycle to do the register swapping. Reducing
the number of registers on the bus allows those clock cycles to run at
a higher frequency, so I think the advantage will outweigh the
disadvantage. BTW: I'm assuming several CPUs and lots of sets of
registers are on one chip.
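
A rough software model of that arrangement, as a sketch only: each
virtual CPU owns one register bank and a select index picks which bank
is on the bus, so a context switch is an index change rather than a
block copy. The bank and register counts below are made-up values.

    #include <stdint.h>

    /* Toy model only: one register bank per virtual CPU, with a select
       index choosing which bank the "ALU" sees.  Switching contexts is
       an index update, not a load/store of every register. */
    #define NUM_BANKS 8              /* hypothetical number of virtual CPUs */
    #define NUM_REGS  16             /* hypothetical registers per bank     */

    static uint16_t reg_bank[NUM_BANKS][NUM_REGS];
    static unsigned active_bank = 0;

    /* Only the active bank is visible to the ALU model. */
    static uint16_t read_reg(unsigned r)              { return reg_bank[active_bank][r]; }
    static void     write_reg(unsigned r, uint16_t v) { reg_bank[active_bank][r] = v; }

    /* The "register swap" under discussion: in this model it costs a
       single index change; in hardware the equivalent would be a bank
       select feeding the register-file address decode. */
    static void switch_context(unsigned next_cpu)     { active_bank = next_cpu % NUM_BANKS; }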

It's going to take a *LOT* more than a clock cycle. You have to find
all the data in the register file, and you can't broadside that much data.
I was thinking in terms of a not-very-pipelined CPU, so that the
switch-over could happen in a few cycles. The registers currently being
written would have to stay in place until the write finished. This is
part of why I'm assuming a fairly simple CPU.

Why bother then? If you're giving it that much dead time simply do
things serially. You're essentially allowing time for a complete
context switch.
I don't see how you come to that conclusion.

Because that's how it's done? You have another source/destination
accessing the register file.
Yes and when a multiply doesn't draw an amp.

What does an amp matter at <1V?

My 32-bit -> 16-bit integer sqrt() for the 8051 doesn't use a look-up
table and yet is fairly quick about it. It uses two observations:

1 - The sum of the first N odd numbers is N^2.

2 - If you multiply X by 4, sqrt(X) doubles, and both operations are shifts.
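
The original 8051 routine isn't posted here, but as a sketch of how
those two observations usually combine, here is a plain C version that
walks the argument in base 4 (each step relies on sqrt(4X) = 2*sqrt(X))
and does the odd-number-style trial subtraction at each step:

    #include <stdint.h>

    /* Illustrative C version only, not the poster's 8051 code.  Works
       through the argument two bits at a time; "bit" is the current
       power of four, and root + bit plays the role of the next odd
       number scaled by that power of four. */
    uint16_t isqrt32(uint32_t x)
    {
        uint32_t root = 0;
        uint32_t bit  = 1UL << 30;          /* highest power of four in 32 bits */

        while (bit > x)                     /* skip leading zero bit pairs */
            bit >>= 2;

        while (bit != 0) {
            if (x >= root + bit) {          /* trial subtraction succeeds */
                x    -= root + bit;
                root  = (root >> 1) + bit;  /* append a 1 bit to the root */
            } else {
                root >>= 1;                 /* append a 0 bit to the root */
            }
            bit >>= 2;
        }
        return (uint16_t)root;              /* floor square root fits in 16 bits */
    }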

There is nothing in an 8051 that can be considered "fairly quick".
...and I rather like 8051s.
 

MooseFET

You are treating the symptoms and not the disease. Strongly typed
languages already exist that would make most of the classical
errors of C/C++ programmers go away. Better tools would help in
software development, but until the true cost of delivering faulty
software is driven home the suits will always go for the quick
buck.
No, I am making the true observation that complex digital logic
designs are usually bug-free, simple software systems have a chance
of being so, and complex software systems never are.
John
May I introduce you to a concept called cyclomatic complexity? The
cyclomatic complexity of hundreds of interacting state machines is on
the order of 10^5 to 10^6. A memory array of regular blocks of
storage accessed by a regular decoder has a cyclomatic complexity on
the order of 10 to 10^2. In the memory there is much
self-similarity across several orders of magnitude in size.
So *that's* why Windows is so reliable! It's a single state machine
that traverses a simple linear array of self-similar memory.
Beware of anything that is claimed to lead to better programming.
When Intel introduced segmentation on the 8086, they said it improved
program modularity etc. At that time I suggested that the program
counter should have been made to decrement, to help with top-down
program design.

I was impressed by the COPS processor that used a pseudo-random shift
register as the program counter. That was all-over-the-place program
design.

One of the chips for video games had a whole bunch more pseudo-random
circuits. It had timers and sound generators and all sorts of stuff
using that method. The sound one had a very long, complex pattern and
a few much shorter ones. You could make all sorts of weird noises by
programming it back and forth among the different codes.
 

JosephKK

MooseFET [email protected] posted to sci.electronics.design:


On Sep 19, 7:01 am, John Larkin
But in proper synchronous design, controlled by
state machines, immensely complex stuff just works. It's sort of
ironic that in a big logic design, 100K gates and maybe 100
state machines, everything happens all at once, every clock,
across the entire chip, and it works. Whereas with software,
there's only one PC, only one thing happens, at a single
location, at a time, and usually nobody can predict the actual
paths, or write truly reliable code.
4G of RAM * 8 bits is a lot more bits than 100K gates. You
need to keep your sizes equal if you want to make a fair
comparison.
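(For scale: 4 GB of RAM is 4 * 2^30 bytes * 8, about 3.4 * 10^10 bits,
versus roughly 10^5 gates: more than five orders of magnitude apart.)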
On Sep 19, 7:01 am, John Larkin
On 18 Sep, 17:12, John Larkin
On Sep 17, 7:55 pm, John

Programmers have pretty much proven that they cannot
write bug-free large systems.
In every other area, humans make mistakes and yet we seem
surprised that programmers do too.
In most other areas of endeavour small tolerance errors do
not so often lead to disaster. Boolean logic is less
forgiving. And fence
Software programming hasn't really had the true transition
to a hard engineering discipline yet. There hasn't been
enough standardisation
Compare a software system to an FPGA. Both are complex, full
of state machines (implicit or explicit!), both are usually
programmed in a hierarchical language (C++ or VHDL) that has a
library of available modules, but FPGAs rarely have bugs
that get to the field, whereas most software is rarely ever
fully debugged.
I think that hardware engineers get a better grounding in
logical design (although I haven't looked at modern CS
syllabuses so I may be out of date).
Hardware can be spaghetti too, and can be buggy and nasty, if
one does asynchronous design. But in proper synchronous design,
controlled by state machines, immensely complex stuff just
works. It's sort of ironic that in a big logic design, 100K
gates and maybe 100 state machines, everything happens all at
once, every clock, across the entire chip, and it works. Whereas
with software, there's only one PC, only one thing happens, at a
single location, at a time, and usually nobody can predict the
actual paths, or write truly reliable code.
But it is mostly a cultural thing. Software houses view minimum
time to market and first-mover advantage, to gain maximum market
share, as more important than correct functionality. And it
seems they are right. Just look at Microsoft Windows vs IBM's
OS/2: a triumph of superb marketing over technical excellence!
And I have bought my fair share of hardware that made it onto
the market bugs and all too. My new fax machine caught fire.
Early V.90 modems that only half worked, etc.
So, computers should use more hardware and less software to
manage resources. In fact, the "OS kernel" of my multiple-CPU
chip could be entirely hardware. Should be, in fact.
You are treating the symptoms and not the disease. Strongly
typed languages already exist that would make most of the
classical errors of C/C++ programmers go away. Better tools
would help in software development, but until the true cost of
delivering faulty software is driven home the suits will always
go for the quick buck.
No, I am making the true observation that complex digital logic
designs are usually bug-free, simple software systems have a
chance of being so, and complex software systems never are.

May I introduce you to a concept called cyclomatic complexity? The
cyclomatic complexity of hundreds of interacting state machines is on
the order of 10^5 to 10^6. A memory array of regular blocks of
storage accessed by a regular decoder has a cyclomatic complexity on
the order of 10 to 10^2. In the memory there is much
self-similarity across several orders of magnitude in size.


So what exactly is the definition? It seems to me that just because
the memory is a repeated array in physical space, it needn't be in
logical space.

A rather minimal article but the basic definition is there.

http://en.wikipedia.org/wiki/Cyclomatic_complexity

The first result from a Google search for cyclomatic complexity.

Further reading is recommended.
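
In short, for a single routine the measure works out to the number of
binary decision points plus one, or E - N + 2P over the routine's
control-flow graph (edges, nodes, connected components). A small
made-up C example:

    /* Made-up example: two ifs and one while give three decision
       points, so the cyclomatic complexity is 3 + 1 = 4, i.e. four
       linearly independent paths through the routine. */
    int classify(int a, int b)
    {
        int score = 0;

        if (a > 0)            /* decision 1 */
            score += 1;

        if (b > 0)            /* decision 2 */
            score += 2;

        while (score > 3)     /* decision 3 */
            score -= 1;

        return score;
    }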
 

JosephKK

Richard Henry [email protected] posted to sci.electronics.design:
If Microsoft followed their own advice (see Code Complete by Steve
McConnell), they could develop acceptably bug-free versions of their
operating systems. However, that would require them to follow a
pattern of testing and bug repair before release that would mean we
would still be waiting for the release of Windows 98.

Yet another case of "do what I say and not as I do".
 

MooseFET

On Sep 20, 2:16 pm, MooseFET <[email protected]> wrote:


[... cyclomatic complexity ...]
It is actually a better metric for deciding on the number of test
cases needed to exercise every path in a complex decision network at
least once. Essentially it gives a path-complexity count of all the
control flows through the code.

http://www.sei.cmu.edu/str/descriptions/cyclomatic_body.html
and http://en.wikipedia.org/wiki/Cyclomatic_complexity

That seems to be a slightly different meaning than Mr. Larkin was
thinking of.

There are a few reasons that I can think of as to why it hasn't caught
on widely.

1) It measures something about a software design. It doesn't give
you a tool to do something with that measurement. It doesn't help you
to simplify things.

2) It gives you the bad news once you are a long way into the
design. By then you've likely already got a feeling for how much
you underestimated the testing difficulty.

3) It produces a bad-news number. The number gets bigger the worse
the situation is. Nobody likes to pay for bad news.

It should be better known in the industry.

I agree that it should be known but I don't see it as a very good
measure for software. Consider a bit of code like this:

FastShift:
    switch-like-construct-with (ShiftNo)

    case 8:
        Y = 0                      ; all the bits are gone out the top
        goto ShiftDone

    case 7:
        move carry, X.0            ; get LSB
        Y = 0
        rotate left thru carry Y
        goto ShiftDone

    ... etc ...

    case 0:                        ; nothing to do
ShiftDone:


This would be considered fairly complex by a measure that used the
number of paths through the code. However, it can be checked visually
fairly quickly. If the complexity is measured as the number of states
and the number of ways to change states, it would also be fairly complex,
because X and N can start in 256*256 states.

Something that used a 12-bit pseudo-random generator to control just
two paths would be rated as less complex by both measures but would be
a lot harder to check.
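
A sketch of the sort of thing meant, in C rather than hardware; the tap
positions (12, 11, 10, 4) are one commonly published maximal-length
choice and are only an assumption here, not any particular chip's
polynomial:

    #include <stdint.h>
    #include <stdio.h>

    /* 12-bit Fibonacci LFSR, illustrative only. */
    static uint16_t lfsr12_next(uint16_t s)
    {
        uint16_t fb = (uint16_t)(((s >> 11) ^ (s >> 10) ^ (s >> 9) ^ (s >> 3)) & 1u);
        return (uint16_t)(((s << 1) | fb) & 0x0FFFu);
    }

    int main(void)
    {
        uint16_t state = 1;              /* any non-zero seed */
        unsigned path_a = 0, path_b = 0;

        /* Only two control-flow paths, so the cyclomatic complexity is 2,
           yet checking the behaviour means reasoning about a sequence
           thousands of states long; that is the weakness noted above. */
        for (int i = 0; i < 4095; i++) {
            state = lfsr12_next(state);
            if (state & 1u)
                path_a++;
            else
                path_b++;
        }
        printf("path A: %u  path B: %u\n", path_a, path_b);
        return 0;
    }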

I like McCabe's CCI, which I find a *very* good indicator of code likely
to contain bugs.

It is quicker just to look for the word "windows" on the
packaging. :)

I think you will agree that there is a weakness in the method. This
weakness may also be part of why it is not so widely used.

[...]
I can pretty much guarantee that above a certain size or complexity
there will be bugs in a given routine. You will get more hits Googling
with the longer "Tom McCabe" and "cyclomatic complexity index". Sadly
it is yet another useful tool ignored by the mainstream. "McCabe's
CCI" gets mostly my own postings and a medical usage.

It may be that you need to look for a different term. Some people may
have a different name for basically the same measure. "Measure of
system complexity" leads to many hits in Google. Some may be the CCI
under a different name.
 