Maker Pro

Electronic components aging

Let's say, for prototyping, the XC6SLX9, since it has a cheap enough startup tool package. However, the LX9 can only implement around 100K Q, not even a bare-bones BM32. We can probably strip some instructions, such as floating-point multiply and divide.

We need 32-bit data and perhaps a 24-bit address.


Yes, that's what we are trying to find: the right soft core on the right FPGA.

It's a loser, all the way around. You might find an acceptable hard
core but soft cores are a loser, for many reasons.
 

whit3rd

On Sunday, October 20, 2013 9:40:24 AM UTC-7, Don Y wrote:

[on squeezing performance from small CPUs]
I particularly favor good counter/timer modules. With just a few
"little" features you can enhance a tiny processor's capabilities
far beyond what a larger, "bloated" processor could do...

Speaking of which, what IS available in off-the-shelf counter/timer
support? I've still got a few DAQ cards with AMD's 9513 counter
chips, which I KNOW are obsolete, but what's the modern replacement?

The 9513 had five 16-bit counters, lots of modes, and topped out
at 10 MHz; you could make an 80-bit counter, and test it once
during the next many-lifetimes-of-the-universe.
 

Don Y

On Sunday, October 20, 2013 9:40:24 AM UTC-7, Don Y wrote:

[on squeezing performance from small CPUs]
I particularly favor good counter/timer modules. With just a few
"little" features you can enhance a tiny processor's capabilities
far beyond what a larger, "bloated" processor could do...

Speaking of which, what IS available in off-the-shelf counter/timer
support? I've still got a few DAQ cards with AMD's 9513 counter
chips, which I KNOW are obsolete, but what's the modern replacement?

I don't think there is a "free-standing" counter/timer "chip".
Nowadays, most MCUs have counters of varying degrees of
capability/bugginess built in. So, we're supposed to learn
to live with those!

whit3rd said:
The 9513 had five 16-bit counters, lots of modes, and topped out
at 10 MHz; you could make an 80-bit counter, and test it once
during the next many-lifetimes-of-the-universe.

But many of its modes were silly. I.e., configuring it as a
time-of-day clock/calendar? Sheesh! What idiot decided that
a MICROPROCESSOR PERIPHERAL needed that capability? Can you
spell "software"?

It also had some funky bugs, was a *huge* die (for its time
and functionality), etc.

A lot of counter/timers are really "uninspired" designs, lately.
It's as if the designer had *one* idea about how it should be
used and that's how you're going to use it!

The Z80's CTC, crippled as it was (not bad for that era), could
be coaxed to add significant value -- if you thought carefully
about how you *used* it!

E.g., you could set it up to "count down once armed", initialize the
"count" to 1 (so, it "times out" almost immediately after being
armed), set it to arm on a rising (or falling) edge AND PRELOAD
THE NEXT COUNT VALUE AS '00' (along with picking an appropriate
prescaler and enabling that interrupt source).

As a result, when the desired edge comes along, the timer arms
at that instant (neglecting synchronization issues). Then, "one"
cycle (depends on prescaler) later, it times out and generates an
interrupt. I.e., you now effectively have an edge triggered interrupt
input -- but one that has a FIXED, built-in latency before it is
signalled to the processor.

The magic happens when the counter reloads on this timeout and, instead
of reloading with '1', uses that nice *big* number that you preloaded
in the "reload register". AND, STARTS COUNTING that very same timebase!

So, when your ISR comes along, it can read the current count and know
exactly how long ago (to the precision of the timebase) the actual
edge event occurred. EVEN IF THE PROCESSOR HAD INTERRUPTS DISABLED
for a big portion of this time! (or, was busy serving a competing
ISR, etc.).

The naive way of doing this would configure the device as a COUNTER,
preload the count at '1' and program the input to the desired polarity.
Once the edge comes along, the counter hits '0'/terminal_count and
generates an IRQ. Then you *hope* you get around to noticing it
promptly (or, at least, *consistently*).

The "smarter" approach lets you actually measure events with some
degree of precision without having to stand on your head trying to
keep interrupt latencies down to <nothing>.
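To make that recipe concrete, here is a rough C-style sketch of the whole trick on one CTC channel. The port address, the outp()/inp() helpers, the handle_edge() callback and the interrupt plumbing are assumptions for illustration only; the control-word bits follow the usual Zilog CTC layout, but check them (and the behaviour of a time constant written before the trigger arrives) against the datasheet for your silicon.

/* Sketch of the "edge timestamp" trick on one Z80 CTC channel.
 * Assumed, not from the post: the channel sits at I/O address CTC_CH1,
 * and outp()/inp()/handle_edge() exist elsewhere in the project. */
extern void outp(unsigned char port, unsigned char value);
extern unsigned char inp(unsigned char port);
extern void handle_edge(unsigned int ticks_since_edge);

#define CTC_CH1        0x01  /* hypothetical channel I/O address         */

/* Channel control word bits (classic Zilog CTC layout) */
#define CTC_CONTROL    0x01  /* bit 0: this byte is a control word       */
#define CTC_RESET      0x02  /* bit 1: software reset                    */
#define CTC_TC_FOLLOWS 0x04  /* bit 2: next write is the time constant   */
#define CTC_EDGE_START 0x08  /* bit 3: timer starts on a CLK/TRG edge    */
#define CTC_RISING     0x10  /* bit 4: rising-edge trigger               */
#define CTC_PRE16      0x00  /* bit 5: prescale by 16 (1 would be 256)   */
#define CTC_TIMER      0x00  /* bit 6: timer mode (1 would be counter)   */
#define CTC_IE         0x80  /* bit 7: interrupt enable                  */

void ctc_arm_edge_timestamp(void)
{
    /* Timer mode, /16 prescaler, armed by a rising edge, interrupt on
     * terminal count.  First time constant is 1, so the channel times
     * out one prescaled tick after the edge shows up. */
    outp(CTC_CH1, CTC_IE | CTC_TIMER | CTC_PRE16 | CTC_RISING |
                  CTC_EDGE_START | CTC_TC_FOLLOWS | CTC_RESET | CTC_CONTROL);
    outp(CTC_CH1, 0x01);

    /* Queue the *next* time constant, 0x00 == 256, without a reset; the
     * post's point is that the reload happens in hardware when the '1'
     * count expires, and the channel keeps counting the same timebase. */
    outp(CTC_CH1, CTC_IE | CTC_TIMER | CTC_PRE16 | CTC_RISING |
                  CTC_EDGE_START | CTC_TC_FOLLOWS | CTC_CONTROL);
    outp(CTC_CH1, 0x00);
}

/* Interrupt handler for that channel (vector table setup omitted). */
void ctc_ch1_isr(void)
{
    unsigned char now = inp(CTC_CH1);   /* read the live down-counter    */

    /* One tick from edge to timeout, plus however far the 256 count has
     * run since -- all in units of 16 system clocks -- regardless of
     * how late this ISR was actually serviced. */
    handle_edge(1u + (256u - now));
}

The same idea ports to any timer that can start on an external trigger and expose a readable free-running count; only the register names change.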
 

Anthony Stewart

I first designed high reliability products for Aerospace in 1975 using MIL-HDBK-217. It was based on generic components with stress factors for environment and design stress levels or margins, based on actual field reliability data.

It is based on the assumption that the design is defect-free and proven by test validation methods, and that the material quality is representative of the field data collected, which would be validated by vendor and component qualification. The overall product would be validated for reliability with Highly Accelerated Stress Screening (HASS) and Highly Accelerated Life Test (HALT) methods to investigate the weak links in the design or components.

Failures in Test (FIT) must be fixed by design to prevent future occurrences, and MTBF hours are recorded with confidence levels.
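As a back-of-the-envelope illustration of that bookkeeping (every unit count and hour below is a made-up number, not anything from this thread), the point estimate is simply accumulated device-hours over relevant failures, with a chi-squared adjustment supplying the confidence bound:

#include <stdio.h>

/* Toy MTBF bookkeeping for a test campaign; all figures are invented
 * for illustration. */
int main(void)
{
    double units       = 200.0;    /* devices on test                    */
    double hours_each  = 1000.0;   /* powered hours per device           */
    double failures    = 4.0;      /* relevant failures observed         */

    double device_hours = units * hours_each;            /* 200,000 h    */
    double mtbf_point   = device_hours / failures;       /* 50,000 h     */
    double rate_per_1e9 = failures / device_hours * 1e9; /* 20,000/1e9 h */

    printf("device-hours         : %.0f\n", device_hours);
    printf("MTBF (point estimate): %.0f h\n", mtbf_point);
    printf("failure rate         : %.0f per 1e9 h\n", rate_per_1e9);

    /* A reported MTBF would normally carry a lower confidence bound,
     * e.g. 2*device_hours / chi-squared(alpha, 2*failures + 2) for a
     * time-truncated test, rather than the bare point estimate. */
    return 0;
}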

The only thing that prevents a design from meeting a 50 yr goal is lack of experience in knowing how to design and verify the above assumptions for design, material & process quality.

You have to know how to predict every stress that a product will see, and test it with an acceptable margin requirement for aging, which means you must have the established failure rate of each part.

This means you cannot use new components without an established reliability record. COTS parts must be tested and verified with HALT/HASS methods.

In the end, premature failures occur due to oversights in awareness of bad parts, design or process, and in the statistical process used to measure reliability.
 

Robert Baer

Phil said:
The '217 methodology has been discredited pretty thoroughly since then,
though, as has the Arrhenius model for failures. (It's still in use,
because the alternative is waiting 50 years, but AFAICT nobody trusts
the numbers much. The IBM folks I used to work with sure don't.)

It's pretty silly when your calculation predicts that reliability will
go _down_ when you add input protection components or a power supply
crowbar.

Cheers

Phil Hobbs
Well....adding parts reduces OVERALL reliability due to the fact they
can (and will) fail.
Some parts, when they fail, can induce spikes or surges that will
stress "protected" parts.
So, in some (specific) cases it is not silly.
 
Well....adding parts reduces OVERALL reliability due to the fact they
can (and will) fail.

That's true but these components will also prevent others from
failing. That isn't what the reliability numbers show, however.
Some parts, when they fail, can induce spikes or surges that will
stress "protected" parts.
So, in some (specific) cases it is not silly.

The numbers are silly and the way they're normally used is even
worse. It's the old "be careful what you ask for because you're
likely to get it".
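For anyone following along, the reason the numbers come out that way is that a parts-count prediction in the '217 style just sums per-part failure rates, so anything added to the BOM can only push the predicted rate up; there is no term for the field failures a crowbar or TVS actually prevents. A toy illustration, with invented failure rates:

#include <stdio.h>

/* Parts-count prediction: predicted failure rate is the sum of the
 * per-part rates.  All figures below (failures per 1e9 hours) are
 * invented for illustration. */
int main(void)
{
    double base_circuit = 900.0;   /* everything except the protection   */
    double input_tvs    = 15.0;    /* added input-protection network     */
    double crowbar      = 25.0;    /* added supply crowbar               */

    double bare_rate      = base_circuit;
    double protected_rate = base_circuit + input_tvs + crowbar;

    printf("predicted MTBF, unprotected: %.0f h\n", 1e9 / bare_rate);
    printf("predicted MTBF, protected  : %.0f h\n", 1e9 / protected_rate);

    /* The prediction calls the protected unit "less reliable"; the
     * surge- and overvoltage-induced failures the extra parts prevent
     * never appear anywhere in the arithmetic. */
    return 0;
}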
 

Phil Hobbs

Well....adding parts reduces OVERALL reliability due to the fact they
can (and will) fail.
Some parts, when they fail, can induce spikes or surges that will
stress "protected" parts.
So, in some (specific) cases it is not silly.

I find it very hard to believe that input protection networks increase
the field return rate.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 

josephkk

Hi Joseph,



You replace preemptively when the cost of a *required* replacement
(i.e., after a failure) exceeds the cost of the preemptive maintenance.

Unfortunately not so. The relevant manager has her/his eye altogether too
closely on the quarterly figures that determine his/her bonus to do that.
That preempts the preventative maintenance.
They replaced all the gas lines (*to* each residence as well as
the feeds throughout the subdivision) here in the past year or two.
(i.e., "piecemeal").

And it might have been the result of a PUC or Court order, both rather
non-negotiable. It is amazing how the CA PUC suddenly developed some
teeth after the San Bruno disaster.
 

josephkk

Hi Tim,

I have some JAN microminiature tubes of recent production. Good reasons,
yes... just not many! ;-)

"Newer is always better", right? :-/

I remember the first semiconductor memory boards we used in Nova2's
(or maybe 3's?). Really cool to see all those (identical) chips
in neat rows and columns (I think it was 4K on a 16x16 board?).

[Of course, it had been similarly cool to see that fine "fabric" of
cores on a similarly sized board!]

But, it was *really* disappointing to "discover" (d'uh!) that the
machine no longer could *retain* its program image in the absence
of power! Every startup now required IPL from secondary storage.
From the user's standpoint: "Gee, that sucks! Now, tell me again,
why is this an IMPROVEMENT??"

That's not all; the new memory wasn't any faster or that much lower power.
A long time ago (~40 years) I worked on a computer that used core with 120
ns access time and 300 ns cycle time, faster than DRAM nearly 20 years
later.
 

Tim Williams

josephkk said:
That's not all; the new memory wasn't any faster or that much lower power.
A long time ago (~40 years) I worked on a computer that used core with
120 ns access time and 300 ns cycle time, faster than DRAM nearly 20 years
later.

Which still hasn't changed much; RAS/CAS cycle times are around, erm, I
see figures around 10ns. Quite a bit less in absolute terms, but with CPU
clock rates over a thousand times higher, it simply hasn't scaled
accordingly. I/O is even worse; one figure puts PCIe latency on the order
of a microsecond (I forget if that's fractional or multiple us?),
absolutely no different from the old ISA bus (8MHz, though only 8 bit).
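Putting rough numbers on that (the two CPU clock rates below are illustrative guesses, not figures from this thread):

#include <stdio.h>

/* Memory access cost expressed in CPU clocks, then and now.  The CPU
 * clock frequencies are rough illustrative guesses. */
int main(void)
{
    double core_cycle_ns = 300.0;  /* early-70s core, per josephkk       */
    double old_cpu_hz    = 1.0e6;  /* ~1 MHz minicomputer-era CPU        */

    double dram_cycle_ns = 10.0;   /* modern RAS/CAS figure, per above   */
    double new_cpu_hz    = 3.0e9;  /* ~3 GHz CPU                         */

    printf("core: %.1f CPU clocks per cycle\n",
           core_cycle_ns * 1e-9 * old_cpu_hz);
    printf("DRAM: %.1f CPU clocks per cycle\n",
           dram_cycle_ns * 1e-9 * new_cpu_hz);

    /* Roughly 0.3 clocks then versus 30 clocks now: latency improved
     * ~30x in absolute terms while clock rates rose >1000x, which is
     * the "hasn't scaled" complaint in a nutshell. */
    return 0;
}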

Tim
 

josephkk

Hey, those current limiters on power output stages and those overtemperature
shutdown circuits increase the FIT rates!

This discussion reminds me of a bit of conversation I once overheard. The
discussion was about some new (back then) high energy density metallized
film capacitors. The goal energy density was something like 20 mF*V per
in^3 at 400 V. The manufacturer could make them with a lifetime of 200 to
300 hours. The operational goal was 168 hours. The problem arose when
the customer insisted on a 168 hour burn-in on 100% of the parts. After
burn-in they could no longer meet operational goals due to the limited
life. IIRC the infant mortality period was something like 2 hours at 120%
rated voltage. Never did hear how it all worked out though.

?-)
 