
Preferred resistor range


Rheilly Phoull

G'Day All,
I seem to have lost the formula that calculates the 1% 'preferred' range of
resistors.
Can anyone help out please ??
 

Larry Brasfield

Rheilly Phoull said:
G'Day All,
I seem to have lost the formula that calculates the 1% 'preferred' range of
resistors.
Can anyone help out please ??

Hi.

I have appended a very handy Perl script I wrote to
deal with not knowing all (or having forgotten some) of
the 1% values. It also finds pairs of values to get
closer values than the standard spreads allow.
(This is useful when using 0.1% parts.)

Run it without arguments for invocation tips.
Get Perl at www.activestate.com .

--
--Larry Brasfield
email: [email protected]
Above views may belong only to me.

============== script only follows ================
#!/usr/bin/perl
use strict;
use warnings;

my $usage = <<'_';
Usage:
stdvals tolerance
or
stdvals tolerance value [2a|2p|2r]
or
stdvals -n decsplit
_

my $pick = shift;

my %stdsplits = (
    3  => '50%',
    6  => '20%',
    12 => '10%',
    24 => '5%',
    96 => '1%',
);
my %digits = (
    3  => 2,
    6  => 2,
    12 => 2,
    24 => 2,
    96 => 3,
);

# Unfortunately, the lower tolerances do not derive mathematically.
# This table maps the computed 2-digit values onto the official ones.
my %fudge2digits = (
    '1.0' => '1.0',
    '1.1' => '1.1',
    '1.2' => '1.2',
    '1.3' => '1.3',
    '1.5' => '1.5',
    '1.6' => '1.6',
    '1.8' => '1.8',
    '2.0' => '2.0',
    '2.2' => '2.2',
    '2.4' => '2.4',
    '2.6' => '2.7',
    '2.9' => '3.0',
    '3.2' => '3.3',
    '3.5' => '3.6',
    '3.8' => '3.9',
    '4.2' => '4.3',
    '4.6' => '4.7',
    '5.1' => '5.1',
    '5.6' => '5.6',
    '6.2' => '6.2',
    '6.8' => '6.8',
    '7.5' => '7.5',
    '8.3' => '8.2',
    '9.1' => '9.1',
);

my %stdtols;

map {
    my $tol = $stdsplits{$_};
    $stdtols{$tol} = $_;
} keys %stdsplits;

my $decsplit;
my $tolerance;
my $sigdigits;

sub saydec {
    print "Standard decade splits are: ", join(' ', sort keys %stdsplits), "\n";
}
sub saytol {
    print "Standard tolerances are: ", join(' ', sort keys %stdtols), "\n";
}

if (!$pick) {
    print $usage;
    &saytol;
    &saydec;
    exit 1;
}

# Accept the tolerance with or without a trailing '%', e.g. "1" or "1%".
if ($pick =~ m/^(\d+)%?$/) {
    if ($decsplit = $stdtols{$1.'%'}) {
        $tolerance = $1.'%';
    }
    else {
        print $usage;
        &saytol;
        exit 1;
    }
}
elsif ($pick eq '-n' && ($tolerance = $stdsplits{$pick = shift})) {
    print "Standard $tolerance values:\n";
    $decsplit = $pick;
}
else {
    print $usage;
    &saydec;
    exit 1;
}

my $target = shift;
my $parts = shift;
my $combine = undef;
if (! $parts) { $parts = 1; }
else {
    # Peel off the combination-method letter: a=add, p=parallel, r=ratio.
    unless ($parts =~ s/([apr])$/$combine=$1,''/e) {
        print $usage;
        print "Specify 2a, 2p or 2r as a combination method.\n";
        exit 1;
    }
}

$sigdigits = $digits{0+$decsplit};

my @values = ();

# Split the decade into $decsplit logarithmically equal steps.
my $loginc = log(10) / $decsplit;
my $fmt = sprintf("%%%1d.%1df", $sigdigits+1, $sigdigits-1);
for (my $i = 0; $i < $decsplit; ++$i) {
    my $value = sprintf($fmt, exp($i * $loginc));
    if ($sigdigits < 3) {
        # The 2-digit series get mapped to the official (fudged) values.
        $value = $fudge2digits{$value};
    }
    if (! $target) {
        my $sep = (($i+1) % 12)? " " : "\n";
        print "$value$sep";
    }
    else {
        push(@values, $value + 0.0);
    }
}

exit 0 unless $target;

# The start of the next decade serves as an upper bracket for searches.
push(@values, 10.0);

my %multsuf = (
    'f' => 1e-15,
    'p' => 1e-12,
    'n' => 1e-9,
    'u' => 1e-6,
    'm' => 1e-3,
    'k' => 1e3,
    'K' => 1e3,
    'M' => 1e6,
    'G' => 1e9,
    'T' => 1e12,
);

sub floor {
    my $v = shift;
    if ($v >= 0) { return int($v); }
    else { return -1 - int(-$v); }
}

sub unsuffix {
    my $tv = shift;
    my $mult = 1;
    my $suf;
    if ($tv =~ s/([kKnupfmMGT])$/$suf=$1,''/e) {
        $mult = $multsuf{$suf};
    }
    else { $suf = ''; }
    return $tv * $mult;
}

# Return the standard value nearest (by ratio) to the requested value.
sub closest {
    my $tv = shift;
    $tv = &unsuffix($tv);
    if ($tv <= 0) { return undef; }
    my $prescale = &floor(log($tv) / log(10));
    $prescale = 10 ** $prescale;
    my $findv = $tv / $prescale;
    my $bestv = undef;
    for (my $i = 0; $i < $decsplit; ++$i) {
        my $lo = $values[$i];
        my $hi = $values[$i+1];
        if ($findv >= $lo && $findv < $hi) {
            # Split the bracket at the geometric mean of its endpoints.
            my $gm = sqrt($lo * $hi);
            $bestv = ($findv < $gm)? $lo : $hi;
            last;
        }
    }
    return $bestv *= $prescale;
}

if ($parts == 1) {
    print &closest($target), "\n";
}

# Parallel combination: pick a base value near $scale*$tv, then a second
# value whose parallel combination with it comes closest to the target.
sub gapprox {
    my ($tv, $scale, $rtvbase, $rtvadj) = @_;
    my $vb = &closest($scale * $tv);
    my $va = 1.0/$tv - 1.0/$vb;
    if ($va > 0) {
        $va = &closest(1.0/$va);
        $$rtvbase = $vb;
        $$rtvadj = $va;
        return 1.0/(1.0/$vb + 1.0/$va);
    }
    else {
        $$rtvbase = $vb;
        $$rtvadj = undef;
        return $vb;
    }
}

# Series (additive) combination: a base value plus a smaller trim value.
sub zapprox {
    my ($tv, $scale, $rtvbase, $rtvadj) = @_;
    $$rtvbase = &closest($scale * $tv);
    my $va = $tv - $$rtvbase;
    if ($va > 0) {
        $$rtvadj = &closest($va);
        return $$rtvbase + $$rtvadj;
    }
    else {
        $$rtvadj = 0;
        return $$rtvbase;
    }
}

# Ratio approximation: two standard values whose quotient is close to $rv.
sub rapprox {
    my ($rv, $scale, $rtvbase, $rtvadj) = @_;
    $$rtvbase = &closest($scale * $rv);
    $$rtvadj = &closest($$rtvbase / $rv);
    return $$rtvbase / $$rtvadj;
}

if ($parts == 2) {
    my $tv = &unsuffix($target);
    if ($combine eq 'p') {
        my ($tvbase, $tvadj);
        my $bestratio = 2.0;
        my $bestgot = undef;
        my $bestscale = undef;
        # Sweep the base-value scale factor and keep the best combination found.
        for (my $scale = 1.00; $scale < 1.5; $scale += 0.01) {
            my ($tvb, $tva);
            my $tvgot = &gapprox($tv, $scale, \$tvb, \$tva);
            next unless $tvgot;
            my $ratio = $tvgot / $tv;
            if ($ratio < 1) { $ratio = 1.0 / $ratio; }
            if ($ratio < $bestratio) {
                $tvbase = $tvb;
                $tvadj = $tva;
                $bestratio = $ratio;
                $bestgot = $tvgot;
            }
        }
        my $adjv = defined($tvadj)? sprintf("%g",$tvadj) : 'none';
        printf("Approximate %g as 1/(1/%g + 1/%s), yielding %g (ratio = %6.4f)\n",
               $tv, $tvbase, $adjv, $bestgot, $bestratio);
    }
    if ($combine eq 'a') {
        my ($tvbase, $tvadj);
        my $bestratio = 2.0;
        my $bestgot = undef;
        my $bestscale = undef;
        for (my $scale = 1.0; $scale > 0.7; $scale -= 0.01) {
            my ($tvb, $tva);
            my $tvgot = &zapprox($tv, $scale, \$tvb, \$tva);
            my $ratio = $tvgot / $tv;
            if ($ratio < 1) { $ratio = 1.0 / $ratio; }
            if ($ratio < $bestratio) {
                $tvbase = $tvb;
                $tvadj = $tva;
                $bestratio = $ratio;
                $bestgot = $tvgot;
            }
        }
        printf("Approximate %g with (%g + %g), yielding %g (ratio = %6.4f)\n",
               $tv, $tvbase, $tvadj, $bestgot, $bestratio);
    }
    if ($combine eq 'r') {
        my ($tvbase, $tvadj);
        my $bestratio = 2.0;
        my $bestgot = undef;
        my $bestscale = undef;
        for (my $scale = 0.34; $scale < 3.4; $scale += 0.01) {
            my ($tvb, $tva);
            my $tvgot = &rapprox($tv, $scale, \$tvb, \$tva);
            my $ratio = $tvgot / $tv;
            if ($ratio < 1) { $ratio = 1.0 / $ratio; }
            if ($ratio < $bestratio) {
                $tvbase = $tvb;
                $tvadj = $tva;
                $bestratio = $ratio;
                $bestgot = $tvgot;
            }
        }
        printf("Approximate ratio %8.6f with (%g / %g), yielding %g\n",
               $tv, $tvbase, $tvadj, $bestgot);
    }
}
if ($parts > 2) {
    print "Sorry, not doing more than 2 parts yet.\n";
}


__END__
 

John Woodgate

I read in sci.electronics.design that Rheilly Phoull wrote:
G'Day All,
I seem to have lost the formula that calculates the 1% 'preferred' range of
resistors.
Can anyone help out please ??
If you mean the E96 series, the ratio between adjacent values is the
97th root of 10 (97 because the 97th value is ten times the first value
and is thus part of the next decade), which is 1.024021979....
 

Larry Brasfield

John Woodgate said:
I read in sci.electronics.design that Rheilly Phoull wrote:

If you mean the E96 series, the ratio between adjacent values is the
97th root of 10 (97 because the 97th value is ten times the first value
and is thus part of the next decade), which is 1.024021979....

Can you reconsider that assertion? I cannot make
sense of it, being stuck in the following thought pattern:
If there are 96 distinct values per decade, then for each
96 value steps, a whole power of ten is traversed.
Expressed mathematically, (and ignoring the rounding
necessary to get standard values), the E96 set can be
obtained as 10^(N * log10(10) / 96) == 10^(N/96)
for the 96 integer values of N from 0 to 95. This
corresponds to a multiplicative interval equal to the
96th root of 10, not the 97th root.

In addition to the problem outlined above, what you
say is contrary to the algorithm that I successfully
applied to devise the program I posted earlier on
this thread. I tested that quite a bit, so I am quite
sure that the decade should be split logarithmically
into 96 equal steps (sans rounding). In my testing,
I compared results with the table published by
several resistor manufacturers.
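
If anyone wants to check that numerically, here is a minimal standalone
Perl sketch (separate from the stdvals script earlier in the thread) that
builds one decade of values straight from 10^(N/96) and keeps three
significant figures; its output can be compared against any
manufacturer's E96 table:

#!/usr/bin/perl
# Minimal sketch: one decade of E96 values from round-to-3-digits(10^(N/96)).
use strict;
use warnings;

my @e96;
for my $n (0 .. 95) {
    # Values run 1.00 .. 9.76, so "%.2f" keeps three significant figures.
    push @e96, sprintf("%.2f", 10 ** ($n / 96));
}
print join(' ', @e96), "\n";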
 

John Woodgate

I read in sci.electronics.design that Larry Brasfield wrote:
Can you reconsider that assertion? I cannot make sense of it, being
stuck in the following thought pattern: If there are 96 distinct values
per decade, then for each 96 value steps, a whole power of ten is
traversed. Expressed mathematically, (and ignoring the rounding
necessary to get standard values), the E96 set can be obtained as 10^(N
* log10(10) / 96) == 10^(N/96) for the 96 integer values of N from 0 to
95. This corresponds to a multiplicative interval equal to the 96th
root of 10, not the 97th root.

My understanding is that the values in these series don't include the
first value of the next decade. So the E6 series has 7 values (which
actually don't quite match the calculated values for reasons lost in the
mists of time) : 1, 1.5, 2.2, 3.3, 4.7, 6.8 and 8.2, involving 6
multiplications by the *7th* root of ten; the 7th multiplication would
give 10, which is the first value in the next decade.

However, I don't have the 'preferred numbers' standard, so I won't
assert that I am correct.
In addition to the problem outlined above, what you say is contrary to
the algorithm that I successfully applied to devise the program I posted
earlier on this thread. I tested that quite a bit, so I am quite sure
that the decade should be split logarithmically into 96 equal steps
(sans rounding). In my testing, I compared results with the table
published by several resistor manufacturers.

Well, it depends on how your algorithm works, and I can't read PERL. The
96th root is 1.024275221... To four significant figures, the two roots
are 1.024... So if you preserve only those four figures and calculate
the values by successive multiplications, you will get the same answers
whether you use the 96th root or the 97th root.
 

Allan Herriman

I read in sci.electronics.design that Larry Brasfield wrote:


My understanding is that the values in these series don't include the
first value of the next decade. So the E6 series has 7 values (which
actually don't quite match the calculated values for reasons lost in the
mists of time) : 1, 1.5, 2.2, 3.3, 4.7, 6.8 and 8.2, involving 6
multiplications by the *7th* root of ten; the 7th multiplication would
give 10, which is the first value in the next decade.

However, I don't have the 'preferred numbers' standard, so I won't
assert that I am correct.

IEC 60063

Google will find a copy, e.g.
http://www.bourns.com/pdfs/standard_decade_values_02.pdf


It seems that John is wrong (an infrequent occurrence!). The E6
series does *not* include 8.2, and there are only 6 multiplications by
the 6th root of ten.
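
A small Perl sketch (just an illustration, not part of the stdvals
script) makes the fudging in the low-count series easy to see; it prints
the raw 10^(n/6) values next to the official E6 values from the table
linked above:

#!/usr/bin/perl
# Sketch: raw 10^(n/6) values, rounded to 2 digits, versus the official E6 values.
use strict;
use warnings;

my @official = (1.0, 1.5, 2.2, 3.3, 4.7, 6.8);    # E6 per IEC 60063
for my $n (0 .. 5) {
    my $computed = sprintf("%.1f", 10 ** ($n / 6));
    printf("n=%d  computed %s  official %.1f\n", $n, $computed, $official[$n]);
}
# The mismatches (3.2 vs 3.3, 4.6 vs 4.7) show the low-count series were
# not taken straight from the formula, which is why the stdvals script
# carries a lookup (fudge) table for them.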

Regards,
Allan
 

John Woodgate

I read in sci.electronics.design that Allan Herriman wrote about
'Preferred resistor range', on Tue, 15 Mar 2005:

IEC 60063

Google will find a copy, e.g.
http://www.bourns.com/pdfs/standard_decade_values_02.pdf


It seems that John is wrong (an infrequent occurrence!).

Thank you.
The E6 series
does *not* include 8.2, and there are only 6 multiplications by the 6th
root of ten.

Yes, of course. I should have chosen E12, where such an error would have
been much more obvious to me. The magnitude of the error for the E96
series is 0.000253241...
 

The Phantom

I read in sci.electronics.design that Larry Brasfield wrote:


My understanding is that the values in these series don't include the
first value of the next decade. So the E6 series has 7 values (which
actually don't quite match the calculated values for reasons lost in the
mists of time) : 1, 1.5, 2.2, 3.3, 4.7, 6.8 and 8.2, involving 6
multiplications by the *7th* root of ten; the 7th multiplication would
give 10, which is the first value in the next decade.

However, I don't have the 'preferred numbers' standard, so I won't
assert that I am correct.

Well, it depends on how your algorithm works, and I can't read PERL. The
96th root is 1.024275221... To four significant figures, the two roots
are 1.024... So if you preserve only those four figures and calculate
the values by successive multiplications, you will get the same answers
whether you use the 96th root or the 97th root.

And the answers you get will not always be in the E96 series.

For example, 102 is the second in the E96 series; multiply by 1.024
and you get 104.448, which rounds to 104, which is not in the series.

Or, if you start with 100 and repeatedly multiply by 1.024, keeping
the unrounded product as you go along (and *rounding* the running
product at each step only to get the three digit number which should
be a member of the E96 series), the first error occurs when the
running product equals 136.1129468, which, rounded, is 136 and not 137
as it should be.

To get correct results, you must use a multiplier with more than 4 digits.

Try this: go to
http://www.bourns.com/pdfs/standard_decade_values_02.pdf

and put the 96 values from the E96 series in a list, take the log of
all the elements of the list (still in a list), make another list of
the 95 first differences, add up the 95 first differences, divide the
sum by 95, and take the antilog of that number. This will be the
average ratio between elements of the E96 series. I get
1.02427190671, which is much closer to the 96th root of 10 than to the
97th root of 10.

By the way, there is a peculiarity (error?) in the E192 series. If
you use the 192nd root of 10 to generate the series, the 186th member
will be calculated as 919.478686, which would round to 919, but the
official value is 920. Apparently the folks who originally did the
numbers were a little off on that one.
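
A short standalone Perl sketch (again, independent of the stdvals
script) reproduces those numbers: the first point where the 4-digit
multiplier goes wrong, the average step ratio, and the E192 oddball at
position 186:

#!/usr/bin/perl
# Sketch: check a running 1.024 product against 10^(n/96), report the
# average E96 step ratio, and compute E192 member 186 from the formula.
use strict;
use warnings;

my $running = 100;
for my $n (1 .. 95) {
    $running *= 1.024;                          # 4-digit multiplier, kept unrounded
    my $approx = sprintf("%.0f", $running);     # rounded to 3 digits
    my $exact  = sprintf("%.0f", 100 * 10 ** ($n / 96));
    if ($approx != $exact) {
        printf("First error at step %d: running product %.7f -> %s, should be %s\n",
               $n, $running, $approx, $exact);
        last;
    }
}

# The 95 first differences of the logs telescope, so the average step
# ratio is simply (9.76/1.00)**(1/95).
printf("Average E96 step ratio: %.9f\n", 9.76 ** (1/95));

printf("E192 member 186 from the formula: %.6f (the official value is 920)\n",
       100 * 10 ** (185 / 192));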
 

Rheilly Phoull

The Phantom said:
And the answers you get will not always be in the E96 series.

For example, 102 is the second in the E96 series; multiply by 1.024
and you get 104.448, which rounds to 104, which is not in the series.

Or, if you start with 100 and repeatedly multiply by 1.024, keeping
the unrounded product as you go along (and *rounding* the running
product at each step only to get the three digit number which should
be a member of the E96 series), the first error occurs when the
running product equals 136.1129468, which, rounded, is 136 and not 137
as it should be.

To get correct results, you must use a multiplier with more than 4 digits.

Try this: go to
http://www.bourns.com/pdfs/standard_decade_values_02.pdf

and put the 96 values from the E96 series in a list, take the log of
all the elements of the list (still in a list), make another list of
the 95 first differences, add up the 95 first differences, divide the
sum by 95, and take the antilog of that number. This will be the
average ratio between elements of the E96 series. I get
1.02427190671, which is much closer to the 96th root of 10 than to the
97th root of 10.

By the way, there is a peculiarity (error?) in the E192 series. If
you use the 192nd root of 10 to generate the series, the 186th member
will be calculated as 919.478686, which would round to 919, but the
official value is 920. Apparently the folks who originally did the
numbers were a little off on that one.

Hmmm, mebbe I shouldn't have asked ??
 

John Woodgate

I read in sci.electronics.design that The Phantom wrote:
By the way, there is a peculiarity (error?) in the E192 series. If you
use the 192nd root of 10 to generate the series, the 186th member will
be calculated as 919.478686, which would round to 919, but the official
value is 920. Apparently the folks who originally did the numbers were
a little off on that one.

They obviously looked at rounding: 919.478686... -> 919.47869 -> 919.4787
-> 919.479 -> 919.48 -> 919.5 -> 920.

This is a well-known effect of rounding and can be combated, but not
necessarily eliminated (as in this case), by 'casting to the odd'. This
means that you round down a number ending in 5 if the resulting final
digit would be odd if you rounded up.
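
For what it's worth, a tiny standalone Perl sketch of that chain,
dropping one digit per pass and comparing with rounding just once:

#!/usr/bin/perl
# Sketch: round 919.478686 one digit at a time versus rounding once to 3 digits.
use strict;
use warnings;

my $x = 919.478686;
my @chain = ($x);
for my $decimals (5, 4, 3, 2, 1, 0) {
    $x = sprintf("%.*f", $decimals, $x);    # drop one decimal place per pass
    push @chain, $x;
}
print "Stepwise:     ", join(" -> ", @chain), "\n";    # ends at 920
printf("Rounded once: %.0f\n", 919.478686);            # gives 919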
 

The Phantom

I read in sci.electronics.design that The Phantom wrote:


They obviously looked at rounding: 919.478686... -> 919.47869 -> 919.4787
-> 919.479 -> 919.48 -> 919.5 -> 920.

This is a well-known effect of rounding and can be combated, but not
necessarily eliminated (as in this case), by 'casting to the odd'. This
means that you round down a number ending in 5 if the resulting final
digit would be odd if you rounded up.

I don't think this is the correct explanation. If it were, then why
wasn't the 7th member of the E192 series calculated like this?:

107.4607828 -> 107.460783 -> 107.46078 -> 107.4608 -> 107.461
-> 107.46 -> 107.5 -> 108.

Or why wasn't the 13th member calculated like this?:

115.478198 -> 115.47820 -> 115.4782 -> 115.478 -> 115.48
-> 115.5 -> 116.

or the 43rd member, or the 44th member, etc.

You know better than to do this, as do I, and they probably did too as
evidenced by the four cases just above, where they didn't do it.

They just made a mistake.
 

The Phantom

Hmmm, mebbe I shouldn't have asked ??

Why not?

Now that you have the answer to your first question, don't you want to
know why the 10% and 5% standard values don't result from a simple
root-of-10 formula?
 

Larry Brasfield

John Woodgate said:
I read in sci.electronics.design that Larry Brasfield wrote:


My understanding is that the values in these series don't include the
first value of the next decade. So the E6 series has 7 values (which
actually don't quite match the calculated values for reasons lost in the
mists of time) : 1, 1.5, 2.2, 3.3, 4.7, 6.8 and 8.2, involving 6
multiplications by the *7th* root of ten; the 7th multiplication would
give 10, which is the first value in the next decade.

However, I don't have the 'preferred numbers' standard, so I won't
assert that I am correct.

I should maybe leave this at that point, but I am responding also
to some points you make elsewhere in this thread.
Well, it depends on how your algorithm works, and I can't read PERL.

You don't need to study my code to understand my point,
which is that by splitting the decade logarithmically into 96
intervals, the correct 96 values are obtained. I conveyed
this experience to underscore the idea that the reasoning I
laid out should be examined carefully, as it is likely correct.
The
96th root is 1.024275221... To four significant figures, the two roots
are 1.024... So if you preserve only those four figures and calculate
the values by successive multiplications, you will get the same answers
whether you use the 96th root or the 97th root.

When I evaluate 1.024^96, I get 9.7453 on my HP32S. (And
this agrees with another calculator I checked.) For the E96 series
starting with 1, the 96th and last member is 9.76. So your method
requires one more step (times 1.024) to reach the next decade. It
produces 97 values per decade (assuming we round off to 3 digits
at each closest approach to whole powers of 10), whereas all the
tables I've seen have only 96 values per decade.

Maybe there is some trick rule for rounding, dropping accumulated
"erroneous" residue at special places, or other machinations, that
will convert what would otherwise be 97 intervals into the 96 that
are required to duplicate the standard values. I do not doubt your
ability to devise such a rule. But I would hope you could see, if
not acknowledge, the preferable simplicity of this rule:
for (N = 0 to 95)
    standard_value[N] = round_to_3_digits( 10 ^ (N/96) )

Answering Mr. Phoull's request for "the formula that calculates the
1% 'preferred' range of resitors", absent any authoritive statement
about how somebody long ago derived those tables, we have a
choice: Use the above formula because it is easy and works; or
complete the task of devising or elucidating your formula, which
will be more complicated, amenable to no closed form solution,
and unintuitive; or await or discover some other formula in the
hope that it will be better. I should think it is an easy choice.
 

nospam

Rheilly Phoull said:
G'Day All,
I seem to have lost the formula that calculates the 1% 'preferred' range of
resistors.
Can anyone help out please ??

No help with a formula, but, you can get ResCAD for Win32 here

http://web.newsguy.com/pentangle/rescad11.zip

which will give you nearest E6/12/24/96 values and combinations for
series/parallel/voltage divider.

It uses lookup tables.
 

Larry Brasfield

Glenn Gundlach said:
Gee, I just had a copy of the page in the Digi-Key catalog. You have to
be able to buy the value you spec.

I can appreciate the humor here, abstractly.
But after designing the Nth precision active filter,
countless dividers, and other precise circuits, I
got tired of table lookups. To look up just one
value is not overly hard, just tedious. You start
with what you want, look at the table, and do a
little simple arithmetic (often by eye) to decide
which value is closest. Maybe you adjust that
choice according to which way it went for some
nearby resistor(s).

Now, if I want a 63.54k resistor, I enter:
stdvals 1% 63.54k
and get back 63400. This saves just a little time,
so I cannot argue any clear superiority there.

Here is where an algorithmic approach really pays.
I really want something closer, and I am willing to
parallel two parts to get it. I enter:
stdvals 1% 63.54k 2p
and quickly get a useful result:
Approximate 63540 as 1/(1/66500 + 1/1.43e+06),
yielding 63544.9 (ratio = 1.0001)
This saves some real time. (I can remember when,
during my stint as an engineering tech, I was given
the task of doing approximately this same search
for a short list of resistors. If not for the aid of an
HP-35, it would have taken days, not the hours I
had to spend on it.)
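
A quick sanity check of that pair, independent of the script:

#!/usr/bin/perl
# Sketch: verify the quoted parallel combination of 66500 and 1.43e6 ohms.
use strict;
use warnings;

my ($r1, $r2) = (66500, 1.43e6);
printf("%.1f\n", 1 / (1/$r1 + 1/$r2));    # prints 63544.9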

A common task in active RC filter design is to
come up with capacitor pairs that deliver a close
approximation to the values you really want. Ask:
stdvals 5% 588p 2a
and ye shall receive:
Approximate 5.88e-10 with (5.6e-10 + 2.7e-11),
yielding 5.87e-10 (ratio = 1.0017)

Any of you who have done this by hand know
the lingering doubt that remains when you have
not used an exhaustive search to find whatever
answer you settled upon. Or you know just
how tedious the exhaustive search is. Or both.

I guess the natural extension of Rich's joke is:
Who needs calculators or computers? I've
got tables of values and logarithms, a slide rule,
and my trusty pencil and pad. Valves forever!
 

R.Lewis

Larry Brasfield said:
I can appreciate the humor here, abstractly.
But after designing the Nth precision active filter,
countless dividers, and other precise circuits, I
got tired of table lookups. To look up just one
value is not overly hard, just tedious. You start
with what you want, look at the table, and do a
little simple arithmetic (often by eye) to decide
which value is closest. Maybe you adjust that
choice according to which way it went for some
nearby resistor(s).

Now, if I want a 63.54k resistor, I enter:
stdvals 1% 63.54k
<<snip>>

If you need 63.54K and you have the complete range of 1% tolerance resistors
available, you have a problem that no elementary, or sophisticated,
calculator is going to solve.
 

Larry Brasfield

R.Lewis said:
<<snip>>

If you need 63.54K and you have the complete range of 1% tolerance resistors
available, you have a problem that no elementary, or sophisticated,
calculator is going to solve.

You appear to have assumed that the resistors
to be used would be 1% tolerance. It happens
that the 0.1% tolerance parts come in the same
values (or more, for more money), so there is a
use for calculations such as the above example.
 

Robert Monsen

Larry said:
I have appended a very handy Perl script I wrote to
deal with not knowing all (or having forgotten some) of
the 1% values. It also finds pairs of values to get
closer values than the standard spreads allow.
(This is useful when using 0.1% parts.)

Run it without arguments for invocation tips.
Get Perl at www.activestate.com .

This script doesn't work as advertised using perl under cygwin
(surprise). What version of Perl is it targeted towards? If you send me
some sample output, I'll take a look. It's probably something simple.

(rc<surname>@comcast.net)

--
Regards,
Robert Monsen

"Your Highness, I have no need of this hypothesis."
- Pierre Laplace (1749-1827), to Napoleon,
on why his works on celestial mechanics make no mention of God.
 