Maker Pro

Discussing audio amplifier design -- BJT, discrete


Jon Kirwan

"Vbe multiplier."

Got it. Since that time, I've found "rubber diode" as
another term mentioned on wiki. ;)
The classic output stage biasing scheme uses small emitter resistors
and biases the output transistors to idle current using a couple of
junction drops between the bases, or a Vbe multiplier with a pot. Both
are good ways to have a poorly defined idle current and maybe fry
transistors.

I've already expressed my concern about that.
Two alternates are:

1. Use zero bias. Connect the complementary output transistors
base-to-base, emitter-to-emitter. Add a resistor from their bases to
their emitters, namely the output. At low levels, the driver stage
drives the load through this resistor. At high levels, the output
transistors turn on and take over.

2. Do the classic diode or Vbe multiplier bias, but use big emitter
resistors. Parallel the emitter resistors with diodes.

In both cases, the thing will be absolutely free from thermal runaway
issues and won't need adjustments. Both need negative feedback to kill
crossover distortion.

I need to get some basics down before I return to these. I'm
not there, yet. But I _do_ see an issue with the Vbe
multiplier if it isn't crafted carefully for the situation.
Or...

3. Use mosfets

No FETs.

Jon
 

Jon Kirwan

Discrete-transistor audio design, being such an ancient practice,
tends to refer to history and authority rather than design from
engineering fundamentals.

So there's a need to refresh our minds about this. You
talked about FET designs, but before one can understand
whether or not they compare well to BJT designs it seems to
me that one needs to understand what can be done with BJTs
first. Just _stating_ (or making a premise based on what you
say is more history and authority than actual _best_ design
practices) that they would be better isn't enough, I suspect.
If I were designing an audio amp nowadays (which I certainly aren't)
I'd use mosfets with an opamp gate driver per fet. That turns the fets
into almost-perfect, temperature-independent, absolutely identical
gain elements. That's what I do in my MRI gradient amps, whose noise
and distortion are measured in PPMs.

Well, if I were opening the door to ICs there'd be no real
learning going on. An opamp doesn't teach one that much.
They are pretty close to ideal and what's learned by that?
Now designing one... that would be another case. But using
ICs with a FET tacked on the end teaches about as much about
deeper levels of analog design (getting closer to the physics
around us) as using a Visual BASIC drop-down box teaches
about the Windows messaging layer. It's almost all hidden
from view in either case.

That doesn't mean a Visual BASIC drop-down isn't useful or
that people should make programs using them as a shortcut.
They should, and do. But if you want to know how to make
some new widget of your own... you may find yourself with
only one oar in the water... spinning in circles.
ftp://jjlarkin.lmi.net/Amp.jpg

Why keep repeating a 50-year-old topology when you could have a little
fun?

The fun for me is in digging closer into the physics. Just
as the programming fun comes from seeing how a C compiler
implements the resulting code on the machine instruction
level or how a coroutine may be implemented using a thunk.

Put another way, one can move from understanding one's own
backyard in two directions. (1) Towards seeing how plants
participate in the meadow or a woods and how those
participate within an Earth/air/ice/water/sun system (in
other words, reaching towards larger and larger abstraction
levels.) (2) Or else, delve deeper towards seeing how organs
function within the organism, how cells function within that,
how proteins work, how peptide chains are brought together
into those, how atoms work in making peptide chains, and so
on. In other words, there are two telescoping directions to
head.

With electronics, this can be towards higher abstraction
levels using ICs or towards smaller, more concrete levels
towards depletion regions and eventually QM events at the
particle interaction level.

Which is more interesting depends on your goals. Right now,
I want to focus on the BJT design level of abstraction. This
has nothing to do with making an amplifier. It has to do
with using an amplifier as an excuse to learn but also as a
well defined outcome that can then be measured and observed
using well-understood measurement criteria (and the ability
to experience the result as a basic, visceral thing to the
ear, too.)

Jon
 

Phil Allison

"John Larkin is lying IDIOT
Discrete-transistor audio design, being such an ancient practice,
tends to refer to history and authority rather than design from
engineering fundamentals.

** How the **** would you know ?

Clearly you don't and just make things up to suit your wacky prejudices.

If I were designing an audio amp nowadays (which I certainly aren't)

** So shut the **** up.

You clueless fucking wanker.


.... Phil
 

Phil Allison

"John Larkin is lying IDIOT
I showed everybody an amp I designed,


** FFS that monstrosity is NOT any kind of audio amp.

YOU have no experience with any aspect of the subject.

YOU are nothing but a LYING PIECE OF SHIT.

**** OFF



..... Phil
 

Jon Kirwan

Signal Splitting? Can you sketch out what you're thinking?

Yeah, I think so. Something like this:
: | |
: \ |
: / R2 |
: \ |
: / |
: | |
: | |/c Q2
: +---------|
: | |>e
: | |
: |/c Q3 |
: -------| +-----
: |>e |
: | |
: | |/c Q1
: +---------|
: | |>e
: | |
: \ |
: / R1 |
: \ |
: / |
: | |

The "signal splitter" here is Q3. It's also providing gain,
too, though. The emitter and collector move in opposite
directions and the signal "splits" at Q3. (The emitter
follows the base, the collector inverts the base.)

If I read with any understanding about these things, properly
biasing Q3 is a pain, the Q3 gain varies with the load itself
as well as its bias, and compensation issues are complicated
a bit.
The Wikipedia-type circuit can use a few 2N2222s - I count a max of 5.
Even if you could use only 2N2222s it would not be a good idea - making
the circuit stable would be more difficult.

Yes, ignorant as I am still of the details, I think that's
very true. The splitter has significant signal voltage on
its input and I've read that pole-splitting methods for
improving stability are harder to apply here.
On the good side the 2N2222
is a good choice for Q1, Q2 and as active replacements for R5, R6 and
one other (optional) we haven't met yet. What makes it a good
transistor is the large current gain / bandwidth product and the flat
DC current gain over a wide range of viable bias currents. Both
contribute to low distortion.

http://www.onsemi.com/pub_link/Collateral/P2N2222A-D.PDF (page 3
graph)

compared to say 2n3904
http://www.onsemi.com/pub_link/Collateral/2N3903-D.PDF

where the flat portion of the DC gain curve is over a very limited
range.

Interesting point to consider. Something that had slipped by
me, so far.
I come up with a figure of 50 volts rail to rail no load voltage -
after picking out a common transformer with 15% regulation.

Okay. This is going to force me to sit down with paper and
work through. I was stupidly imagining +/-18V max, or 36V
rail to rail. I haven't considered the details of the output
section yet, driving a load from rails that run up and down
on capacitors that charge and discharge at 1A-level currents
into the load, and perhaps I need to spend some more time
there before moving on.

There are so many ways to cut this. Start at the input and
that's one focus that may work okay. Start at the output
stage and that provides important power supply information,
though. So maybe I should start at that end?
I'm looking at some of your other posts and I don't think you need a
maths lesson from me. If you want to do a power supply, great. It's a
small one so nothing much to it. If you don't want, I'm OK too.

I still haven't been down the path on my own, yet. So I
don't have strong opinions about this. It's like going to
Disneyland for the first time. Which land should I go to,
first? Later, after being there a few times, I may look at
the flow of people and decide that "Adventureland" is the
best first start. But first time out? Who knows? I'm open
to guidance. Everything is new.
I think that's very much the preferred way.

I feel more comfortable assuming it, too.
Exactly right! There are two common ways to reduce/remove any offset
from the output. Neither is shown on the wikipedia circuit. If you
have another split rail circuit it will certainly have one method -
both methods involved use the diff amp.
Thanks.


No. That's all correct. I'll show a different circuit later.

Okay. I'll enjoy the moment when it happens.

Okay. So I am picking up details not too poorly, so far.
There is the parallel resistance of R5 x Beta Q1 as well, but this is
normally so high it won't affect the result. And if R5 is replaced
with an active device it can become essentially infinite.

Okay. I've got that detail from other discussions, too. So
yes, understood. Also, I mentioned replacing R5, I think. In
replacing R5 with active parts, I'm thinking of two BJTs in a
usual form that seems to work pretty well over supply
variations.
If you can do that, the military wants you to EMP-harden all their
electronics. The input is a little different because some user always
wants to stick a bloody great big long wire onto it.

:)

I actually _do_ work on low-mass, direct-contact temperature
measuring devices designed to work within a microwave
environment. (But no electronics or metals inside.)

But you brought up the microwave environment, so I hope you
don't mind the teasing about it.
It sets the bandwidth of the VAS stage so you can use negative
feedback without the whole thing turning into smoke. Do you know of
control theory / Bode diagrams? There is a minuscule amount needed
for this app.

I am familiar with _some_ closed loop control theory,
sufficient to get me by with PID controls (using _and_
writing code for them.) Bode diagrams are something I have
not used, though I've seen them. My math is adequate, I
suspect. But I will have to read up on them, I suppose.

For Laplace analysis, I'm familiar with complex numbers,
poles and zeros, partial fraction extractions, and so on.
Just inexperienced in the "short cuts" that many use to get
(and think about) answers.
Try working through the thermal stabilization. Just make a stab at the
transistor junction temperatures - it will be pretty hot (unless you
can afford mega bucks for heatsinking)

I need to understand the output configuration a little better
before I do that.

Including thinking more closely about swinging one end of an
output cap around so that 1Amp rms can pass through it at
20Hz. I = C dv/dt, but V=V0*sin(w*t), so I=C*w*V0*cos(w*t).
Assuming max current at the max slew rate for a sine at phase
angle zero, the w*t is some 2*PI*N thing, so cos(w*t) goes to
1. That makes I=w*C*V0. But w=2*pi*20, or about 126 or so.
So I=126*C*V0. So with I=1A, C=1/(126*V0). With V0=15V, I
get about 530uF for the output cap. That's an amp peak only
at the right phase, too. It'll be less elsewhere. To make
that an amp rms, the cap would need to be still bigger.
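As a quick sanity check on that arithmetic, here's a minimal Python
sketch using the same illustrative numbers (an ideal 20Hz sine, 1A
peak through the cap, 15V peak swing); none of these are meant as a
recommendation:

    import math

    f    = 20.0     # lowest frequency of interest, Hz
    I_pk = 1.0      # desired peak current through the output cap, A
    V0   = 15.0     # peak voltage swing across the cap, V

    w = 2 * math.pi * f          # about 126 rad/s
    C = I_pk / (w * V0)          # from I = w*C*V0 at the steepest point
    print("C = %.0f uF" % (C * 1e6))   # roughly 530 uF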

Peak current via the cap will take place right about the time
when the two BJTs' emitters are at their midpoint. One of
the BJTs will be supplying that. Not only that, but also
depending upon class mode of operation, supplying current to
the other one as well. How much is important to figuring out
the wattage.

I need to sit down with paper, I suspect. But if you want to
provide some suggested thinking process here, I'd also be
very open to that, as well. I'll take a shot at it either
way, but it helps to see your thinking, too. If you can
afford the moment for me.
I don't mind. Earlier I put a stab at a no load worst case voltage,
you can use that if you want to. Until you get to output stage power
dissipation that is all you need.

Maybe I'd like to focus on understanding different output
pair configurations, first. I frankly don't like the "haul
the output pair around with a collector on one side and a
resistor on the other with a rubber diode in between to keep
them biased up" approach. It's smacks of heavy-handedness
and I simply don't like the way it looks to me. Everything
tells me this works, but it is indelicate at the very least.

However, it is crucial that I understand it in detail before
deciding what I really think about it. For example, I might
want to replace the resistor with a current source. But
without apprehending the output stage more fully, its time
domain behavior over a single cycle for example, I'm not
comfortable with hacking it here and there, ignorantly.

Jon
 

Jon Kirwan

<snip>
I like the biasing scheme mentioned by Jon and use it for all my
designs except the early ones using germanium transistors, though
I don't know the name either. The biasing transistor can be
mounted on the output transistors' heatsink for temperature
tracking.
<snip>

Okay. I'm giving this a little more thought -- as it applies
to temperature variation. The basic idea is that the two
bases of the two output BJTs (or output BJT structures) must
be separated a little bit in order to ensure both quadrants
are in forward conduction. With a "Vbe multiplier" in place
and with its own BJT tacked onto the same heat sink, the idea
is that the Vbe multiplier's own voltage separation will
shrink as temperature rises, exactly in some proportion
needed to maintain the designed forward conduction
relationship of the output BJTs.

To be honest, this designed forward conduction mode may not
be critical. It might move a class-AB around a little within
its AB operation, for example, if the voltage tracking with
temperature weren't flawlessly applied. And that may be
harmless. I don't know. On the other hand, if tweaked for
class-A I can imagine that it might move the operation into
class-AB; if tweaked for lower-dissipation class-AB it might
move the operation into class-B; and if class-B were desired
it could move it into class-C with associated distortion.

There are several parts of the basic Shockley equation. One
is the always-in-mind part that includes a kT/q part in it
and relates that to Vbe. The other is the Is part and Eg is
the key there. So one thing that crosses my mind is in
selecting the BJT for the "rubber diode" thingy. Unless its
Vbe (at 27C and designed constant current) and its Eg are the
same, even though it is a small signal device, doesn't that
mean that the variations over temperature will be two lines
that cross over only at one temperature point? In other
words, basically matches nowhere except at one temperature?

It seems crude.

I've seen this as a modification. In ASCII form:
: A
: |
: ,---+---,
: | |
: | \
: | / R3
: \ \
: / R2 /
: \ |
: / +--- C
: | |
: | |
: | |/c Q1
: +-----|
: | |>e
: \ |
: / R1 |
: \ |
: / |
: | |
: '---+---'
: |
: B

We've already decided that R1 might be both a simple resistor
plus a variable pot to allow adjustment. The usual case I
see on the web does NOT include R3, though. However, I've
seen a few examples where R3 (small-valued) exists and one of
the two output BJTs' base is connected at C and not at A.

The above circuit is a somewhat different version of the Vbe
multiplier/rubber diode thing. The difference being R3,
which I'm still grappling with.
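Just to pin down what the multiplier itself does before worrying
about R3: ignoring base current, the R2/R1 divider forces V(A-B) to
about Vbe*(1 + R2/R1), and with R3 in the collector leg the extra
A-to-C drop is roughly the standing chain current times R3 (assuming
nearly all of that current flows in R3). A minimal Python sketch with
placeholder values, not a recommendation:

    Vbe = 0.65        # V, multiplier transistor's Vbe at its standing current
    R1  = 4.7e3       # ohms, base-to-emitter resistor (or resistor plus pot)
    R2  = 4.7e3       # ohms, resistor from node A down to the base
    R3  = 22.0        # ohms, the small collector resistor in question
    I_chain = 5e-3    # A, assumed standing current from the driver stage

    V_AB = Vbe * (1.0 + R2 / R1)   # ~2*Vbe here, i.e. two junction drops
    V_AC = I_chain * R3            # extra drop if a base is taken from C
    print("V(A-B) = %.2f V, V(A-C) = %.3f V" % (V_AB, V_AC))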

But does anyone know, before I go writing equations all over
the place, why R3 is added? Or is R3 just some book author's
wild ass guess?

This is all pressing me into studying the output structure
more, I guess. It basically looks simple when I wave my
hands over it, but I suspect the intimate details need to be
exposed to view. On to that part, I suppose.

Jon
 

pimpom

Jon said:
Okay. I'm giving this a little more thought -- as it applies
to temperature variation. The basic idea is that the two
bases of the two output BJTs (or output BJT structures) must
be separated a little bit in order to ensure both quadrants
are in forward conduction. With a "Vbe multiplier" in place
and with its own BJT tacked onto the same heat sink, the idea
is that the Vbe multiplier's own voltage separation will
shrink as temperature rises, exactly in some proportion
needed to maintain the designed forward conduction
relationship of the output BJTs.

To be honest, this designed forward conduction mode may not
be critical. It might move a class-AB around a little within
its AB operation, for example, if the voltage tracking with
temperature weren't flawlessly applied. And that may be
harmless. I don't know. On the other hand, if tweaked for
class-A I can imagine that it might move the operation into
class-AB; if tweaked for lower-dissipation class-AB it might
move the operation into class-B; and if class-B were desired
it could move it into class-C with associated distortion.

There are several parts of the basic Shockley equation. One
is the always-in-mind part that includes a kT/q part in it
and relates that to Vbe. The other is the Is part and Eg is
the key there. So one thing that crosses my mind is in
selecting the BJT for the "rubber diode" thingy. Unless its
Vbe (at 27C and designed constant current) and its Eg are the
same, even though it is a small signal device, doesn't that
mean that the variations over temperature will be two lines
that cross over only at one temperature point? In other
words, basically matches nowhere except at one temperature?

It seems crude.

You lost me for a while with the Eg term. You mean the emitter
transconductance?

Perhaps a short diversion into my own background may be
appropriate here. Shortage of funds and scarcity of good books
even for those who could afford them in a technologically
primitive environment kept me from delving deeply into
semiconductor physics when I started teaching myself electronics
over 40 years ago. I had advanced Math in college, but lack of
practice has made me very rusty. You're probably much better at
that.

Over the years, I developed my own shortcuts and approximations
using mostly basic algebra, trigonometry and bits of calculus
here and there, blended with empirical formulas.

In any case, the Shockley equation seems to hold fairly well in
practice for the purpose of bias regulation within the
temperature range normally encountered. Temperature tracking with
simple circuits like diodes in series or a Vbe multiplier cannot
be more than approximate. Such a device can sense only the
heatsink temperature and, except under long-term static
conditions, that temp will almost always be different from Tj of
the output devices. That Tj is what needs to be tracked and when
the output transistors are pumping out audio power, that
difference can be tens of degrees.
I've seen this as a modification. In ASCII form:


We've already decided that R1 might be both a simple resistor
plus a variable pot to allow adjustment. The usual case I
see on the web does NOT include R3, though. However, I've
seen a few examples where R3 (small-valued) exists and one of
the two output BJTs' base is connected at C and not at A.

The above circuit is a somewhat different version of the Vbe
multiplier/rubber diode thing. The difference being R3,
which I'm still grappling with.

I've seen R3 used in that position too, but never gave it much
thought until you brought it up. Offhand I still can't see a
reason for it either. Maybe for stability against a local
oscillation? Perhaps taking some time to think about it will
bring some revelation. Or someone else can save us the trouble
and enlighten us.
But does anyone know, before I go writing equations all over
the place, why R3 is added? Or is R3 just some book author's
wild ass guess?
A possibility. But I wouldn't go out on a limb and call it that
:)
 

Jon Kirwan

You lost me for a while with the Eg term. You mean the emitter
transconductance?

No. Eg is the effective energy gap, specified in electron
volts. Eg (and Tnom, the nominal temperature at which the Is
used in the Shockley equation is given) are
used to account for and calibrate the variation of Is over
the BJT's temperature. In other words, Is is a function of
T, namely Is(T), and not a constant at all.

If you solve the Shockley equation for Vbe and then look at
the derivative (partial, since Is is momentarily taken as a
constant) of it with respect to temperature, you will see
that it varies in the _wrong_ direction... the sign is wrong:

Id(T) = Is(T) * ( e^( q*Vd / (k*T) ) - 1 )

which becomes:

Vd(T) = (k*T/q) * ln( 1 + Ic/Is(T) )

The derivative is then trivially:

d Vd(T) = (k/q) * ln( 1 + Ic/Is(T) ) dT

which is a positive trend, very nearly +2mV/K for modest
Ic... but __positive__.

Does that make sense? It just is wrong. BJTs don't _do_
that. The figure is more like -2mV/K. So why is the sign
wrong?

Because that isn't the whole picture. "Is" also varies with
temp. As in:

Is(T) = Is(Tn) * (T/Tn)^3 * e^( -(q*Eg/k) * (1/T-1/Tn) )

where "Tn" is the nominal calibration temperature point.

The new derivative is a bit large. To get it onto a silly
post page with some chance that it won't sprawl for lines and
lines, I have to set up these math phrases.

Assume:
X = T^3 * Isat * e^(q*Eg/(k*Tnom))
Y = Tnom^3 * Ic * e^(q*Eg/(k*T))

Then the derivative is (if you use fixed-spaced ASCII):

           k*Tn*T*( (X+Y)*ln( (X+Y)/(Isat*T^3) ) - 3*Y ) - q*Eg*(X*T + Y*T + Y*Tn)
d Vd(T) =  ------------------------------------------------------------------------- dT
                                    q * Tn * T * (X+Y)

What a mess, even then. Here again, Tn is the nominal
temperature (in Kelvin, of course) at which the device data
is taken and Eg is the effective energy gap in electron
volts for the semiconductor material. Of course, 'k' is the
usual Boltzmann's constant, q the usual electron charge
value, and T is the temperature of interest.

Eg often defaults to around 1.11eV in spice, I think. For an
Ic=10uA and a stock Isat of about 1E-15, the figure comes out
to about -2.07mV/K in the vicinity of 20 Celsius ambient.

Which is the more usual value.
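If it helps, here's a minimal Python sketch that just differentiates
Vbe(T) numerically from the two equations above, once with Is held
constant and once with Is(T) included. Taking Isat as given near 20C
is an assumption on my part; the other values are the illustrative
ones already mentioned:

    import math

    k = 1.380649e-23       # Boltzmann constant, J/K
    q = 1.602176634e-19    # electron charge, C

    Eg   = 1.11            # eV, effective energy gap (typical spice default)
    Isat = 1e-15           # A, Is taken as given at Tn (an assumption)
    Tn   = 293.15          # K, about 20 C ambient
    Ic   = 10e-6           # A

    def Is(T):
        # Is(T) = Is(Tn) * (T/Tn)^3 * e^( -(q*Eg/k) * (1/T - 1/Tn) )
        return Isat * (T/Tn)**3 * math.exp(-(q*Eg/k) * (1.0/T - 1.0/Tn))

    def Vbe(T, temp_dependent_Is=True):
        # Vd(T) = (k*T/q) * ln( 1 + Ic/Is(T) )
        i_s = Is(T) if temp_dependent_Is else Isat
        return (k*T/q) * math.log(1.0 + Ic/i_s)

    dT = 0.01
    for flag, label in ((False, "Is held constant"), (True, "Is(T) included")):
        slope = (Vbe(Tn + dT, flag) - Vbe(Tn - dT, flag)) / (2 * dT)
        print("%-16s  dVbe/dT = %+.2f mV/K" % (label, slope * 1e3))
    # roughly +2 mV/K with Is held constant, roughly -2 mV/K with
    # Is(T) included -- the sign flip discussed above.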

The "Is" term is the y-axis intercept, which isn't actually
measured, by the way, but instead extrapolated from measured
values elsewhere.

All this is the reason I was asking about the voltage bias
mechanism (that rubber diode/Vbe multiplier thing) and
selecting its BJT vs those in the output stage. (Which, if
PNP _and_ NPN are used, probably themselves do not vary the
same as either other, even, so there is another problem there
as well.) It fries my brain thinking about selecting
"perfect" parts for this.

Another issue I'm starting to wonder about is sweeping out
charges in the BJTs at higher frequencies and providing
sufficient drive current to do it quickly enough. But one
thing at a time, I guess.
Perhaps a short diversion into my own background may be
appropriate here. Shortage of funds and scarcity of good books
even for those who could afford them in a technologically
primitive environment kept me from delving deeply into
semiconductor physics when I started teaching myself electronics
over 40 years ago. I had advanced Math in college, but lack of
practice has made me very rusty. You're probably much better at
that.

Your own experiences sound very much like mine, except that
you _did_ something with yourself in this area when I did
not. Something I very much respect in you and disrespect in
me. I grew up poor enough that I had to literally live in
homes without walls and work the fields as a laborer child
(before the laws that now prevent it, sad in some ways, good
in others) in order to eat and survive. So I understand
"shortage of funds" in my very gut. Perhaps what differed a
little is that I also was living near Portland and there was
a library system I could access, riding my bicycle as a kid.
And I would sometimes even take a bus and use the university
library (particularly the 5th floor where the science
subjects were located.) I scored an 800 on the math section
of my SAT and was rewarded with entry into a university
scholars program at PSU. However, I had to work to pay for
the classes and books and in the end I simply couldn't handle
all of it on my own. Without a dad (he died when I was 7)
and no family to help out, I couldn't manage to do everything
and get by at school, too. So I dropped out well before the
1st year completed. Everything I know is self-taught. It's
a commitment.

I have been honored by being asked by Portland State
University to temporarily teach as an adjunct professor,
though. And I did that for a few years until they could find
their replacement professors. I enjoyed it and I think I did
well. When I visited the department, last year after some
dozen years of absence, I was greeted in the hallways by many
others who I sometimes barely remembered with sincere smiles
and talks of those days. So I must have made some kind of
impression there.

Maybe a difference is that I've made the study of mathematics
a centerpiece for me. Besides, it's central to the work I do
so I can't really ignore it. But since I love studying it, I
would do it, anyway. None of this means I'm properly trained
in it. However, even these days I get to spend time almost
every month or two with sit-down time with an active
physicist to get some additional education in Lie Groups or
catastrophe theory or reflection spaces and manifolds, and so
on. I find I really love both finite and infinite group
theory work.
Over the years, I developed my own shortcuts and approximations
using mostly basic algebra, trigonometry and bits of calculus
here and there, blended with empirical formulas.

And here, most likely, is our fundamental difference. I
cannot remember things without understanding their deep
details. I lump this to my "autism." (I have two disabled
children on the spectrum, the youngest is almost exactly like
I was at his age.) When I took calculus at college, it was
all a blur trying to remember what was called what and how
they applied. However, if I _understood_ it deeply, could
picture it well, I could re-derive almost anything on the
spot when I needed it for a test. In other words, while most
of the other students appeared to simply take notes and keep
track of details (and shortcuts) many of which they'd
remember, I couldn't work like that. My memory was _zero_
for names of people, and similarly for names of math
formulas. I had to understand them viscerally and "see" them
well, in order to be able to remember the concept. However,
I still couldn't remember the specific "formula." Just the
concept -- the visualization, the image. That wouldn't
provide me with an answer to a problem, merely an approach
that "seemed right." So I would simply use that image to
guide me in re-deriving the formula from scratch, every time.
The upshot was that I took longer than most in completing my
tests, because I spent so much additional time quickly
running through the derivations of the rules I needed, but
where I answered the problems I got them right.

I've never been satisfied, as a result of my own limitations
here, to memorize shortcuts and approximations. It doesn't
give me "sight." They are useless to me, since I cannot use
them for any other derivations, since they are themselves
only blunted tools for specific purposes that cannot be used
to extrapolate anywhere else. Which then forces me to depend
upon a memory I don't have. What I need is to _see_ the
physics itself so that I can then derive those approximations
and shortcuts on the fly, deduced to the specific situation
I'm facing at the time.
In any case, the Shockley equation seems to hold fairly well in
practice for the purpose of bias regulation within the
temperature range normally encountered.

No, it doesn't. Because the SIGN is wrong!! The Vbe doesn't
rise with rising temperature, it falls.
Temperature tracking with
simple circuits like diodes in series or a Vbe multiplier cannot
be more than approximate.

That seems to be what I'm getting. One "lucky" circumstance
seems to be that the Vbe multiplier is supposed to produce
about two Vbes with k=2 in k*Vbe, just when there are two
Vbe's in the output structure. That way, if the Vbe in the
multiplier moves around with temperature, the multiplier
doubles it in just the right way to handle the actual two
Vbes in the output pair. If it had been needed to set k=3 or
k=1.5 or k= anything else, there'd have been a problem again
because they wouldn't vary together.

But this brings up the other problem I am talking about. If
Eg isn't the same figure, the slope over temperature for the
Vbe multiplier and the output BJTs won't be the same slope.
That means they can intersect at some temperature, but never
really be right anywhere else at all.

Worse than this is the fact that PNPs are used on one side
and NPNs on the other. They _cannot_ possibly vary their
Vbes in matched ways. It's got to be a nasty problem. And
it seems to argue, in my mind, for some modified version of a
quasicomplementary structure on the output. What argues
against it so much is that, again, the driving structure
before the quasi structure is driving two kinds of quadrants
and this means the cross-over area _must_ be nasty looking,
indeed.

Because of that, I searched around and found out that there
is a correction structure to fix the quasi crossover problem.
It appears to use something called a Baxandall diode, though
for now I haven't learned the details of how it does what it
does.
Such a device can sense only the
heatsink temperature

But which quadrant do you decide to attach it closer to?
and, except under long-term static
conditions, that temp will almost always be different from Tj of
the output devices. That Tj is what needs to be tracked and when
the output transistors are pumping out audio power, that
difference can be tens of degrees.

I can believe it!
I've seen R3 used in that position too, but never gave it much
thought until you brought it up. Offhand I still can't see a
reason for it either. Maybe for stability against a local
oscillation? Perhaps taking some time to think about it will
bring some revelation. Or someone else can save us the trouble
and enlighten us.

It is often a small value, 10s of Ohms. It might just be
what you are talking about because I've often seen people
talk about needing a 33 ohm base resistor on emitter follower
BJTs to snub high frequency oscillations. So you might be on
the right track there. Hopefully, someone else does know and
will feel like saying.
A possibility. But I wouldn't go out on a limb and call it that
:)

Hehe. :)

Jon
 

Jon Kirwan

Assume:
X = T^3 * Isat * e^(q*Eg/(k*Tnom))
Y = Tnom^3 * Ic * e^(q*Eg/(k*T))

Sorry, should be consistent in terms with:
X = T^3 * Isat * e^(q*Eg/(k*Tn))
Y = Tn^3 * Ic * e^(q*Eg/(k*T))

Jon
 

Jon Kirwan

Because of that, I searched around and found out that there
is a correction structure to fix the quasi crossover problem.
It appears to use something called a Baxandall diode, though
for now I haven't learned the details of how it does what it
does.

Here's some articles I found on the web by a single author on
audio amplifier design:

http://www.planetanalog.com/article/printableArticle.jhtml?articleID=205207238&printable=true
http://www.planetanalog.com/article/printableArticle.jhtml?articleID=205601405&printable=true
http://www.planetanalog.com/article/printableArticle.jhtml?articleID=205801115&printable=true
http://www.planetanalog.com/article/printableArticle.jhtml?articleID=205917273&printable=true
http://www.planetanalog.com/article/printableArticle.jhtml?articleID=206103226&printable=true

The last one of the above links __mentions__ the Baxandall
diode.

In looking at those, there is this one also listed at the
bottom of the last article above. I haven't read this one
yet, but include it just the same:

http://www.planetanalog.com/article/printableArticle.jhtml?articleID=205202120&printable=true

I need to read all of these, I suppose.

Jon
 

Paul E. Schoen

Jon Kirwan said:

This stuff is a bit too intense for my taste, but a Dogpile search of
Baxandall Diode turned up this manual for an amplifier that uses tubes and
transistors, and has a Baxandall diode in the output stage. Its stated
purpose was to "improve symmetry" and appears to add an additional diode
drop for the PNP-NPN pair to match the NPN-NPN darlington on the top side.

http://www.wimdehaan.nl/downloads/technicalmanualnishiki41.pdf

This article states that a Baxandall diode made little change in linearity:
http://www.embedded.com/design/206801065?printable=true

There is also a Baxandall Tone Control circuit which is discussed here:
http://digitalcommons.calpoly.edu/eesp/14/

For my own purposes, distortion of anything less than about 1% is probably
not worth paying for or striving to achieve. For audio, my ears are not all
that good and I would probably welcome distortion in the form of non-linear
frequency response to compensate for degraded sensitivity at the high end.

And there is also the argument that any sound that is naturally produced
will have some significant distortion that is actually part of the
listener's experience. Whatever the acoustics in a given auditorium may be,
they contribute to the waveshape as it is received by the listener's ears,
and it varies depending on where one is seated. Sometimes added distortion,
such as an echo, may enhance the enjoyment of the music, and coloration due
to an imperfect amplifier might just as easily be perceived as pleasant
rather than objectionable. It is in fact distortion that causes an
audiophool to prefer the "warm" sound of a tube amplifier over a laboratory
grade solid state amplifier.

I am more impressed with amplifiers that are extremely efficient, such as
PWM amps. And for some types of test equipment that I have designed, I had
to deal with maintaining phase shift to better than 1 degree into a range
of inductive, resistive, or capacitive loads, and with outputs of power
line frequencies of 45-450 Hz, for voltage sources up to 300 VAC, and
current sources up to 100 amperes, at 50 VA to 300 VA or higher. And they
had to be able to withstand overloads and short circuits.

Paul
 

Jon Kirwan

This stuff is a bit too intense for my taste,

It's gravy to me. If only I had the good sense to be able to
tell if it is being comprehensively and accurately stated.
but a Dogpile search of
Baxandall Diode turned up this manual for an amplifier that uses tubes and
transistors, and has a Baxandall diode in the output stage. Its stated
purpose was to "improve symmetry" and appears to add an additional diode
drop for the PNP-NPN pair to match the NPN-NPN darlington on the top side.

http://www.wimdehaan.nl/downloads/technicalmanualnishiki41.pdf

I think that very same purpose is what I took it to mean,
too. In a quasi-complementary output stage, the gain curve
(less than 1 everywhere) shown over output would have to show
an interesting tweak, midrange. In class-B especially, from
one side it would look one way, from the other, somewhat
different. Simply because the two quadrants just aren't the
same structure. One uses two NPNs, the other an NPN and a
PNP. That weirdness in gain has to translate to distortion
of some kind. The fix, I'd read, is to use a diode on the
complementary NPN/PNP side (and a resistor in parallel, I
gather.) Supposedly, it flatten out the gain curve in just
the right amount to balance things pretty well. It's an
interesting point, if true, because the quasi-complementary
output stage is attractive in that it can use the exact same
NPN part number for both quadrants' output BJTs.

I need to post up some different output structures in the
vain hope that someone will help me walk through an analysis of
them.
This article states that a Baxandall diode made little change in linearity:
http://www.embedded.com/design/206801065?printable=true

Thanks for the article. Same author!! If I read closely
enough what he is saying, he is saying that the Baxandall
diode adds little _in the case of class-A operation_. In the
case of class-B, I think he argues it is worth having!

He writes,

"The choice of class A output topology is now simple.
For best performance, use the CFP. Apart from greater
basic linearity, the effects of output device
temperature on Iq are servoed out by local feedback,
as in class B. For utmost economy, use the quasi
complementary with two NPN devices: these need only a
low Vce(max) for a typical class A amp, so here is an
opportunity to recoup some of the money spent on
heatsinking.

"The rules are different from class B; the simple quasi
configuration will give first class results with
moderate NFB, and adding a Baxandall diode to simulate
a complementary emitter follower stage makes little
difference to linearity."

I think I represented the meaning of these two paragraphs,
accurately. And if you look closely at Figure 4, you will
see that there are two curves -- both are for quasi-
complementary outputs. However, one of them is class-B --
the really nasty-looking one. That one cries out for fixing
and is exactly what I was just talking about, above!!! So
the Baxandall diode really seems to be useful in allowing one
to _select_ class-B operation without having to pay much for
it. Makes class-B lots more attractive with quasi-
complementary outputs.

Let me know if you think otherwise.
There is also a Baxandall Tone Control circuit which is discussed here:
http://digitalcommons.calpoly.edu/eesp/14/

I'd need to download that thing to read it, I guess. For
now, I merely suspect that Baxandall writes about ideas from
time to time and isn't known for one thing.
For my own purposes, distortion of anything less than about 1% is probably
not worth paying for or striving to achieve. For audio, my ears are not all
that good and I would probably welcome distortion in the form of non-linear
frequency response to compensate for degraded sensitivity at the high end.

I don't have a number in mind because I'm _very_ ignorant
about what I'd care about and what I wouldn't. I _do_ know
one thing.... I really _hate_ the 10% THD computer speaker
systems. That much I do know. There is a place in hell for
people who pawn those things off as amplifiers with a nickel.
And there is also the argument that any sound that is naturally produced
will have some significant distortion that is actually part of the
listener's experience.

hehe. I'm imagining Mister Magoo right now and what he'd
consider "good." ;) With Magoo as the "listener" ....
Alfred E Neuman's "What, me worry?" comes to mind regarding
any amplifier system.

Slightly more seriously, best of all would be that we somehow
analyze each and every person's brain's responses to sound,
in real time if possible, from the conscious interpretation
back through to the cochlea and the transducers nearby, to
the environment around it, and use a DSP to process the
content first before driving a speaker system, at all.

I expect to be dead before that happens, though.

I'm good ignoring "listener's experience" and focusing on a
more objective measure of some kind, letting the chips fall.
Whatever the acoustics in a given auditorium may be,
they contribute to the waveshape as it is received by the listener's ears,
and it varies depending on where one is seated. Sometimes added distortion,
such as an echo, may enhance the enjoyment of the music, and coloration due
to an imperfect amplifier might just as easily be perceived as pleasant
rather than objectionable. It is in fact distortion that causes an
audiophool to prefer the "warm" sound of a tube amplifier over a laboratory
grade solid state amplifier.

Those arguments are "beyond my pay grade." I'll just retreat
to something I can actually compute.
I am more impressed with amplifiers that are extremely efficient, such as
PWM amps. And for some types of test equipment that I have designed, I had
to deal with maintaining phase shift to better than 1 degree into a range
of inductive, resistive, or capacitive loads, and with outputs of power
line frequencies of 45-450 Hz, for voltage sources up to 300 VAC, and
current sources up to 100 amperes, at 50 VA to 300 VA or higher. And they
had to be able to withstand overloads and short circuits.

I like efficiency as one goal. Especially as I'm getting to
understand just how much power can be wasted without much
value. This 10W thing, if we are talking about class-A and
planning for 6db overhead (4X) as someone I read in one of
those articles saying about it, might mean a 40W capability
into 8 ohms, rails that are way out there and power waste
that starts to look like a toaster. So I'm beginning to get
my head turned 'round even at 10W!! Cripes, that spec is
rapidly becoming something I'm beginning to respect a lot
more and to realize that I might have landed on a number that
is better for teaching than I'd first imagined it to be.

Jon
 

pimpom

David said:
If I had a split power supply I would *always* get rid of the output
capacitor. It is not difficult to get the output DC to within 50mV of
gnd. A weird thing I have noticed, and I think you would have noticed
it sooner, is that no one, even audio "golden ears", pays serious
attention to the output cap. They just stick a plain old electrolytic
of no particular type (sometimes it's a bipolar) in the output, make
it bigger than needed for the LF -3dB corner and call it "good". It
would seem that some attention should be paid to "ripple" current at
frequencies like 20kHz etc, so some low-ESR caps would seem mandatory.
That music has relatively less high frequency components is the only
reason I can think of that this very lax approach might work.

If I may inject a comment here: I strongly support the idea of
avoiding an output coupling capacitor. I always use a split-PS,
OCL configuration unless some other consideration makes it
necessary to use a single-ended PS.

The comment about DC offset at the output terminal reminds me of
an experience I had more than 20 years ago. I was asked to spruce
up the P.A system at our state legislators' main session hall.
One of the things I did was to replace the old tube power amp
with my own design. I built four 60-watt amps (3 in use, one
spare) using 2N3055 BJTs in quasi-complementary configuration (I
couldn't easily get true complementary pairs then). Since the
existing system distributed audio power to dozens of small
speakers, inside and outside the hall, over a standard 100-volt
line, I integrated a 4-ohm input, 100V output transformer in my
amps.

When I first tested the system, one output transistor each in two
of the amplifiers warmed up quickly even without any output - not
actually hot, but warmer than they should be. After a few moments
of puzzlement, I traced the culprit to slight DC offset at the
output terminal. It was only a small fraction of a volt and
wouldn't have mattered with direct coupling to a speaker. But the
DC resistance of the primary winding of the output transformer
was so low (a fraction of an ohm) that it forced one of the
output transistors to draw a substantial amount of DC current at
idle.

I further traced the cause of offset to poorly matched
transistors at the input differential stage. I didn't include
provision for manual balancing of the static DC level, so I tried
out a few transistors for the input stage until I got a pair that
matched closely enough to reduce the offset to within a millivolt
or so (there was no hope of obtaining a factory-matched pair).

I know this has no direct relevance to the discussion, but I was
partly reminiscing and partly thinking that it may not be a bad
idea to give a real-life example of how easy it is to overlook
something.
 

Paul E. Schoen

pimpom said:
If I may inject a comment here: I strongly support the idea of avoiding
an output coupling capacitor. I always use a split-PS, OCL configuration
unless some other consideration makes it necessary to use a single-ended
PS.

The comment about DC offset at the output terminal reminds me of an
experience I had more than 20 years ago. I was asked to spruce up the P.A
system at our state legislators' main session hall. One of the things I
did was to replace the old tube power amp with my own design. I built
four 60-watt amps (3 in use, one spare) using 2N3055 BJTs in
quasi-complementary configuration (I couldn't easily get true
complementary pairs then). Since the existing system distributed audio
power to dozens of small speakers, inside and outside the hall, over a
standard 100-volt line, I integrated a 4-ohm input, 100V output
transformer in my amps.

When I first tested the system, one output transistor each in two of the
amplifiers warmed up quickly even without any output - not actually hot,
but warmer than they should be. After a few moments of puzzlement, I
traced the culprit to slight DC offset at the output terminal. It was
only a small fraction of a volt and wouldn't have mattered with direct
coupling to a speaker. But the DC resistance of the primary winding of
the output transformer was so low (a fraction of an ohm) that it forced
one of the output transistors to draw a substantial amount of DC current
at idle.

I further traced the cause of offset to poorly matched transistors at the
input differential stage. I didn't include provision for manual balancing
of the static DC level, so I tried out a few transistors for the input
stage until I got a pair that matched closely enough to reduce the offset
to within a millivolt or so (there was no hope of obtaining a
factory-matched pair).

I know this has no direct relevance to the discussion, but I was partly
reminiscing and partly thinking that it may not be a bad idea to give a
real-life example of how easy it is to overlook something.

Something else to consider is a bridge output connection. You just need two
output stages, invert the phase to one, and make sure the DC voltages on
each are balanced. Another bonus is that you can get nearly 24 volts P-P
with a 24 VDC single supply. You can get very close to the rails if the
driver stage uses a slightly higher power supply voltage, so you can
optimize efficiency but at the cost of distortion (clipping).

At one time I considered making an amplifier with a dynamically adjustable
power supply so that the rails would always be just a couple of volts above
the peak output signal. It would probably be workable for a high-power
signal generator where some clipping can be tolerated as the output is
increased, but for music or other complex signals that vary unpredictably
in amplitude, there would need to be a delay in the signal long enough to
allow the power supply to adjust to what will be needed. At low
frequencies, it might be possible even to have the power supply track the
waveshape and even higher efficiency could be obtained.

I have used serial analog delay lines which are basically a bucket brigade
of switched capacitors, clocked higher than the maximum frequency required.
I designed a phase-shifting circuit for power line frequency, using an IC
that was sold at Radio Shack at the time, an SAD1024
http://www.geofex.com/sad1024.htm. I think I clocked it at a rate which
produced a 90 degree phase shift at 60 Hz, which would be 1024/0.00416 =
246 kHz. But it was prone to distortion, and it was not long before the IC
was discontinued and replaced with a dual 512 stage device that was even
worse.

There are much better ways to accomplish such feats now, but in 1980 or so
there were not many alternatives. Now the way to do it might be to digitize
the signal and then use a circular buffer to achieve whatever delay is
needed. Probably 16 bit audio sampled at 44 kHz so you can get about a 1.5
second delay with a 16 bit x 64k word memory. But with all that, probably a
PWM amp would be the way to go.

Just draining the brain through my fingers and the keyboard into
cyberspace...

Paul
 

Paul E. Schoen

David Eather said:
The adjustable power supply approach has been dubbed type "H", but like
type "G", it is almost certainly dying now... (which is sad I think - it
does cut down lines of investigation for originality and inspiration)

In the search for optimal efficiency:

http://www.irf.com/product-info/datasheets/data/irs2092.pdf

(IRF - the world's most experimenter-unfriendly company)

Here's what I found for Class H, which used a power supply boost under
certain high output conditions:
http://www.nxp.com/documents/data_sheet/TDA1562Q_ST_SD.pdf

But it has been discontinued. I didn't realize that it had already
been implemented and had a class letter. I came up with the idea in the
early 80s IIRC. And I also tried to design a switching amplifier a few
years later. Maybe they were novel ideas then and I should have pursued
them.

My idea for the switching amplifier planned to use, basically, two
programmable switching supplies driven by the input signal and its inverse.
But it ignored the necessity of supplying both positive and negative
current. The answer, of course, was an H-bridge configuration. And I think
sometimes that is known as Type H, as opposed to Class H.

Another engineer at the time thought that he could just pass a pulse-width
modulated high frequency signal through a ferrite transformer rated at
perhaps 200 VA and 40 kHz. It would be modulated by the 60 Hz nominal
signal that was to be provided as an output, and would use capacitors and
inductors to filter out the carrier. But the only way for this to work at
all would be to rectify the output, resulting in half of the waveform. And
that would also saturate the core. So it was doomed from the start, but he
actually had transformers made and PC boards built before he could be
convinced that it had a fatal error.

That IRF part sounds like a real beast. There's pretty much no need to
design a power amplifier from discrete parts if you can get that IC for
about $5 and some $2 MOSFETs to make a reliable, rugged, and efficient
amplifier. Their development kit is $200 but that's not bad for a 250W x 2
channel amp.

I guess that's what you mean about them being experimenter unfriendly. No
reason to design and breadboard and fiddle around with something if it's
already been done essentially to perfection.

Paul
 

Jon Kirwan

A 30 volt CT transformer with 15% regulation and 7% mains over-voltage,
less voltage drop for the diode bridge would give rails of +/- 25.

I'm not entirely familiar with the formal terms, but I figure
the 15% regulation you mentioned above must mean that the
ripple voltage goes from 100% to 85%. In general, the
equation for the angle would be something like:

angle = arcsin( 1 - Rf*(1-Vd/Vpk) )

This is with Rf being the ripple factor (not in percent
terms, obviously) and Vd being the sum of the diode drops
(full wave would be something like 2V) and Vpk being the
sqrt(2)*Vrms. At least, that's what the equation works out
for me on paper. (I can develop out here, if needed.)

The peak diode current happens just as the first moment of
conduction (which is neatly defined by the ripple factor, if
I understand you) and would be something like:

Ipk = 2*pi*f*C*Vpk*cos( angle )

Since the cos(arcsin(x)) is just sqrt(1-x^2), the computation
looks like:

Ipk = 2*pi*f*C*Vpk*SQRT(1-(1-Rf*(1-Vd/Vpk))^2)

There's probably some other adjustments to nail it, but that
probably gets somewhat close.

For a full wave bridge (I know, I think you were talking
about a half wave, but let's go with this for a moment) with
Vpk=25.2*SQRT(2) and Vd=2V (for two diodes in conduction in
the bridge) and Rf=.15 and f=60 (US-centric) and let's say a
C=2200uF, that gets something like Ipk = 15.2A.

None of this takes into account the average load current or
peak load currents. It just assumes that the ripple factor
is somehow known to be correct. I started out assuming that
the average load current would be defined entirely by the
droop (which is, of course, based upon the ripple factor
assumption) and the time from the peak of a previous cycle to
the point at which the above angle occurs and conduction
again begins. But it is complicated slightly by the fact
that the transformer supplies the entire current draw (if it
can) for a short part of the cycle _after_ the peak, as its
slope is less than the droop slope of the cap. I played with
accounting for all that but then decided that in most
practical cases the droop is hopefully not too excessive and
if not, then the angle after the peak isn't that much... maybe
5 to 8 degrees or so... So I decided to ignore it and just
rely upon the capacitor's droop only:

Iave = C*Vpk*Rf/((pi/2+angle)/(2*pi*f))

So in the case just mentioned, I get Iave = 1.7A. I take it
that Ipk can easily be a factor of 9 or 10 greater. Also, I
note that it would probably be helpful to have nifty charts
of some kind to help pick off details like these.
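To keep myself honest, here's a minimal Python sketch of those two
formulas with the same example numbers (25.2Vrms winding, Vd=2V,
Rf=0.15, 60Hz, 2200uF); they're only illustrative values, not a
design:

    import math

    f    = 60.0                  # line frequency, Hz
    C    = 2200e-6               # filter capacitance, F
    Vrms = 25.2                  # transformer secondary, Vrms
    Vpk  = math.sqrt(2) * Vrms   # peak secondary voltage
    Vd   = 2.0                   # two diode drops in the bridge
    Rf   = 0.15                  # ripple factor (fraction, not percent)

    angle = math.asin(1 - Rf * (1 - Vd / Vpk))          # conduction angle, rad
    Ipk   = 2 * math.pi * f * C * Vpk * math.cos(angle)
    Iave  = C * Vpk * Rf / ((math.pi / 2 + angle) / (2 * math.pi * f))

    print("angle = %.1f deg" % math.degrees(angle))     # about 59 deg
    print("Ipk   = %.1f A" % Ipk)                       # about 15.2 A
    print("Iave  = %.2f A" % Iave)                      # about 1.7 A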

Can you expand a little on what you were talking about,
though? Was that a half wave suggestion? I'm not sure I can
make sense of the rails, if so. If not, then I'd still
appreciate some of the calculations so that I can be sure I
follow all of it.

I'm guessing that if a rail is to have a minimum of 25V on it
at the bottom of the ripple, and you are talking about 15%
regulation, the peak is going to be 29.4V -- not counting the
diode drops. Add 1V for that and it's 30.4V. Another 7% on
top would be 32.7V for the peak. RMS would be that figure
divided by sqrt(2), wouldn't it? Or 23.1Vrms or so?

So would that suggest two of the cheap 25.2Vrms transformers
and plan on rails still slightly higher?

Oh, crap. The VA rating. That's another one to consider.
Later, I guess.
A dual 12.6 volt transformer would give a minimum (worst case with
transformer at full load and mains 7% under voltage) of 16.something
volts, meaning big filter caps if you were serious about getting 10 watts.
One of the reasons to go PSU first I think. (Also I live in a tiny
jerk-water town where no one knows what a custom made transformer is let
alone where you can get one wound)

I suppose the old filament-type 12.6VAC transformers must be
common everywhere. I also see that Radio Shack (yes, I'm
holding my nose for a moment) still carries some "commodity"
type 25.2VAC CT 2.0A rated transformers for about US$10.
Their 12.6VAC CT 3.0A transformers are priced identically.

So what's considered to be generally available?

Jon
 

Jon Kirwan

There's another question that comes to mind regarding the
output stage. A lot of talk seems to revolve around
"crossover distortion." Seems almost very first thing folks
talk about when discussing class of operation if not also at
other times.

Seems to me that in a three-rail power supply situation
without an output capacitor involved, the crossover takes
place near the midpoint (ground) voltage between the rails,
at a time when current into the speaker load is also near
zero. (I'm neglecting any thoughts about inductance in the
speaker and physical coupling into the air, for now.) In
other words, where power at the speaker is near zero. Is it
really that important to consider?

I was looking at that terrible large scale gain plot for the
quasicomplementary output stage on the web site recently
mentioned in the thread (the lower curve in Figure 4 on this
link):

http://www.embedded.com/design/206801065?printable=true

(It's not that terrible of a plot, as the variation is from
0.96 to 0.98 with the "normal" middle at 0.97.)

What's experience say here? Is it really so terrible as to
worry too much about something that takes place near zero
voltage, anyway? I'm just questioning the concern, for now.
I have no understanding about it, at all. Just wondering.

Jon
 

Jon Kirwan

But it is complicated slightly by the fact
that the transformer supplies the entire current draw (if it
can) for a short part of the cycle _after_ the peak, as its
slope is less than the droop slope of the cap.

I overstated this. The capacitor does supply _some_ along
with the transformer windings during this short phase, as the
voltage on the cap is also declining with it.

Jon
 

Paul E. Schoen

Jon Kirwan said:
I'm not entirely familiar with the formal terms, but I figure
the 15% regulation you mentioned above must mean that the
ripple voltage goes from 100% to 85%. In general, the
equation for the angle would be something like:

Transformers are specified with a percentage regulation, which means the
change in voltage from no-load to full-load conditions. A small,
inexpensive transformer might have 15% regulation, so that the 30VCT unit
would have a 15% higher output voltage with no load, or 34.5 VRMS, which
is about 48.8 V peak across the full winding. Mains voltage may vary +/- 7%,
or 120 VAC +/- 8, i.e. 112 to 128 VAC. At the high end of this range the
tranny puts out about 52.2 V peak. Assuming a FWB rectifier and the CT as
reference, with a 0.7 V diode drop, you get 25.4 volts peak per rail.
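
A minimal sketch, just re-running the figures above (30 VCT, 15%
regulation, mains 7% high, one 0.7 V drop with the CT as reference):

import math

# Re-run the numbers from the paragraph above.
v_rms_rated = 30.0      # full CT winding, at rated load
regulation  = 0.15
mains_high  = 1.07
v_diode     = 0.7       # one diode drop per rail, CT as reference

v_rms_no_load  = v_rms_rated * (1 + regulation)         # 34.5 VRMS
v_pk_no_load   = v_rms_no_load * math.sqrt(2)           # ~48.8 V peak, full winding
v_pk_high_line = v_pk_no_load * mains_high              # ~52.2 V peak
v_rail_peak    = v_pk_high_line / 2 - v_diode           # ~25.4 V per rail

print(f"no load: {v_rms_no_load:.1f} VRMS -> {v_pk_no_load:.1f} V peak; "
      f"high line: {v_pk_high_line:.1f} V peak; rail: {v_rail_peak:.1f} V")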

If you put a capacitor on the output, it eventually charges to the peak
voltage. This is the high limit that must be considered for design. It may
not be exact, and probably will be a bit lower, because a power transformer
is usually designed to operate in partial saturation, so the output will
not increase linearly above its design rating.

Under load, the output will drop, caused by the effects of primary and
secondary coil resistance as well as magnetic effects. These will cause
heating over a period of time, and the coil resistance will increase,
adding to the effect until a point of equilibrium is reached based on the
ambient conditions and removal of heat via conduction, convection, and
radiation.

Large power transformers, high quality audio transformers, and
instrumentation transformers are designed with perhaps 1% or 2% regulation,
which is usually accomplished by using more copper and iron, and also using
special cooling mechanisms such as oil flow and forced air.

angle = arcsin( 1 - Rf*(1-Vd/Vpk) )

This is with Rf being the ripple factor (not in percent
terms, obviously) and Vd being the sum of the diode drops
(full wave would be something like 2V) and Vpk being the
sqrt(2)*Vrms. At least, that's how the equation works out
for me on paper. (I can develop it out here, if needed.)

The peak diode current happens right at the first moment of
conduction (which is neatly defined by the ripple factor, if
I understand you) and would be something like:

Ipk = 2*pi*f*C*Vpk*cos( angle )

Since the cos(arcsin(x)) is just sqrt(1-x^2), the computation
looks like:

Ipk = 2*pi*f*C*Vpk*SQRT(1-(1-Rf*(1-Vd/Vpk))^2)

There are probably some other adjustments needed to nail it, but
that probably gets somewhat close.

Maybe it is useful to work out these equations to get a concept of what is
going on, but I prefer a more empirical method which may involve initial
rough estimates and prototyping and bench testing, as well as LTSpice
simulation. The simulator includes the equations that determine the
performance of the circuit, and may also include the effects of losses and
heating and temperature change. But usually I just use approximations and
best guesses of final operating conditions such as temperature, and use
parameters such as internal resistance based on these figures. Then it is
time to build the circuit and do real world bench testing.

[snip]
Can you expand a little on what you were talking about,
though? Was that a half wave suggestion? I'm not sure I can
make sense of the rails, if so. If not, then I'd still
appreciate some of the calculations so that I can be sure I
follow all of it.

I'm guessing that if a rail is to have a minimum of 25V on it
at the bottom of the ripple, and you are talking about 15%
regulation, the peak is going to be 29.4V -- not counting the
diode drops. Add 1V for that and it's 30.4V. Another 7% on
top would be 32.7V for the peak. RMS would be that figure
divided by sqrt(2), wouldn't it? Or 23.1Vrms or so?

So would that suggest two of the cheap 25.2Vrms transformers
and plan on rails still slightly higher?

Oh, crap. The VA rating. That's another one to consider.
Later, I guess.

I sense a lack of a real direction or intended purpose for this project. As
an academic exercise and learning experience, throwing all sorts of ideas
into the pot is worthwhile. But when it comes to the actual task of
building something useful, whether for production or a one-off hobby
project, it comes down to the three factors I offer. I can build it well, I
can build it quickly, and I can build it cheaply. Pick any TWO!

I suppose the old filament-type 12.6VAC transformers must be
common everywhere. I also see that Radio Shack (yes, I'm
holding my nose for a moment) still carries some "commodity"
type 25.2VAC CT 2.0A rated transformers for about US$10.
Their 12.6VAC CT 3.0A transformers are priced identically.

So what's considered to be generally available?

Certainly this depends on your location as well as your budget (time and/or
money) and criteria for the design. If you plan to go the cheapest monetary
route for a one-off project, look for locally available freebies in a
junkyard, flea markets, Hamfests, eBay, and www.freecycle.com. You also
must consider time and transportation or shipping expenses, which can be
high for items like transformers.

You must also balance what is readily available with what you actually need
for your project. If you have certain constraints and absolute design
criteria, you may be forced into a narrow range of what is acceptable. At
some point, you may need to modify a salvaged transformer or wind your own
(or have one custom made). There are many off-the-shelf transformers
available at reasonable cost, so it would be rare to need a custom design,
but sometimes it is the only option. You can do a lot with a MOT if you
don't mind spending the time messing with it.

And you can also get toroid transformer kits that have the primary already
wound, and you just add your own secondary. See www.toroid.com. They have
kits from 80VA ($52) to 1400VA ($110). I used four of the largest ones to
make a circuit breaker test set with an output of 2000 amperes at 2.8 volts
continuous, and the good regulation allowed it to provide pulses of over
12,000 amperes. If you find any equipment with toroid transformers, by all
means salvage them. You can also use Variacs and Powerstats and their
equivalents to make high power transformers. I have about a dozen damaged
units rated at 240 VAC at 8 amps, or 2 kVA, and I had plans to use them for
a 24 kVA test set, 4000 amps at 6 volts. Here are pictures of a 10 kVA test
set I designed for www.etiinc.com, using toroids:

http://www.smart.net/~pstech/PI2000-1-small.JPG
http://www.smart.net/~pstech/PI2000-2-small.JPG
http://www.smart.net/~pstech/PI2aux-5a.JPG

But I have digressed, and this thread has digressed from the discussion of
amplifiers to power supplies (which is related, of course), and line
powered transformers (which may not be the best choice). However, at some
point one must decide if this is to be an actual project or just an
academic discussion, and then proceed to get some parts and put something
together and plug it in. It can be done using as many "free" parts as
possible, or from the standpoint of what is the most cost-effective
overall, and in either case one must have a clear view of the end result.

Paul
 
J

Jon Kirwan

I'm not entirely familiar with the formal terms, but I figure
the 15% regulation you mentioned above must mean that the
ripple voltage goes from 100% to 85%. In general, the
equation for the angle would be something like:

angle = arcsin( 1 - Rf*(1-Vd/Vpk) )

This is with Rf being the ripple factor (not in percent
terms, obviously) and Vd being the sum of the diode drops
(full wave would be something like 2V) and Vpk being the
sqrt(2)*Vrms. At least, that's how the equation works out
for me on paper. (I can develop it out here, if needed.)

The peak diode current happens right at the first moment of
conduction (which is neatly defined by the ripple factor, if
I understand you) and would be something like:

Ipk = 2*pi*f*C*Vpk*cos( angle )

Since the cos(arcsin(x)) is just sqrt(1-x^2), the computation
looks like:

Ipk = 2*pi*f*C*Vpk*SQRT(1-(1-Rf*(1-Vd/Vpk))^2)

There are probably some other adjustments needed to nail it, but
that probably gets somewhat close.

For a full wave bridge (I know, I think you were talking
about a half wave,

Actually, the only thing you could have been talking about is
a full wave bridge + a CT transformer to get to two rails and
ground. I knew that was the only way to get there, too, and
that's why I was staying on a bridge form of it. But I was
sadly not thinking clearly about your writing when I read it.
That's entirely my fault, of course.
but let's go with this for a moment) with
Vpk=25.2*SQRT(2) and Vd=2V (for two diodes in conduction in
the bridge) and Rf=.15 and f=60 (US-centric) and let's say a
C=2200uF, that gets something like Ipk = 15.2A.

None of this takes into account the average load current or
peak load currents. It just assumes that the ripple factor
is somehow known to be correct. I started out assuming that
the average load current would be defined entirely by the
droop (which is, of course, based upon the ripple factor
assumption) and the time from the peak of a previous cycle to
the point at which the above angle occurs and conduction
again begins. But it is complicated slightly by the fact
that the transformer supplies the entire current draw (if it
can) for a short part of the cycle _after_ the peak, as its
slope is less than the droop slope of the cap. I played with
accounting for all that but then decided that in most
practical cases the droop is hopefully not too excessive and
if not, then the angle after the peak isn't that much... maybe
5 to 8 degrees or so... So I decided to ignore it and just
rely upon the capacitor's droop only:

Iave = C*Vpk*Rf/((pi/2+angle)/(2*pi*f))

So in the case just mentioned, I get Iave = 1.7A. I take it
that Ipk can easily be a factor of 9 or 10 greater. Also, I
note that it would probably be helpful to have nifty charts
of some kind to help pick off details like these.
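
As a check on those two figures, here is a short sketch that simply
evaluates the expressions above with the stated values (Vpk =
25.2*sqrt(2), Vd = 2V, Rf = 0.15, f = 60Hz, C = 2200uF):

import math

# Evaluate the conduction angle, peak diode current, and average current
# expressions above with the example values from the post.
f, C, Rf, Vd = 60.0, 2200e-6, 0.15, 2.0
Vpk = 25.2 * math.sqrt(2)

x     = 1 - Rf * (1 - Vd / Vpk)     # sin(angle) at the start of conduction
angle = math.asin(x)

Ipk  = 2 * math.pi * f * C * Vpk * math.cos(angle)
Iave = C * Vpk * Rf / ((math.pi / 2 + angle) / (2 * math.pi * f))

print(f"angle = {math.degrees(angle):.1f} deg, Ipk = {Ipk:.1f} A, Iave = {Iave:.2f} A")
# Gives about 59 degrees, 15.2 A, and 1.7 A -- consistent with the post.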

Can you expand a little on what you were talking about,
though? Was that a half wave suggestion? I'm not sure I can
make sense of the rails, if so. If not, then I'd still
appreciate some of the calculations so that I can be sure I
follow all of it.

I'm guessing that if a rail is to have a minimum of 25V on it
at the bottom of the ripple, and you are talking about 15%
regulation, the peak is going to be 29.4V -- not counting the
diode drops. Add 1V for that and it's 30.4V. Another 7% on
top would be 32.7V for the peak. RMS would be that figure
divided by sqrt(2), wouldn't it? Or 23.1Vrms or so?

I should have used 2V here, too. A bridge is the way to go
and that's two drops. No need to suddenly insert a half wave
thing here when massaging the numbers around. So it should
be something more like 31.4V, which becomes 33.8V with the
7% mains margin added. And that is about 23.9Vrms, I think.
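
Working the requirement backwards the same way, here is a small sketch
(the 2V diode-drop figure follows the post; with a CT and a bridge
referenced to the CT it would be closer to one drop per rail):

import math

# Back-calculate the transformer rating from a 25 V minimum rail,
# 15% regulation, 7% low mains, and (per the post) 2 V of diode drop.
v_rail_min = 25.0
regulation = 0.15
mains_low  = 0.93
v_diode    = 2.0

v_pk_nominal = v_rail_min / (1 - regulation) + v_diode   # ~31.4 V
v_pk_margin  = v_pk_nominal / mains_low                  # ~33.8 V with mains margin
v_rms_needed = v_pk_margin / math.sqrt(2)                # ~23.9 VRMS per rail winding

print(f"peak needed: {v_pk_nominal:.1f} V -> {v_pk_margin:.1f} V with margin "
      f"-> {v_rms_needed:.1f} VRMS winding per rail")
# A 25.2 VRMS winding per rail clears this with a little room to spare.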

Luckily, it doesn't change the thought about 25.2Vrms xfrmrs.
(You might, of course.)

Jon
 