John Fields wrote:
---
After reading your post again, I think you're talking pie in the
sky. Sure, you can lower the power supply voltage to get an
operating window with less loss, but you still have to waste the
energy to make the bright LEDs as dim as the dull ones to start out
with if you use the same gross power supply for them all, and since
you're not talking PWM or some reactive scheme you're talking DC, so
you're stuck. It's still inelegant brute force, but you're just
using a smaller sledgehammer.
Then you've still got the problem of making the lamps' luminance
track as they're being dimmed, so you need an array of scalable
voltage controlled current sources which will vary the current into
the lamps depending on some nebulous open-loop calibration done at
what? one point?
If you want to go with that, then you've got to admit that all your
scheme does is trim the waste, not get rid of it.
No pies in my sky: it's engineering.
It's true that using 7 scalable voltage-controlled current sources as
series pass elements is wasteful, but the waste is minor. It would be
best to have 7 voltage-controlled switching current regulators, but the
simpler series regulators are far easier to implement and add very
little loss. Calibration would be done at one point, since
this will be a first-order match of brightness with the proportional
currents providing a close approximation to proportional luminance
values; the differences in the current/luminance curves at that point
are second-order mismatches.
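To sketch what I mean by one-point calibration (the per-channel 100%
currents below are hypothetical, just placeholders for values trimmed
once at the calibration point): every channel's full-scale current is
set once, and dimming multiplies all 7 setpoints by the same master
fraction, so the current ratios are preserved to first order.

```python
# One-point calibration sketch: each channel's full-brightness current
# is trimmed once; dimming scales every setpoint by the same master
# fraction, so the channel-to-channel current ratios never change.

# Hypothetical per-channel 100%-brightness currents in amps (values
# fixed at the single calibration point).
cal_full_scale = [0.70, 0.65, 0.68, 0.70, 0.62, 0.66, 0.69]

def setpoints(dim_fraction):
    """Per-channel current setpoints for a master dim level (0.0-1.0)."""
    return [i_cal * dim_fraction for i_cal in cal_full_scale]

half = setpoints(0.5)
# The ratios between channels are identical at any dim level (the
# first-order match); only the differences between the LEDs'
# current/luminance curves remain, as second-order mismatches.
```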
The DC supply can be tuned to a voltage slightly higher than the largest
Vf LED (and series regulator) would require at 100% rated current. Done
properly, the source is a switching regulator that self-adjusts to have
only one gate drive near (rather than at) the positive rail, the rest at
lower drive voltages. If you increased the supply voltage, the gate
drive levels would simply drop and the Vds values would increase by the
same amount the supply did. Nowhere was it specified that the supply
had to be a dumb, fixed voltage but even then a 5V supply would produce
good results in all cases.
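The headroom arithmetic, using only the numbers already in this post
(worst-case Vf of 4.47V from the Lumileds range quoted below, 100 mV
across the sense resistor, about 200 mV Vds on the pass MOSFET):

```python
# Minimum supply check with the figures from this post.  The
# self-adjusting switcher servos to just above this sum; even a dumb
# fixed 5V rail clears it with margin.
vf_worst = 4.47    # V, highest Vf in the Lumileds 700 mA range
v_sense  = 0.100   # V, drop across the 100 milliohm sense resistor
v_pass   = 0.200   # V, MOSFET Vds at full current
v_min = vf_worst + v_sense + v_pass
print(v_min)  # 4.77 V minimum, so a fixed 5V supply leaves 230 mV spare
```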
If a relatively large current sense resistor (as current sense resistors
go) of 100 milliOhm were used (100mW is hotter than I like) the voltage
drop lost to sensing would be 100 mV. The MOSFETs in series with the
worst-case Vf LED should have a drop of about 200 mV at the gate-source
voltage provided by a rail-to-rail output op amp. The worst case for
*loss* would have all LEDs at 100% current but one of the LEDs at the
highest Vf, the rest at the lowest. But in any case the overall *power*
dissipated in each LED/regulator combination would be less than or equal
to 100% rated current times the worst-case Vf of the 7 LEDs plus 300 mV
for the pass elements. The only way to improve on the loss in the "more
efficient" lower-Vf LEDs would be to have individual switchers for each
LED, still to be had for less than $4.50 each.
This is not brute force. This is engineering tradeoff. You can have a
power draw for 7 LEDs that's 7 times the power draw of the largest Vf
device with simple pass regulators (1 MOSFET, 1 OpAmp channel, a current
sense resistor and 2 set resistors for the OpAmp) presenting one voltage
that controls 7 channels. Or you can use a 1A switcher per channel for
90% efficiency overall; those still need to servo to fixed percentages.
In most cases where the Vf is similar for the multiple devices, the
efficiencies of the two approaches will differ by only a few percent.
When you suggest 30% efficiency rather than 85%-95%, you can see where
"brute force" easily loses out. The brute force approach you
recommended would produce extremely poor matching results in a system
where some LEDs have a Vf of 3.03V and others 4.47V, the range of Vf at
700 mA published in the Lumileds datasheet; the current ratios
would no longer match as the system is dimmed and brightened because of
the large disparity.
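To illustrate the mismatch (assuming, as a stand-in for the brute-force
scheme, fixed ballast resistors fed from a shared rail that is lowered
to dim, and treating each LED's Vf as constant, which is a
simplification but shows the trend): with Vf spread from 3.03V to
4.47V, the current ratio between the two extreme channels drifts badly
as the rail comes down.

```python
# Brute-force mismatch sketch: LEDs on fixed ballast resistors from a
# shared rail, dimmed by lowering the rail.  The constant-Vf LED model
# is a simplification; the drifting ratio is the point.
def ballast_current(v_rail, vf, r_ballast):
    """Current through one LED+ballast leg (zero once the rail drops below Vf)."""
    return max(v_rail - vf, 0.0) / r_ballast

r = 1.0  # ohms, hypothetical ballast value
for v_rail in (6.0, 5.5, 5.0):
    i_low  = ballast_current(v_rail, 3.03, r)  # lowest-Vf LED
    i_high = ballast_current(v_rail, 4.47, r)  # highest-Vf LED
    print(v_rail, i_high / i_low)  # the ratio shrinks as the rail is lowered
```

With current sources the ratio would hold at every dim level; with
ballast resistors the high-Vf LEDs starve first, which is exactly the
tracking failure described above.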