Maker Pro

digital low pass filter output errors

Jamie

Hi,

I am using a digital lowpass filter fed with random real numbers, and
once in a while the output of the filter will spike to zero for a while,
while the input numbers stay positive. In a 5-minute period with a
1200 Hz sample rate and 50:1 cutoff ratio (24 Hz -3dB output) there are
about 10 spikes to zero on the output. Any ideas what could cause the
filter to spike like this? I've tried floats and doubles and both cause
spikes.

cheers,
Jamie

Here is the filter code:

// 1st-order lowpass with cutoff = sample rate / 50
// (e.g. 1 kHz -3dB at 50 kHz sampling, or 24 Hz at 1200 Hz sampling).
// State kept between calls: last two scaled inputs, last two outputs.
private readonly float[] xv = new float[2];
private readonly float[] yv = new float[2];
private float next_output;

public float lowPassFilter50to1(float next_input_value)
{
    const float gain = 16.89454484f;  // makes the overall DC gain unity

    xv[0] = xv[1];
    xv[1] = next_input_value / gain;

    yv[0] = yv[1];
    yv[1] = (xv[0] + xv[1]) + (0.8816185924f * yv[0]);

    next_output = yv[1];
    return next_output;
}
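
(A quick sanity check on those constants: with a constant input x the
recurrence settles at y = 2*(x/gain)/(1 - 0.8816185924), and
2/(1 - 0.8816185924) = 16.8945..., which is exactly the gain being
divided out. So the filter passes DC at unity, and the coefficients
themselves can't produce the spikes to zero.)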
 
Martin Brown

Hi,

I am using a digital lowpass filter fed with random real numbers, and
once in a while the output of the filter will spike to zero for a while,
while the input numbers stay positive. In a 5-minute period with a
1200 Hz sample rate and 50:1 cutoff ratio (24 Hz -3dB output) there are
about 10 spikes to zero on the output. Any ideas what could cause the
filter to spike like this? I've tried floats and doubles and both cause
spikes.

cheers,
Jamie

The most likely cause is that something is trampling over your arrays
xv and yv. (yv is incidentally redundant - you only need to preserve
next_output.) I would be inclined to test the program with random
numbers of the form 17 + (rand() % 1000); that way, if the output ever
goes to zero you know that something has trampled over your working
memory.
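
Something like this, in C# since that is what the code looks like (a
sketch only; JamiesFilter is a placeholder name for whatever class
holds lowPassFilter50to1):

using System;

class FilterCorruptionTest
{
    static void Main()
    {
        var rng = new Random();
        var filter = new JamiesFilter();          // placeholder class
        for (int n = 0; n < 5 * 60 * 1200; n++)   // 5 minutes at 1200 Hz
        {
            // 17 + rand() % 1000: every input is at least 17, so a
            // zero output can only mean the filter state was trampled.
            float input = 17 + rng.Next(1000);
            float output = filter.lowPassFilter50to1(input);
            if (n > 100 && output <= 0f)          // skip start-up transient
                Console.WriteLine($"sample {n}: in={input} out={output}");
        }
    }
}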

I suspect the problem lies in how the variables have been declared.

Regards,
Martin Brown
 
Jamie

I can only second that. The structure of the filter is not immediately
springing to mind, but it seems odd from here.

And -- Java???

Hi,

I think you guys are right. I am using C# (managed code). I did a lot
of breakpoint testing, and noticed some strange behaviour when copying
one array to another. From experience I'd have to say the odds are
still that it's a bug on my end though :)

cheers,
Jamie
 
Jamie

I can only second that. The structure of the filter is not immediately
springing to mind, but it seems odd from here.

And -- Java???

Hi,

Well, after lots of trial and error I found it was a threading error,
with two threads accessing the same data array. The filter seems to
work OK now. I generated the filter from here:
http://www-users.cs.york.ac.uk/~fisher/mkfilter/
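
For anyone else who hits this: the fix boils down to making sure only
one thread at a time touches the filter state. The simplest (if not
fastest) C# form is a lock around the call - just a sketch, stateLock
is a name I made up:

private readonly object stateLock = new object();

public float LowPassFilterLocked(float nextInput)
{
    // Serialise access to the shared xv[]/yv[] state; a second
    // thread blocks here instead of corrupting the arrays.
    lock (stateLock)
    {
        return lowPassFilter50to1(nextInput);
    }
}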

cheers,
Jamie
 
Martin Brown

Thread collisions can happen in any language. It's kind of funny that a
"C with training wheels" variant like C-pound or Java would fail to be
thread-safe.

It is indeed. Even stranger is that it would appear from this that it
sets variables to zero before storing a new measurement in them.

Multiple threads reading from a variable don't do any harm, but writes
or read-modify-writes from more than one thread are always dangerous
unless they are protected from each other by a mutex or some other
mechanism.

But if thread safety were easy, practical and convenient, then it would
have been widespread since the '70s. (And before you say it -- yes, there
have been thread-safe languages since then. But none have gained
widespread acceptance, now have they?)

Sadly no, and the wheel keeps getting reinvented. Some architectures now
have indivisible read-modify-write instructions, which do help a bit.
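
In C#, for instance, those instructions are exposed through
System.Threading.Interlocked. A minimal sketch of the difference
(the counter names are illustrative only):

using System.Threading;

class Counters
{
    int racy;    // plain field: concurrent increments can be lost
    int atomic;  // only ever touched through Interlocked

    // Three separable steps (read, add, write): two threads can read
    // the same old value and one of the updates disappears.
    public void RacyIncrement() { racy = racy + 1; }

    // A single indivisible hardware read-modify-write.
    public void AtomicIncrement() { Interlocked.Increment(ref atomic); }
}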

I don't think it is taught properly or to enough people.

Regards,
Martin Brown
 
Clifford Heath

Multithread programming is fun, and if you're adequately paranoid, it
isn't so hard.

That sounds like a statement from the 0.99% group of the population.

99% of programmers believe they don't know how to do multi-threaded programming.
Of the remaining 1%, 99% actually don't know how to do it :).

The problem with it is that you cannot test reliability in - all the
testing in the world can be done, but the program may still fail in the
field, for all but the most simplistic programs at least. MT programs
must be correct by design, and there's no way in the world that can be
considered "not so hard".

Clifford Heath.
 
Jamie

Thread collisions can happen in any language. It's kind of funny that a
"C with training wheels" variant like C-pound or Java would fail to be
thread-safe.

I guess you've never taken C# for a test drive :) I originally started
with C/C++ years ago, which is superior to C# if you need low-level
hardware access or performance-critical code; however, I have been
amazed at the elegance of C# for working with complex data structures
compared to C/C++.

cheers,
Jamie
 
Jamie

That sounds like a statement from the 0.99% group of the population.

99% of programmers believe they don't know how to do multi-threaded
programming. Of the remaining 1%, 99% actually don't know how to do it :).

The problem with it is that you cannot test reliability in - all the
testing in the world can be done, but the program may still fail in the
field, for all but the most simplistic programs at least. MT programs
must be correct by design, and there's no way in the world that can be
considered "not so hard".

Clifford Heath.

Hi,

I'm a noob with multithreading, but am currently working with it, so
here is my preliminary idea for "bulletproof" multithreading that should
be able to meet most of my future multithreading needs, I think.
Basically it's an asynchronous, data-driven model where the threads run
independently at their required frequency and are scheduled by a
dedicated high-frequency task manager thread. All data transfers
between threads are locked, and, for high performance, locks that have
more than one data consumer can get a middle-man thread (running
synchronously with the data producer thread) to mirror the data (i.e.
one copy per data consumer) and so reduce thread-lock delays. All
data-producing threads lock as short a section of code as possible,
which copies the data into the shared memory.
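
In C# the producer-side copy I have in mind looks roughly like this
(a sketch only - the names and layout are my own, untested):

// One MirrorSlot per data consumer; the producer holds each lock only
// for the reference swap, keeping the critical section short.
class MirrorSlot
{
    private readonly object gate = new object();
    private float[] snapshot = new float[0];

    public void Publish(float[] data)
    {
        var copy = (float[])data.Clone();  // copy made outside the lock
        lock (gate) { snapshot = copy; }   // short section: one reference swap
    }

    public float[] Read()
    {
        lock (gate) { return snapshot; }   // consumer grabs the latest copy
    }
}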

cheers,
Jamie
 
Clifford Heath

You can't test quality into any product whatsoever--hardware or software.

No. But some of the BDD/TDD practices go a long way, if for no other
reason than that they force you to come up with a definition of
correctness before writing the code - and ensure that the code continues
to meet that definition. It doesn't mean that it's the right definition,
or that the code won't exhibit "Heisenbugs", but it helps. A lot. You
hardware types discovered that a long way before the software world did :).

I've been writing multithreaded code since OS/2 2.0 came out, in early
1992, so I've been round the block a few times. But you can get it right
if you're willing to work at it, and your management has the patience.

You're right, it can be done. But the number of applications that actually
need it is much smaller than the number of people who don't know that :).
It's a good thing some folk can get it right, or we wouldn't have any real
operating systems or DBMSs, and only poor web servers... but despite the
relentless move to parallel hardware, there won't be a wholesale move to
parallel software any time soon.

Clifford Heath.
 
Martin Brown

That sounds like a statement from the 0.99% group of the population.

99% of programmers believe they don't know how to do multi-threaded
programming.

I'd put that number today at 90% admitting they don't know how to MT :(

And it is more serious now because most machines now are multicore and
multithreaded (often used rather badly, with race conditions present).

Of the remaining 1%, 99% actually don't know how to do it :).

The problem with it is that you cannot test reliability in - all the
testing in the world can be done, but the program may still fail in the
field, for all but the most simplistic programs at least.

It is amazing just how long a few cycles of thread vulnerability in a
large program can last in a pre-emptive multitasking environment without
ever being hit. However, when it finally does happen, the chaos it can
cause is serious, since the data is no longer trustworthy.

MT programs must be correct by design, and there's no way in the world
that can be considered "not so hard".

It isn't quite so hard if you design for cooperative multitasking (with
the emphasis on *design*) or have a paranoid, defensive approach of
saving and restoring everything on all context switches. It is much
harder in pre-emptive multitasking, when a task switch may occur at any
time and Murphy's Law applies (eventually).

I recall one early compiler library on OS/2 that failed to store the FPU
settings in one system thread, which would sometimes randomise the
rounding, causing interesting intermittent drift of unmodified GUI
numeric values (and as a result made complex numerical code give
non-standard answers when compared to the reference text-based system).

And even when you have all the realtime MT aspects right, you can still
get fun and games in a memory-constrained system with long-term
fragmentation of available memory into blocks that are "just too small".

Regards,
Martin Brown
 
Warren

Clifford Heath expounded:

That sounds like a statement from the 0.99% group of the population.

99% of programmers believe they don't know how to do multi-threaded
programming. Of the remaining 1%, 99% actually don't know how to do it :).

The problem with it is that you cannot test reliability in - all the
testing in the world can be done, but the program may still fail in the
field, ...

Clifford Heath.

That is why "tasks" were built _into_ the language Ada, with
standard (task) thread-safe operations. It is still possible
to get it wrong but you have to work harder at it. It is far
superior safety wise to Java, C# etc.

So for applications where life and limb are on the line
(flight control etc.), the Ada subset SPARK (with added
restrictions) are used so that they can actually _prove_ that
the code is correct. All of this is regularly discussed over
in comp.lang.ada.

Warren
 
No. But some of the BDD/TDD practices go a long way, if for no other
reason than that they force you to come up with a definition of
correctness before writing the code - and ensure that the code continues
to meet that definition. It doesn't mean that it's the right definition,
or that the code won't exhibit "Heisenbugs", but it helps. A lot. You
hardware types discovered that a long way before the software world did :).

Management still hasn't learned. We're lucky if there is a spec by the
end of the program. If something doesn't work as product management
decides it should (whenever they decide how it should work), guess who
gets the blame? *Some* hardware types have discovered that it's useful
to decide what you're building before...

You're right, it can be done. But the number of applications that actually
need it is much smaller than the number of people who don't know that :).
It's a good thing some folk can get it right, or we wouldn't have any real
operating systems or DBMSs, and only poor web servers... but despite the
relentless move to parallel hardware, there won't be a wholesale move to
parallel software any time soon.

I sure hope not. Let them get in-line code working first.
 
Clifford Heath

...the Ada subset SPARK (with added
restrictions) is used, so that the code can actually be
_proved_ correct.

I studied with a bloke who got his master's degree proving a program
correct (a Pascal pretty-printer, for reference). He got his degree
despite the proven program crashing when fed its own executable as
input.

His program was fine, but the case statement implementation by the
compiler didn't take account of sign extension in characters, so as
soon as it got a -ve char, it indexed backwards off the start of the
jump table.
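
The C# analogue of that bug, for the curious (a sketch; sbyte stands
in for the compiler's signed char):

using System;

class SignExtension
{
    static void Main()
    {
        sbyte c = unchecked((sbyte)0xE9);  // byte 233 read as signed: -23
        int index = c;                     // sign-extended, not zero-extended
        Console.WriteLine(index);          // prints -23: a negative jump-table index
    }
}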

You're right that some languages make it easy for ignorant folk to
do harm, but ignorant folk always will anyhow. Good languages merely
add succinctness and clarity, not really relevant safeguards, which
are easy to create in any language.
 
Warren

Clifford Heath expounded:
...
You're right that some languages make it easy for ignorant
folk to do harm, but ignorant folk always will anyhow. Good
languages merely add succinctness and clarity, not really
relevant safeguards, which are easy to create in any
language.

Ada is more than just a "good language". It is the software
_engineer's_ language. It is very well thought out with an
emphasis on engineering for correctness. Every language has a
few design warts but Ada is really the only practical choice
when code absolutely must work.

And the SPARK folks really do prove that their software is
going to do what it is supposed to do. This is essential for
air traffic control centers and fly-by-wire systems etc. They
live with some pretty severe restrictions to do this - for
example, no dynamic memory allocation is permitted, since a
memory request is potentially a failure point (it's also too
difficult to prove that it will never fail).

In my own limited experience, every program I have ported from
C or C++ to Ada has been shown by the _compiler_ to still have
a subtle bug or two. This is at the _compiling_ stage, long
before testing even begins. This is the best phase to discover
problems.

One subtle case that sticks out in my memory was dealing with
16-bit integer sound samples. I forget now why it was
necessary to invert the sample signal, but it was.

The compiler dutifully pointed out that inverting the sample
variable S as -S was problematic: if S was -32768 (16-bit
signed) then -S is not representable as +32768, and an
exception would be raised at run time were this allowed to
happen.

In the original C code -32768 gets inverted as -32768. So an
incorrect value slips through the system in thousands of lines
of code.

Of course a sharp programmer would catch this as it is being
coded, but no programmer is sharp all the time, and never as
sharp as the compiler for all the details involved.
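
For comparison, the same trap is easy to reproduce in C#, which at
least lets you opt in to the Ada-style trapping behaviour with checked
arithmetic (a minimal sketch):

using System;

class InvertSample
{
    static void Main()
    {
        short s = short.MinValue;                   // -32768, a legal sample

        // C-style wraparound: -(-32768) comes back as -32768.
        Console.WriteLine(unchecked((short)(-s)));  // prints -32768

        // Ada-style: the unrepresentable +32768 is trapped instead.
        try { short t = checked((short)(-s)); }
        catch (OverflowException) { Console.WriteLine("overflow on -S"); }
    }
}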

This example is not a serious error in sound-sample code, but
it sure would be irritating, with the programmer wondering
where on earth that glitch is coming from. Worse, the problem
might be so rare that it goes unnoticed for a long time.

But in a life and death scenario, this kind of error is
completely unacceptable.

</soapbox>

Warren
 
Dennis

One subtle case that sticks out in my memory was dealing with
16-bit integer sound samples. I forget now why it was
necessary to invert the sample signal, but it was.

The compiler dutifully pointed out that inverting the sample
variable S as -S was problematic: if S was -32768 (16-bit
signed) then -S is not representable as +32768, and an
exception would be raised at run time were this allowed to
happen.

In the original C code -32768 gets inverted as -32768. So an
incorrect value slips through the system in thousands of lines
of code.

Of course a sharp programmer would catch this as it is being
coded, but no programmer is sharp all the time, and never as
sharp as the compiler for all the details involved.

Remember, C is designed for "flexibility" and "performance", NOT
correctness. If correct results are a concern, that is up to the
"application". I remember one processor where they wanted to support C
and had to do strange things to the run time to counteract the hardware,
which would give an exception on overflow. They had to ignore overflow
errors so that you would get the "right" result in C.

(The main use of this machine was business data processing, where
correct answers are important and they want to know about errors.)
 
Warren

Dennis expounded:

Remember, C is designed for "flexibility" and "performance",
NOT correctness.

I don't need reminding -- but at the time, C was a big step in the
right direction for correctness, over B. As a B programmer at
the time, C took a lot of adjusting to!

B was derived from BCPL, both C's predecessors. In B,
everything was a word (a 36-bit word on the Honeywell L66),
which could be used as an integer. You performed function
calls to move (Honeywell 9-bit) byte strings from one word
array to another.

In B, there was almost no type checking (there were virtually
no types!). The only other type was the float, which had its
own special operators for the purpose. IIRC, there were a few
cases where the use of the wrong operator would provoke an
error. Otherwise there was no type checking at all.

If correct results are a concern, that is up to the
"application".

But the application is built upon its development tools. The
debatable thing, then, is how much the development tools
should protect the developer from mistakes.

Leaving Ada out of it, some tools saved the programmers from
themselves (like Pascal): array bounds checks, defined string
operations, etc.

Other tools, like C, were focused on empowerment, giving you
the gun to shoot yourself in the foot if that is what you
asked for.

You could apply the empowerment argument to assembly language
as well. Yet people have learned to move on from assembly -
though, in fairness, the portability of C was a big factor in
this. Lack of portability is why NT came out while OS/2 was
left in the dust (OS/2 being written in assembly).

(The main use of this machine was business data processing
where correct answers are important and they want to know
about errors.)

It has been argued that Ada would excel in that arena. There
are a few (European?) financial institutions using it. You
might only need one hand to count them, however.

I think the language would have gained more traction had there
been free/affordable compilers for the non-military world when
C was born. Now there is gnat (gcc-ada), which is available
for free. You can get support from adacore.com if you need it.

But the world has formed a big foundation in C today and
increasingly C++ (the mess that it is). It is probably too
late to make big changes there now.

However, at the application level, you do have the option of
choosing your poison for each new project. But other factors
like existing libraries often decide the choice.

Warren
 
Jamie

I must have been lucky! I have written many multi-threaded programs,
several of them released as open source, and use them too; never a
problem. What I do have problems with is programs like xine (a
multi-threaded media player), which sometimes takes ages to react to a
button press - one thread waiting for the other, I guess. I have often
had to kill it by opening another xterm and typing killall -KILL xine,
because it would not respond to keys anymore. I never contributed to
xine other than a suggestion.
Most web browsers (not lynx, I think) are multi-threaded.
In fact, so are most programs these days that have GUIs.
It is just simple logic, no mysteries.


Hi,

Could you recommend a good way to make a "real time" task manager using
threads for the tasks? I would like to be able to specify the desired
frequency at which each thread is called, i.e. from 1 Hz to 10 kHz. I am
thinking of using a central task manager thread for this, which runs
full out polling the current time to determine when to start each
thread. That is not as efficient as an interrupt-based task manager,
but I'm not sure such a thing exists, on Windows 7 at least.
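
To make that concrete, here is the sort of thing I'm imagining, in C#
(just a sketch - all the names are placeholders):

using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class PollingTaskManager
{
    class Slot
    {
        public double PeriodMs;
        public double NextDueMs;
        public AutoResetEvent Go = new AutoResetEvent(false);
    }

    readonly List<Slot> slots = new List<Slot>();
    readonly Stopwatch clock = Stopwatch.StartNew();

    // Each worker thread waits on the returned event, runs one cycle
    // when it is signalled, then goes back to waiting.
    public AutoResetEvent Register(double frequencyHz)
    {
        var slot = new Slot { PeriodMs = 1000.0 / frequencyHz };
        slots.Add(slot);
        return slot.Go;
    }

    // The "full out" manager loop: poll the clock, release due tasks.
    public void Run()
    {
        while (true)
        {
            double now = clock.Elapsed.TotalMilliseconds;
            foreach (var slot in slots)
                if (now >= slot.NextDueMs)
                {
                    slot.NextDueMs = now + slot.PeriodMs;
                    slot.Go.Set();
                }
        }
    }
}

Register everything before calling Run from the manager thread; whether
the 10 kHz end of the range is actually reachable is exactly my question.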

cheers,
Jamie
 
josephkk

Hi,

Could you recommend a good way to make a "real time" task manager using
threads for the tasks? I would like to be able to specify the desired
frequency at which each thread is called, i.e. from 1 Hz to 10 kHz. I am
thinking of using a central task manager thread for this, which runs
full out polling the current time to determine when to start each
thread. That is not as efficient as an interrupt-based task manager,
but I'm not sure such a thing exists, on Windows 7 at least.

cheers,
Jamie

Rate-monotonic schedulers are anathema to modern mainstream desktop
OSes. If you really want to do something like that, you really should
go to a proper real-time OS. That said, I think there are some
sorta-real-time adaptations of many regular desktop OSes. In the MS
world there is the possibility of intercepting the RTC interrupt at
about 18.2 Hz. In the RT Linux world you can access a similar interval
timer with some more flexibility.
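
On stock Windows, about the best you can do from user mode is the winmm
multimedia timer, which can drop the scheduler tick to 1 ms - enough for
the low end of Jamie's range, but nowhere near 10 kHz. A sketch:

using System;
using System.Runtime.InteropServices;
using System.Threading;

class MillisecondTick
{
    // winmm.dll entry points that raise and restore the system-wide
    // timer resolution.
    [DllImport("winmm.dll")] static extern uint timeBeginPeriod(uint ms);
    [DllImport("winmm.dll")] static extern uint timeEndPeriod(uint ms);

    static void Main()
    {
        timeBeginPeriod(1);           // request 1 ms timer resolution
        try
        {
            for (int i = 0; i < 1000; i++)
                Thread.Sleep(1);      // now close to 1 ms, not the default ~15.6 ms
        }
        finally
        {
            timeEndPeriod(1);         // always restore the default
        }
    }
}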

?-)
 