
[Phys-l] whither convolution



I hope I won't get tied up in terminology here, since terminology is not the point of my question, which is: why is convolution the correct way to describe the way one function "acts" on another?

When considering a "black box," one observes a given output for a known input. The input-output relationship is expressed mathematically as a convolution:

output(t) = Integral[ h(t') x input(t - t') dt' ]

Why is it that we don't naively claim

output(t) = h(t) x input(t)

and leave it at that? Why is a reversed sliding average, an approach that seems highly non-intuitive, the correct one?
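For what it's worth, the "reversed sliding" sum is just the discretized version of the integral above. A minimal numerical sketch (the exponential response and sine input are chosen purely for illustration), checking that a direct Riemann sum over t' agrees with NumPy's convolve:

```python
import numpy as np

# Discretize the convolution integral:
#   output(t) = Integral[ h(t') x input(t - t') dt' ]
dt = 0.01
t = np.arange(0, 2, dt)
h = np.exp(-5 * t)           # example impulse response (RC-like decay, assumed)
x = np.sin(2 * np.pi * t)    # example input signal (assumed)

# Direct Riemann sum over t'; input(t - t') vanishes for t' > t (causal signals)
y_direct = np.zeros_like(t)
for i in range(len(t)):
    for j in range(i + 1):
        y_direct[i] += h[j] * x[i - j] * dt

# numpy's convolve computes the same "reversed sliding" sum (up to the dt factor)
y_np = np.convolve(h, x)[:len(t)] * dt

assert np.allclose(y_direct, y_np)
```

The inner loop makes the "reversal" explicit: for each output time i, the response h is read forward in j while the input is read backward as x[i - j].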

I know about things like convolution providing the framework for the very useful notion of impulse response, etc., but I don't think the answer should be "that's just the way it works." Still, is it possible that the convolution exists merely to give mathematical support to the concept of impulse response?

When I work with detector responses, analog electrical transfer functions, or time-domain signal processing, to name a few, I know that the convolution approach is the fundamental framework, but I don't know why it is correct at a fundamental level.
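One way to see why it is fundamental: assume only linearity and time-invariance, write the input as a sum of scaled, shifted unit impulses, and the convolution sum falls out with no further choices. A sketch in discrete time (the decaying-exponential impulse response is an arbitrary assumption):

```python
import numpy as np

# Two axioms for the black box:
#   linearity:       box(a*x1 + b*x2) = a*box(x1) + b*box(x2)
#   time-invariance: shifting the input shifts the output identically
# Decompose the input into impulses:  x[n] = sum_k x[k] * delta[n - k]
# Then the axioms force:              y[n] = sum_k x[k] * h[n - k]
# which is exactly the convolution of x with the impulse response h.

rng = np.random.default_rng(0)
N = 50
x = rng.standard_normal(N)          # arbitrary input
h = np.exp(-0.3 * np.arange(N))     # assumed impulse response

# Build the output purely by superposing shifted, scaled impulse responses
y = np.zeros(2 * N - 1)
for k in range(N):
    y[k:k + N] += x[k] * h          # time-invariance: h shifted by k; linearity: scaled by x[k]

assert np.allclose(y, np.convolve(x, h))
```

Nothing about convolution is put in by hand here; the sum of shifted, scaled responses is forced by the two axioms, and it happens to equal np.convolve(x, h).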

In reading about convolution on Wikipedia, I'm led to the more general idea of integral transforms, and to the desirability of expressing a function as a sum of simpler basis functions. And of course I'm familiar with limits and with converting a sum into an integral. OK, it's a connection, but it seems uninspiring. A convolution is a very specific form of integral transform, and most examples of integral transforms (e.g. Fourier, Laplace) transform from a space to its inverse space; the convolution maps from a space to the same space (t-space, in my example above).
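Incidentally, the Fourier transform ties the two spaces together: by the convolution theorem, convolution in t-space is pointwise multiplication in frequency space, which is exactly the naive product relation, just in the transformed space. A small numerical check (random test signals; circular convolution is used to match the DFT's periodicity):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 64
h = rng.standard_normal(M)
x = rng.standard_normal(M)

# Circular convolution computed directly from the defining sum
y = np.array([sum(h[j] * x[(n - j) % M] for j in range(M)) for n in range(M)])

# Convolution theorem: same result via IFFT( FFT(h) * FFT(x) )
y_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

assert np.allclose(y, y_fft)
```

So the "naive" output = h x input picture is not wrong so much as it is a statement about the frequency domain rather than the time domain.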

Thoughts?


Stefan Jeglinski