
Re: [Phys-L] calc probs for physics



On 12/21/2013 01:31 PM, Paul Lulai wrote:

> Does anyone have a favorite resource from which they select
> calc-based problems for physics students?

This is in addition to previous answers, and continues the
practice of identifying entire /categories/ of calculus
problems. The theme for today is "fun with integrals".

1) Every sum is an integral ... not just similar, but formally
the same, in accordance with Lebesgue measure theory. The
students in the introductory class won't have heard of Lebesgue
-- and indeed have just barely heard of integrals -- but you
don't need to mention the theory. Just treat discrete sums
as a special kind of integral and move on. If that's too
much of a leap for you, then here's plan B: Every time you
write an interesting sum, write the corresponding integral.
For example, the weighted average (aka weighted mean) is

          ∑ x_i µ_i
  ⟨X⟩ =  -----------          [1a]
            ∑ µ_i

where the x_i are the members of the set X and µ_i is the
assigned weight. The corresponding integral is

           ∫ x dµ
  ⟨X⟩ =  ----------           [1b]
            ∫ dµ
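
If you want to see equation [1] in action numerically, here is a
minimal sketch in Python with numpy; the values and weights below
are invented, purely for illustration:

  import numpy as np

  x  = np.array([1.0, 2.0, 3.0, 4.0])   # members of the set X
  mu = np.array([0.1, 0.2, 0.3, 0.4])   # assigned weights (the measure)

  xbar = np.sum(x * mu) / np.sum(mu)    # equation [1a], verbatim
  print(xbar)                           # 3.0
  print(np.average(x, weights=mu))      # same thing, via the library routine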

2) Anything involving an /average/ or a /moment/ is a
calculus problem. Sometimes the distribution is so simple
that finding the average is trivial, in which case you can
make it nontrivial by choosing a slightly more complicated
distribution.

You can also make things more interesting by doing a /weighted/
average -- as we saw in item (1) above -- rather than the
unweighted averages that are grossly over-represented in the
usual textbooks.
a) This is relevant, because in the real world almost
every average is a weighted average.
b) The structure of equation [1] provides some insight
into the definition of "average". I speak of equation [1]
because I consider [1a] and [1b] to be the same equation.

There are gazillions of examples in this category, i.e.
lots of different averages and moments. In particular,
when doing data analysis, often some things need to be
weighted more heavily. Raw data points don't have error
bars and don't have weight, but cooked data blobs do.
Also, the /model/ will treat some data points differently,
due to leverage and whatnot.
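
As a concrete sketch of weighted data analysis (Python with numpy;
the measurements and error bars below are invented), the usual
convention is to weight each cooked data blob by the inverse square
of its error bar:

  import numpy as np

  y     = np.array([9.8, 10.3, 9.5])     # measured values
  sigma = np.array([0.1, 0.5, 0.2])      # error bars on each value
  w     = 1.0 / sigma**2                 # inverse-variance weights

  ybar      = np.sum(w * y) / np.sum(w)  # equation [1a] again
  sigma_bar = 1.0 / np.sqrt(np.sum(w))   # error bar on the weighted mean
  print(ybar, sigma_bar)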

3) When integrating, we don't necessarily integrate over
space. The variable could be much more interesting than
that.

Example: The center of mass is given by

           ∫ r dm
   CM =  ----------           [2]
            ∫ dm

where we integrate over all elements of the mass distribution.
The denominator is just the total mass, but we write equation [2]
in standard form to make a point. Compare equation [1b].
Motivation: A pilot is responsible for calculating the CM of
the aircraft before takeoff. If you get this wrong, you could
get into a situation where you can take off but you can't land
without crashing ... or maybe you can't even take off without
immediately crashing.
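
For a discrete loading -- a few point masses at known stations --
equation [2] reduces to a weighted sum. A minimal sketch in Python
with numpy; the stations and masses are invented:

  import numpy as np

  r = np.array([0.0, 2.0, 3.5, 5.0])      # station of each mass element [m]
  m = np.array([600., 80., 75., 120.])    # mass of each element [kg]

  cm = np.sum(r * m) / np.sum(m)          # ∫ r dm / ∫ dm, as a weighted sum
  print(cm)                               # center-of-mass location [m]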


Example: The moment of inertia for rotation in the xy plane
is:
I_xy = ∫ r_xy^2 dm [3]

where r_xy is the projection of the position vector (r)
onto the plane of rotation, and once again we integrate
over all elements of the mass distribution. If you know
the mass density as a function of position you can use the
chain rule to convert eq. [3] into an integral over all
space, but that is not the best way to think of what's
going on. Equation [3] is the natural and smart way to
think about it.
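
A minimal numerical sketch of equation [3] (Python with numpy):
treat a uniform rod of mass M and length L, rotating about one end
in the xy plane, as N thin slices, each carrying an equal share of
the measure dm:

  import numpy as np

  M, L, N = 2.0, 1.5, 10_000              # total mass [kg], length [m], slices
  r  = (np.arange(N) + 0.5) * (L / N)     # distance of each slice from the axis
  dm = np.full(N, M / N)                  # each slice carries measure M/N

  I = np.sum(r**2 * dm)                   # equation [3] as a sum over the slices
  print(I, M * L**2 / 3)                  # agrees with the closed-form M L^2 / 3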


Example: The electric dipole moment is

p := ∫ r dq [4a]

or equivalently

p := ∑ r_i q_i [4b]

where [4b] is a special case of [4a], suitable for a
discrete charge distribution. Yet again we are not
integrating over space, but rather integrating over
the distribution of charge.
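
A minimal sketch of equation [4b] (Python with numpy): two
equal-and-opposite point charges, i.e. the simplest discrete charge
distribution. The numbers are invented:

  import numpy as np

  r = np.array([[0.0, 0.0,  0.5],         # position of each charge [m]
                [0.0, 0.0, -0.5]])
  q = np.array([ 1.0e-9, -1.0e-9])        # charge of each element [C]

  p = np.sum(r * q[:, None], axis=0)      # ∑ r_i q_i, a vector
  print(p)                                # [0, 0, 1e-9] C·m, along the z axis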


Example: Suppose that rather than a charge distribution
or a mass distribution, we have a /probability/ distribution.
The standard deviation is the square root of the variance,
and the plain-vanilla variance is the second moment of the
distribution (relative to the mean). So it's yet another
integral. For details, see
http://www.av8n.com/physics/probability-intro.htm#sec-def-stdev
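
As a sketch (Python with numpy), here is the second moment for a
simple discrete probability distribution, a fair six-sided die,
where the probability measure puts 1/6 on each outcome:

  import numpy as np

  x = np.arange(1, 7)                     # outcomes
  P = np.full(6, 1/6)                     # probability measure on each outcome

  mean = np.sum(x * P)                    # first moment
  var  = np.sum((x - mean)**2 * P)        # second moment, relative to the mean
  std  = np.sqrt(var)                     # standard deviation
  print(mean, var, std)                   # 3.5, 2.9166..., 1.7078...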

The larger point is that according to the modern (post-1933)
view of the subject, probability is best formulated in terms
of measure theory. That means the foundation for probability
is identical to the foundation for integrals. Roughly speaking,
it's all just weighted sums. For details, see
http://www.av8n.com/physics/probability-intro.htm

Motivation: Probability is indispensable for doing anything
resembling modern-day physics:
-- data analysis ("error bars" et cetera)
-- thermodynamics
-- quantum mechanics
-- transport (diffusion, Brownian motion, etc.)
-- friction
-- etc. etc. etc.

More importantly: There are gazillions of other examples in
this category, where we integrate over something other than
"dx" or "dt".

4) There are some textbooks out there that toss students into
the deep end of the pool, asking them to /use/ probability
ideas that have not been explained, implicitly assuming that
students already have a good grasp of what probability is.
In my experience this is a very bad assumption. According
to the &@$# standards, probability is "supposed" to be part
of the curriculum starting in 6th grade
http://www.corestandards.org/Math/Content/SP
but AFAICT this has no effect, and real students have little
or no clue about probability.

My point for today is that if you formulate probability the
same way you formulate integrals, it is a fine example of the
spiral approach: It reinforces and deepens the understanding
of both things. It is simultaneously simpler and more powerful.

It seems unfair that the physics department would get stuck
teaching basic math ideas such as calculus and probability,
but for the moment we have to play the hand we are dealt.

============

5) Thermodynamics is basically wall-to-wall calculus, so it is
an endless source of calculus problems.

Just pointing that out is not particularly responsive to the
original question, insofar as it requires /more/ calculus
than is appropriate for the introductory physics course.
Specifically, most of thermo requires multivariate calculus:
partial derivatives and all that.

On the other hand, there are some corners of thermodynamics
where plain old one-dimensional calculus suffices. For
example, the heat capacity Cp is the partial derivative of
the enthalpy with respect to temperature (at constant pressure).
If you restrict attention to situations where the pressure is
always constant, then for practical purposes this reduces to
a plain old one-dimensional total derivative.

This provides some thought-provoking exercises. For example,
you cannot plot the Cp of water over any span of temperature
that encompasses the freezing point or boiling point, because
there is a delta-function in the Cp at those points. Delta
functions are hard to plot. On the other hand, if you plot
the enthalpy itself as a function of temperature, everything
is fine. The slope of the curve is the heat capacity. The
enthalpy has a vertical /step/ at the phase-change points.
The enthalpy is not a differentiable function of temperature
at these points -- indeed it is not even a function -- but you
can still draw the graph just fine. You can parameterize the
graph in terms of two functions, H(S) and T(S). The pictures
and some more discussion can be found here:
http://www.av8n.com/physics/phase-transition-heat.htm

The integral of Cp dT (which recovers the enthalpy change) is
nontrivial if Cp is changing as a function of T, which in fact it is.
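
A minimal sketch of that integral (Python with numpy). The Cp(T)
model below is invented for illustration -- it is not real data for
water -- and the temperature span deliberately avoids any phase change:

  import numpy as np

  def Cp(T):                               # hypothetical heat capacity [J/(kg·K)]
      return 4180.0 + 0.5 * (T - 300.0)

  T  = np.linspace(280.0, 360.0, 10_001)   # temperature grid [K], fixed pressure
  dT = T[1] - T[0]

  # trapezoid rule for ΔH = ∫ Cp dT over the span
  dH = np.sum(0.5 * (Cp(T[:-1]) + Cp(T[1:])) * dT)
  print(dH)                                # enthalpy change per kg [J/kg]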

So I hope this is now somewhat responsive to the original
question: Thermodynamics is presumably already part of the
course, so we are not adding anything, not overburdening the
already-crowded schedule. Instead we are leveraging calculus
ideas to simplify, clarify, and unify stuff that is already
part of the course.

=========

6) Here's another situation where a stepwise function arises.

Executive summary: Everybody tends to think of probability
in terms of the probability density distribution dP(x), which
is fine as far as it goes ... but there are additional good
things you can do with the /cumulative/ probability distribution
P(x).

In particular, suppose x is a continuous variable. There is
some source-distribution, some ideal "population" distribution
over x ... but this is /unknown/ to us. All we can get our
hands on is a finite "sample" of x-values. In the real world,
it is common to encounter unknown, highly non-Gaussian
distributions. For a simple example, see
http://www.av8n.com/physics/cart-convergence.htm
especially
http://www.av8n.com/physics/cart-convergence.htm#fig-cart-overtaken-location-density-1000

In terms of measure theory, each of the N sample-points has
1/Nth of the measure. Each one is a delta function with
weight 1/N. Now one might have hoped that as N becomes
large, the sample density distribution will converge to the
population density distribution, but this hope is in vain.
The sample density is too high by an infinite factor wherever we
have sample data, and too low by an infinite factor everywhere
else, and increasing the number of sample-points will not make
the problem go away.

Fortunately, the sample /cumulative/ distribution does
converge just fine to the population /cumulative/ distribution.

You can kinda maybe sorta make the density look like it
converges by using a histogram or something similar, but
it requires skill (or a lucky guess) to get the bin-size
right. If you are not sufficiently skillful and/or lucky,
there could be structure in the distribution that does
not show up in the histogram. In contrast, the procedure
for plotting the cumulative distribution is completely
cut-and-dried. It is a staircase function, with vertical
risers (of height 1/N) at each x-value where we have sample
data, and horizontal treads in between.

So my advice is: Every time you plot a probability density
distribution, plot the cumulative probability distribution
also. The incremental effort is negligible compared to
the effort involved in acquiring the data to begin with,
and the incremental information and insight is substantial.

Note: When using a spreadsheet to plot a staircase function,
you need to duplicate each abscissa. That sounds super-
obvious, but it isn't the first thing your average student
would think of. Hint: arrange the data as N-by-2 regions
rather than plain N-by-1 vectors. For details on this, see
http://www.av8n.com/physics/spreadsheet-tips.htm#sec-stairsteps
There are examples (with diagrams) at
http://www.av8n.com/physics/spreadsheet-tips.htm#sec-prob-dist
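
The same construction in Python with numpy + matplotlib (rather
than a spreadsheet), with invented sample data, looks like this;
note the duplicated abscissas:

  import numpy as np
  import matplotlib.pyplot as plt

  sample = np.random.default_rng(0).exponential(size=50)   # stand-in data
  x = np.sort(sample)
  N = len(x)

  xs = np.repeat(x, 2)                     # each abscissa appears twice
  ys = np.empty(2 * N)
  ys[0::2] = np.arange(N) / N              # bottom of each riser
  ys[1::2] = np.arange(1, N + 1) / N       # top of each riser (height 1/N)

  plt.plot(xs, ys)                         # staircase: risers plus treads
  plt.xlabel("x")
  plt.ylabel("cumulative probability")
  plt.show()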

For more about the convergence of distributions, see
http://www.av8n.com/physics/probability-intro.htm#sec-convergence

Again the idea is /not/ to add anything to the course, but
rather to leverage tools that have already been learned so
as to make things simultaneously easier and better.
-- easier because sharp tools require less effort than
dull tools;
-- better because re-using the tools improves retention;
-- better because making connections between ideas makes
each of the ideas more useful, more broadly applicable;
-- et cetera.............