- download a JPEG photo of a constellation (Ursa Major and Ursa Minor so
far, without connecting lines and names, of course) from wherever I can
find one - some from the Hubble site
- load the images into Iris
- Using the 'aperture photometry' feature, measure several "background"
spots and record the intensity (the sum of the intensities of the pixels
within the circle). Average these and record the result as the
"Background Level"
- Bring the aperture over a star in the constellation and record the
intensity. Repeat for all of the stars in the constellation (using the
same size aperture each time).
- Subtract the background level from each star's intensity, then calculate
the "Instrumental Magnitude" as -2.5 * log10(intensity)
- Pick one star, find the difference between its accepted magnitude and my
instrumental magnitude, call the difference the "offset," and add this to
the instrumental magnitudes of the other stars, giving me "calculated
magnitudes." I've also tried several algorithms to find an "average
offset," though this doesn't really decrease my % difference much
compared with the single-star offset method.
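The steps above can be sketched in a few lines of Python. This is a minimal sketch, assuming the aperture sums have already been read out of Iris by hand; the star names, intensity values, and the reference star's accepted magnitude are all invented placeholders, not real measurements.

```python
import math

# Several "background" aperture sums, averaged into one "Background Level"
background_sums = [1520.0, 1480.0, 1500.0]
background = sum(background_sums) / len(background_sums)

# Raw aperture sums for each star (same aperture size throughout)
raw = {"star_A": 25000.0, "star_B": 9800.0, "star_C": 4100.0}

# Instrumental magnitude: m_inst = -2.5 * log10(intensity - background)
inst_mag = {name: -2.5 * math.log10(s - background) for name, s in raw.items()}

# Calibrate against one star of known (accepted) magnitude
ref_star, ref_accepted_mag = "star_A", 1.8   # hypothetical catalog value
offset = ref_accepted_mag - inst_mag[ref_star]

# "Calculated magnitudes" for every star
calc_mag = {name: m + offset for name, m in inst_mag.items()}
```

By construction the reference star's calculated magnitude lands exactly on its accepted value, and brighter stars (larger background-subtracted sums) come out with smaller magnitudes, as the scale requires.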
This works pretty well (<5-6% difference) for dimmer stars (magnitude
2-6), but not as well for stars brighter than magnitude 2 (up to 30% at
times).
Any ideas where I'm butchering the physics? I'm a little suspicious
about a few things in particular:
- should I keep the same aperture size each time, or calculate some sort
of intensity per pixel, as the bright stars often appear larger than the
dim ones?
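One way to make the two options comparable: if the aperture size does change between stars, the background contribution scales with the number of pixels in the aperture, so a per-pixel background times the aperture's pixel count can be subtracted instead of one fixed level. This is only a hedged sketch of that idea; the per-pixel background value, aperture sums, and pixel counts below are invented for illustration.

```python
import math

background_per_pixel = 12.0  # hypothetical mean background counts per pixel

def instrumental_mag(aperture_sum, n_pixels):
    """Background-subtract using this aperture's own pixel count,
    then convert the net intensity to an instrumental magnitude."""
    net = aperture_sum - background_per_pixel * n_pixels
    return -2.5 * math.log10(net)

# A bright star measured with a larger aperture than a dim one
bright = instrumental_mag(50000.0, n_pixels=400)
dim = instrumental_mag(6000.0, n_pixels=150)
```

Note that the total (summed) intensity inside the aperture is still what maps to magnitude; dividing by pixel count would measure surface brightness rather than total flux, so the per-pixel idea is better applied to the background than to the star itself.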
Josh Gates