The rabbit hole goes deep.

I haven’t been able to image in a while due to work travel and weather, but I did get to spend some time out at the HAS observatory this past weekend getting checked out to use the facility. I have been able to access the dark site (which is very nice) for some time now, but it’s about 110 miles (one way) from my house and takes a little planning and commitment to head out for a night… especially when the HAAS dark site is half the distance from me.  

The HAS observatory has three tracking telescopes, but only one of them has go-to capability. They have a 12.5″ f/5 that is very easy to use and mounted such that most targets are viewable without any sort of step ladder to reach the eyepiece… even for short guys like me. They have another 12.5″ f/7 that is on extended loan from NASA. This scope is massive and the optical quality is quite amazing. It is also easy to operate, but due to the enormity of this behemoth, it requires a ladder to put the seeing-balls onto the eyepiece. Lastly, they have a 14″ SCT that is fork-mounted and connected to a computer running an older version of Software Bisque’s The Sky for control. All three scopes are very nice and housed in a permanent observatory building with a nine ton retractable metal roof. 

I had the opportunity to get checked out for solo access to the observatory and had a great time learning the systems and procedures for operation. The 14″ is really the only scope set up for reasonable imaging and I don’t know how to tie an autoguider into that system, so I likely won’t be using it for imaging any time soon. It is nice to know that I can access the facility now for visual astronomy and learning with some fantastic tools. 

Astronomy, and specifically astrophotography, is a journey in learning and developing skills. I’m not sure there is a way to truly master the observation and appreciation of a subject that is, for all practical purposes, infinite. My brain doesn’t compute on the scale of the universe. Like most technical hobbies, the tools and toys evolve with our skillset. Over the course of the evening I met a friendly fellow enthusiast on the field who picked up astronomy nearly a decade and a half ago and had an impressive showing of astrophotography pr0n on display. I’ll just leave this here:


M81 redux.

It is a fairly common practice among astrophotographers (and especially newbies like me) to go back to old sets of subs and re-integrate the data for practice running through the different steps of a post-processing workflow. Most of the time this is to try applying new skills or a shift in technique, but sometimes it’s just because the weather isn’t cooperating to allow new imaging to take place. I realized this past week that I was leaving a step out of my PI workflow that may or may not make a difference in the linear FIT file I carry from BatchPreProcessing to final image: applying CosmeticCorrection during the integration step. This is intended to account for hot and cold pixels in the camera subs and remove them while stacking. I don’t believe it really helped me here because my camera is fairly new and the subs are literally the very first subs I ever captured of a deep space object. Nonetheless, I reworked this M81 and M82 data again. If you compare it to the original attempt I made back in May, this is an improvement in my opinion. I think the real answer for making it better is to go re-shoot this target and improve the RAWs I’m working with.

May 3, 2014 data re-processed

Bits and bytes basics.

Astrophotography has come a long, long way since the introduction of digital photography in general. The ability for a camera to record the RAW sensor data to a file, and to subsequently manipulate that file and all its buddies on a computer using specialized software, has given the novice access to the stars.

I notice that it is very common for folks that are familiar with capturing an image with a camera to ask questions like, “how long was the exposure?” or “what settings and lens did you use?”, assuming the frame was created in a similar fashion to the one just taken of the family pet. I take a lot of photos of my pets.

The truth of the matter is that the light reaching Earth from distant stars has traveled a mind-boggling distance to do so. The speed of light is 186,282 miles per second. When you take this speed and calculate how far light can travel in one year, you get 5.87849981 x 10^12 miles per year… that is a lot of miles. Now consider that a deep space object like Messier 65 is about 35 million light years away. That means when you point your camera at that galaxy, the photon born in a star there, entering the front element of your camera lens on the way to your sensor, has been traveling 2.05747493 x 10^20 miles. That is over 200,000,000,000,000,000,000 miles! By the time the light from those distant stars arrives at the very spot you are standing, it is more of a trickle than a roaring rapid.
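For the curious, the arithmetic above is easy to sanity-check with a few lines of Python (using the same approximate constants quoted above):

```python
# Sanity-check of the light-travel arithmetic above (values are approximate).
C_MILES_PER_SEC = 186_282                     # speed of light in miles per second
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60      # one Julian year in seconds

miles_per_light_year = C_MILES_PER_SEC * SECONDS_PER_YEAR
print(f"1 light year ≈ {miles_per_light_year:.3e} miles")   # ≈ 5.879e+12

# Messier 65 is roughly 35 million light years away
m65_distance_miles = 35_000_000 * miles_per_light_year
print(f"M65 ≈ {m65_distance_miles:.3e} miles")              # ≈ 2.058e+20
```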

If you imagine leaving a pail out in a drizzling rain for 5 minutes versus leaving a pail in the same drizzle for 5 hours, it’s obvious which yields the most collected water. Analogously, if you leave the shutter open on your camera longer, you will collect more of the photons drizzling in from millions of light years away. In advanced astrophotography, it is fairly common to use CCD imagers with TEC or similar cooling to keep the imaging sensor very cold relative to the ambient temperature, thus eliminating almost all the thermal noise associated with sensor heat. Shutter times up to 20 or 30 minutes are not uncommon in those cases. I am using a standard, uncooled DSLR, and the sensor gets quite hot during long exposures. I’ve seen it as high as 109F with exposures in the 3 to 5 minute range. This creates a lot of noise in the image data even with today’s latest chip technology. So we compensate by taking a larger quantity of exposures, just not leaving the shutter open as long, and then stacking them all together like making a sandwich to get a single representation to develop into our final image. Instead of putting the pail out for 5 hours, we are putting many pails out for 5 minutes and then dumping them all together in a larger bucket for a similar result. The more sub-exposures you capture, the better your signal to noise ratio in the integrated data – up to a point. There is a diminishing return once you get to a certain number. This is one of the nuances that falls more on the art side than the science side of astrophotography, and experience goes a long way in simplifying what could be derived with math into something that is tribal knowledge.
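The pail analogy can even be demonstrated numerically. Here is a toy sketch (with made-up signal and noise values, not real camera data) showing that averaging N noisy subs cuts the random noise by roughly the square root of N:

```python
import random, statistics

# Toy demo: averaging N noisy sub-exposures reduces random noise ~sqrt(N).
# 'signal' and 'noise_sigma' are invented illustration values, not camera specs.
random.seed(42)
signal, noise_sigma, n_subs, n_pixels = 100.0, 20.0, 25, 2000

def noisy_sub():
    # one simulated sub-exposure: true signal plus random noise per pixel
    return [signal + random.gauss(0, noise_sigma) for _ in range(n_pixels)]

single = noisy_sub()
# "stack" 25 subs by averaging each pixel position across all frames
stacked = [sum(vals) / n_subs for vals in zip(*(noisy_sub() for _ in range(n_subs)))]

print("single-sub noise :", round(statistics.stdev(single), 2))   # ≈ 20
print("stacked noise    :", round(statistics.stdev(stacked), 2))  # ≈ 20 / sqrt(25) = 4
```

The signal stays put while the random noise averages down, which is exactly why many short subs can stand in for one long one.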

So how do we actually take the images? First off, understand that most cameras will allow you to program the shutter to be open for up to 30 seconds without going to what is called a “bulb” exposure. Since most AP subs will be longer than 30 seconds, you will need a way to take bulb exposures. Assuming your camera can do this (most, if not all, modern DSLRs have this capability), you will not want to sit there and hold the shutter release for minutes at a time over hours of iterations, so the most basic tool you could use would be an intervalometer shutter release. Almost all astrophotographers, however, use a computer interface to the DSLR’s digital port and software on their computer to control this function. There are many options available in both the open source/freeware community as well as numerous commercial programs for purchase. I shoot with a Canon DSLR and currently my needs are fairly simple, so I went with a very popular camera control software platform called BackYard EOS (BYE). This allows me to set up imaging runs where I can specify the parameters in the software and then execute the batch job to the camera. It works very well and is quite affordable.
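As a rough illustration of what capture software like BYE automates, here is a sketch of a batch exposure run. The camera calls are invented placeholders for illustration only, not a real camera API:

```python
import time

# Hypothetical sketch of a batch bulb-exposure run. FakeCamera stands in for
# whatever interface your capture software uses; these calls are made up.
class FakeCamera:
    def open_shutter(self):  print("shutter open")
    def close_shutter(self): print("shutter closed")

def run_batch(camera, exposure_s, count, settle_s=5):
    frames = []
    for i in range(count):
        camera.open_shutter()
        time.sleep(exposure_s)          # hold the bulb exposure open
        camera.close_shutter()
        frames.append(f"sub_{i:03d}.cr2")
        time.sleep(settle_s)            # pause between subs
    return frames

# tiny demo run (exposure shortened so it finishes instantly)
frames = run_batch(FakeCamera(), exposure_s=0.01, count=3, settle_s=0)
print(frames)
```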

Another software tool that may or may not be required, depending on how long your exposures are and the focal length of the imaging optics, is some flavor of tracking assistance software. We are all moving together through space quite rapidly… around 1000 miles per hour at the Earth’s surface, actually. The Earth is spinning on her axis as we orbit around the Sun, and this motion adds some complication to photographing objects in the night sky. While those objects are also in motion, they appear relatively stationary to us due to the immense distance between us.

If you have ever taken a long exposure on a clear night from a tripod, you probably have seen “star trails” in your image. This is an artifact of the Earth’s rotation smearing the distant stars across your field of view during the exposure (in astrophotography circles this drift is often loosely called “field rotation”). The single most important tool for counteracting this in your long exposure images is the mount (the tripod-lookin’ thing) that everything sits atop. There is a specialized type of mount called a German Equatorial Mount (GEM) that moves along the right ascension and declination axes (as opposed to ranging in altitude and azimuth) using a mechanical progression opposing the field movement caused by our spinning planet. They come in all sorts of capacity and quality versions, but basically all work on the same principle. You align them to our polar axis, provide information about where you are on Earth (latitude and longitude) and what time of day and year it is, and let them do their magic.
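To get a feel for why tracking matters, here is a back-of-envelope calculation of how fast an untracked star crosses pixels. The pixel size and focal length are example values I’ve assumed (roughly a full-frame DSLR on a mid-size scope), not a statement about any particular setup:

```python
# How fast do stars trail on an untracked camera?
SIDEREAL_RATE = 15.04            # arcsec of apparent sky motion per second (at the celestial equator)
pixel_um, focal_mm = 5.7, 910.0  # assumed example values: pixel size and focal length

# standard pixel-scale formula: 206.265 * pixel size (µm) / focal length (mm)
pixel_scale = 206.265 * pixel_um / focal_mm     # arcsec per pixel
exposure_s = 30
trail_px = SIDEREAL_RATE * exposure_s / pixel_scale

print(f"pixel scale ≈ {pixel_scale:.2f} arcsec/px")
print(f"a {exposure_s}s untracked exposure trails ≈ {trail_px:.0f} px")
```

Hundreds of pixels of trailing in a single 30-second frame is why a tracking mount is non-negotiable at these focal lengths.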

The GEM will take care of most of your field rotation woes, but there is a way to improve the performance of the GEM with something called auto-guiding. Auto-guiding is a technique where you use a separate camera and specialized software to “lock on” to a star in the sky. The software analyzes the directional movement of the star compared to what is expected from the tracking mount and then issues corrective feedback to the mount via a loop into the mount control system. The camera can look through an additional telescope (aptly called a guide-scope) or through the imaging scope itself (off-axis guiding, or OAG), but the principle is the same.
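The guiding feedback loop can be sketched in a few lines. This is a heavily simplified toy model (the drift rate and aggressiveness values are invented), but it shows the core idea: nudge the mount back by a fraction of the measured guide-star error each cycle:

```python
# Toy model of auto-guiding feedback. 'drift_per_cycle' and 'aggressiveness'
# are invented illustration values, not real PHD settings.
def guide(drift_per_cycle=0.8, aggressiveness=0.7, cycles=10):
    error = 0.0
    history = []
    for _ in range(cycles):
        error += drift_per_cycle            # mount drifts a little each cycle
        correction = aggressiveness * error # measure the guide star offset...
        error -= correction                 # ...and pulse the mount back
        history.append(error)
    return history

residuals = guide()
print(residuals)   # residual error settles at a small constant instead of growing
```

Without the correction step the error would grow without bound; with it, the residual converges to a small steady value, which is exactly the behavior you see in a guiding graph.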

Like the image capture software, there are many choices for auto-guiding software. I use a very popular freeware program called PHD Guiding. It is incredibly easy to set up and use. In fact, PHD actually stands for “Push Here, Dummy” because it is that simple to get going. This is likely why it is so popular in the astrophotography community.

Since this post is intended to be an introduction to the tools that I have decided to use in learning to image, I want to clarify that the image capture software and auto-guiding software are intended to make life easier in the imaging process and improve the quality of your long exposure data with better tracking, but are absolutely not required to capture data to process. However, you must have some sort of software to take all the individual RAW subs from the camera and stack them together for development into an image. I started out in the very beginning (not that long ago at all) with a program called Deep Sky Stacker (DSS). DSS is free, very easy to use, and actually works quite well. I ended up switching to a more robust tool for two reasons: 1) I knew I would end up there before long and I may as well start learning how to use it… and 2) I had trouble stretching images out of DSS in another program.

When you stack all your RAW subs (I’ll make another post about how this works later), you end up with a linear image that must be “stretched” to bring out the detail in the data and converted to a non-linear image format that can be viewed normally on the web, etc. When DSS finished “stacking” the image, the tools to “stretch” the image were very basic and didn’t allow much control, so I would try to save the image as a TIF and import into Photoshop CC to finish the stretch. I just never could convert the 32-bit TIF to 16-bit color space without completely jacking up the cosmetic aesthetic of the image. Some of my earliest AP photos were a result of this struggle.
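For reference, the non-linear “stretch” involved is often a midtones transfer function, the curve behind the screen stretches in tools like PixInsight. A minimal sketch (the midtones value here is arbitrary, chosen just to show the effect):

```python
# The classic midtones transfer function used to stretch linear stacked data.
# The midtones value of 0.05 is an arbitrary illustration choice.
def mtf(x, midtones=0.05):
    """Map a linear pixel value x in [0,1] to a stretched value in [0,1]."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    m = midtones
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# A faint linear value gets pulled way up, which is what reveals nebulosity,
# while bright values stay pinned near the top of the range.
print(round(mtf(0.01), 3))
print(round(mtf(0.5), 3))
```

Doing this stretch in a 32-bit environment before any conversion is precisely what avoids the information loss I was hitting in the 32-bit to 16-bit TIF conversion.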

Continuing in the same vein of many, many options available to choose from, this software tool is no different. I selected an extensible astrophotography image processing environment called PixInsight as my tool of choice, and it has really changed the way I look at workflow and what is truly possible when working with RAW sensor data from a series of subs. PixInsight (PI) is one of the most powerful software environments I’ve ever been exposed to, and I’ve not even begun to scratch the surface of its capability. I went back and reprocessed the original baby-step images with PI and saw dramatic improvements in the level of detail I could draw out of the linear stack. I have also nearly eliminated the need for another stage of editing (e.g. Photoshop CC) and can contain 100% of my post-processing workflow to the PI environment. I am amazed by this platform every time I launch the application.

While I foresee my choice in camera control and image capture software evolving over time as I learn more and my physical toolset improves, I see PixInsight meeting all my needs for years to come on the post processing side of the house.

So there are the software tools that I currently use to create images of the night sky: BackYard EOS, PHD Guiding, and PixInsight.

Apple of my eye.

Pages on the calendar turn and chances to practice are few and far between. One significant occurrence is my ability to access both dark site observatory locations on my own (i.e., I was granted solo access). The biggest upside here is that when the weather is nice, I can elect to exchange sleep for the long drive to/from the site and get a little exposure time in without coordinating with a larger group.
Last night, the moon was a waxing gibbous, around 85% of full. This makes even being at a “dark site” not so dark. Nonetheless, it was a beautiful night. The temperature eventually dropped to the high 70s with a slight breeze.
Initially I attempted 10 subs on the Veil nebula, but the moonlight prevented any deep imaging of fainter targets. I decided to try for a brighter DSO… one you can actually make out as a fuzzy blur with binoculars just to the side of the constellation Sagitta (Latin for “arrow” and not related in any way to the larger constellation Sagittarius). Messier 27, or the Dumbbell Nebula (NGC 6853), is a planetary nebula in the constellation Vulpecula, at a distance of about 1,360 light years. This object was the first planetary nebula ever discovered, found by Charles Messier in 1764. It is sometimes also referred to as the Apple Core Nebula.


Messier 27

Practice and lessons learned.

Continuing on with the Messier list, this is my first and humble attempt at M57 – The Ring Nebula (NGC 6720). This planetary nebula in the northern constellation of Lyra was discovered by the French astronomer Antoine Darquier de Pellepoix in January 1779. Located about 2,283 light years from Earth, this nebula was formed when a shell of ionized gas was expelled into the surrounding interstellar medium by a red giant star passing through the last stage in its evolution before becoming a white dwarf. The planetary nebula nucleus, barely visible as a speck in my image, was discovered by Hungarian astronomer Jenő Gothard on September 1, 1886. Within the last two thousand years, the central star of the Ring Nebula has left the asymptotic giant branch after exhausting its supply of hydrogen fuel. Thus it no longer produces its energy through nuclear fusion and, in evolutionary terms, it is now becoming a compact white dwarf star.


Messier 57


M51 imaged again, this time through the f/7 optics with longer but fewer exposures. The sensor pushed to 109F, so the noise was just silly, but you can see a little more depth in the dust lanes. I prefer the color and star density in the f/4 image, but clearly more data integration would fix this… I’m not really happy with this one, but it’s worth documenting progress and lessons learned.

Messier 51


So we come to the primary target of the week… Messier 8, the Lagoon Nebula (NGC 6523). This emission nebula is a giant interstellar cloud in the constellation Sagittarius and was discovered by Giovanni Hodierna before 1654. The Lagoon Nebula is estimated to be somewhere between four and six thousand light years from the Earth and is about 110 light years across along its longest dimension.

Messier 8

Tunnel vision.

My first night out imaging with the new telescope and filters was an eye opener for me. Focusing was a little more challenging, and the added interconnections not only added weight to be factored into the overall balance, but also some flexure potential in the optical path. The biggest unexpected thing was the serious vignetting of the frame edges due to the light passing through the 1.25″ filter wheel onto a full frame 24mm x 36mm sensor. This is just something I’m going to have to deal with for a bit until I can find a better solution. Centering smaller targets has become more important, and larger targets are not as desirable due to the limitation in my FOV. I’m not certain at this point whether spending the extra money on a 2″ filter rig would have prevented this or not.
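The vignetting comes down to simple geometry: the clear aperture of a 1.25″ filter is smaller than a full-frame sensor’s diagonal, so the corners never see unobstructed sky. A quick sketch (the ~27mm clear aperture is my assumption, roughly typical for 1.25″ filters):

```python
import math

# Why a 1.25" filter vignettes a full-frame sensor: the filter's clear
# aperture is smaller than the sensor diagonal. The 27 mm clear-aperture
# figure is an assumption, roughly typical for 1.25" filters.
sensor_w, sensor_h = 36.0, 24.0              # full-frame sensor, mm
sensor_diag = math.hypot(sensor_w, sensor_h) # corner-to-corner distance
filter_clear_aperture = 27.0                 # mm, assumed

print(f"sensor diagonal ≈ {sensor_diag:.1f} mm")   # ≈ 43.3 mm
print(f"filter aperture ≈ {filter_clear_aperture} mm -> corners get clipped")
```

A 2″ filter (clear aperture around 45mm or more) would cover that diagonal, which is why the 2″ rig question keeps nagging at me.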

My first stab at grabbing a couple of targets ended with 50/50 results. I initially tried to image M63, but the data just didn’t pan out. I had something terribly wrong and didn’t notice until I did the integration later.

I did manage a few short subs of M13. This is my first attempt at Messier 13 (NGC 6205), the Great Globular Cluster in Hercules and also my first integration with the new telescope. NGC 6207 is also visible in the lower left of the frame, so I intentionally didn’t crop this out. M13 was discovered by Edmond Halley in 1714, and cataloged by Charles Messier on June 1, 1764. M13 is about 145 light-years in diameter, and it is composed of several hundred thousand stars, the brightest of which is the variable star V11 with an apparent magnitude of 11.95. M13 is located 25,100 light-years from Earth.



Messier 13

Let there be dark!

The inability to image at home… or at least near home… is limiting my progress. I’m fairly patient, but getting out once a month is challenging. I am hungry to implement new knowledge I’ve learned through self-study and driving two to four hours a couple nights a month to hopefully practice is frustrating. Several weeks had gone by since my last imaging session out at Columbus and I hatched a plan at some point along that lull to “fix” my situation by accelerating my learning curve. I was going to leverage astronomy filters designed to attenuate certain wavelengths of light and allow the unhindered passage of other wavelengths of light in my imaging train. This is commonly called narrow-band imaging (NB). There was only one problem – no AP filters were made for my camera or lens. I even wrote the most prolific filter manufacturers asking for guidance and they confirmed my options were limited. 

My long term plan was always to move into some sort of dedicated imaging telescope with an array of filters and a cooled CCD camera. I knew that is where I was headed… eventually. I didn’t think the telescope decision and purchase would be driven by the want for filters though… cart, horse, chicken, egg… this was happening. 

I elected to purchase a refractor because it was closest to what I was familiar with in the “normal” photographic world and I wouldn’t have to deal with mirrors, collimation, and to a lesser extent, cooldowns, right away. I opted for a triplet to minimize chromatic aberration and flatten the field a little bit. I also decided to stick with Orion due to their tremendous customer service. I didn’t realize at the time the can of worms I was opening up, but this is clearly one of the ways to learn this hobby: by doing.

The list went something like this:

  • Orion EON 130mm ED Triplet Apochromatic Refractor
  • 5-Position Manual Filter Wheel (1.25″)
  • 1.25″ SkyGlow Imaging Filter 
  • EZ Finder Deluxe II red/green (ordered this for visual)
  • 90mm Tube Rings (replacing the wonky screws)
  • 1.25″ Xtra Narrowband O-III 
  • 1.25″ Xtra Narrowband S-II
  • 1.25″ Xtra Narrowband H-Alpha
  • Orion T-ring for Canon EOS Camera
  • Orion 2″ Zero-Profile Prime Focus Camera Adapter
Orion EON 130mm APO Triplet

Go West, young man.

I’m a big believer in not reinventing the wheel unless absolutely necessary. The best way to be successful at something is to find someone that is already very successful at it and follow in their footsteps. Learning from the mistakes of others is far less painful than learning from your own. So, when I decided I was going to learn how to do this and do it well, I looked for local resources that I could plug into for Tribal Knowledge™ and assistance. Of the three major astronomy clubs in the area (that I knew of), I joined two.

One of the benefits of being a member of a local club is that they usually have access to some form of dark site for viewing (or imaging) away from the light dome of the city. These are no different. The HAAS dark site to the north is closer (Huntsville) at about 52 miles one way, while the HAS dark site is west of me (Columbus) and over 100 miles one way! This really impacts the time you have to be productive when you take into account the time it takes to set all the gear up, polar align the mount, and do a star alignment. All the targets I’ve worked on thus far (all three… woohoo) have been imaged at the HAAS observatory location, but after a month of inaction, I had a chance to image at the HAS site.

I had some trouble with star alignment that delayed data acquisition. The moon was bright and clouds were starting to roll in by the time I actually worked out the alignment. It wasn’t optimal by any means, but at least there were a few frames to be had… 26 lights and 30 darks totaling about 2.6 hours of total integration (give or take). This is my first attempt at Messier 101, the Pinwheel Galaxy, in wide field. Note that NGC5485, NGC5474, NGC5473, NGC5457, NGC5443, and NGC5422 are also visible in this frame. Noise was a real issue here… as it has been in almost every integration. I’m learning tips and tricks along the way, but this thermal noise from sensor heat in my DSLR’s CMOS chip is going to have to be dealt with at some point.

Messier 101

Some New InSight.

The month of May was brutal with travel and weather challenges, so no imaging was done. I continued to learn by reading and observing others from afar… vicariously imaging through others and visualizing to improve. Two things really jumped out at me when integrating the few subs I’d managed to accumulate from the outing the month prior:

  1. I wasn’t getting enough data. I had too few lights and even fewer darks. I had no bias and no flats. This is a skill I need to work on.
  2. My tools were lacking. Deep Sky Stacker wasn’t going to be my long term data integration solution, and the 32-bit to 16-bit conversion in Photoshop was a point where I was losing a lot of information; that loss needed to be eliminated.
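To illustrate point 1, here is a toy sketch of why dark frames matter: averaging many darks into a master dark estimates the sensor’s thermal pattern, which can then be subtracted from each light. The pixel values are invented toy numbers, not real sensor data:

```python
# Toy calibration sketch: a master dark is the average of many dark frames,
# and subtracting it removes the fixed thermal pattern (e.g. hot pixels).
def average(frames):
    # per-pixel mean across a list of frames
    return [sum(px) / len(frames) for px in zip(*frames)]

darks  = [[10, 12, 50, 11], [11, 13, 52, 10], [9, 11, 48, 12]]   # hot pixel at index 2
lights = [[110, 112, 150, 111], [111, 113, 152, 110]]

master_dark = average(darks)
calibrated = [[l - d for l, d in zip(light, master_dark)] for light in lights]

print(master_dark)   # the hot pixel shows up clearly in the master dark
print(calibrated)    # after subtraction, the lights are flat again
```

Bias and flat frames extend the same idea to read-out offset and optical vignetting, which is why skipping them costs real information.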

I applied for and downloaded a trial of the popular and immensely powerful astrophotography toolset PixInsight. Leveraging some really nice video tutorials over at Harry’s Astroshed, I went back to the original 55 sub-frames I took on May 3 of Messier 51 and started completely from scratch. With this same hour’s worth of RAW data, I was able to produce a much better result. There were significantly more steps in the integration, but it was well worth it. Catching a glimpse of what can be done with the rudimentary information I had really made me want to improve the quantity of the data set and improve the signal-to-noise ratio for the next time around.

I did the same for the other two targets and feel I can see some improvement in those as well. Wanting to get some feedback and have a place to share my results, I started placing my revisions on Astrobin. My respect for some of these imagers has grown exponentially as I peel back the layers of the onion and see just how little I’ve progressed and how far there is to go. Exciting, to be sure!

Messier 81 and 82


Messier 51


Messier 65 and the Leo Triplet