M81 redux.

It is fairly common practice among astrophotographers (and especially newbies like me) to go back to old sets of subs and re-integrate the data for practice running through the steps of a post-processing workflow. Most of the time this is to try applying new skills or a shift in technique, but sometimes it's just because the weather isn't cooperating and no new imaging can take place. I realized this past week that I was leaving a step out of my PI workflow that may or may not make a difference in the linear FITS file I carry from BatchPreprocessing to final image: applying CosmeticCorrection at the integration step. This is intended to account for hot and cold pixels in the camera subs and remove them while stacking. I don't believe it really helped me here, because my camera is fairly new and these subs are literally the very first I ever captured of a deep space object. Nonetheless, I reworked this M81 and M82 data again. If you compare it to the original attempt I made back in May, this is an improvement in my opinion. I think the real answer for making it better is to go re-shoot this target and improve the RAWs I'm working with.
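
The idea behind cosmetic correction is simple even if PI's implementation is far more sophisticated: find pixels that deviate wildly from their neighborhood and replace them with a local estimate. Here is a toy sketch of that idea in Python (assuming numpy and scipy are available; this is not PI's actual algorithm):

```python
import numpy as np
from scipy.ndimage import median_filter

def cosmetic_correct(frame: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Replace hot/cold pixels that deviate wildly from the local median."""
    med = median_filter(frame, size=3)          # 3x3 neighborhood median
    deviation = frame - med
    bad = np.abs(deviation) > sigma * deviation.std()
    out = frame.copy()
    out[bad] = med[bad]                         # swap outliers for the median
    return out

frame = np.random.default_rng(1).normal(100.0, 5.0, (64, 64))
frame[10, 10] = 4000.0                          # simulated hot pixel
print(cosmetic_correct(frame)[10, 10])          # back near the background level
```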

May 3, 2014 data re-processed

Bits and bytes basics.

Astrophotography has come a long, long way since the introduction of digital photography. The ability of a camera to record RAW sensor data to a file, and to subsequently manipulate that file and all its buddies on a computer using specialized software, has given the novice access to the stars.

I notice that it is very common for folks who are familiar with capturing an image with a camera to ask questions like, "how long was the exposure?" or "what settings and lens did you use?", assuming the frame was created in a similar fashion to the one just taken of the family pet. I take a lot of photos of my pets.

The truth of the matter is that the light reaching Earth from distant stars has traveled a mind-boggling distance to do so. The speed of light is 186,282 miles per second. When you take this speed and calculate how far light can travel in one year, you get 5.87849981 x 10^12 miles per year… that is a lot of miles. Now consider that a deep space object like Messier 65 is about 35 million light years away. That means when you point your camera at that galaxy, the photon born in a star there, now entering the front element of your camera lens on its way to your sensor, has been travelling 2.05747493 x 10^20 miles. That is over 200,000,000,000,000,000,000 miles! By the time the light from those distant stars arrives at the very spot you are standing, it is more of a trickle than a roaring rapid.
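
If you want to check the arithmetic yourself, it fits in a few lines of Python (the constants are the same figures quoted above):

```python
MILES_PER_SECOND = 186_282                  # speed of light
SECONDS_PER_YEAR = 365.2425 * 24 * 3600     # one calendar year

miles_per_light_year = MILES_PER_SECOND * SECONDS_PER_YEAR
print(f"{miles_per_light_year:.8e} miles per light year")      # ~5.8785e12

m65_distance_ly = 35_000_000                # rough distance to Messier 65
print(f"{m65_distance_ly * miles_per_light_year:.8e} miles")   # ~2.0575e20
```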

If you imagine leaving a pail out in a drizzling rain for 5 minutes versus leaving a pail in the same drizzle for 5 hours, it's obvious which yields the most collected water. Analogously, if you leave the shutter on your camera open longer, you will collect more of the photons drizzling in from millions of light years away. In advanced astrophotography, it is fairly common to use CCD imagers with TEC or similar cooling to keep the imaging sensor very cold relative to the ambient temperature, eliminating almost all of the thermal noise associated with sensor heat. Shutter times up to 20 or 30 minutes are not uncommon in those cases. I am using a standard, uncooled DSLR, and the sensor gets quite hot during long exposures. I've seen it as high as 109°F with exposures in the 3 to 5 minute range. This creates a lot of noise in the image data, even with today's latest chip technology. So we compensate by taking a larger quantity of shorter exposures and then stacking them all together, like making a sandwich, to get a single representation to develop into our final image. Instead of putting the pail out for 5 hours, we are putting many pails out for 5 minutes and then dumping them all together into a larger bucket for a similar result. The more sub-exposures you capture, the better the signal-to-noise ratio in the integrated data – but the improvement grows roughly with the square root of the number of subs, so there is a diminishing return once you get to a certain number. This is one of the nuances that falls more on the art side than the science side of astrophotography, and experience goes a long way in simplifying what could be derived with math into tribal knowledge.
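
A little numpy experiment makes that square-root behavior concrete (synthetic numbers, not real sub data):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0          # "true" pixel value coming from the target
noise = 25.0            # per-sub noise (thermal, read, sky glow), arbitrary units

def stack_snr(n_subs: int) -> float:
    """Average n synthetic subs and report the stack's signal-to-noise ratio."""
    subs = signal + rng.normal(0.0, noise, size=(n_subs, 10_000))
    stacked = subs.mean(axis=0)             # simple average stack
    return stacked.mean() / stacked.std()

for n in (1, 4, 16, 64):
    print(f"{n:3d} subs -> SNR ~ {stack_snr(n):6.1f}")
# SNR roughly doubles each time the sub count quadruples -- the
# diminishing return mentioned above.
```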

So how do we actually take the images? First, understand that most cameras will allow you to program the shutter to stay open for up to 30 seconds without going to what is called a "bulb" exposure. Since most AP subs will be longer than 30 seconds, you will need a way to take bulb exposures. Assuming your camera can do this (most, if not all, modern DSLRs can), you will not want to sit there and hold the shutter release for minutes at a time over hours of iterations, so the most basic tool you could use would be an intervalometer shutter release. Almost all astrophotographers, however, use a computer interface to the DSLR's digital port and software on their computer to control this function. There are many options available in the open source/freeware community as well as numerous commercial programs for purchase. I shoot with a Canon DSLR and currently my needs are fairly simple, so I went with a very popular camera control software platform called BackYard EOS (BYE). It allows me to set up imaging runs where I specify the parameters in the software and then execute the batch job on the camera. It works very well and is quite affordable.
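
Under the hood, a batch imaging run is really just a loop. Here is a hypothetical sketch (the `Camera` class is an imaginary stand-in; BYE drives the real camera through its own interface, not a Python API):

```python
import time

class Camera:
    """Imaginary stand-in for a real camera-control library."""
    def open_shutter(self): print("shutter open")
    def close_shutter(self): print("shutter closed")
    def download_raw(self, name): print(f"saved {name}")

def run_batch(camera, exposure_s, count, settle_s=5):
    """Take `count` bulb exposures of `exposure_s` seconds each."""
    for i in range(count):
        camera.open_shutter()
        time.sleep(exposure_s)              # hold the "bulb" open
        camera.close_shutter()
        camera.download_raw(f"sub_{i:03d}.cr2")
        time.sleep(settle_s)                # let things settle between subs

run_batch(Camera(), exposure_s=3, count=2, settle_s=1)  # toy values for a quick demo
```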

Another software tool that may or may not be required, depending on how long your exposures are and the focal length of the imaging optics, is some flavor of tracking-assistance software. We are all moving together through space quite rapidly… the Earth's surface rotates at around 1,000 miles per hour. The Earth is spinning on her axis as we orbit the Sun, and this motion complicates photographing objects in the night sky. While they are also in motion, those objects are effectively stationary relative to us because of the immense distances involved.

If you have ever taken a long exposure on a clear night from a tripod, you have probably seen "star trails" in your image. This is an artifact of the Earth's rotation blurring the distant stars within your field of view as the sky appears to drift overhead. The single most important tool in counteracting this drift in your long exposure images is the mount (the tripod-lookin' thing) that everything sits atop. There is a specialized type of mount called a German Equatorial Mount (GEM) that moves along the right ascension and declination axes (as opposed to ranging in altitude and azimuth), using a mechanical progression that opposes the field movement caused by our spinning planet. They come in all sorts of capacities and quality levels, but all work on basically the same principle: you align them to our polar axis, provide information about where you are on Earth (latitude and longitude) and what time of year and day it is, and let them do their magic.
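
To get a feel for why tracking matters: the sky drifts past a fixed camera at roughly 15 arcseconds per second of time. A rough sketch of the math (the pixel size and focal length here are example values, not my rig):

```python
SIDEREAL_RATE = 15.04   # arcseconds of apparent sky drift per second of time

def image_scale(pixel_um: float, focal_mm: float) -> float:
    """Arcseconds of sky covered by one pixel for a given sensor and optics."""
    return 206.265 * pixel_um / focal_mm

def max_untracked_s(pixel_um: float, focal_mm: float, tolerance_px: float = 1.0) -> float:
    """Rough exposure limit before stars trail by `tolerance_px` pixels."""
    return tolerance_px * image_scale(pixel_um, focal_mm) / SIDEREAL_RATE

# Example: 4.3 um pixels behind 700 mm of focal length...
print(f"{max_untracked_s(4.3, 700):.2f} s")   # under a tenth of a second -- hence the GEM
```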

The GEM will take care of most of your tracking woes, but there is a way to improve its performance with something called auto-guiding. Auto-guiding is a technique where you use a separate camera and specialized software to "lock on" to a star in the sky. The software analyzes the star's measured movement compared to what is expected from the tracking mount and then issues corrective feedback to the mount through a loop into the mount control system. The camera can look through an additional telescope (aptly called a guide scope) or through the imaging scope itself (off-axis guiding, or OAG), but the principle is the same.
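
Conceptually, the guiding loop is just "measure the drift, push back." A hypothetical proportional-correction sketch (not PHD's actual algorithm):

```python
def guide_step(lock_pos, measured_pos, aggressiveness=0.7):
    """Return (ra, dec) correction pulses that push the star back toward lock."""
    dx = measured_pos[0] - lock_pos[0]      # drift along RA, in pixels
    dy = measured_pos[1] - lock_pos[1]      # drift along Dec, in pixels
    return (-aggressiveness * dx, -aggressiveness * dy)

# One cycle: the guide star has drifted 0.8 px in RA and 0.3 px in Dec.
print(guide_step((100.0, 100.0), (100.8, 100.3)))   # ~ (-0.56, -0.21)
```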

Like the image capture software, there are many choices for auto-guiding software. I use a very popular freeware program called PHD Guiding. It is incredibly easy to set up and use. In fact, PHD actually stands for “Push Here, Dummy” because it is that simple to get going. This is likely why it is so popular in the astrophotography community.

Since this post is intended to be an introduction to the tools I have decided to use in learning to image, I want to clarify that the image capture software and auto-guiding software are intended to make life easier in the imaging process and to improve the quality of your long exposure data with better tracking, but they are absolutely not required to capture data to process. However, you must have some sort of software to take all the individual RAW subs from the camera and stack them together for development into an image. I started out in the very beginning (not that long ago at all) with a program called Deep Sky Stacker (DSS). It is free, very easy to use, and actually works quite well. I ended up switching to a more robust tool for two reasons: 1) I knew I would end up there before long and may as well start learning how to use it… and 2) I had trouble stretching images out of DSS in another program.

When you stack all your RAW subs (I'll make another post about how this works later), you end up with a linear image that must be "stretched" to bring out the detail in the data and then converted to a non-linear image format that can be viewed normally on the web, etc. When DSS finished stacking, its tools for stretching the image were very basic and didn't allow much control, so I would try to save the result as a TIFF and import it into Photoshop CC to finish the stretch. I just never could convert the 32-bit TIFF to a 16-bit color space without completely jacking up the aesthetics of the image. Some of my earliest AP photos were the result of this struggle.
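
The nonlinear stretch itself is typically built around a midtones transfer function, the same kind of curve behind the midtones slider in a histogram tool like PI's HistogramTransformation. A minimal sketch (just the curve; real tools add shadow and highlight clipping on top):

```python
import numpy as np

def mtf(m: float, x: np.ndarray) -> np.ndarray:
    """Midtones transfer function: maps x = m to 0.5 while fixing 0 and 1."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Linear stacks pile almost everything near black; a small midtones
# balance lifts the faint signal without clipping the highlights.
linear = np.array([0.001, 0.01, 0.05, 0.5, 1.0])
print(mtf(0.05, linear))   # 0.001 -> ~0.019, while 1.0 stays exactly 1.0
```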

Continuing in the same vein of many, many options to choose from, this software category is no different. I selected an extensible astrophotography image processing environment called PixInsight as my tool of choice, and it has really changed the way I look at workflow and what is truly possible when working with RAW sensor data from a series of subs. PixInsight (PI) is one of the most powerful software environments I've ever been exposed to, and I've not even begun to scratch the surface of its capabilities. I went back and reprocessed the original baby-step images with PI and saw dramatic improvements in the level of detail I could draw out of the linear stack. I have also nearly eliminated the need for another editing stage (e.g., Photoshop CC) and can contain 100% of my post-processing workflow within the PI environment. I am amazed by this platform every time I launch the application.

While I foresee my choice in camera control and image capture software evolving over time as I learn more and my physical toolset improves, I see PixInsight meeting all my needs for years to come on the post processing side of the house.

So there are the software tools that I currently use to create images of the night sky: BackYard EOS, PHD Guiding, and PixInsight.

Apple of my eye.

Pages on the calendar turn, and practice is few and far between. One significant development is my ability to access both dark-site observatory locations on my own (i.e., I was granted solo access). The biggest upside is that when the weather is nice, I can elect to exchange sleep for the long drive to/from the site and get a little exposure time in without coordinating with a larger group.
Last night, the moon was waxing gibbous, around 85% illuminated. That makes even being at a "dark site" not so dark. Nonetheless, it was a beautiful night. The temperature eventually dropped to the high 70s with a slight breeze.
Initially I attempted 10 subs on the Veil Nebula, but the moonlight prevented any deep imaging of fainter targets. I decided to try for a brighter DSO… one you can actually make out as a fuzzy blur with binoculars, just to the side of the constellation Sagitta (Latin for "arrow," and not related in any way to the larger constellation Sagittarius). Messier 27, the Dumbbell Nebula (NGC 6853), is a planetary nebula in the constellation Vulpecula at a distance of about 1,360 light years. It was the first planetary nebula ever discovered, found by Charles Messier in 1764. It is sometimes also referred to as the Apple Core Nebula.

Messier 27

Practice and lessons learned.

Continuing on with the Messier list, this is my first and humble attempt at M57, the Ring Nebula (NGC 6720). This planetary nebula in the northern constellation of Lyra was discovered by the French astronomer Antoine Darquier de Pellepoix in January 1779. At about 2,283 light years from Earth, the nebula formed when a shell of ionized gas was expelled into the surrounding interstellar medium by a red giant star passing through the last stage of its evolution before becoming a white dwarf. The planetary nebula nucleus, barely visible as a speck in my image, was discovered by Hungarian astronomer Jenő Gothard on September 1, 1886. Within the last two thousand years, the central star of the Ring Nebula has left the asymptotic giant branch after exhausting its supply of hydrogen fuel. It no longer produces energy through nuclear fusion and, in evolutionary terms, is now becoming a compact white dwarf.

Messier 57


M51, imaged again through the f/7 optics with longer but fewer exposures. The sensor pushed to 109°F, so the noise was just silly, but you can see a little more depth in the dust lanes. I prefer the color and star density in the f/4 image, but clearly more data integration would fix this… I'm not really happy with this one, but it's worth documenting progress and lessons learned.

Messier 51


So we come to the primary target of the week… Messier 8, the Lagoon Nebula (NGC 6523). This emission nebula is a giant interstellar cloud in the constellation Sagittarius and was discovered by Giovanni Hodierna before 1654. The Lagoon Nebula is estimated to be somewhere between four and six thousand light years from the Earth and is about 110 light years across along its longest dimension.

Messier 8