Bits and bytes basics.

Astrophotography has come a long, long way since the introduction of digital photography in general. The ability of a camera to record the RAW sensor data to a file, and to subsequently manipulate that file and all its buddies on a computer using specialized software, has given the novice access to the stars.

I notice that it is very common for folks who are familiar with capturing an image with a camera to ask questions like, "How long was the exposure?" or "What settings and lens did you use?", assuming the frame was created in a similar fashion to the one just taken of the family pet. I take a lot of photos of my pets.

The truth of the matter is that the light reaching Earth from distant stars has traveled a mind-boggling distance to do so. The speed of light is 186,282 miles per second. When you take this speed and calculate how far light can travel in one year, you get 5.87849981 x 10^12 miles per year… that is a lot of miles. Now consider that a deep space object like Messier 65 is about 35 million light years away. That means when you point your camera at that galaxy, the photon born in a star there, now entering the front element of your camera lens on the way to your sensor, has been travelling 2.05747493 x 10^20 miles. That is over 200,000,000,000,000,000,000 miles! By the time the light from those distant stars arrives at the very spot you are standing, it is more of a trickle than a roaring rapid.
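If you want to sanity-check that arithmetic, it reduces to two multiplications. Here is the back-of-the-envelope version in Python; the only assumption is the year length (a Julian year of 365.25 days):

```python
MILES_PER_SECOND = 186_282                   # speed of light, miles per second
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60     # one Julian year, in seconds

miles_per_light_year = MILES_PER_SECOND * SECONDS_PER_YEAR
m65_distance_miles = 35_000_000 * miles_per_light_year   # M65 is ~35 million ly away

print(f"One light year  ~ {miles_per_light_year:.4e} miles")   # ~5.879e+12
print(f"Distance to M65 ~ {m65_distance_miles:.4e} miles")     # ~2.058e+20
```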

If you imagine leaving a pail out in a drizzling rain for 5 minutes versus leaving a pail in the same drizzle for 5 hours, it's obvious which yields the most collected water. Analogously, if you leave the shutter on your camera open longer, you will collect more of the photons drizzling in from millions of light years away. In advanced astrophotography, it is fairly common to use CCD imagers with TEC or similar cooling to keep the imaging sensor very cold relative to the ambient temperature, eliminating almost all of the thermal noise associated with sensor heat. Shutter times of 20 or 30 minutes are not uncommon in those cases. I am using a standard, uncooled DSLR, and the sensor gets quite hot during long exposures. I've seen it as high as 109°F with exposures in the 3 to 5 minute range. This creates a lot of noise in the image data, even with today's latest chip technology.

So we compensate by taking a larger quantity of shorter exposures and then stacking them all together, like making a sandwich, to get a single representation to develop into our final image. Instead of putting the pail out for 5 hours, we are putting many pails out for 5 minutes each and then dumping them all together into a larger bucket for a similar result. The more sub-exposures you capture, the better your signal to noise ratio in the combined data, up to a point. There are diminishing returns once you get past a certain number. This is one of the nuances that falls more on the art side of astrophotography than the science side, and experience goes a long way toward turning what could be derived with math into tribal knowledge.
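That "diminishing returns" comment has a simple mathematical core: when the noise is random, stacking N subs improves the signal to noise ratio by roughly the square root of N, so each additional sub buys you less than the one before it. A quick simulation, with completely made-up signal and noise numbers, shows the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0        # arbitrary "true" pixel value
noise_sigma = 50.0    # arbitrary per-sub noise level

for n_subs in (1, 4, 16, 64, 256):
    # Each sub is the signal plus random noise; the stack is their average.
    subs = signal + rng.normal(0.0, noise_sigma, size=(n_subs, 10_000))
    stack = subs.mean(axis=0)
    measured_snr = signal / stack.std()
    predicted_snr = np.sqrt(n_subs) * signal / noise_sigma
    print(f"{n_subs:4d} subs -> SNR ~ {measured_snr:5.1f} (sqrt(N) predicts {predicted_snr:5.1f})")
```

Going from 1 sub to 4 doubles the SNR, but going from 64 to 256 also only doubles it, despite costing 192 more exposures. That is the diminishing return in a nutshell.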

So how do we actually take the images? First, understand that most cameras will let you program the shutter to stay open for up to 30 seconds before you have to resort to what is called a "bulb" exposure. Since most AP subs will be longer than 30 seconds, you will need a way to take bulb exposures. Assuming your camera can do this (most, if not all, modern DSLRs can), you will not want to sit there holding the shutter release for minutes at a time over hours of iterations, so the most basic tool you could use is an intervalometer shutter release. Almost all astrophotographers, however, connect a computer to the DSLR's digital port and use software to control this function. There are many options available, both in the open source/freeware community and as commercial programs for purchase. I shoot with a Canon DSLR and currently my needs are fairly simple, so I went with a very popular camera control software platform called BackYard EOS (BYE). It allows me to set up imaging runs where I specify the parameters in the software and then execute the batch job on the camera. It works very well and is quite affordable.
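Conceptually, an imaging run like the ones I set up in BYE is just a loop: open the shutter for the sub length, close it, pause, repeat. Here is a minimal sketch of that idea in Python; trigger_bulb_exposure is a hypothetical stand-in for whatever call your camera control software actually makes:

```python
import time

SUB_LENGTH_S = 180   # 3-minute subs, in my usual range
SUB_COUNT = 20
PAUSE_S = 10         # short gap between frames to let the sensor cool a bit

def trigger_bulb_exposure(seconds: float) -> None:
    """Hypothetical stand-in for whatever actually fires the camera
    (BYE, a vendor SDK, etc.): open the shutter in bulb mode,
    hold it for `seconds`, then close it."""
    time.sleep(seconds)   # placeholder only; no real camera is triggered

for i in range(SUB_COUNT):
    print(f"Capturing sub {i + 1} of {SUB_COUNT} ({SUB_LENGTH_S}s)...")
    trigger_bulb_exposure(SUB_LENGTH_S)
    time.sleep(PAUSE_S)
```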

Another software tool that may or may not be required, depending on how long your exposures are and the focal length of your imaging optics, is some flavor of tracking assistance software. We are all moving together through space quite rapidly… around 1,000 miles per hour at the Earth's surface, actually. The Earth is spinning on her axis as we orbit the Sun, and this motion complicates photographing objects in the night sky. The stars are also in motion, but they appear stationary to us because of the immense distances between us.

If you have ever taken a long exposure on a clear night from a tripod, you have probably seen "star trails" in your image. This trailing is an artifact of the Earth's rotation smearing the distant stars across your field of view. (Strictly speaking, the trailing itself is the sky's apparent diurnal motion; the related twisting of the frame you get when a mount tracks in altitude and azimuth instead of following the sky's rotation is what is termed "field rotation." An equatorial mount counters both.) The single most important tool for counteracting this motion in your long exposure images is the mount (the tripod-lookin' thing) that everything sits atop. There is a specialized type of mount called a German Equatorial Mount (GEM) that moves along the right ascension and declination axes (as opposed to ranging in altitude and azimuth), using a mechanical progression that opposes the apparent field movement caused by our spinning planet. They come in all sorts of capacities and quality levels, but all work on basically the same principle: you align them to the Earth's polar axis, tell them where you are on Earth (latitude and longitude) and what the date and time are, and let them do their magic.
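To put numbers on how fast the sky moves: near the celestial equator it drifts at about 15 arcseconds per second of time, and how badly that trails in your image depends on your pixel scale, which comes from focal length and pixel size. A quick calculation using the standard formulas, with assumed sample numbers for the lens and sensor:

```python
import math

SIDEREAL_RATE_ARCSEC_PER_S = 15.04   # apparent sky motion at the celestial equator

def pixel_scale(focal_length_mm: float, pixel_pitch_um: float) -> float:
    """Image scale in arcseconds per pixel (standard small-angle formula)."""
    return 206.265 * pixel_pitch_um / focal_length_mm

def trail_in_pixels(exposure_s: float, focal_length_mm: float,
                    pixel_pitch_um: float, declination_deg: float = 0.0) -> float:
    """Star trailing, in pixels, for an untracked exposure.
    Drift slows by cos(declination) away from the celestial equator."""
    drift_arcsec = (SIDEREAL_RATE_ARCSEC_PER_S * exposure_s
                    * math.cos(math.radians(declination_deg)))
    return drift_arcsec / pixel_scale(focal_length_mm, pixel_pitch_um)

# Assumed example: a 300 mm lens on a DSLR with 4.3 micron pixels.
print(f"{trail_in_pixels(30, 300, 4.3):.0f} px of trailing in a 30 s untracked sub")
```

Roughly 150 pixels of smear in a 30-second untracked frame at that focal length is why the mount matters so much.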

The GEM will take care of most of your tracking woes, but there is a way to improve its performance with something called auto-guiding. Auto-guiding is a technique where you use a separate camera and specialized software to "lock on" to a star in the sky. The software analyzes the star's directional movement compared to what the tracking mount expects, then issues corrective feedback to the mount through a loop into the mount control system. The guide camera can look through an additional telescope (aptly called a guide scope) or through the imaging scope itself (off-axis guiding, or OAG), but the principle is the same.
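Stripped to its essence, auto-guiding is a feedback loop: measure how far the guide star has drifted, then pulse the mount to push it back. A toy sketch of that loop follows; measure_star_offset and pulse_mount are hypothetical stand-ins, and real guiding software like PHD does much more, including calibrating how the mount responds to its pulses:

```python
import time

GUIDE_AGGRESSION = 0.7   # apply only part of the measured error each cycle,
                         # so the loop doesn't chase atmospheric seeing

def measure_star_offset() -> tuple[float, float]:
    """Hypothetical stand-in: centroid the guide star in the latest guide
    exposure and return its (dx, dy) drift, in arcseconds, from the
    position it was locked onto."""
    return (0.0, 0.0)    # placeholder

def pulse_mount(ra_arcsec: float, dec_arcsec: float) -> None:
    """Hypothetical stand-in: issue a correction pulse to the mount
    (e.g. via an ST-4 port or the mount's serial protocol)."""
    pass

for _ in range(100):                 # one iteration per guide exposure
    dx, dy = measure_star_offset()
    # Negative feedback: nudge the mount opposite to the measured drift.
    pulse_mount(-GUIDE_AGGRESSION * dx, -GUIDE_AGGRESSION * dy)
    time.sleep(2.0)
```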

Like the image capture software, there are many choices for auto-guiding software. I use a very popular freeware program called PHD Guiding. It is incredibly easy to set up and use. In fact, PHD actually stands for “Push Here, Dummy” because it is that simple to get going. This is likely why it is so popular in the astrophotography community.

Since this post is intended to be an introduction to the tools I have decided to use while learning to image, I want to clarify that the image capture software and auto-guiding software are there to make life easier during the imaging process and to improve the quality of your long exposure data through better tracking; they are absolutely not required to capture data to process. However, you must have some sort of software to take all the individual RAW subs from the camera and stack them together for development into an image. I started out in the very beginning (not that long ago at all) with a program called Deep Sky Stacker (DSS). It is free, very easy to use, and actually works quite well. I ended up switching to a more robust tool for two reasons: 1) I knew I would end up there before long and may as well start learning how to use it… and 2) I had trouble stretching images out of DSS in another program.

When you stack all your RAW subs (I'll make another post about how this works later), you end up with a linear image that must be "stretched" to bring out the detail in the data and then converted to a non-linear image format that can be viewed normally on the web, etc. When DSS finished stacking, its tools for stretching the image were very basic and didn't allow much control, so I would try to save the result as a TIFF and import it into Photoshop CC to finish the stretch. I just never could convert the 32-bit TIFF to a 16-bit color space without completely wrecking the look of the image. Some of my earliest AP photos were the result of this struggle.
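To make "stretching" concrete: in the linear stack, almost everything interesting lives in the very bottom of the histogram, and a non-linear curve lifts it without blowing out the bright stars. Here is a minimal sketch of one common stretch (an asinh curve) using numpy, assuming the stack has already been normalized to floats between 0 and 1. This illustrates the idea only; it is not how DSS or PixInsight implement their stretches:

```python
import numpy as np

def asinh_stretch(linear: np.ndarray, strength: float = 100.0) -> np.ndarray:
    """Map a linear image (float values in 0..1) through an asinh curve,
    one common non-linear stretch, and return 16-bit output."""
    stretched = np.arcsinh(strength * linear) / np.arcsinh(strength)
    return (np.clip(stretched, 0.0, 1.0) * 65535).astype(np.uint16)

# Example with synthetic data: a faint pixel at 0.002 comes out around
# 2457 in 16-bit, versus only ~131 (0.002 * 65535) if left linear.
faint = np.array([0.002], dtype=np.float32)
print(asinh_stretch(faint))
```

The strength parameter controls how aggressively the faint end is lifted, which is exactly the kind of control DSS's built-in tools didn't give me.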

Continuing in the same vein of many, many options to choose from, this software category is no different. I selected an extensible astrophotography image processing environment called PixInsight as my tool of choice, and it has really changed the way I look at workflow and what is truly possible when working with RAW sensor data from a series of subs. PixInsight (PI) is one of the most powerful software environments I have ever been exposed to, and I've not even begun to scratch the surface of its capabilities. I went back and reprocessed my original baby-step images with PI and saw dramatic improvements in the amount of detail I could draw out of the linear stack. I have also nearly eliminated the need for another editing stage (e.g., Photoshop CC) and can contain 100% of my post-processing workflow within the PI environment. I am amazed by this platform every time I launch the application.

While I foresee my choice of camera control and image capture software evolving over time as I learn more and my physical toolset improves, I see PixInsight meeting all my needs for years to come on the post-processing side of the house.

So there are the software tools that I currently use to create images of the night sky: BackYard EOS, PHD Guiding, and PixInsight.
