Galaxy Zoo Talk

New GOODS GZ images, a science question

  • JeanTate

    In Karen's New Images on Galaxy Zoo, Part 1 blog post, the caption for the first Figure reads:

    Comparison of the different sets of images from the GOODS survey taken with the Hubble Space Telescope. The left shows shallower images from GZH with only 2 sets of exposures; the right shows the new, deeper images with 5 sets of exposures now being classified.

    Here is that Figure:

    [Figure: the comparison image from the blog post - shallower 2-epoch GZH images (left) vs. the new, deeper 5-epoch images (right)]

    To my eyes, the colors are very different, as is - obviously - the resolution.

    In what bands/filters were the two CANDELS and five GOODS data ('exposures') taken?

    How - in some detail - are the flux (ADU?) values from the Hubble data (FITS?) transformed to the PNG color values in the GZ GOODS images we get to classify?

    Also, given that it's the Hubble which took the data in both programs, why does the resolution seem so much better? As far as I know, longer exposures (integration times) will change resolution only marginally (resolution is set by the size of the telescope mirror and the wavelength of the band/filter). What am I missing?

    Posted

  • klmasters (scientist, admin)

    One that I think @vrooje could answer better than me, so I hope this tags her to notice it. 😃

    Posted

  • vrooje (admin, scientist), in response to JeanTate's comment.

    In what bands/filters were the two CANDELS and five GOODS data ('exposures') taken?

    The GOODS images were taken in 4 bands of the HST Advanced Camera for Surveys (ACS): B, V, I, and z'. In the HST ACS instrument handbook these are called F435W, F606W, F775W, and F850LP, respectively. (Note: some other surveys use F814W for their I band instead.) Those filter names encode the central wavelength and the width of the filter: for example, B is centered at 435 nm and the W marks it as a wide-band filter, while z' is centered at 850 nm and the LP marks it as a "long pass" filter, which is even broader than a W filter.

    The 5-epoch images are created by combining all 4 filters into an RGB image. @kwwillett has the details on the precise numbers used, but it was an asinh stretch and I think it was B for blue, V for green, and an average of I and z' for red (Kyle can correct me if I've remembered wrong - this is off the top of my head).
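
    If you want to see roughly what that looks like in code, here's a minimal sketch (not the actual GZ pipeline code, and using the channel assignment as I've remembered it above; the b, v, i, z arrays and the stretch/Q values are placeholders):

    ```python
    from astropy.visualization import make_lupton_rgb

    def goods_rgb(b, v, i, z, stretch=0.5, Q=8):
        """asinh-stretched colour composite: B -> blue, V -> green,
        and the average of I and z' -> red (as remembered above)."""
        red = 0.5 * (i + z)  # average of the two reddest bands
        return make_lupton_rgb(red, v, b, stretch=stretch, Q=Q)
    ```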

    Surrounding the GOODS-South field is another, much larger field called the GEMS field, which was observed at a much shallower depth in only the V and z' bands. In the GEMS data release they wanted to include the GOODS-South area, but they didn't re-observe it, because it had already been observed to a deeper total exposure than their larger field. For continuity of image depth, GEMS used only the first epoch of the GOODS-South observations, in only those two bands, in their survey. The GEMS survey images, including those that were actually in the GOODS-South field, were transformed to RGB color images using just the V and z' filters and added to Galaxy Zoo: Hubble when that project started. The full-depth GOODS-South images were not added, and the GOODS-North images also used only the V and z' bands to make their color images.

    The recent addition to GZ is that the full-depth images have been added in both GOODS-North and GOODS-South using all 4 bands to make the color images.

    CANDELS mostly uses a different camera on HST to perform its imaging: the WFC3/infrared channel. It takes images in 2 filters: what we call J and H, or F125W and F160W. Note that in this case 125 actually refers to 1250 nm, and 160 to 1600 nm. (I know; don't shoot the messenger! 😃) To make a 3-color image set for GZ, those images were combined with the ACS I-band images from each field as the blue channel. Between the new CANDELS images and the ones that have already been classified by the volunteers, only the depth is different -- the color prescription didn't change.

    Also, given that it's the Hubble which took the data in both programs, why does the resolution seem so much better? As far as I know, longer exposures (integration times) will change resolution only marginally (resolution is set by the size of the telescope mirror and the wavelength of the band/filter). What am I missing?

    You're right that the resolution isn't actually better between the left and right columns in the image above -- I think it just looks that way because the noise is greatly reduced in the deeper images, and the extra color information probably helps too.

    Hope that helps!

    Cheers,
    -Brooke

    Posted

  • KWillett (scientist, admin, translator)

    Hi Jean,

    The colors are slightly different - values for the scaling change slightly depending on, among other things, the noise background (which you can see is significantly lower in the new images). The bands are the same for all images, however.

    • For GOODS, the images are made from four bands with the Hubble filters corresponding to rest-frame B, V, I, and Z. These are combined into the color composite by using B for the red channel, a weighted mix of V and I for the blue channel, and Z for the green channel.
    • For CANDELS, the images are made from two cameras on Hubble; the WFC3 F160W, F125W, and ACS F814W filters are the red, green, and blue channels respectively.

    The ADU-to-PNG conversion uses several existing algorithms. Briefly, there's a non-linear scaling applied to the pixel range, conversion of the colors of saturated pixels, and scaling of the individual bands. The parameters controlling all of these are mostly set by hand by the scientists; we tried a wide range of them and settled on the set that revealed the most detail (and suppressed the noise) for the largest number of galaxies in the sample.
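
    For anyone curious, here is a bare-bones sketch of those operations (not our production code; the scale factors, stretch, and filename below are just illustrative placeholders):

    ```python
    import numpy as np
    from PIL import Image

    def bands_to_png(r, g, b, scales=(1.0, 1.2, 2.0), stretch=0.02,
                     filename="cutout.png"):
        """Toy band-to-PNG conversion: per-band scaling, a non-linear
        (asinh) stretch, clipping of the brightest/saturated pixels,
        and conversion to an 8-bit image."""
        channels = [np.arcsinh(band * s / stretch)
                    for band, s in zip((r, g, b), scales)]
        rgb = np.stack(channels, axis=-1)
        rgb = np.clip(rgb / np.percentile(rgb, 99.5), 0.0, 1.0)
        Image.fromarray((rgb * 255).astype(np.uint8)).save(filename)
    ```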

    The resolution is the same, since the mirror size and wavelength haven't changed; however, the ability to see low surface-brightness features and point sources will improve with deeper imaging simply because the noise in the images is going down.
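
    To put a rough number on that: in the background-limited regime the pixel noise falls roughly as the square root of the total exposure time, so if the 5-epoch stacks have about 2.5 times the exposure of the 2-epoch images (an assumption - the epochs aren't necessarily all the same length), you'd expect something like:

    ```python
    import math

    # noise scales as ~1/sqrt(exposure time) when background-limited
    noise_ratio = math.sqrt(5 / 2)              # ~1.6x lower pixel noise
    depth_gain = 2.5 * math.log10(noise_ratio)  # ~0.5 mag deeper limit
    print(f"~{noise_ratio:.1f}x lower noise, ~{depth_gain:.1f} mag deeper")
    ```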

    Posted

  • JeanTate

    Thanks @vrooje and @KWillett.

    I now understand that "2 sets of exposures" and "5 sets of exposures" refer not to the number of filters used, but to different sets of observations - different epochs, i.e. sets of Hubble 'orbits' taken at quite different times.

    Which raises this question, informed by my recent participation in Snapshot Supernova: how was variability handled? Even if there are no supernovae in any of the fields, at least some of the galaxies have AGNs, and at least some of those will have varied (in flux) between exposures.

    It seems the colors are Luptonized, right? And as vrooje said, you used an asinh transform (or something similar).

    These are combined into the color composite by using B for the red channel, a weighted mix of V and I for the blue channel, and Z for the green channel

    Wow! So the color assignment is not chromatic (short wavelength mapped to blue, mid to green, long to red), but composite (the order does not preserve wavelength order). I must say that, to me, the images are quite strange ... for example, what look like star-forming regions appear blue (or some shade of violet/purple), and nuclei/bulges yellow (or some shade of orange), yet per the composite color mapping used, it's actually the other way round!

    The resolution is the same, since the mirror size and wavelength hasn't changed

    And neither has the camera (I forgot to include that in my initial post; a camera can change the apparent resolution).

    however, the ability to see low surface-brightness features and point sources will improve with deeper imaging simply because the noise in the images is going down

    Indeed. And that's quite clear from these images, once you get used to the colors being different. I had not appreciated just how close to the noise floor most parts of most objects in GZH are.

    Posted

  • vrooje (admin, scientist)

    I suspect there's a typo in Kyle's above note and that the mapping is B for blue, V+I for green, and z' for red?

    Variability isn't considered at all by us here - the individual exposures are simply added together (co-added) for Galaxy Zoo, and indeed for most science uses in these fields. However the individual epochs are still available (publicly as well, on a legacy archive somewhere such as MAST) and they are used in some science investigations by others. The GOODS survey was designed with an observational cadence that was likely to detect supernovae and also be useful for AGN variability studies, and papers have been written (though not by me) on both topics.

    Posted

  • JeanTate, in response to vrooje's comment.

    Thanks Brooke.

    I suspect there's a typo in Kyle's above note and that the mapping is B for blue, V+I for green, and z' for red?

    That's what I thought too; perhaps Kyle could confirm?

    Variability isn't considered at all by us here - the individual exposures are simply added together (co-added) for Galaxy Zoo, and indeed for most science uses in these fields.

    Thanks. I wonder, then, if extreme variability is detectable in this round of CANDELS/GOODS images?

    At the zeroth level, a supernova caught at maximum is ~as bright as its host galaxy; if it appears in only one of the five epochs, it should still show up as a fairly bright point source (compared to the host galaxy) in a GOODS image (in this incarnation of GZ). The SED of a supernova means it likely won't have a strong color in a GOODS image, so such a point source won't be startlingly colorful.
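
    Putting rough numbers on that (my own back-of-the-envelope, assuming the five epochs are co-added with equal weight and similar depth):

    ```python
    import math

    # A SN at peak that is as bright as its host, but present in only
    # 1 of the 5 co-added epochs, has its flux diluted by a factor of 5:
    dilution_mag = 2.5 * math.log10(5)  # ~1.75 mag fainter than at peak
    print(f"SN appears ~{dilution_mag:.2f} mag below peak in the co-add")
    ```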

    Might be helpful to those zooites who like to search for supernovae in GZ images ...

    Posted

  • JeanTate, in response to vrooje's comment.

    Taking a closer look, I don't think so ...

    I suspect there's a typo in Kyle's above note and that the mapping is B for blue, V+I for green, and z' for red?

    I haven't quite figured out how to copy the relevant stuff here yet, but the two sets of histograms of the R, G, and B channels for a cutout selection from the bottom object (the one which looks a bit like a face-on spiral) are very different; among other things, the histograms of the deeper GOODS image are ~bimodal while the shallower ones are not.
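
    (For anyone who wants to reproduce this, here is one way to pull the channel histograms out of the PNG cutouts - a minimal sketch; the filenames are placeholders:)

    ```python
    import numpy as np
    from PIL import Image

    # Compare the R, G, B channel histograms of a shallow and a deep cutout.
    for name in ("shallow_cutout.png", "deep_cutout.png"):
        rgb = np.asarray(Image.open(name).convert("RGB"))
        for channel, label in zip(np.moveaxis(rgb, -1, 0), "RGB"):
            counts, _ = np.histogram(channel, bins=32, range=(0, 255))
            print(name, label, counts)
    ```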

    Also, as your eyes tell you, many of the 'blue' regions in the shallow image have much higher red-channel values in the deeper image. To me that points to the mapping for the deeper images being composite (as Kyle wrote), not chromatic.

    Posted

  • KWillett (scientist, admin, translator)

    Sorry for not seeing this - both @vrooje and @JeanTate are correct in looking closely at the color schemes, and there was a typo in my earlier reply. The mapping we used for the RGB channels is:

    • R = z band (F850LP)
    • G = i band (F775W)
    • B = b + v bands (F435W and F606W)

    All the filter names above refer to the ACS camera onboard Hubble.

    If anyone is interested in experimenting with the color balance and making your own JPEG images, please do! The FITS images are available from the MAST archive system as high-level science products, which anyone can download. As for making the JPEGs, I've posted the relevant parts of my Python code here. It doesn't include absolutely everything, but you can see how we map filters to different channels and the sorts of operations we do to scale and adjust balance.
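
    To give a flavour of the channel mapping, here's a minimal sketch (not the code linked above; the cutout filenames are placeholders, and treating "b + v" as a straight average is my shorthand):

    ```python
    from astropy.io import fits

    # Read the four ACS band cutouts (downloaded from MAST) and build
    # the three colour channels with the mapping listed above.
    b = fits.getdata("goods_f435w_cutout.fits")
    v = fits.getdata("goods_f606w_cutout.fits")
    i = fits.getdata("goods_f775w_cutout.fits")
    z = fits.getdata("goods_f850lp_cutout.fits")

    red = z               # R = z band (F850LP)
    green = i             # G = i band (F775W)
    blue = 0.5 * (b + v)  # B = b + v bands (F435W and F606W)
    # ...then scale/stretch the channels (e.g. an asinh stretch) and save.
    ```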

    Posted

  • JeanTate, in response to KWillett's comment.

    Thanks Kyle.

    So the mapping is chromatic, yay! 😃

    Posted