What is the “lost light” in this unusual Hubble Deep Sky image?
The Daily Galaxy article “The Lost Hubble” –New! Deepest Image of the Universe Ever Taken says:




To produce the image a group of researchers from the Instituto de Astrofísica de Canarias (IAC) led by Alejandro S. Borlaff used original HUDF images from the Hubble Space Telescope. After improving the process of combining several images the group was able to recover a large quantity of light from the outer zones of the largest galaxies in the HUDF. Recovering this light, emitted by the stars in these outer zones, was equivalent to recovering the light from a complete galaxy (“smeared out” over the whole field) and for some galaxies this missing light shows that they have diameters almost twice as big as previously measured.




The image looks really strange; what is going on? Is there a technical article associated with this work?



[image from the article]
Tags: data-analysis, hubble-telescope, deep-sky-observing, image-processing
asked Jan 25 at 16:15 by uhoh
3 Answers

          Let me see if I can explain the main aim and accomplishment of this work.



First off: the picture you're puzzling over is a "luminance RGB" image, in which the bright regions are represented by color (a sort of pseudo-true color using near-infrared images), with the second faintest parts represented with black and the very faintest parts with white. The latter aren't quite "garbage", as Hobbes suggests in their answer, but they are, relatively speaking, the noisiest parts of the image, so there's little real information to be found there.



          This paper (Borlaff et al.; see the link in Hobbes's answer) is about re-processing near-infrared HST images originally taken about ten years ago as part of the Ultra Deep Field. Previous processing of these images (e.g., Koekemoer et al. 2013 ["HUDF12"] and Illingworth et al. 2013 ["XDF"]) was focused on getting information about the smallest, faintest galaxies, which are mostly the really distant, high-redshift galaxies. Because of this, the crucial stage of sky subtraction had some biases: in particular, it tended to treat the faint outer regions of the large, nearer galaxies as part of the sky to be subtracted. This is actually fine for the analysis of the small, distant galaxies, but means that if you do want to analyze the outer regions (outer disks, faint stellar halos, remnants of merger structures, etc.) of the larger, closer galaxies, you have the problem that their outer regions are over-subtracted (hence the "missing light") and thus unmeasurable.



          (The "sky" being subtracted is a combination of emission from certain atoms in the tenuous outer atmosphere above HST, sunlight scattered from dust grains in the inner solar system, and the so-called "extragalactic background" = the combined light from unresolved distant galaxies.)



          The abstract mentions four improvements the new study implemented when they re-processed the HST images: "1) creation of new absolute sky flat fields, 2) extended persistence models, 3) dedicated sky background subtraction and 4) robust co-adding."



          I would suggest that the third item is perhaps the most important: they implement a method which does not subtract off the faint outer regions of the larger galaxies, and thus the resulting images still have information about the outer parts of these galaxies.
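As a toy illustration of that over-subtraction bias (this is not the paper's actual pipeline; the 1-D profile, numbers, and masking radius below are all invented), consider a naive sky estimate that absorbs a galaxy's faint wings versus one that masks the source first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": an exponential galaxy profile on a flat sky, plus noise.
x = np.arange(512.0)
galaxy = 100.0 * np.exp(-np.abs(x - 256.0) / 30.0)   # bright core, faint wings
true_sky = 10.0
image = galaxy + true_sky + rng.normal(0.0, 1.0, x.size)

# Naive sky estimate: median of the whole frame.  The galaxy's faint wings
# leak into the estimate, so the "sky" comes out biased high, and subtracting
# it erases the outskirts of the galaxy.
naive_sky = np.median(image)

# More careful estimate: mask a generous region around the source before
# taking the median, so the wings do not contaminate the sky.
mask = np.abs(x - 256.0) > 150.0
masked_sky = np.median(image[mask])

print(f"true sky:   {true_sky:.2f}")
print(f"naive sky:  {naive_sky:.2f}  (biased high -> outskirts over-subtracted)")
print(f"masked sky: {masked_sky:.2f}")
```

Subtracting the naive estimate removes real galaxy light along with the sky, which is exactly the "missing light" the new processing recovers.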



          The plot below (extracted from Figure 20 of the paper) illustrates the kind of improvement they were after. It shows the surface brightness (in the F105W near-infrared filter) of one of the largest galaxies (a giant elliptical -- I think it's the big, round, yellow galaxy in the lower middle of the color image) as a function of radius (measured in elliptical annuli). The red triangles were measured using the XDF-processed image, the blue squares used the HUDF12-processed image, and the black points use the newly re-processed image produced as part of this paper [ABYSS]. You can see that the XDF points drop off at a radius of about 55 kpc, the HUDF12 points fall off at around 90 kpc -- but the light from this galaxy can be traced out to 140 kpc in the ABYSS-reprocessed image. This also means you can potentially analyze structure in the outer part of the galaxy out to ~ 150 kpc, which wouldn't be possible with the previous versions of the image.
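The annulus measurement itself is simple to sketch. Below is a minimal numpy version using circular annuli on a synthetic exponential galaxy (the paper measures in elliptical annuli matched to each galaxy's shape, and all numbers here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic galaxy image: circular exponential disk plus Gaussian noise.
n = 201
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n // 2, y - n // 2)
image = 50.0 * np.exp(-r / 15.0) + rng.normal(0.0, 0.2, (n, n))

# Mean surface brightness in annuli of width 5 px (circular here for
# simplicity; elliptical annuli just replace r with an elliptical radius).
edges = np.arange(0, 100, 5)
profile = [image[(r >= r0) & (r < r0 + 5)].mean() for r0 in edges]

for r0, sb in zip(edges, profile):
    print(f"r = {r0:3d}-{r0 + 5:3d} px   mean SB = {sb:8.3f}")
```

How far out the measured profile stays above the noise floor depends directly on how well the sky was subtracted, which is what Figure 20 compares between the XDF, HUDF12 and ABYSS reductions.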



[plot extracted from Figure 20 of the paper]
          (I should point out that I'm friends with, and have co-authored papers with, a couple of the authors, so I may be a bit biased -- but I think this is really impressive work!)
answered Jan 25 at 21:14, edited Jan 26 at 10:15 – Peter Erwin
            +n! Thank you for taking the time to write this up, this is exactly what I needed to read, thus my up vote of n-factorial. After reading through once or twice more, I can go back to the paper more comfortably. My guess is that they used quite a bit of image data in order to characterize these effects before finally generating this version of the Ultra Deep Field. It probably took quite a bit of patience and discipline.
            – uhoh
            Jan 25 at 23:57
When you plug the lead researcher's name into arXiv, the first search result is The missing light of the Hubble Ultra Deep Field.






3 main steps:

• Creation of sky flat fields for the four filters. This process is fully described in Sect. 2.4.

• Creation of a catalogue of all WFC3/IR datasets that may affect our mosaics (including calibration exposures) to generate a set of improved persistence models for each exposure of the HUDF. We detail this process in Sect. 2.5.

• Download and reduction of all the WFC3/IR datasets that include observations using the F105W, F125W, F140W and F160W filters on the HUDF.




          Flat sky field:




          In order to measure the relative sensitivity of the pixels of a detector (flat field), the optimal process would be to observe a uniform external source of light.




          Basically they're trying to remove all sources of noise from the image, in an attempt to make faint signals appear in places where that signal has been overwhelmed by noise.
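A toy numpy sketch of the flat-fielding idea (illustrative only; the sensitivity map, sky levels and frame counts are invented): pixel-to-pixel sensitivity differences are estimated from many exposures of a nearly uniform sky and then divided out of the science frame.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy flat field: each pixel has its own (unknown) sensitivity.
n = 64
true_flat = 1.0 + 0.1 * rng.normal(0.0, 1.0, (n, n))

# Many exposures of a nearly uniform sky at unknown, varying brightness.
sky_frames = [true_flat * rng.uniform(90.0, 110.0) + rng.normal(0.0, 1.0, (n, n))
              for _ in range(50)]

# Estimate the flat by median-stacking the normalized sky frames.
est_flat = np.median([f / np.median(f) for f in sky_frames], axis=0)

# Correct a science frame by dividing out the estimated sensitivity map.
science_raw = true_flat * 100.0          # a uniform 100 e-/s scene, as observed
science_corr = science_raw / est_flat

print("relative pixel scatter before:", np.std(science_raw) / 100.0)
print("relative pixel scatter after: ", np.std(science_corr) / 100.0)
```

Without this step, the sensitivity pattern would imprint spurious large-scale brightness variations on exactly the kind of ultra-faint structure the paper is after.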



          Persistence models:




          A known effect that affects HgCdTe IR array detectors (as is the case of the WFC3/IR) is persistence. Persistence shows up as an afterglow on the pixels that were exposed to a bright source of light in a previous exposure.



The current method of persistence correction of WFC3/IR consists in modeling the number of electrons that would be created by persistence in each pixel by all the previous exposures (up to a certain time) that were taken before the one to correct (Long et al. 2012).
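A toy version of this kind of persistence correction might look like the following (the decay law, amplitude and timescale here are invented placeholders; the actual Long et al. 2012 WFC3/IR model is considerably more elaborate):

```python
import numpy as np

# Toy persistence model: pixels hit by a bright source in an earlier exposure
# show a decaying afterglow in later ones.  Model the afterglow as an
# exponential decay scaled by the prior fluence, and subtract the prediction.
# (amplitude and tau are hypothetical, purely for illustration)
def persistence_model(prior_fluence, dt, amplitude=0.003, tau=1000.0):
    """Predicted afterglow (e-/s) dt seconds after the earlier exposure."""
    return amplitude * prior_fluence * np.exp(-dt / tau)

rng = np.random.default_rng(3)
prior = np.zeros((32, 32))
prior[10:14, 10:14] = 8.0e4           # a saturated star in the previous exposure

dt = 1200.0                           # seconds since that exposure
sky = rng.normal(1.0, 0.05, (32, 32))
observed = sky + persistence_model(prior, dt)   # ghost of the star on the sky

corrected = observed - persistence_model(prior, dt)
print("max modeled afterglow:", persistence_model(prior, dt).max())
print("ghost removed:", np.allclose(corrected, sky))
```

In the real pipeline the prediction must be built from the full history of earlier exposures, including calibration frames, which is why the authors needed a catalogue of every dataset that could affect their mosaics.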



          During long exposures, sky background can vary noticeably, introducing a non-linear component to the count rates calculated by calwf3.



          We individually estimate and subtract the sky background emission from each readout of the intermediate ima.fits files.



In order to avoid systematic biases due to the presence of defects in some regions of the detector, we created a manual data quality mask to flag those regions where the flat field cannot fully correct the differences in sensitivity.




          More image processing to remove sky background:




          In this Section we describe the methods used to remove the sky background from the individual exposures and the final mosaics of the HUDF.




          Image alignment:




          As a consequence, when comparing images from different visits, it is usual to see that they are not exactly aligned. In order to exploit the full resolving capabilities of WFC3, we need to carefully re-align the images of different visits to a single reference world coordinate system solution (WCS hereafter).
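The basic registration idea can be sketched with an FFT cross-correlation that recovers the whole-pixel offset between two visits (real HST processing solves for sub-pixel WCS solutions from catalogs of matched sources; this toy only finds integer shifts, and the field contents are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two "visits" of the same field, offset by an unknown pointing shift.
n = 128
base = rng.normal(0.0, 1.0, (n, n))
base[40, 60] += 50.0                            # a bright "star"
shifted = np.roll(base, (3, -5), axis=(0, 1))   # second visit, offset pointing

# Circular cross-correlation via FFT; its peak gives the shift that
# re-aligns the second visit onto the first.
xcorr = np.fft.ifft2(np.fft.fft2(base) * np.conj(np.fft.fft2(shifted))).real
dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
dy = dy - n if dy > n // 2 else dy              # wrap to signed offsets
dx = dx - n if dx > n // 2 else dx
print("shift to re-align second visit:", dy, dx)

realigned = np.roll(shifted, (dy, dx), axis=(0, 1))
print("aligned:", np.allclose(realigned, base))
```

Careful alignment matters here because a misregistered stack smears exactly the faint structure the co-addition is meant to reinforce.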




          and as a final step, image combination.



          Result:




          The XDF version of the HUDF WFC3/IR mosaics is dominated by a systematic bias in the form of a significant oversubtraction of the sky background around the objects with large angular size. A similar result (to a lesser extent) is obtained for the HUDF12. We successfully recover a significant amount of over-subtracted diffuse light around the largest objects of the HUDF, not detected by the previous versions of the mosaics.




          Summary:



          They've processed the images to bring out details in the galaxies. In the space between the galaxies, the image processing gives garbage results (the white areas), but they've managed to bring out detail on the edge of the galaxies that was hidden before.
            I've made an attempt at summarizing the paper, but this is far outside my expertise.
            – Hobbes
            Jan 25 at 17:31






          • 1




            $begingroup$
            "Basically they're trying to remove all sources of noise from the image" -- that's not actually possible. What they're trying to do is remove systematic instrumental variations due to differences in the sensitivity of different parts of the detector and differences in how the optics distribute the light. If you don't do this, you'll have an image with distortions in the brightness that aren't due to the actual astronomical sources.
            – Peter Erwin
            Jan 25 at 21:18
            "In the space between the galaxies, the image processing gives garbage results (the white areas)" -- the white isn't quite "garbage", it's just the faintest parts of the image (no extended light from the bright galaxies). It will be dominated by Poisson noise, so there won't be much useful information.
            – Peter Erwin
            Jan 25 at 21:19

In response to a couple of comments that Hobbes's answer is a bit thick, how about:



To reduce noise effects, the team applied flat-field corrections and then co-added multiple exposures, so that the signal accumulates coherently while the noise largely averages out.



          That's the TL;DR which leaves out a lot of really cool methods of identifying "true dark" and noise patches vs. reliable signals (stars or galaxies or whatever).
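That square-root-of-N behaviour is easy to demonstrate numerically (a toy sketch with invented numbers, not the team's pipeline): a source far below the single-exposure noise becomes measurable once enough frames are averaged.

```python
import numpy as np

rng = np.random.default_rng(5)

# A faint source (signal 0.5) buried in noise (sigma 1.0) is invisible in
# one exposure, but averaging N exposures leaves the signal untouched while
# shrinking the noise by a factor sqrt(N).
signal = 0.5
single = signal + rng.normal(0.0, 1.0, 10_000)           # SNR ~ 0.5 per pixel
stack = signal + rng.normal(0.0, 1.0, (100, 10_000)).mean(axis=0)

print("noise of single exposure:", single.std())         # ~1.0
print("noise of 100-frame stack:", stack.std())          # ~0.1 = 1/sqrt(100)
```

This only works if systematic effects (flat field errors, persistence, sky residuals) have been removed first, since those do not average away like random noise.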
            If your answer is basically just a summary/"TLDR" of another answer, you should probably leave it as a comment on that answer or suggest it as an edit to that answer. Answers should generally stand on their own as answers to the question, not merely summarize or repeat another existing answer.
            – V2Blast
            Jan 25 at 22:05











          Your Answer





          StackExchange.ifUsing("editor", function () {
          return StackExchange.using("mathjaxEditing", function () {
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          });
          });
          }, "mathjax-editing");

          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "514"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fastronomy.stackexchange.com%2fquestions%2f29327%2fwhat-is-the-lost-light-in-this-unusual-hubble-deep-sky-image%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          3 Answers
          3






          active

          oldest

          votes








          3 Answers
          3






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          8












          $begingroup$

          Let me see if I can explain the main aim and accomplishment of this work.



          First off: the picture you're puzzling over is a "luminance RGB" image, in which the bright regions are represented by color (a sort of pseudo-true color using near-infrared images), with the second faintest parts represented with black and the very faintest parts with white. The latter aren't qite "garbage", as Hobbes suggests in their answer, but they are relatively speaking the noisiest parts of the image, so there's little real information be found there.



          This paper (Borlaff et al.; see the link in Hobbes's answer) is about re-processing near-infrared HST images originally taken about ten years ago as part of the Ultra Deep Field. Previous processing of these images (e.g., Koekemoer et al. 2013 ["HUDF12"] and Illingworth et al. 2013 ["XDF"]) was focused on getting information about the smallest, faintest galaxies, which are mostly the really distant, high-redshift galaxies. Because of this, the crucial stage of sky subtraction had some biases: in particular, it tended to treat the faint outer regions of the large, nearer galaxies as part of the sky to be subtracted. This is actually fine for the analysis of the small, distant galaxies, but means that if you do want to analyze the outer regions (outer disks, faint stellar halos, remnants of merger structures, etc.) of the larger, closer galaxies, you have the problem that their outer regions are over-subtracted (hence the "missing light") and thus unmeasurable.



          (The "sky" being subtracted is a combination of emission from certain atoms in the tenuous outer atmosphere above HST, sunlight scattered from dust grains in the inner solar system, and the so-called "extragalactic background" = the combined light from unresolved distant galaxies.)



          The abstract mentions four improvements the new study implemented when they re-processed the HST images: "1) creation of new absolute sky flat fields, 2) extended persistence models, 3) dedicated sky background subtraction and 4) robust co-adding."



          I would suggest that the third item is perhaps the most important: they implement a method which does not subtract off the faint outer regions of the larger galaxies, and thus the resulting images still have information about the outer parts of these galaxies.



          The plot below (extracted from Figure 20 of the paper) illustrates the kind of improvement they were after. It shows the surface brightness (in the F105W near-infrared filter) of one of the largest galaxies (a giant elliptical -- I think it's the big, round, yellow galaxy in the lower middle of the color image) as a function of radius (measured in elliptical annuli). The red triangles were measured using the XDF-processed image, the blue squares used the HUDF12-processed image, and the black points use the newly re-processed image produced as part of this paper [ABYSS]. You can see that the XDF points drop off at a radius of about 55 kpc, the HUDF12 points fall off at around 90 kpc -- but the light from this galaxy can be traced out to 140 kpc in the ABYSS-reprocessed image. This also means you can potentially analyze structure in the outer part of the galaxy out to ~ 150 kpc, which wouldn't be possible with the previous versions of the image.



          enter image description here
          (I should point out that I'm friends with, and have co-authored papers with, a couple of the authors, so I may be a bit biased -- but I think this is really impressive work!)






          share|improve this answer











          $endgroup$









          • 2




            $begingroup$
            +n! Thank you for taking the time to write this up, this is exactly what I needed to read, thus my up vote of n-factorial. After reading through once or twice more, I can go back to the paper more comfortably. My guess is that they used quite a bit of image data in order to characterize these effects before finally generating this version of the Ultra Deep Field. It probably took quite a bit of patience and discipline.
            $endgroup$
            – uhoh
            Jan 25 at 23:57


















          8












          $begingroup$

          Let me see if I can explain the main aim and accomplishment of this work.



          First off: the picture you're puzzling over is a "luminance RGB" image, in which the bright regions are represented by color (a sort of pseudo-true color using near-infrared images), with the second faintest parts represented with black and the very faintest parts with white. The latter aren't qite "garbage", as Hobbes suggests in their answer, but they are relatively speaking the noisiest parts of the image, so there's little real information be found there.



          This paper (Borlaff et al.; see the link in Hobbes's answer) is about re-processing near-infrared HST images originally taken about ten years ago as part of the Ultra Deep Field. Previous processing of these images (e.g., Koekemoer et al. 2013 ["HUDF12"] and Illingworth et al. 2013 ["XDF"]) was focused on getting information about the smallest, faintest galaxies, which are mostly the really distant, high-redshift galaxies. Because of this, the crucial stage of sky subtraction had some biases: in particular, it tended to treat the faint outer regions of the large, nearer galaxies as part of the sky to be subtracted. This is actually fine for the analysis of the small, distant galaxies, but means that if you do want to analyze the outer regions (outer disks, faint stellar halos, remnants of merger structures, etc.) of the larger, closer galaxies, you have the problem that their outer regions are over-subtracted (hence the "missing light") and thus unmeasurable.



          (The "sky" being subtracted is a combination of emission from certain atoms in the tenuous outer atmosphere above HST, sunlight scattered from dust grains in the inner solar system, and the so-called "extragalactic background" = the combined light from unresolved distant galaxies.)



          The abstract mentions four improvements the new study implemented when they re-processed the HST images: "1) creation of new absolute sky flat fields, 2) extended persistence models, 3) dedicated sky background subtraction and 4) robust co-adding."



          I would suggest that the third item is perhaps the most important: they implement a method which does not subtract off the faint outer regions of the larger galaxies, and thus the resulting images still have information about the outer parts of these galaxies.



          The plot below (extracted from Figure 20 of the paper) illustrates the kind of improvement they were after. It shows the surface brightness (in the F105W near-infrared filter) of one of the largest galaxies (a giant elliptical -- I think it's the big, round, yellow galaxy in the lower middle of the color image) as a function of radius (measured in elliptical annuli). The red triangles were measured using the XDF-processed image, the blue squares used the HUDF12-processed image, and the black points use the newly re-processed image produced as part of this paper [ABYSS]. You can see that the XDF points drop off at a radius of about 55 kpc, the HUDF12 points fall off at around 90 kpc -- but the light from this galaxy can be traced out to 140 kpc in the ABYSS-reprocessed image. This also means you can potentially analyze structure in the outer part of the galaxy out to ~ 150 kpc, which wouldn't be possible with the previous versions of the image.



          enter image description here
          (I should point out that I'm friends with, and have co-authored papers with, a couple of the authors, so I may be a bit biased -- but I think this is really impressive work!)






          share|improve this answer











          $endgroup$









          • 2




            $begingroup$
            +n! Thank you for taking the time to write this up, this is exactly what I needed to read, thus my up vote of n-factorial. After reading through once or twice more, I can go back to the paper more comfortably. My guess is that they used quite a bit of image data in order to characterize these effects before finally generating this version of the Ultra Deep Field. It probably took quite a bit of patience and discipline.
            $endgroup$
            – uhoh
            Jan 25 at 23:57
















          8












          8








          8





          $begingroup$

          Let me see if I can explain the main aim and accomplishment of this work.



          First off: the picture you're puzzling over is a "luminance RGB" image, in which the bright regions are represented by color (a sort of pseudo-true color using near-infrared images), with the second faintest parts represented with black and the very faintest parts with white. The latter aren't qite "garbage", as Hobbes suggests in their answer, but they are relatively speaking the noisiest parts of the image, so there's little real information be found there.



          This paper (Borlaff et al.; see the link in Hobbes's answer) is about re-processing near-infrared HST images originally taken about ten years ago as part of the Ultra Deep Field. Previous processing of these images (e.g., Koekemoer et al. 2013 ["HUDF12"] and Illingworth et al. 2013 ["XDF"]) was focused on getting information about the smallest, faintest galaxies, which are mostly the really distant, high-redshift galaxies. Because of this, the crucial stage of sky subtraction had some biases: in particular, it tended to treat the faint outer regions of the large, nearer galaxies as part of the sky to be subtracted. This is actually fine for the analysis of the small, distant galaxies, but means that if you do want to analyze the outer regions (outer disks, faint stellar halos, remnants of merger structures, etc.) of the larger, closer galaxies, you have the problem that their outer regions are over-subtracted (hence the "missing light") and thus unmeasurable.



          (The "sky" being subtracted is a combination of emission from certain atoms in the tenuous outer atmosphere above HST, sunlight scattered from dust grains in the inner solar system, and the so-called "extragalactic background" = the combined light from unresolved distant galaxies.)



          The abstract mentions four improvements the new study implemented when they re-processed the HST images: "1) creation of new absolute sky flat fields, 2) extended persistence models, 3) dedicated sky background subtraction and 4) robust co-adding."



          I would suggest that the third item is perhaps the most important: they implement a method which does not subtract off the faint outer regions of the larger galaxies, and thus the resulting images still have information about the outer parts of these galaxies.



          The plot below (extracted from Figure 20 of the paper) illustrates the kind of improvement they were after. It shows the surface brightness (in the F105W near-infrared filter) of one of the largest galaxies (a giant elliptical -- I think it's the big, round, yellow galaxy in the lower middle of the color image) as a function of radius (measured in elliptical annuli). The red triangles were measured using the XDF-processed image, the blue squares used the HUDF12-processed image, and the black points use the newly re-processed image produced as part of this paper [ABYSS]. You can see that the XDF points drop off at a radius of about 55 kpc, the HUDF12 points fall off at around 90 kpc -- but the light from this galaxy can be traced out to 140 kpc in the ABYSS-reprocessed image. This also means you can potentially analyze structure in the outer part of the galaxy out to ~ 150 kpc, which wouldn't be possible with the previous versions of the image.



          enter image description here
          (I should point out that I'm friends with, and have co-authored papers with, a couple of the authors, so I may be a bit biased -- but I think this is really impressive work!)






          share|improve this answer











          $endgroup$



          Let me see if I can explain the main aim and accomplishment of this work.



          First off: the picture you're puzzling over is a "luminance RGB" image, in which the bright regions are represented by color (a sort of pseudo-true color using near-infrared images), with the second faintest parts represented with black and the very faintest parts with white. The latter aren't qite "garbage", as Hobbes suggests in their answer, but they are relatively speaking the noisiest parts of the image, so there's little real information be found there.



          This paper (Borlaff et al.; see the link in Hobbes's answer) is about re-processing near-infrared HST images originally taken about ten years ago as part of the Ultra Deep Field. Previous processing of these images (e.g., Koekemoer et al. 2013 ["HUDF12"] and Illingworth et al. 2013 ["XDF"]) was focused on getting information about the smallest, faintest galaxies, which are mostly the really distant, high-redshift galaxies. Because of this, the crucial stage of sky subtraction had some biases: in particular, it tended to treat the faint outer regions of the large, nearer galaxies as part of the sky to be subtracted. This is actually fine for the analysis of the small, distant galaxies, but means that if you do want to analyze the outer regions (outer disks, faint stellar halos, remnants of merger structures, etc.) of the larger, closer galaxies, you have the problem that their outer regions are over-subtracted (hence the "missing light") and thus unmeasurable.



          (The "sky" being subtracted is a combination of emission from certain atoms in the tenuous outer atmosphere above HST, sunlight scattered from dust grains in the inner solar system, and the so-called "extragalactic background" = the combined light from unresolved distant galaxies.)



          The abstract mentions four improvements the new study implemented when they re-processed the HST images: "1) creation of new absolute sky flat fields, 2) extended persistence models, 3) dedicated sky background subtraction and 4) robust co-adding."



          I would suggest that the third item is perhaps the most important: they implement a method which does not subtract off the faint outer regions of the larger galaxies, and thus the resulting images still have information about the outer parts of these galaxies.



          The plot below (extracted from Figure 20 of the paper) illustrates the kind of improvement they were after. It shows the surface brightness (in the F105W near-infrared filter) of one of the largest galaxies (a giant elliptical -- I think it's the big, round, yellow galaxy in the lower middle of the color image) as a function of radius (measured in elliptical annuli). The red triangles were measured using the XDF-processed image, the blue squares used the HUDF12-processed image, and the black points use the newly re-processed image produced as part of this paper [ABYSS]. You can see that the XDF points drop off at a radius of about 55 kpc, the HUDF12 points fall off at around 90 kpc -- but the light from this galaxy can be traced out to 140 kpc in the ABYSS-reprocessed image. This also means you can potentially analyze structure in the outer part of the galaxy out to ~ 150 kpc, which wouldn't be possible with the previous versions of the image.



[Plot extracted from Figure 20 of the paper: surface brightness vs. radius for the three reductions]
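For intuition, "measured in elliptical annuli" just means averaging pixel values in nested elliptical rings and plotting the result against radius. A minimal sketch with an invented exponential galaxy (not the paper's code):

```python
import numpy as np

ny, nx = 400, 400
yy, xx = np.mgrid[0:ny, 0:nx]
dx, dy = xx - nx / 2, yy - ny / 2

# Elliptical radius for axis ratio q (position angle 0 for simplicity).
q = 0.7
r = np.sqrt(dx**2 + (dy / q) ** 2)

# Invented galaxy: exponential profile I(r) = I0 * exp(-r / h).
I0, h = 100.0, 40.0
image = I0 * np.exp(-r / h)

# Mean intensity in elliptical annuli 10 pixels wide.
edges = np.arange(0, 160, 10)
mids = 0.5 * (edges[:-1] + edges[1:])
profile = np.array([image[(r >= lo) & (r < hi)].mean()
                    for lo, hi in zip(edges[:-1], edges[1:])])

# Surface brightness is -2.5*log10(intensity) plus a zero point; here we just
# check that the annular averages recover the input scale length.
slope, _ = np.polyfit(mids, np.log(profile), 1)
print(-1.0 / slope)  # close to h = 40
```

In real data the interesting question is how far out the measured profile stays above the sky-subtraction errors, which is exactly what the figure compares between the three reductions.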
          (I should point out that I'm friends with, and have co-authored papers with, a couple of the authors, so I may be a bit biased -- but I think this is really impressive work!)







          edited Jan 26 at 10:15

























          answered Jan 25 at 21:14









Peter Erwin
          • 2




            $begingroup$
            +n! Thank you for taking the time to write this up, this is exactly what I needed to read, thus my up vote of n-factorial. After reading through once or twice more, I can go back to the paper more comfortably. My guess is that they used quite a bit of image data in order to characterize these effects before finally generating this version of the Ultra Deep Field. It probably took quite a bit of patience and discipline.
            $endgroup$
            – uhoh
            Jan 25 at 23:57
















When you plug the lead researcher's name into arXiv, the first search result is The missing light of the Hubble Ultra Deep Field.






Three main steps:





• Creation of sky flat fields for the four filters. This process is fully described in Sect. 2.4.


• Creation of a catalogue of all WFC3/IR datasets that may affect our mosaics (including calibration exposures) to generate a set of improved persistence models for each exposure of the HUDF. We detail this process in Sect. 2.5.


• Download and reduction of all the WFC3/IR datasets that include observations using the F105W, F125W, F140W and F160W filters on the HUDF.




Sky flat field:




          In order to measure the relative sensitivity of the pixels of a detector (flat field), the optimal process would be to observe a uniform external source of light.




          Basically they're trying to remove all sources of noise from the image, in an attempt to make faint signals appear in places where that signal has been overwhelmed by noise.
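As a cartoon of what flat-fielding accomplishes (toy numbers, not the WFC3 calibration itself): pixel-to-pixel sensitivity differences multiply the incoming signal, so dividing by a normalized flat field removes them.

```python
import numpy as np

rng = np.random.default_rng(0)

# A uniformly illuminated scene recorded by a detector whose pixels have
# slightly different sensitivities (here an invented 5% scatter).
true_scene = np.full((64, 64), 50.0)
sensitivity = 1.0 + 0.05 * rng.standard_normal((64, 64))

raw = true_scene * sensitivity           # what the detector records
flat = sensitivity / sensitivity.mean()  # normalized flat field
corrected = raw / flat

print(raw.std(), corrected.std())  # scatter vanishes after correction
```

In practice the hard part (Sect. 2.4 of the paper) is building that flat from the data themselves, since no perfectly uniform light source is available on orbit.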



          Persistence models:




          A known effect that affects HgCdTe IR array detectors (as is the case of the WFC3/IR) is persistence. Persistence shows up as an afterglow on the pixels that were exposed to a bright source of light in a previous exposure.



The current method of persistence correction of WFC3/IR consists in modeling the number of electrons that would be created by persistence in each pixel by all the previous exposures (up to a certain time) that were taken before the one to correct (Long et al. 2012).
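Schematically, such a correction subtracts a model of the afterglow from each later exposure. The decay law below is an invented exponential purely for illustration; the actual WFC3 persistence model of Long et al. (2012) is more elaborate.

```python
import numpy as np

# A bright star saturates a patch of pixels in the previous exposure.
prev_exposure = np.zeros((32, 32))
prev_exposure[10:14, 10:14] = 5000.0

def persistence_model(prev, dt, amp=0.01, tau=1000.0):
    """Afterglow proportional to the prior signal, decaying with elapsed time.
    (Toy model; amp and tau are invented parameters.)"""
    return amp * prev * np.exp(-dt / tau)

science = np.full((32, 32), 2.0)  # faint sky in the new exposure
afterglow = persistence_model(prev_exposure, dt=500.0)
contaminated = science + afterglow
corrected = contaminated - afterglow

print(afterglow.max())                  # ghost image of the star
print(np.allclose(corrected, science))  # True
```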



          During long exposures, sky background can vary noticeably, introducing a non-linear component to the count rates calculated by calwf3.



          We individually estimate and subtract the sky background emission from each readout of the intermediate ima.fits files.



In order to avoid systematic biases due to the presence of defects in some regions of the detector, we created a manual data quality mask to flag those regions where the flat field cannot fully correct the differences in sensitivity.




          More image processing to remove sky background:




          In this Section we describe the methods used to remove the sky background from the individual exposures and the final mosaics of the HUDF.




          Image alignment:




          As a consequence, when comparing images from different visits, it is usual to see that they are not exactly aligned. In order to exploit the full resolving capabilities of WFC3, we need to carefully re-align the images of different visits to a single reference world coordinate system solution (WCS hereafter).
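The real alignment fits a full WCS solution against source catalogues, but the core idea can be illustrated with a much cruder trick (a hypothetical stand-in, not the paper's method): recover an integer pixel offset between two frames from the peak of their cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(3)

ref = rng.random((64, 64))                          # first visit
shifted = np.roll(ref, shift=(3, -5), axis=(0, 1))  # second visit, offset pointing

# FFT-based cross-correlation; its peak sits at the relative offset.
xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(shifted))).real
dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
dy = dy - 64 if dy > 32 else dy   # unwrap circular offsets
dx = dx - 64 if dx > 32 else dx

aligned = np.roll(shifted, shift=(dy, dx), axis=(0, 1))
print((dy, dx), np.allclose(aligned, ref))  # (-3, 5) True
```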




          and as a final step, image combination.
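The "robust co-adding" step matters because a plain average lets single-frame artifacts (cosmic rays, satellite trails) leak into the mosaic, while an outlier-resistant combination rejects them. A toy comparison (a simple median stand-in for whatever robust estimator the pipeline actually uses):

```python
import numpy as np

rng = np.random.default_rng(1)

# Eight aligned exposures of the same patch of sky, one with a cosmic-ray hit.
n_exp = 8
stack = 10.0 + rng.normal(0.0, 1.0, (n_exp, 16, 16))
stack[3, 5, 5] += 500.0

mean_combine = stack.mean(axis=0)          # naive: the hit leaks through
median_combine = np.median(stack, axis=0)  # robust: the hit is rejected

print(mean_combine[5, 5])    # far above the true level of 10
print(median_combine[5, 5])  # close to 10
```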



          Result:




          The XDF version of the HUDF WFC3/IR mosaics is dominated by a systematic bias in the form of a significant oversubtraction of the sky background around the objects with large angular size. A similar result (to a lesser extent) is obtained for the HUDF12. We successfully recover a significant amount of over-subtracted diffuse light around the largest objects of the HUDF, not detected by the previous versions of the mosaics.




          Summary:



          They've processed the images to bring out details in the galaxies. In the space between the galaxies, the image processing gives garbage results (the white areas), but they've managed to bring out detail on the edge of the galaxies that was hidden before.






          • 1




            $begingroup$
            I've made an attempt at summarizing the paper, but this is far outside my expertise.
            $endgroup$
            – Hobbes
            Jan 25 at 17:31






          • 1




            $begingroup$
            "Basically they're trying to remove all sources of noise from the image" -- that's not actually possible. What they're trying to do is remove systematic instrumental variations due to differences in the sensitivity of different parts of the detector and differences in how the optics distribute the light. If you don't do this, you'll have an image with distortions in the brightness that aren't due to the actual astronomical sources.
            $endgroup$
            – Peter Erwin
            Jan 25 at 21:18






          • 2




            $begingroup$
            "In the space between the galaxies, the image processing gives garbage results (the white areas)" -- the white isn't quite "garbage", it's just the faintest parts of the image (no extended light from the bright galaxies). It will be dominated by Poisson noise, so there won't be much useful information.
            $endgroup$
            – Peter Erwin
            Jan 25 at 21:19
















          8












          $begingroup$

          When you plug the lead researcher's name into Arxiv, the first search result is The missing light of the Hubble Ultra Deep Field.



          enter image description here



          3 main steps:





          • Creation of sky flat fields for the four filters. This process is fully described in Sect. 2.4.


          – Creation of a catalogue of all WFC3/IR datasets that may affect our mosaics (including calibration exposures) to generate a set of improved persistence models for each exposure of the HUDF. We detail this process in Sect. 2.5.



          – Download and reduction of all the WFC3/IR datasets that include observations using the F105W, F125W, F140W and F160W filters on the HUDF.




          Flat sky field:




          In order to measure the relative sensitivity of the pixels of a detector (flat field), the optimal process would be to observe a uniform external source of light.




          Basically they're trying to remove all sources of noise from the image, in an attempt to make faint signals appear in places where that signal has been overwhelmed by noise.



          Persistence models:




          A known effect that affects HgCdTe IR array detectors (as is the case of the WFC3/IR) is persistence. Persistence shows up as an afterglow on the pixels that were exposed to a bright source of light in a previous exposure.



          The current method of persistence correction of WFC3/IR con- sists in modeling the number of electrons that would be created by persistence in each pixel by all the previous exposures (up to a certain time) that were taken before the one to correct (Long et al. 2012).



          During long exposures, sky background can vary noticeably, introducing a non-linear component to the count rates calculated by calwf3.



          We individually estimate and subtract the sky background emission from each readout of the intermediate ima.fits files.



          In order to avoid systematic biases due to the presence of de- fects in some regions of the detector, we created a manual data quality mask to flag those regions were the flat field cannot fully correct the differences in sensitivity.




          More image processing to remove sky background:




          In this Section we describe the methods used to remove the sky background from the individual exposures and the final mosaics of the HUDF.




          Image alignment:




          As a consequence, when comparing images from different visits, it is usual to see that they are not exactly aligned. In order to exploit the full resolving capabilities of WFC3, we need to carefully re-align the images of different visits to a single reference world coordinate system solution (WCS hereafter).




          and as a final step, image combination.



          Result:




          The XDF version of the HUDF WFC3/IR mosaics is dominated by a systematic bias in the form of a significant oversubtraction of the sky background around the objects with large angular size. A similar result (to a lesser extent) is obtained for the HUDF12. We successfully recover a significant amount of over-subtracted diffuse light around the largest objects of the HUDF, not detected by the previous versions of the mosaics.




          Summary:



          They've processed the images to bring out details in the galaxies. In the space between the galaxies, the image processing gives garbage results (the white areas), but they've managed to bring out detail on the edge of the galaxies that was hidden before.






          share|improve this answer











          $endgroup$









          • 1




            $begingroup$
            I've made an attempt at summarizing the paper, but this is far outside my expertise.
            $endgroup$
            – Hobbes
            Jan 25 at 17:31






          • 1




            $begingroup$
            "Basically they're trying to remove all sources of noise from the image" -- that's not actually possible. What they're trying to do is remove systematic instrumental variations due to differences in the sensitivity of different parts of the detector and differences in how the optics distribute the light. If you don't do this, you'll have an image with distortions in the brightness that aren't due to the actual astronomical sources.
            $endgroup$
            – Peter Erwin
            Jan 25 at 21:18






          • 2




            $begingroup$
            "In the space between the galaxies, the image processing gives garbage results (the white areas)" -- the white isn't quite "garbage", it's just the faintest parts of the image (no extended light from the bright galaxies). It will be dominated by Poisson noise, so there won't be much useful information.
            $endgroup$
            – Peter Erwin
            Jan 25 at 21:19














          8












          8








          8





          $begingroup$

          When you plug the lead researcher's name into Arxiv, the first search result is The missing light of the Hubble Ultra Deep Field.



          enter image description here



          3 main steps:





          • Creation of sky flat fields for the four filters. This process is fully described in Sect. 2.4.


          – Creation of a catalogue of all WFC3/IR datasets that may affect our mosaics (including calibration exposures) to generate a set of improved persistence models for each exposure of the HUDF. We detail this process in Sect. 2.5.



          – Download and reduction of all the WFC3/IR datasets that include observations using the F105W, F125W, F140W and F160W filters on the HUDF.




          Flat sky field:




          In order to measure the relative sensitivity of the pixels of a detector (flat field), the optimal process would be to observe a uniform external source of light.




          Basically they're trying to remove all sources of noise from the image, in an attempt to make faint signals appear in places where that signal has been overwhelmed by noise.



          Persistence models:




          A known effect that affects HgCdTe IR array detectors (as is the case of the WFC3/IR) is persistence. Persistence shows up as an afterglow on the pixels that were exposed to a bright source of light in a previous exposure.



          The current method of persistence correction of WFC3/IR con- sists in modeling the number of electrons that would be created by persistence in each pixel by all the previous exposures (up to a certain time) that were taken before the one to correct (Long et al. 2012).



          During long exposures, sky background can vary noticeably, introducing a non-linear component to the count rates calculated by calwf3.



          We individually estimate and subtract the sky background emission from each readout of the intermediate ima.fits files.



          In order to avoid systematic biases due to the presence of de- fects in some regions of the detector, we created a manual data quality mask to flag those regions were the flat field cannot fully correct the differences in sensitivity.




          More image processing to remove sky background:




          In this Section we describe the methods used to remove the sky background from the individual exposures and the final mosaics of the HUDF.




          Image alignment:




          As a consequence, when comparing images from different visits, it is usual to see that they are not exactly aligned. In order to exploit the full resolving capabilities of WFC3, we need to carefully re-align the images of different visits to a single reference world coordinate system solution (WCS hereafter).




          and as a final step, image combination.



          Result:




          The XDF version of the HUDF WFC3/IR mosaics is dominated by a systematic bias in the form of a significant oversubtraction of the sky background around the objects with large angular size. A similar result (to a lesser extent) is obtained for the HUDF12. We successfully recover a significant amount of over-subtracted diffuse light around the largest objects of the HUDF, not detected by the previous versions of the mosaics.




          Summary:



          They've processed the images to bring out details in the galaxies. In the space between the galaxies, the image processing gives garbage results (the white areas), but they've managed to bring out detail on the edge of the galaxies that was hidden before.






          share|improve this answer











          $endgroup$



          When you plug the lead researcher's name into Arxiv, the first search result is The missing light of the Hubble Ultra Deep Field.



          enter image description here



          3 main steps:





          • Creation of sky flat fields for the four filters. This process is fully described in Sect. 2.4.


          – Creation of a catalogue of all WFC3/IR datasets that may affect our mosaics (including calibration exposures) to generate a set of improved persistence models for each exposure of the HUDF. We detail this process in Sect. 2.5.



          – Download and reduction of all the WFC3/IR datasets that include observations using the F105W, F125W, F140W and F160W filters on the HUDF.




          Flat sky field:




          In order to measure the relative sensitivity of the pixels of a detector (flat field), the optimal process would be to observe a uniform external source of light.




          Basically they're trying to remove all sources of noise from the image, in an attempt to make faint signals appear in places where that signal has been overwhelmed by noise.



          Persistence models:




          A known effect that affects HgCdTe IR array detectors (as is the case of the WFC3/IR) is persistence. Persistence shows up as an afterglow on the pixels that were exposed to a bright source of light in a previous exposure.



          The current method of persistence correction of WFC3/IR con- sists in modeling the number of electrons that would be created by persistence in each pixel by all the previous exposures (up to a certain time) that were taken before the one to correct (Long et al. 2012).



          During long exposures, sky background can vary noticeably, introducing a non-linear component to the count rates calculated by calwf3.



          We individually estimate and subtract the sky background emission from each readout of the intermediate ima.fits files.



          In order to avoid systematic biases due to the presence of de- fects in some regions of the detector, we created a manual data quality mask to flag those regions were the flat field cannot fully correct the differences in sensitivity.




          More image processing to remove sky background:




          In this Section we describe the methods used to remove the sky background from the individual exposures and the final mosaics of the HUDF.




          Image alignment:




          As a consequence, when comparing images from different visits, it is usual to see that they are not exactly aligned. In order to exploit the full resolving capabilities of WFC3, we need to carefully re-align the images of different visits to a single reference world coordinate system solution (WCS hereafter).




          and as a final step, image combination.



          Result:




          The XDF version of the HUDF WFC3/IR mosaics is dominated by a systematic bias in the form of a significant oversubtraction of the sky background around the objects with large angular size. A similar result (to a lesser extent) is obtained for the HUDF12. We successfully recover a significant amount of over-subtracted diffuse light around the largest objects of the HUDF, not detected by the previous versions of the mosaics.




          Summary:



          They've processed the images to bring out details in the galaxies. In the space between the galaxies, the image processing gives garbage results (the white areas), but they've managed to bring out detail on the edge of the galaxies that was hidden before.







          share|improve this answer














          share|improve this answer



          share|improve this answer








          edited Jan 25 at 17:29

























          answered Jan 25 at 16:27









Hobbes
In response to a couple of comments that Hobbes's answer is a bit dense, how about:



To reduce noise, the team applied flat-field corrections and then summed multiple exposures, letting the weak signals add coherently while the uncorrelated noise averaged out.



          That's the TL;DR which leaves out a lot of really cool methods of identifying "true dark" and noise patches vs. reliable signals (stars or galaxies or whatever).
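A quick numerical illustration of that TL;DR (invented numbers): a source at one tenth of the single-frame noise becomes a clear detection after 100 frames, because the signal adds linearly with the number of frames while uncorrelated noise grows only as its square root.

```python
import numpy as np

rng = np.random.default_rng(7)

signal, sigma, n = 0.5, 5.0, 100   # faint source, invisible in one exposure
frames = signal + rng.normal(0.0, sigma, (n, 10000))

snr_single = signal / sigma
snr_stack = signal / frames.mean(axis=0).std()  # noise of the averaged stack

print(snr_single, snr_stack)  # the stack is ~sqrt(100) = 10x better
```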






          • 3




            $begingroup$
            If your answer is basically just a summary/"TLDR" of another answer, you should probably leave it as a comment on that answer or suggest it as an edit to that answer. Answers should generally stand on their own as answers to the question, not merely summarize or repeat another existing answer.
            $endgroup$
            – V2Blast
            Jan 25 at 22:05
















          answered Jan 25 at 19:11









Carl Witthoft










          Thanks for contributing an answer to Astronomy Stack Exchange!

