How are satellite images mapped to terrain coordinates so precisely?



























Many geoinformation systems have both satellite images and terrain features (houses, roads, etc.) with coordinates. So they can show terrain features drawn over the satellite image, or display an "it's here" mark on the satellite image given certain coordinates.



How does the mapping happen (in layman's terms)? A satellite flies around the Earth and takes a gazillion satellite images; these shots are then sent to a map producer. How does the map producer know which point on a satellite image corresponds to which coordinate?










Tags: coordinates, satellite, mapping






2 Answers






You need four sets of information in order to precisely map a pixel in the image to a point on the ground.



1) The position of the camera (three coordinates: X, Y, Z)

2) The orientation of the camera (three angles: omega, phi, kappa)

3) The distance between the camera and the ground (or the scale of the picture)

4) The geometry of the camera (e.g. the focal length)



The first two sets of parameters can be obtained from instruments on board for a first approximation. For instance, a DGPS receiver provides a precise X, Y, Z position, and for the angles there are inertial navigation systems (INS). On satellites there are also star trackers: the best satellites have a location accuracy of less than 5 meters based only on their instruments and the distance to the ground. For a more precise location, the parameters can be estimated from a model and a set of points with known ground coordinates that can be precisely located on the image (called Ground Control Points). Once you know the orientation of the sensor, you can build a line segment that goes from the focal point of the sensor through the pixel location on the image toward the ground.
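To make that geometry concrete, here is a rough Python sketch (not from the original answer) of how the viewing ray can be built from the exterior orientation (X, Y, Z, omega, phi, kappa) and the focal length, and intersected with flat terrain at a known elevation. The rotation convention and every numeric value are illustrative assumptions.

    # Sketch: ray from the camera through one pixel, intersected with flat terrain.
    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Rotation from camera frame to ground frame (angles in radians)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
        return Rx @ Ry @ Rz

    def pixel_to_ground(cam_xyz, omega, phi, kappa, focal, x_img, y_img, ground_z):
        """Intersect the ray through image point (x_img, y_img) with a flat
        plane at elevation ground_z (simplified collinearity-style model)."""
        R = rotation_matrix(omega, phi, kappa)
        # Image point in the camera frame is (x_img, y_img, -focal); rotate it
        # into the ground frame to get the ray direction.
        direction = R @ np.array([x_img, y_img, -focal])
        # Scale factor so the ray reaches elevation ground_z.
        scale = (ground_z - cam_xyz[2]) / direction[2]
        return cam_xyz + scale * direction

    # Example: near-nadir camera 700 km up, 0.1 m focal length, small tilt.
    cam = np.array([500_000.0, 4_500_000.0, 700_000.0])   # X, Y, Z in metres
    ground = pixel_to_ground(cam, omega=0.001, phi=-0.002, kappa=0.0,
                             focal=0.1, x_img=0.02, y_img=-0.01, ground_z=250.0)
    print(ground)  # ground X, Y at elevation 250 m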



The ground-sensor distance can be computed if you have a digital surface model (DSM, which is like a digital elevation model plus the height of the objects on it). Then you compute the intersection between the ray from the sensor (you know its origin and angles) and the surface of the ground. If you don't have an accurate and up-to-date DSM, you can use a second image to solve the problem (much like our brain does to see in 3D with two eyes).
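A minimal sketch of the ray/DSM intersection idea, assuming the DSM is already loaded as a NumPy grid: step along the viewing ray and stop where it drops below the surface height. The synthetic grid, cell size, and step length are made up; a real implementation would read the DSM from a raster and refine the hit point.

    import numpy as np

    def intersect_ray_with_dsm(origin, direction, dsm, cell_size, step=50.0, max_dist=2_000_000.0):
        """Walk along origin + t * direction until the ray height falls below
        the DSM value under it. Returns the 3D hit point, or None if no hit."""
        direction = direction / np.linalg.norm(direction)
        t = 0.0
        while t < max_dist:
            p = origin + t * direction
            # Nearest-neighbour lookup of the surface height under the ray.
            col = int(round(p[0] / cell_size))
            row = int(round(p[1] / cell_size))
            if 0 <= row < dsm.shape[0] and 0 <= col < dsm.shape[1]:
                if p[2] <= dsm[row, col]:
                    return p
            t += step
        return None

    # Synthetic 100 x 100 DSM with 30 m cells and gentle undulation.
    dsm = 200 + 50 * np.random.default_rng(0).random((100, 100))
    origin = np.array([1500.0, 1500.0, 700_000.0])   # sensor position
    direction = np.array([0.001, -0.002, -1.0])      # roughly nadir-looking
    print(intersect_ray_with_dsm(origin, direction, dsm, cell_size=30.0))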



When you have all the information mentioned above, you can compute the position of each pixel. For the sake of completeness, you should also take into account some sources of error, including lens (or mirror) distortion (solved by calibrating the lens), Earth's curvature, and the curved path of light through the atmosphere (there are models for those as well), and so on.
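As an example of one such correction (a sketch, not the answerer's method), a simple two-term radial model can be applied to the measured image coordinates before they enter the ray construction; the coefficients below stand in for values that would come from a real lens calibration.

    def correct_radial_distortion(x, y, k1, k2, x0=0.0, y0=0.0):
        """Shift an image point (x, y) toward its undistorted position using a
        two-term radial model centred on the principal point (x0, y0)."""
        dx, dy = x - x0, y - y0
        r2 = dx * dx + dy * dy
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        return x0 + dx * factor, y0 + dy * factor

    # Hypothetical coefficients from a lens calibration report.
    print(correct_radial_distortion(0.02, -0.01, k1=-1.2e-4, k2=3.0e-7))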






Raw satellite imagery is by no means spatially accurate, so a lot of work goes into georeferencing the imagery correctly.

Let us start with a smaller example: the aerial imagery business, which is similar to satellites but somewhat closer to the ground. In order to get aerial imagery to "be in the right location", a series of 'ground control points' (GCPs) is established; in some cases these GCPs are old triangulation points from the traditional mapping days. Establishing a GCP comes down to accurately determining the location (latitude, longitude and elevation) of a given object using differential GPS, and making sure that the object can be seen in the acquired imagery.

Once you have a bunch of such GCPs, you can combine their accurate location information with the fact that you can find the measured objects in the imagery to accurately determine the georeference of the acquired image.

With a basis in the above, images can be stitched together and an accurate map can be made. A similar approach works for satellite imagery; however, at times it is not necessary to establish a new set of physical GCPs, since you can rely on other people's work by simply georeferencing your satellite image to an already accurate aerial dataset.
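As an illustration of the GCP-based georeferencing described above (a sketch, not the answerer's code), one minimal approach is to fit a six-parameter affine transform from pixel to map coordinates by least squares; the control-point values below are invented.

    import numpy as np

    # (column, row) in the image  ->  (easting, northing) on the ground
    pixels = np.array([[120, 340], [980, 355], [150, 1800],
                       [1010, 1785], [560, 1070]], dtype=float)
    ground = np.array([[500120.0, 4500980.0], [500750.0, 4500975.0],
                       [500135.0, 4499900.0], [500770.0, 4499910.0],
                       [500450.0, 4500440.0]])

    # Design matrix for x' = a*col + b*row + c and y' = d*col + e*row + f
    A = np.column_stack([pixels, np.ones(len(pixels))])
    coef_x, *_ = np.linalg.lstsq(A, ground[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, ground[:, 1], rcond=None)

    def pixel_to_map(col, row):
        """Apply the fitted affine transform to one pixel."""
        return (coef_x @ [col, row, 1.0], coef_y @ [col, row, 1.0])

    print(pixel_to_map(600, 900))   # approximate easting, northing of that pixel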



Additional challenges lie in varying elevations across the imagery and in getting GCPs in areas that are not easily accessible; both can be dealt with by bringing in more data.





