Mosaicing of MRO CTX images
I tried to create a mosaic from two MRO CTX images using ISIS3.
I found a detailed description of how this can be done at http://lunokhod.org/?p=468.
The instructions are very clear, and I successfully reproduced the described result: after running jigsaw, sigma0 dropped to 1.2 pixels.
After obtaining these successful results I tried to reproject the images using the cam2map utility and build a mosaic.
But the result was very confusing to me: there is a big shift between the images inside the mosaic.
You can see it in the attached picture - I put the images into different bands of the resulting mosaic.
Could you please explain what I am doing wrong?
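For reference, the reprojection and mosaicking steps looked roughly like this (the cube names, map template, and list file here are placeholders, not my exact files):

```shell
# Project each jigsaw-adjusted image into the same map projection:
cam2map from=P01_001.cub to=P01_001.map.cub map=equi.map pixres=map
cam2map from=P02_002.cub to=P02_002.map.cub map=equi.map pixres=map

# List the projected cubes and mosaic them:
ls *.map.cub > map.lis
automos fromlist=map.lis mosaic=mosaic.cub
```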
#3 Updated by Lynn Weller over 1 year ago
I was able to reproduce your results by following the instructions in the link you provided. I think if the author had taken the extra steps of projecting and mosaicking the jigsaw-updated images, he too would have seen the problem. It appears he simply took the output into ASP and got reasonable results there, so it did not occur to him to question the jigsaw output. I have a solution for you and some recommendations.
You will get better results if you run jigsaw in a slightly more advanced manner. When solving for camera angles (camsolve=angles, which is the default setting for jigsaw), I would normally set twist=true. I think the author of that link chose to set twist=no because when it is set to yes and radius=true, jigsaw fails with an error. I am familiar with the error from experience and have found that by providing jigsaw with a constraint for radius (point_radius_sigma), jigsaw will run again. Although I am very familiar with jigsaw, I am not a photogrammetrist and I did not write the program, so I will not try to explain why some settings work or how a user should choose values, but I will tell you what worked:
jigsaw fromlist=cube.lis cnet=control_pointreg.net onet=control_ba.net update=no radius=yes camsolve=angles twist=yes point_radius_sigma=50
The value for point_radius_sigma is in meters. Also note that I set update=no. I typically do not update my images until I am happy with my network and the results of jigsaw. Be sure to rerun spiceinit on your images if you plan on using them again and have updated them via jigsaw. You can see whether an image has been updated by running cathist on it and checking whether Jigsaw appears in the history.
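Concretely, those checks look something like this (the cube name is a placeholder):

```shell
# Print the image's processing history; look for a Jigsaw entry,
# which indicates the camera pointing has been updated:
cathist from=image.cub

# If it has been updated and you want a clean start, reattach the
# original SPICE information:
spiceinit from=image.cub
```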
Running jigsaw in the above manner will produce more desirable results, but those results are only as good as the quality of the registrations of the points and measures in your control network. The Sigma0 value you reported is not terrible, but it could be lower. After running pointreg, did you view your network in qnet and modify anything? Some number of points were ignored because pointreg could not register them (based on the contents of the deffile); you can attempt to fix them by manually modifying them in qnet as indicated in the link you provided. I won't go into qnet details either, but I will share some links to the ISIS online documentation that may help you get better acquainted.
If you did modify some points, you may still have poor registrations among the successful points, and the output of jigsaw can help you identify those. Take a look at the jigsaw output file residuals.csv. The last column contains the Residual Vector Magnitude for each measure associated with a point. The higher the value, the more likely the measure is not registered well to the other measures in that point. You can use that information in qnet to modify and improve the measures, save the network, and run jigsaw again (this is why I don't set update=true). I found there were a number of points that could use improvement after running jigsaw on the network I created using the instructions on the site you provided.
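If it helps, a quick way to rank the measures by that last column is a one-liner like this sketch (it assumes the residual vector magnitude is the last comma-separated field and that the first three lines of residuals.csv are header lines; adjust both to match your file):

```shell
# List the 20 measures with the largest residual vector magnitudes,
# worst first, by prefixing each data row with its last field and
# sorting numerically in reverse:
tail -n +4 residuals.csv | awk -F, '{print $NF, $0}' | sort -rn | head -20
```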
Here is a link with additional information on some of the ISIS programs you used to build your network, to help you along:
You'll be most interested in the topics under the Advanced section (Image registration, etc.) as well as the Demonstrations of ISIS3 Interactive Tools at the bottom of the page which has links to qnet and jigsaw discussions.
Please let us know if running jigsaw the way I described has helped.
#4 Updated by Alexander Yadykin over 1 year ago
- File residuals.jpg added
- File qnet.jpg added
- File mosaic.jpg added
- File mosaic_part2.jpg added
- File mosaic_part1.jpg added
Thanks a lot for the detailed response.
Of course, I used qnet to fix the invalid points.
But unfortunately the result of running jigsaw is still not completely clear to me.
After fixing the invalid points I checked residuals.csv (you can see it in the attachment residuals.jpg - it seems I can only attach images here) and found that point 16 has a big residual vector magnitude - about 2.5 pixels.
I checked this point one more time in qnet (see qnet.jpg in the attachment) and found that it has a very good match - the goodness of fit is 0.93.
I found the same situation for some other points.
Could you please explain why these points have a good match but a residual vector magnitude that is so large?
I tried to reproject the images and create a mosaic after running jigsaw with your parameters.
The result (see mosaic.jpg) is much better than in my previous message.
But you can see local shifts (see mosaic_part1.jpg & mosaic_part2.jpg) between the images in some areas.
It looks like the biggest shifts occur in mountainous areas.
Maybe a DEM should somehow be used during image processing?
Could you please describe any way to improve the quality of the image stitching in the resulting mosaic?
How can I eliminate the local shifts between images?
Thanks in advance.
#5 Updated by Lynn Weller over 1 year ago
There may be some things you can do to improve results, but it means backing up some.
I haven't worked with CTX data beyond some testing, but I have worked with Themis IR and LROC NAC data extensively and have found that more points tend to produce better results for line-scan data. There is no magic number I can give you for how many points are necessary, and you can easily overdo it, but you might try rerunning autoseed with smaller X/YSpacing values in the definition file to get more points. Maybe shoot for hundreds of points instead of dozens. Be aware that the X/YSpacing values do not need to be the same - more points along track than across track might be more helpful, though this can depend on the number of images and how they overlap. For these two images it might not matter so much; I'm just providing some ideas for future reference. This will take some playing around with before you get something you are comfortable with.
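As a sketch, rerunning the seeding step might look like this (the deffile, list, and network names are placeholders; the overlap list is assumed to exist already from findimageoverlaps):

```shell
# Reseed the network with a definition file that uses smaller
# X/YSpacing values, producing a denser set of tie points:
autoseed fromlist=cube.lis deffile=autoseed_dense.def \
         overlaplist=overlaps.lis onet=control_dense.net \
         networkid=CTX_dense pointid="ctx_????" \
         description="denser CTX network"
```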
You might also consider changing the Pattern and Search values in the pointreg deffile to find something that could potentially work better with this data. The parameters from the online tutorial you pointed to worked ok, but blander areas might benefit from somewhat larger pattern and search chips to capture more features for matching. However, the bigger you go with chip size, the longer the pattern matching takes. Also, chips do not need to be square, and if you plan on working with more CTX data in the future you might find a rectangular chip is better for this sort of data. Keep in mind that after running pointreg on a network, the output network can be passed back to pointreg and rerun with a second deffile in an attempt to register the ignored points, where possible. For round two, be sure to set the parameter measures=candidate to work only on the ignored measures and not the previously registered points. An unregistered measure has Measure Type = Candidate, and one that has been subpixel registered by pointreg has Measure Type = RegisteredSubPixel. When a user modifies a point manually in qnet, Measure Type = Manual. I bring this up because you may want to minimize the amount of manual point manipulation if you have a network with hundreds of points. After running pointreg and modifying any points via qnet, you may decide the remaining ignored measures/points are not worth keeping; in that case you can run cnetedit in default mode to remove the ignored measures and points.
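A sketch of that two-pass workflow (the deffile and network names are placeholders):

```shell
# Second registration pass, attempting only the still-unregistered
# (candidate) measures with a different pattern/search deffile:
pointreg fromlist=cube.lis cnet=control_pointreg.net \
         onet=control_pointreg2.net deffile=pointreg_pass2.def \
         measures=candidates

# Remove whatever is still ignored from the network:
cnetedit cnet=control_pointreg2.net onet=control_clean.net
```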
As for your comment about the DEM, your images should be pointing to the default MOLA DEM for Mars. This would have occurred automatically when you ran spiceinit on your images. You can verify by viewing the label of an image (using the unix command less or more) and looking for the keyword ShapeModel under the Kernels group. I believe some of the mismatches you are seeing have to do with the large difference between the resolution of your CTX images (~6 m/pixel) and that of the DEM the software is using for geometric intersections and projections (the gridded MOLA DEM is 128 pixels/degree, roughly 463 m/pixel at the equator). Maybe you have noticed in qnet that tie points near prominent topographic features breathe and appear to have other distortions when the blink tool is on and the geom button is enabled? I believe a lot of that is due to topography (though the camera model may be coming into play as well - I'm not sure). Even blanketing the images with perfect points may not address the mismatches you highlight in your images. And as another colleague pointed out, the offsets/mismatches you are seeing are probably parallax between what are likely a stereo pair. That is not something that can be fixed by a bundle adjustment.
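The label check is quick (the cube name is a placeholder):

```shell
# Dump the cube label and pick out the shape model in use;
# the default for Mars should point at a MOLA DEM file:
catlab from=image.cub | grep -i ShapeModel
```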
Unfortunately, creating networks and improving image geometry is as much an art as a science, and it takes a bit of experimentation for certain data sets.
There are different ways to run jigsaw (and to interpret the numerical results to improve parameter settings) that may improve your results, but I will leave the details of that conversation to our photogrammetrist Ken, who will add to this post when time permits.
#6 Updated by Alexander Yadykin over 1 year ago
I tried to play with the tie point count and the pattern & search chip sizes.
The results were sometimes better, sometimes worse than what I showed in my previous post.
But the main result, unfortunately, is that I could not get perfect image stitching over the whole area of the mosaic.
I agree that the low-resolution DEM is one of the main reasons for this result.
I checked all the points in qnet with the blink tool & geom button, but I did not see anything unusual.
The left & right images had slightly different rotation angles and were squeezed or stretched a bit, but I think the reason for such distortions is the different viewing angles during image recording.
#7 Updated by Kenneth Edmundson over 1 year ago
My name is Ken Edmundson; I am a photogrammetrist here at the USGS in Flagstaff, and I have written much of the ISIS bundle adjustment software (jigsaw).
I understand that you are struggling a little bit with the CTX example and I'm sorry about that.
I hate to leave you waiting for an explanation, but it would take some time for me to study your problem and respond and I am simply swamped at the moment.
And so, stay tuned, but honestly, it could be a couple weeks or more until I can have a look.