Match Photo: Questions about the real-world coordinate calculation process in Match Photo

I am currently studying the problem of converting pixel coordinates on images into corresponding real-world coordinates.
Using SketchUp's Match Photo function, I measured the real-world coordinates of some points (which I set as reference points in SU) in the background image I selected. By "real-world coordinates" I mean the coordinates of a reference point relative to the origin I set in SU.

1. Could you explain the principle or process SU uses to calculate the real-world coordinates of the reference points?
(I am not sure whether, in SU's calculation, the ratio between a unit pixel and the corresponding real-world size is consistent at different positions in the image.)
2. In addition, does SU have any documentation on this issue?
(If so, I hope you can share it with me.)

Thanks & Best Regards

I probably can't answer your questions to your satisfaction, but looking at your screenshot, I think you may also need to consider the lens used to make the reference photograph. In your image there is pronounced barrel distortion due to the wide-angle lens. Even when SketchUp's camera is set to a very short focal length, there will be no barrel distortion: edges in the model will be straight even out near the edges of the model window. I think what you'll find is that as you move away from the center of the image, coordinates in the model will not correspond to coordinates in the image.
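To illustrate the point (this is not anything SketchUp does internally, just the common Brown radial distortion model with made-up coefficients), here is a small Python sketch showing how the displacement caused by barrel distortion grows as a point moves away from the image center:

```python
import numpy as np

def radial_distortion_offset(x, y, k1, k2):
    """Brown radial model: distance between a point's ideal pinhole position
    and its distorted position, in normalized image coordinates."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xd, yd = x * scale, y * scale
    return np.hypot(xd - x, yd - y)

# Illustrative coefficients only (assumed, not measured from any real lens).
k1, k2 = -0.25, 0.05
for x in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(f"x = {x:.1f}  offset = {radial_distortion_offset(x, 0.0, k1, k2):.4f}")
```

The offset is zero at the center and grows rapidly toward the edges, which is why a pixel-to-real-world mapping that fits well near the middle of the photo drifts near the borders.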


Yes, as @DaveR says, barrel distortion is no good. Some photo processing applications like Lightroom allow you to correct optical distortion. It’s amazing what Match Photo does, but it’s only approximate. The only way to get accurate results is to model numerically from measured dimensions. I use a hybrid technique of both: Model numerically from what is critical and large, over all distances, and use Match Photo to infill less critical details as a matter of interpolation.
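As a programmatic alternative to Lightroom (purely an illustration; the camera matrix, distortion coefficients, and file names below are placeholders you would substitute from your own calibration), OpenCV can remove the lens distortion before the image is used for Match Photo:

```python
import cv2
import numpy as np

# Assumed intrinsics and distortion coefficients, e.g. from a prior
# Zhang-style calibration of the same camera and lens (values are placeholders).
K = np.array([[1500.0,    0.0, 960.0],
              [   0.0, 1500.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

img = cv2.imread("reference_photo.jpg")          # hypothetical file name
undistorted = cv2.undistort(img, K, dist)        # straightens the curved edges
cv2.imwrite("reference_photo_undistorted.jpg", undistorted)
```

Feeding the undistorted image to Match Photo should bring it closer to the ideal pinhole camera that SketchUp assumes.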


Yes, I need to think about how to reduce the effect of barrel distortion. My current plan is to solve for a camera model via Zhang Zhengyou's calibration method using six or more (>= 6) sets of corresponding coordinate data pairs. Each pair consists of the pixel coordinates of a point and its corresponding real-world coordinates. The camera model calibrated with Zhang's method accounts for the influence of distortion to a certain extent, and I will use it as my coordinate transformation model.
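For reference, below is a minimal sketch of the conventional OpenCV route to Zhang's calibration. Note that the canonical Zhang method estimates intrinsics and distortion from several photos of a planar checkerboard rather than from six individual 3D-to-2D pairs (six pairs alone are closer to a single-view resection). The pattern size, square size, and folder name here are assumptions for illustration:

```python
import glob
import cv2
import numpy as np

# Hypothetical checkerboard: 9x6 inner corners, 25 mm squares (assumptions).
pattern_size = (9, 6)
square_size = 25.0

# Object points for one view: the corner grid on the Z = 0 plane, in mm.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)

for path in glob.glob("calib_images/*.jpg"):   # hypothetical folder of checkerboard photos
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    obj_points.append(objp)
    img_points.append(corners)

# Zhang-style calibration: returns intrinsics K and distortion (k1, k2, p1, p2, k3).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)
print("Distortion coefficients:", dist.ravel())
```

With K and the distortion coefficients in hand, the same camera model can then be used as the pixel-to-real-world transformation (together with the camera pose for the reference photo).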

The real-world coordinates in those six coordinate data pairs are the ones I obtained through Match Photo in SU.

Therefore, I would like to better understand the model and details of how SU calculates real-world coordinates. This will help me evaluate how approximate SU's coordinate calculation is.

Do you have any documents about SU coordinate calculation?