Match Photo for cropped/rectified/tilt-shift pictures

Request: Allow Match Photo to accommodate pictures that were cropped asymmetrically, taken with a tilt-shift lens, rectified post-shoot, or any combination of these.

Problem description: Currently, Match Photo only works correctly when the optical axis of the perspective transform that underlies a photo-matched image lies at the center of that image. The above techniques break that assumption, as many users, including myself, have found out after frustrating fiddling. Failure typically presents itself as the blue axis being tilted away from where it is expected, and/or by measurements in the red/green plane being unequal, possibly quite dramatically so. Yet, often, there is no intuitively obvious issue with the source image.

Mathematically, all of the aforementioned image manipulation techniques can be represented by a single perspective transform, one that accommodates the optical axis being off-center. Such a transform has 11 scalar parameters (with viewport size, location, and pixel discretization included). Therefore, 11 parameters are required for the inverse transform. But currently, Match Photo takes in only 9 parameters: 6 for the 2D positions of the two vanishing points and the origin, 1 for axis scaling, and 2 implicitly from the image aspect ratio and its pixel size.
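For concreteness, here is a minimal sketch (my own notation and numbers, not SketchUp's internals) of where the two "missing" parameters live: in a standard pinhole model they are the principal-point coordinates (cx, cy) of the intrinsic matrix, which Match Photo currently pins to the image center.

```python
import numpy as np

def project(points_3d, f, cx, cy, R, t):
    """Project world points to pixels with principal point (cx, cy)."""
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    P = K @ np.hstack([R, t.reshape(3, 1)])                    # 3x4 projection
    ph = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous coords
    uvw = (P @ ph.T).T
    return uvw[:, :2] / uvw[:, 2:3]                            # perspective divide

# An asymmetric crop does not change the transform at all; it merely moves
# (cx, cy) away from the center of the cropped image:
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
uv = project(pts, f=800.0, cx=400.0, cy=300.0, R=np.eye(3), t=np.zeros(3))
uv_cropped = uv - np.array([100.0, 0.0])  # 100 px cut off the left edge
```

With square pixels this counts out as f (1) + principal point (2) + rotation (3) + translation (3) + viewport width/height (2) = 11 scalars, matching the tally above; the current 9 inputs effectively leave cx and cy fixed.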

Suggestion: To take in the “missing” 2 scalar parameters (currently implied as fixed), I believe the existing Match Photo UI could be organically extended by allowing the user to manipulate one additional handle point to indicate the location of the optical axis.

  • The UI object could be either a square handle like those for the vanishing points, or a bullseye element.
  • The default handle location would be the image center, so that no action is needed for uncropped images.
  • The handle might be colored magenta or cyan to complement R/G/B for the directions and yellow for the horizon line.

Proof of concept (of sorts): I constructed a sample scene with deliberately strong perspective (short focus, camera close to scene elements) and elements arranged to one side:

I exported that scene as an image, cropped it both horizontally and vertically such that the camera aim point is off-center in the crop, and applied sepia-tone to reduce confusion on re-import. I then attempted Match Photo on it, which expectedly failed with the symptoms mentioned above:

To recover the Match attempt, I edited the picture externally, to re-extend the crop such that the camera aim point falls again at the center. This image gives excellent results for Match Photo, as expected:

In this contrived example, I of course knew a priori where the aim point was, but a canvas extension could conceptually be done for any cropped picture to work with the existing Match Photo implementation. However, iterating the aim-point search that way would be extremely tedious (having 2 dimensions to boot). If Match Photo could be extended to take in the aim point as an additional 2D handle point, such an iteration would be much easier for the user to perform, and would avoid a great deal of the current frustration.
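As a numeric sketch of that conceptual canvas extension (a hypothetical helper, not an existing tool): given a guessed aim point, the canvas only needs to grow until the aim point sits at the exact center.

```python
def symmetric_canvas(width, height, cx, cy):
    """Return (new_w, new_h, pad_left, pad_top) that re-centers (cx, cy).

    Extends the canvas just enough that the guessed aim point (cx, cy)
    ends up at the exact center of the new image -- the assumption the
    current Match Photo implementation relies on.
    """
    half_w = max(cx, width - cx)
    half_h = max(cy, height - cy)
    new_w, new_h = 2 * half_w, 2 * half_h
    pad_left = half_w - cx    # blank pixels to add on the left
    pad_top = half_h - cy     # blank pixels to add on the top
    return new_w, new_h, pad_left, pad_top

# e.g. a 1000x800 crop whose aim point is guessed at (350, 500):
print(symmetric_canvas(1000, 800, 350, 500))  # (1300, 1000, 300, 0)
```

Each new guess of the aim point means re-padding and re-importing the image, which is exactly the tedium a draggable handle would remove.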

I hope I have not missed any math issue here, and that there isn’t an ominous business obstacle in play behind the scenes that so far prevented accommodating crops and the like.

Next: Accommodate barrel/pincushion distortion around the newly found aim point. :wink:

Thanks for reading;
Michael Sternberg.


Well argued and presented. The expanded-canvas manipulation you mention (an operation you’d have to do in, say, Photoshop) had occurred to me as a possible workaround, but I hadn’t tried it. Being able to deal with it right in SU would be better, and your suggestion sounds sensible to me.

I’ve wanted other improvements to Match Photo as well, like tools to match up a one-point (or nearly one-point) perspective. It would likely require other “handles” for manipulating the parameters.


@Michael_S, Thank you for this request.

I like your idea of being able to match the original principal point.

But… if the verticals are really vertical and parallel (two-point perspective) and the principal point is unknown, then it would still be very difficult to find the principal point.

Two-point perspective:

Without measured data (ratios) of the object you want to model, you only know that the principal point lies on the horizon line between the vanishing points. Where on that horizon will it be?

If you know some ratios of the object you want to model, then it is possible, using descriptive geometry, to find the right place for the principal point on the horizon.

Without knowing any ratios of the object, you can place the principal point wherever you want (on the horizon, between the vanishing points). “Match Photo”, with your proposed functionality, will produce a match, but the 3D model will probably not match the proportions of the real object depicted in the image. Only by coincidence, when the principal point happens to be placed at the correct spot on the horizon line, will the 3D model and the real object match.
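This one-parameter family can be made explicit with a standard projective-geometry relation (my own sketch; the coordinates are illustrative): with the horizon as the x axis and square pixels, vanishing points of orthogonal world directions at x1 and x2 and a principal point at h satisfy f² = (h − x1)(x2 − h). Every admissible h therefore yields a self-consistent camera, just with a different focal length and hence different model proportions.

```python
import math

def focal_for_principal_point(x1, x2, h):
    """Focal length implied by a principal point at h between the
    vanishing points x1 < x2 of two orthogonal horizontal directions."""
    assert x1 < h < x2, "h must lie strictly between the vanishing points"
    return math.sqrt((h - x1) * (x2 - h))

# Every position of h along the horizon gives a valid (but different) camera:
x1, x2 = -1200.0, 1800.0
for h in (-600.0, 0.0, 900.0):
    f = focal_for_principal_point(x1, x2, h)
    print(f"principal point at x = {h:7.1f}  ->  f = {f:7.1f}")
```

This is exactly the ambiguity described above: without a known ratio there is nothing to single out one h over another.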

If you already have a 3D model with the appropriate proportions, then your proposed functionality will be very useful: it would then be possible to match nice architectural images with perspective correction to the 3D model. :grinning:

For cropped photos with no parallel vertical lines, I would prefer a system that lets you manipulate the verticals with two extra blue bars, like the green and red bars (to find the principal point). But your suggested “one additional handle point” also easily gives a proper result :+1: (in combination with the red and green bars). For cropped images without parallel vertical lines, this would be rather easy to implement. Perhaps there are legal reasons why Trimble can’t implement this system. Maybe this system is protected with a patent?

Here is some extra information about how to use cropped images, on the SketchUcation website:


@Michael_S ,

The best information I could find about the principal point, measuring points, and angle of view in one-point-perspective and two-point-perspective images is in a free-to-download PDF:

“Darstellende Geometrie für Architekten” (Descriptive Geometry for Architects), in German, by the German mathematician Erich Hartmann of Darmstadt University.

You can find a web link (the first one) on this German Wikipedia page:

Erich Hartmann also maintains this German Wikipedia page about descriptive geometry.

In this PDF, you can find exercises about how to find the principal point (Hauptpunkt, H) in one-point, two-point, and three-point perspective.

The exercises (Aufgaben) are:

  • Aufgabe 5.22 on page 124, solution on page 180
  • Aufgabe 5.23 on page 126, solution on page 181
  • Aufgabe 5.24 on page 127, solution on page 182

If you don’t know German, you can still try to understand the images. The problem is that I couldn’t find comparable free, clear, and up-to-date information in English about how to find the principal point.

In English, there is much good information about perspective on the great website of the artist Bruce MacEvoy:

But his intention is to construct a proper perspective on canvas. Our problem is the reverse: we have the canvas (the image), but we want to know and understand the perspective within it. I learned a lot from this website.

I can also recommend some 100-year-old books:

  • “Elementary Perspective” by “Lewes R. Crosskey” 1898
  • “Advanced Perspective” by “Lewes R. Crosskey” 1901

I wish you every success with “Match Photo” and extending your canvas.

If you don’t have access to commercial 2D CAD software, I suggest LibreCAD as a free and open-source alternative for doing the “canvas extension”. With this software you can use the descriptive geometry given in the websites and books above, and you can work very precisely. The image retains its resolution. I have had good results with difficult images. Good luck.


I had to do photogrammetric reconstruction exercises at university. It was more than 40 years ago so I don’t remember much, but I do remember that success didn’t depend on the photo not being cropped. Of course finding the main point is easy if you can just put it point blank in the middle of the image.


The first time I did “Match Photo” was 1979, and the tools were T-squares, push-pins, paper and pencil drawings and a slide projector. I do remember I was able to reverse engineer, so to speak, the perspective in the photo back to my site plan by establishing station point, picture plane etc., and then constructing the building perspective back into the traced photo in the conventional way. Back then, creating just one perspective view was so much work.


To be clear, I used canvas extension merely conceptually as a workaround to convey the principal point of a perspective projection within a cropped/rectified/lens-shifted image to the existing Match Photo implementation. The canvas boundaries are to be chosen such that they are equidistant from the principal point (be it determined or guessed), thereby fulfilling the expectation for Match Photo that the principal point is at the center of the image. A handle object for the principal point as I proposed conveys the same information, has a sensible default, and happens to be overwhelmingly likely to be located within the image. It should therefore be nicely accessible for manipulation.

@iarga, you are right in describing forms of additional information needed to obtain a 3D reconstruction that has correct measurements along all of its axes. I see that as an (albeit slightly) lesser problem because SketchUp makes it easy to selectively scale the reconstructed model once Match Photo is done, to arrive at correct or at least plausible aspect ratios. (Post-match scaling would throw off further match edits, of course). More obviously problematic and more frustrating is when the blue axis cannot be made to coincide with the verticals in an image despite the user’s best efforts towards pixel-perfect alignment for the red and green handle points. A blue axis tilt is a showstopper for any further reconstruction attempts.

Also, thank you for the excellent resource links. They preempt a lot of explanations. I happen to be German and a physicist, so I find the most pertinent reference to be the math course material (pdf). BTW, one has to appreciate the richness of the German terms in play here, e.g., Schleifende Schnitte, just like Stürzende Linien in photography.

Regarding the blue vanishing point: Being able to move the blue vanishing point will provide 2 more scalars to Match Photo just like moving the principal point would. All 3 vanishing points and the principal point are mutually dependent, forming an orthocentric system – moving one point implies a change in one or more of the other points. It could be a UI challenge to pick which point(s) of the other three to change when one of the vanishing points is moved, while preferably not trashing placement work done for the other helper bars. Exposing the principal point, being something clearly different, should help avoid this dilemma.
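A quick sketch of that mutual dependence (generic geometry, not SketchUp code): with three finite vanishing points of mutually orthogonal directions, the principal point is the orthocenter of their triangle, so moving any one vanishing point necessarily drags the principal point along.

```python
import numpy as np

def orthocenter(a, b, c):
    """Orthocenter of triangle abc via two altitude conditions:
    (H - a) is perpendicular to (c - b), and (H - b) to (c - a)."""
    a, b, c = map(np.asarray, (a, b, c))
    A = np.array([c - b, c - a], dtype=float)
    rhs = np.array([np.dot(c - b, a), np.dot(c - a, b)], dtype=float)
    return np.linalg.solve(A, rhs)

# For a right triangle the orthocenter is the right-angle vertex itself:
print(orthocenter((0, 0), (4, 0), (0, 3)))  # the vertex (0, 0)
```

For a photo-matched image, feeding the red, green, and blue vanishing points into such a computation yields the principal point directly, which is why exposing the principal point as its own handle neatly sidesteps the question of which other point to move.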

I like to think that the blue axis cannot be directly manipulated because handle objects would be very “touchy” for architectural images with their nearly or exactly parallel verticals. A slight nudge of a handle for such bars could send the corresponding vanishing point wildly across the image. That can be disorienting, especially since handle nudging is not tracked by Undo/Redo (That’s another feature request I’d make, actually).

Perhaps there are legal reasons why Trimble can’t implement this system. Maybe this system is protected with a patent?

Yup, that is what I meant by “ominous business obstacle”.


Thank you for this simple solution. I have to confess I hadn’t thought of this. :man_facepalming: Now I want to test it; seeing is believing. It saves a lot of work.
But when working with multiple photo matches, with different viewpoints, for the same model, I then have to scale immediately after my first photo match (when it is a two-point perspective). The first photo match will be the reference for the following photo matches. If there is an image with three clear vanishing points, then I can of course take that image as the reference image.
(edit: The 3D model, appropriately scaled after the first photo match, will be the reference. Thereafter, the first photo-match scene can be redone with the properly scaled model.)

I like to think that the blue axis cannot be directly manipulated because handle objects would be very “touchy” for architectural images with their nearly or exactly parallel verticals. A slight nudge of a handle for such bars could send the corresponding vanishing point wildly across the image.

Now I understand exactly why you want to be able to match the principal point with a handle (and not with extra blue bars). Very clever. This is the core of the problem! I don’t know how complex this is to implement in Match Photo.

It occurred to me that you can perform multiple photo matches and still be able to revisit and adjust each of them by leveraging components. They allow you to replicate the model structurally but stretch it for each match individually. Try the following:

  1. Perform the first match and begin modeling therein.

  2. When you deem the match suitable:

  • Put the entire structure modeled so far into an all-encompassing component. Do not scale that component instance.
  • Assign the component to its own layer so you can easily hide it later.
  3. To match a new photo:
  • Add a temporary scene.
  • Place a new copy of the component in it, suitably away from earlier copies.
  • Assign the new copy to its own layer.
  • Hide the previous components via their layers.
  • Optionally change the axes of the scene to suit the upcoming photo.
  • “Update” the scene.
  • Enter Match Photo, and do its dance:
    • Set the vanishing points and origin.
    • Scale the Match Photo axes to match your component vertically.
    • Here’s the key step: scale the component as a whole to adjust the red/green ratio as needed.
    • Only then, open the component and edit as needed to move existing parts or add new ones.
  • Re-use the scene to set up the next photo to match.

Crop impact

As mentioned, applying a horizontal crop to a photo-to-be-matched can introduce a Red/Green scale difference yet still give a matching blue axis when the photo was taken with the camera level (and only then). That is the situation where the above trick can help. In testing, I was astounded at the impact a crop can have when the picture has a far-away vanishing point, i.e., is in nearly 1-point perspective. In that case, even a modest horizontal crop can quite dramatically change the Red/Green aspect ratio:

I removed only 9% on the left of the full image of my sample scene, yet the tile size along the depth of the picture above is seriously incorrect (again, also consider Stacy’s location), whereas the left/right and vertical tile size match quite well.

An even stronger crop in such a geometry can seriously disturb the axis setup for Match Photo:

[image: cam4 extreme h crop]

Evidently, for such nearly-degenerate cases a crop can quickly lead to non-physical solutions for the parameters of the inverse perspective.
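A rough numeric illustration of this sensitivity (assumed numbers, using the standard relation f² = (h − x1)(x2 − h) between a principal point at h and the vanishing points x1, x2 of orthogonal directions on the horizon, square pixels assumed): when one vanishing point is very far away, even a small error in the assumed principal point changes the implied focal length noticeably.

```python
import math

def implied_focal(x1, x2, h):
    """Focal length implied by a principal point at h between the
    vanishing points of two orthogonal horizontal directions."""
    return math.sqrt((h - x1) * (x2 - h))

x1, x2 = -40000.0, 600.0              # one vanishing point far away: near-1-point
f_true = implied_focal(x1, x2, 0.0)   # aim point correctly placed
f_off = implied_focal(x1, x2, 100.0)  # aim point misjudged by only 100 px
print(f"f_true = {f_true:.0f}, f_off = {f_off:.0f}, ratio = {f_off / f_true:.3f}")
```

With these illustrative numbers, a 100-pixel misjudgment of the principal point shifts the implied focal length by roughly 9%, which then shows up as a mismatched depth scale of exactly the kind described above.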


This topic was automatically closed 91 days after the last reply. New replies are no longer allowed.