I’m working on a to-scale 3D master plan of an existing neighborhood in Detroit. I’m wondering if there is a recommended app for modeling existing structures. I’ve been taking hand measurements of each property and modeling them in SketchUp, but it’s extremely time-consuming.
I’ve tried Match Photo, but it’s not necessarily as precise as I’d like it to be. I also find that it’s really hard to use a photo in a file that already has an existing 3D model in it (for example, if I had already modeled much of a structure, I couldn’t use Match Photo with my existing model to complete the roof or something. Maybe this is possible, but I’m having a really hard time figuring it out).
With that in mind, does anyone have any suggestions for best practice here? I’ve looked at some other photogrammetry programs, which seem promising. Maybe I should do everything in Autodesk ReCap or something and import it into SketchUp?
For what it’s worth, I have access to a drone and potentially some fancier cameras if needed. Since I am doing this for several hundred properties, I’d like to identify a best practice and then standardize it, so that I can do a very high-quality job efficiently.
I’m attaching some examples of structures I’m working on and of the master plan itself (which I am hoping will be fully public, 3D, and interactive).
For my architectural projects, I use this technique with success. I don’t have to do whole neighborhoods though! Wow, that’s a lot of work.
Because I need some dimensional accuracy, I start with a basic model built from field measurements, but for efficiency I just take a few overall dimensions to make a basic massing model. Then I look at the photo, tumble the SU model around until it’s as close as possible to the photo view, and start the Match Photo process. For details like windows, doors, and other stuff, I just use the projected photo to fill that in. It also includes bushes and shadows and other junk, but it’s not bad for quick and dirty.
Match Photo is very precise. I did a lot of tests with geometric shapes that I drew in perspective in 2D CAD software. The matches are 100% accurate.
Photos and existing models perhaps aren’t 100% exact?
SketchUp needs 3-point-perspective or 2-point-perspective images, with the principal point exactly in the center of the image. So no cropping and no shifting is allowed. It is also much easier with an image taken at 45 degrees to a corner.
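To make the cropping rule concrete, here’s a quick Python sketch (the function name and all the numbers are just illustrative, not from any real camera): an asymmetric crop moves the image center away from the principal point, which is exactly what Match Photo can’t handle.

```python
def principal_point_offset(width, height, crop_left, crop_top, crop_w, crop_h):
    """Pixels between a crop's center and the original principal point.

    For an uncropped photo straight out of the camera, the principal
    point sits at the image center (width/2, height/2). Match Photo
    assumes the center of whatever image you give it IS the principal
    point, so any nonzero offset here breaks that assumption.
    """
    ppx, ppy = width / 2, height / 2                          # original principal point
    cx, cy = crop_left + crop_w / 2, crop_top + crop_h / 2    # center of the crop
    return cx - ppx, cy - ppy

# A 3000x3000 crop taken from the left edge of a 4000x3000 photo:
# the crop's center is 500 px left of the true principal point.
print(principal_point_offset(4000, 3000, 0, 0, 3000, 3000))
```

A perfectly centered crop keeps the offset at zero; anything else shifts the principal point off-center, and the photo stops being suitable for Match Photo.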
The lens correction must be very precise. I read that you also use a drone. A lot of drones use fisheye lenses, so this correction must be done accurately. Don’t use perspective corrections.
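To illustrate what that lens correction is actually doing, here’s a minimal sketch of a one-parameter radial distortion model (the Brown model with only a k1 term; the coefficient in the test is made up, real values come from calibrating your specific lens or a lens profile):

```python
def distort(x, y, k1):
    """Apply one-parameter radial distortion (Brown model, k1 only).

    (x, y) are normalized image coordinates relative to the principal
    point. Negative k1 gives barrel distortion: points are pulled
    toward the center, so straight building edges bow outward.
    """
    r2 = x * x + y * y
    f = 1 + k1 * r2
    return x * f, y * f

def undistort(xd, yd, k1, iterations=10):
    """Invert the model by fixed-point iteration (converges for mild k1)."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        f = 1 + k1 * r2
        x, y = xd / f, yd / f
    return x, y
```

Tools like Lightroom or OpenCV’s calibration fit several such coefficients, but the principle is the same: straight lines in the scene must come out straight in the photo, because Match Photo’s vanishing points assume they are.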
I’m not sure about the quality of the first two images in the first row you posted above. The second image isn’t suitable for Match Photo: the verticals are parallel, but the principal point is unknown. This image also looks distorted (maybe a Street View image?).
If the third image isn’t cropped or edited, Match Photo should work. A 45-degree view toward the corner would give better results, because then you can work more accurately.
The last image in the second row should also work, if it is not cropped.
So my advice is to use properly made photos that are suitable for SketchUp Match Photo.
Be sure that your camera, and any lens correction applied afterward, doesn’t distort the perspective.
For hundreds of images, and if it is a professional project, I would use other software (much more expensive than SketchUp!) and properly calibrated cameras.
The existing models are accurate to within 1/16th of an inch. It seems like Match Photo is putting the axis origin in a different location, and I can never get the two to line back up. I’m sorry, I know the language I’m using here is super vague.
Do you have an example of what an ideal 3-point-perspective image looks like? What do you mean by cropping or shifting? What do you mean by 45 degrees to a corner? I’m not sure what a perspective correction is.
The first image is a series of 4 photos of the same structure. Photo 1 is the structure in the ’70s (historical archive). Photo 2 is from the early 2010s (Google Street View). Photo 3 is the structure as it looks today (from my iPhone 5s). Photo 4 is the 3D model I made in SketchUp. Sorry, I probably didn’t need to include photos 1 and 2; they were just part of the same image file.
Maybe you could show me a template 3-point-perspective of a similar structure that would work well?
I use software called Pix4D. It is photogrammetry software designed for taking a photographic dataset from, say, a drone survey plus terrestrial images (I use my iPhone camera for the terrestrial images, but obviously better results are achievable with better cameras). Pix4D can produce accurate orthomosaic images (photographic maps) as well as a 3D mesh.
I use the orthomosaics in SketchUp for accurate 2D placement and drawing roads / geographic details / boundaries etc and the meshes as stand-in geometry.
I then do the building models from my manual surveys etc.
If I could get my head around Match Photo, I’d probably use that as well.
Pix4D is a great bit of kit for dealing with drone surveys but it is expensive. You can just rent it for a month.
You could also use DroneDeploy, but I have found better results and far more control with Pix4D.
All my photography goes through Lightroom, which has a place to correct optics and perspective. Correcting for lens errors like barrel and pincushion distortion is good to apply, but modifying perspective (mimicking the shifts and tilts of a bellows camera) shouldn’t be done for Match Photo. (You can do that later if you ultimately want it.)
Looking straight on is nearly impossible to match with Match Photo. It wasn’t designed for it, and I’ve tried anyway. A corner view is what’s assumed. There are other tools you can use to apply a straight-on photo to a surface if you don’t like the oblique view of windows and such.
Your iPhone camera photo is in three-point perspective. Three-point perspective is a view where you can see perspective in three directions, so there are three vanishing points. There are no parallel vertical lines; the vertical lines also point to a third vanishing point. In the next example the third vanishing point is underneath, because the view is from above.
Edit: If you have a picture with parallel vertical lines (2-point perspective), taken straight out of a ‘conventional’ camera, it will also give a good result, because the principal point is exactly in the center of the image.
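One way to check which case a photo is, without eyeballing it: trace two vertical building edges and see whether they meet. A small Python sketch (the pixel coordinates in the example are hypothetical):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4.

    Returns the intersection point as (x, y), or None if the lines are
    parallel. Applied to two traced vertical building edges: None means
    the verticals are parallel (2-point perspective); a point means they
    converge toward a third vanishing point (3-point perspective).
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None  # parallel verticals: 2-point perspective
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Two slightly converging edges of a 1000-px-tall photo:
print(line_intersection((100, 0), (110, 1000), (900, 0), (890, 1000)))
```

For the two converging edges in the example, the intersection lands far below the frame; that is the third vanishing point. Truly parallel edges never meet, which is the 2-point case.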
As a Pro user, you can use Advanced Camera Tools to set up an SU camera matching your digital camera and recreate a photographic scene inside SU. Joshua Cohen’s use of this method is impressive.
Hey Sam, have you guys tried Enscape yet? Bloody good, man, and only $45 a month (fixed seat). It also supports scenes. With Unity I used the VRTK plugin, free on the Asset Store, which is also really good, but you need to do heaps of post-processing/optimization to beautify your model.
Yes - enscape is great. So easy to get to VR. It’s kind of my backup.
Unity or Unreal look to have more potential in the long term and support much more geometry… our designs are quite heavy on landscaping and need accurate foliage species, etc.
The survey photogrammetry we’re getting is good quality (Leica BLK360) but it’s quite dense.
Will look into VRTK. Does it work with Vive or do you use Rift?
Right now it feels like choosing a VR platform is similar to Windows vs. Mac!
We used the Vive. I’ve never used the Rift, but from what I’ve heard the results are relatively similar; VRTK supports both, and there is another one on the Asset Store if that doesn’t work either. Can’t help much with point cloud imports into Unity 3D. If it’s a 3D mesh from SketchUp, the performance gets a very big boost if you crunch some polygons in 3DS Max.