I have to glaze a large steel structure, so as usual I laser scanned it to create a Point Cloud, and imported the Point Cloud into SketchUp to model it and create the detailed glass manufacturing drawings. However, whilst working on the model I’ve noticed the structure is slightly off grid (which to my mind is of no real consequence whatsoever), but the project’s architects have now asked me to quantify exactly how far the structure is off grid relative to the original setting out markers.
To give them this information I have rescanned the structure to capture all of the setout markers, but there is a problem: for some reason the scanner does not capture the markers.
You can see from the images below where marker A is located, then a close-up from the Point Cloud showing there is no data in the area where that marker should be, and a photograph of the actual marker. The final photograph is of another marker on site which was located low enough for me to get a good photo.
I need to locate these markers with millimetre accuracy on the Point Cloud, so I am going back next week with a ladder to take a straight-on, close-up photograph of each of them, which I think I can then position exactly onto the Point Cloud by matching the positions of the surrounding bricks.
The only thing is that whenever I have previously tried to import photographs into SketchUp in order to scale and position them accurately, I have never had any success, so I am asking if anybody knows how to do it.
When I need to do that sort of thing I use the Tape Measure tool to measure a known distance in the image, type the actual dimension, and let SketchUp scale the image.
Here I’ve set out some guides outlining the thickness of the brick, and I added a vertical one crossing the horizontals to give me anchor points for the dimension. Then I measured between the points. I only added the dimension so you could see what happens; there’s no need to do that in practice.
Once you’ve got the image the right size you could explode it so it becomes a texture on a face. Make it a group or component so you can put it where it needs to go.
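For anyone who likes to see the arithmetic: what SketchUp does when you measure a known feature and type its true dimension is just a uniform rescale of the whole image. This is an illustrative Python sketch (not the SketchUp API); the function name and the example numbers are made up for the demonstration.

```python
# Illustrative sketch, not the SketchUp Ruby API: typing a true dimension
# after a Tape Measure measurement applies one uniform scale factor.

def rescale_image(image_size_mm, measured_mm, true_mm):
    """Scale an image (width, height) so a measured feature matches its real size.

    image_size_mm -- current (width, height) of the imported image, in mm
    measured_mm   -- distance the Tape Measure reports for a known feature
    true_mm       -- real-world size of that feature (e.g. a 65 mm brick)
    """
    factor = true_mm / measured_mm
    width, height = image_size_mm
    return (width * factor, height * factor)

# e.g. a brick measures 40 mm in the unscaled image but is really 65 mm tall:
print(rescale_image((1000.0, 750.0), 40.0, 65.0))  # -> (1625.0, 1218.75)
```

Because the scale factor is applied to the whole image, measuring one feature well is enough to size everything else in the photo.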
No, but if you make a component to contain the image, or explode it to make it a texture on a face and then make that a component or group, the object will show in Outliner.
One of the talking points in my 3D Basecamp presentation on MatchPhoto was “Interpolate, don’t extrapolate.” By that I mean, don’t scale a large image based on a small element within. Rather, get the best, big, overall dimension you can with a laser distancer, and the photo will interpolate all the stuff in between. That way the photo data can’t be too far off since they are only some fraction of the distance between two dead-to-nuts points in your model.
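The “interpolate, don’t extrapolate” point can be put in numbers: a fixed measuring error on the reference dimension becomes a *relative* error that multiplies every distance read off the photo, so the bigger the reference, the smaller the scatter. A quick sketch with illustrative figures (a ±1 mm laser-distancer error, a 65 mm brick, a 2 m photo — none of these are from the thread):

```python
# Illustrative numbers only: a ±1 mm error in the reference dimension used
# to scale the photo propagates proportionally across the whole image.

def worst_case_error_mm(reference_mm, reference_error_mm, target_mm):
    """Worst-case positional error at a point `target_mm` away, after
    scaling the photo from a reference measured with the given uncertainty."""
    relative_error = reference_error_mm / reference_mm
    return target_mm * relative_error

# Scaling a 2 m wide photo from a single 65 mm brick (extrapolating)
# vs. from a 2000 mm overall span (interpolating), both measured to ±1 mm:
print(worst_case_error_mm(65.0, 1.0, 2000.0))    # ~30.8 mm across the photo
print(worst_case_error_mm(2000.0, 1.0, 2000.0))  # 1.0 mm across the photo
```

With the big reference, no point inside the span can be off by more than the measuring error itself, which is the interpolation argument in a nutshell.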
For a rectangular texture patch, I draw a rectangular face numerically so it’s accurate, and edit the corresponding rectangle of photo in Photoshop, then while importing and placing the png, you can snap to the corners of the predrawn rectangle to get the dimensions right.
Yes. That’s good advice. I meant to add that in my first reply. I was only using the height of the brick to try to keep my GIF window reasonably small and the GIF file size low enough to upload directly.
Working with the largest known dimension will generally yield more precise results.
With this application the Point Cloud is already millimetre accurate. I am just trying to locate the centre of the marker by positioning a photograph of the relevant brick, so I guess it’s a trade-off between including lots of bricks and needing a head-on photograph to avoid any distortion.