Better way to anchor 1:1 model to real world

Hello community.

I would like to propose a better way to anchor the 1:1 3D models seen through the SketchUp Viewer app in AR mode.
Currently, AR mode works very well for furniture models, but it is neither user-friendly nor very functional for viewing 1:1-scale 3D models of buildings.

I suggest two things:

  1. Position the model by anchoring it to three points. Simply choose three corners of a room in the real world, then choose the same corners in the virtual model. This way, the 3D model could easily be anchored to the real world at the correct scale.

  2. A general or per-layer transparency bar. It should be possible to apply transparency to the entire model so that reality and the superimposed 3D model can be observed at the same time.
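For suggestion 1, the underlying math is well understood. The sketch below (plain Python, no external libraries; all function names are hypothetical, not part of any SketchUp API) shows how three corners picked in the model and the same three corners picked in the real world determine a scale, rotation, and translation: build a right-handed frame at each point triple, take the scale from the first edge, and align the frames.

```python
import math

# Minimal 3-vector helpers.
def sub(p, q):  return tuple(p[i] - q[i] for i in range(3))
def norm(v):    return math.sqrt(sum(c * c for c in v))
def unit(v):
    n = norm(v)
    return tuple(c / n for c in v)
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def frame(p0, p1, p2):
    """Right-handed orthonormal frame from three non-collinear points."""
    u, w = sub(p1, p0), sub(p2, p0)
    e1 = unit(u)
    e3 = unit(cross(u, w))
    e2 = cross(e3, e1)
    return e1, e2, e3

# Tiny 3x3 matrix helpers (rotation stored as list of rows).
def cols(e1, e2, e3): return [[e1[i], e2[i], e3[i]] for i in range(3)]
def matT(M):          return [[M[j][i] for j in range(3)] for i in range(3)]
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
def matvec(M, v):
    return tuple(sum(M[i][k] * v[k] for k in range(3)) for i in range(3))

def three_point_anchor(model_pts, world_pts):
    """Similarity transform (scale s, rotation R, translation t) that maps
    the three model corners onto the three real-world corners."""
    a0, a1, a2 = model_pts
    b0, b1, b2 = world_pts
    Ra = cols(*frame(a0, a1, a2))
    Rb = cols(*frame(b0, b1, b2))
    R = matmul(Rb, matT(Ra))                    # rotation aligning the frames
    s = norm(sub(b1, b0)) / norm(sub(a1, a0))   # scale from the first edge
    Ra0 = matvec(R, a0)
    t = tuple(b0[i] - s * Ra0[i] for i in range(3))
    return s, R, t

def apply_anchor(s, R, t, p):
    """Map a model-space point into the real world."""
    Rp = matvec(R, p)
    return tuple(s * Rp[i] + t[i] for i in range(3))
```

The first two corners are mapped exactly; the third absorbs any small picking error. For a true 1:1 check, the recovered `s` should come out close to 1.0.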

I believe Trimble Connect AR already does everything I am requesting, but I am asking for it to be included in the SketchUp Viewer app so that it is easy to load and use for one-person offices.

I hope someone takes note of the proposal. :pray:t3::smiling_face::wave:t3:



Hi @JorgeArq

There are some great ideas in your suggestions, and we very much appreciate you sharing them with the community.
Could you tell us a bit more about the kind of work you do, how this alignment feature would improve your workflow, and at what stages of the project you would typically need/use alignment in VR?

Thank you,

I can second this request. I do large-scale 1:1 visualization on the iPad Pro using the app and the AR feature. I find the Z height of the anchored 1:1 geometry to be consistently about a meter off the floor. Even when placing the ground-plane anchor point on the floor, it is easily possible to move the iPad down low and see “underneath” the model, which should not be possible, and when walking around inside the model the ground appears between knee and waist height. I have tried artificially lowering the geometry below the ground plane, but this shows no change in the behavior; the app appears to take its ground plane from the lowest piece of geometry.

Any system that could keep the ground plane at real-world foot level would be very welcome. I design large-scale installations, and it’s a crucial part of the workflow to visualize an entire design at real-world size within a space: to walk through it, around it, and inside it. I make it work, but it does seem optimized for looking at small buildings on a conference table, not for 1:1.
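The fix being asked for here is conceptually a one-line vertical correction. A minimal sketch (plain Python; `floor_offset` and `settle_on_floor` are hypothetical names, not app functions, assuming the AR framework reports a detected floor height): shift the whole model so a chosen model-space floor height lands on the detected real-world floor, instead of inferring the ground from the lowest geometry.

```python
def floor_offset(model_floor_z, detected_floor_z):
    """Vertical shift that puts the model's chosen floor height
    on the real-world floor detected by the AR session."""
    return detected_floor_z - model_floor_z

def settle_on_floor(vertices, model_floor_z, detected_floor_z):
    """Apply that shift to every (x, y, z) vertex of the model."""
    dz = floor_offset(model_floor_z, detected_floor_z)
    return [(x, y, z + dz) for (x, y, z) in vertices]
```

For example, if the model’s floor was drawn at z = 1.0 m but the detected real floor is at z = 0.0 m, the whole model drops by one meter, which matches the roughly meter-high offset reported above.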

Transparency control, to be able to blend AR and reality as needed, would be very appreciated as well.

Have you tried moving the model axes to a real-world anchor point, like a door or the corner of a building?

No, but it’s a good idea. I didn’t think the app took anything but floor level into account when mapping the Z height; I’ll give it a try. :+1:


I have the Android (phone/tablet) versions and I’m not sure whether it makes a difference, because Immersive mode works differently in those apps. But for HoloLens and Quest Pro, setting the model axes to a known point basically sets the insertion point (anchor). So, for example, if you know you want to start viewing next to door A on floor 2 (IRL), that’s where you’d put your model axes/anchor.

I learned this the hard way by moving models one small pinch-and-drag at a time… which is about as fast as belly-crawling across a building site ;^)!

The SiteVision extension Trimble SiteVision | High Accuracy Augmented Reality System might be interesting for you to check out. I don’t have a Catalyst/subscription to make the most of it… (but I’m hoping Aris will send me one to try out ;^!)

Hi all,

I appreciate the additional feedback, and I understand that most folks here are discussing the AR functionality for SketchUp for iPad.
However, I do have one question for @JorgeArq: The topic has been tagged with “vr-viewer”. Can you please clarify if your suggestions refer to the SketchUp Viewer for Quest, SketchUp for iPad, or both?

Thank you,

Hi Aris (and to those who have participated).

Actually, my request should apply to all the VR viewers (XR as well). This feature is already well established on the HoloLens, but in the other applications anchoring to the floor is left to the system to do automatically, with unpredictable results. I understand that mixed-reality technology on the HoloLens is more mature, but I feel the technology is already good enough to anchor a virtual model to physical reality more accurately on other apps and hardware.

My specific request is to be able to anchor a virtual model of a bedroom or office to a real environment. Say it’s an interior design project and you have a set to show: the client is on site and I want to show him the decoration of his bedroom or office, not only a piece of furniture (because that works fine) but the whole space. I don’t use an iPad or a Quest; I have an Android tablet and the SketchUp Viewer app (I also have Windows Mixed Reality headsets, but that’s beside the point).

The problem is that positioning, scaling, orienting, and navigating the 3D model with the VR viewer in the app is very complex, and I have no way to manually anchor it to reality. This should be the first thing one does before showing anything to the client, to ensure the experience is accurate and smooth.

Thank you for following up on my query. Possibly my solution is to buy an iPad and/or some Quests! :wink::+1:t3:



So a quick idea on this would be to have the Viewer scan for floors and/or walls and then have an intermediate “Place Anchor” step where an anchor could be attached at a point on the floor or wall. The model would then open at the anchor point where the model axis is. So for example if I had a model of a wall with a door, I would move the model axes to the handle (and align it to the wall). Then when in the Viewer app, I’d scan, find the wall, then add an anchor to the real-world door handle so the model would open at the Anchor/door handle. Since everything in the model is relative to the axis/anchor all the other model elements would be aligned (one would hope!).

I think the UI difference would be that instead of a raycast from the head mounted display, you’d use screen space and a cursor that would hit the wall or floor to get the anchor point. In other words, the user would use their finger on the display and the cursor would move on the wall/floor mesh.
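That screen-space picking boils down to casting a ray from the camera through the tapped pixel and intersecting it with the detected floor or wall plane. A minimal sketch (plain Python; assumes the tap has already been unprojected into a world-space ray, which AR frameworks typically provide, and that the detected plane comes as a point plus a normal):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ray_plane_anchor(ray_origin, ray_dir, plane_point, plane_normal):
    """World-space point where the tap ray hits a detected plane,
    or None if the ray is parallel to the plane or the hit is behind
    the camera. That point becomes the candidate anchor."""
    denom = dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the plane
    to_plane = tuple(p - o for p, o in zip(plane_point, ray_origin))
    t = dot(to_plane, plane_normal) / denom
    if t < 0:
        return None  # intersection is behind the camera
    return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
```

For instance, a camera held at 1.5 m tapping a point down-and-forward onto the detected floor plane yields an anchor on the floor (z = 0), which is exactly the “cursor hits the wall/floor” behavior described above.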



That’s a good approach!
Another one could be to choose a corner where the ceiling meets two walls (it is often easier to find unobstructed corners by looking up than by looking down at the floor) and manually align the XYZ axes of the virtual model with the real space at that corner to anchor the model.

There can be several approaches, but the important thing is that they are easy to apply and simple for the user.

@JorgeArq Yes, wall intersections with each other and/or with floors and ceilings can work. I think it depends on what you are modeling and what the ‘obvious’/convenient place is to set up your model presentation, i.e., whether you are presenting a new toilet, the outside of a house, or a wall in the interior of a large building.

There are differences between devices in how they ‘scan’ the environment. For example, the HoloLens creates a mesh. The mesh is not ‘tight’ at corners (or anywhere; it’s like a triangulated terrain mesh draped over everything), but you can place models onto it. Then you have to move the model to the approximate place where the real wall (or whatever) and the mesh meet. But some devices do more of a plane-finding scan. Android tablets do something like that, so you can see a flat ‘grid’ that overlays floors/walls. I don’t have an iPad, but it may use LiDAR, and I think that could give a pretty good ‘mesh’ model.

But one of the things about the (Android) Viewer is that it seems to want to take the camera position from the SU scene/model space and then place the model ‘on the floor’. Without an intermediate step to place an anchor that corresponds to a known point in the model (the axes), it is ‘making an assumption’ about where you are (I think it uses the SU scene camera). With a Quest Pro / XR headset, there is a camera that corresponds to the headset position, so you have more points of reference (and probably a better scan).

If you are ‘crazy’ about these topics you can see videos I made testing model placement here: XR Design & Installation - YouTube

I’ve given Aristodimos the gist of my ideas as far as XR goes (including anchors). I’ve prototyped a bunch of concepts using Unity. One option I like is ‘spawn anchors’: the anchors can be placed via controllers and then models spawned to them. This could be handy for opening more than one ‘model’. The Viewers are a different story; they work differently. I don’t know how to make anything for iPad and can only do simple face/AR overlays on Android… Actually, I once made a target app that used a picture as an ‘anchor’, so a picture in a catalogue opened a model. Anyway, the way AR Viewer apps work now, you just can’t get great placement.

Thanks for sharing the video.

Yes, I am very interested and I will watch it.

Regarding placing the model with the Android device: you can set the axes and the height of the model, but it is extremely complicated, and the placement is easily lost. I think the app is designed for visualizing objects, not for walking through spaces, because placing a chair is very simple but fitting a space onto a space is very complicated, if you know what I mean!

