Hi everyone! I’m a new user on the forum. Thanks to all of you who have the knowledge I don’t. I’m wondering how SketchUp captures the 3D position of the mouse from its location on screen. I can get the mouse’s pixel position on screen, but I don’t know how deep from the camera I should set the point. Can anyone help me?
Keep in mind that the mouse does not have a 3d position.
SketchUp can only guess a 3d position (as well as you could on your own).
The Ruby API provides some useful helper methods:
- view.pickray and then model.raytest to retrieve the first position at which a ray below the cursor hits an object.
- view.pick_helper to retrieve the entities exactly below the cursor (not the position).
- view.inputpoint to retrieve a point and entities using inferences (best match around the cursor).
- view.pixels_to_model to convert sizes or distances from the screen plane to somewhere in model space.
And the inverse way:
- view.screen_coords to project a 3d point to the screen plane
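To make the flow concrete, here is a minimal sketch of how pickray and raytest fit together inside a Tool (the class and variable names are my own invention, and it only does anything when run inside SketchUp):

```ruby
# Sketch only - this needs to be loaded inside SketchUp to be useful.
class PickDemoTool
  def onLButtonDown(flags, x, y, view)
    ray = view.pickray(x, y)        # [eye point, direction] through the cursor
    hit = view.model.raytest(ray)   # nil, or [Point3d, path-to-entity]
    if hit
      point, path = hit
      puts "Hit #{path.last} at #{point}"
    else
      puts "Ray hit nothing below the cursor"
    end
  end
end
# Activate with: Sketchup.active_model.select_tool(PickDemoTool.new)
```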
Thanks, Aerilius. According to your answer, my problem comes to be how to “guess” (or say infer) the 3d position? Or in other words, what does SketchUp do in view.inputpoint?
Download the Example Ruby Scripts, and examine the line tool example. It explains how to write a tool that uses the SketchUp::InputPoint class.
Thanks DanRathbun, but my actual problem is not “how to use SketchUp::InputPoint” but “how view.inputpoint works”, or in other words, “what is the theory behind inference”. Is that technique publicly documented? Sorry if this question is inappropriate.
From the User Guide …
That helps a lot. Thanks very much! But I still have a question. For example, if I draw a line from (0, 0, 0), how does SketchUp decide which plane (XY, YZ, XZ, or another) the end point of the line lies on, when only the origin point can be inferred from?
I do not believe SketchUp has built-in planar constraints.
You need a face to infer to. Or the user enters a set of coordinates.
There must be some planar constraints; otherwise nobody would know where the first line should go.
According to my own test, lines are drawn by default on the XY plane through the first inferred point (e.g. the origin). Under some circumstances (which I wonder about) they end up on the YZ or XZ plane instead. My assumption is that this happens when the intersection of the pick ray with that XY plane is too far from the camera, but I’m not so sure.
Okay, well, it depends upon the angle of view.
I tried it in SketchUp and I’m confused about how the second line (the blue one) is inferred upwards while the first line (the black one) stays on the XY plane.
Are you sure it is on the XY plane? You would be the only one to know; from the picture, it could just as well be on the XZ plane.
As @DanRathbun explained, it depends on the view.
There is a fundamental ambiguity in translating from a cursor location on the screen into a point within the 3D coordinates of the SketchUp model. The camera’s eye point and the cursor’s screen location define a ray through model space that is “where you are looking through the cursor”, but that ray is infinite and there is no inherent way to know how far down that ray you meant the point to go. The inference engine’s job is to help resolve this ambiguity based on the existing content of the model.
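In plain Ruby (no SketchUp API needed; the eye point and direction are made-up numbers), the ambiguity looks like this: every non-negative distance t along the ray gives a legitimate model point that projects back to the same cursor pixel.

```ruby
# A pick ray is eye + t * direction; there is one candidate 3D point
# for every depth t, and the cursor alone cannot choose between them.
eye = [0.0, 0.0, 10.0]
dir = [0.6, 0.0, -0.8]  # unit direction "through the cursor"

point_at = lambda do |t|
  eye.zip(dir).map { |e, d| e + t * d }
end

[5.0, 10.0, 20.0].each do |t|
  p point_at.call(t)    # three different model points, same pixel
end
```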
Some other modeling apps (I call them “2 1/2D” ) have a notion of “current working plane” that they use to resolve the ambiguity. All new geometry is created only on the working plane. I previously used one such and hated it. I was constantly having to switch between working planes to accomplish even simple 3D operations. SketchUp has no such concept.
Let’s explore how SketchUp’s inference engine worked in the example shown in your image.
When you clicked to set the first end of your first edge, the only things in the model were the origin point and the axes. Had you moved the cursor near to one of these, you would have gotten a tooltip and your click would have been taken to lie on the origin point or on an axis. But when the cursor was far enough away from these to make it unlikely that was your intent, the only remaining possibility was to put the point onto the cardinal plane the view appears to be “looking at” - in this case the XY plane. If you orbited the camera before clicking, the “looking at” plane would change. Also please note that the XY plane is not a “thing” in the model, it is simply the only reasonable choice of where an otherwise unconstrained point was meant to be.
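That “fall back to the plane the view is looking at” step amounts to a ray-plane intersection. Here is a plain-Ruby sketch with invented numbers (not SketchUp’s actual code): intersect the pick ray with the z = 0 (XY) plane, returning nil when the ray is parallel to the plane or the hit is behind the camera.

```ruby
# Intersect a ray (eye + t * dir) with the XY plane (z = 0).
def ray_hit_xy_plane(eye, dir)
  return nil if dir[2].abs < 1e-12     # ray parallel to the plane
  t = -eye[2] / dir[2]
  return nil if t < 0                  # plane is behind the camera
  eye.zip(dir).map { |e, d| e + t * d }
end

eye = [0.0, -20.0, 15.0]
dir = [0.0, 0.8, -0.6]
p ray_hit_xy_plane(eye, dir)           # a point with z = 0
```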
When you then moved your cursor to the next point the inference engine again watched the movement. Once again, had you placed the cursor on the origin or an axis the engine would have offered a snap to that. But once again the end location you clicked was not obviously meant to be the origin or on an axis so the engine inferred you must want the XY plane.
But please realize this isn’t because SketchUp is currently “working on” the XY plane or because the initial point was on the XY plane, it is because there was no other rational choice to make. Also please realize that a single edge cannot define a plane! Your first edge is actually in an infinite family of planes rotated in all possible directions about it. So there is no prior notion of a working plane that can affect subsequent edges.
Now, when you next moved the cursor to create another edge starting at the end of the last one, you moved the cursor in a direction parallel to the blue axis on the view. The engine noticed this and offered it as the most likely interpretation of what you were trying to draw. Had you moved the cursor parallel to the red or green axis you would have seen similar inferences for them. And there are two additional inferences for when the cursor position appears to lie on an extension of the existing edge or perpendicular to that edge. The perpendicular will go in the XY plane again, not because the existing edge defined a plane but because that is the only rational choice among the infinite number of directions an edge could be perpendicular to the existing one.
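One way to picture the axis inference, as a simplified stand-in rather than SketchUp’s actual algorithm: compare the drag direction against each axis direction and keep whichever the drag is most nearly parallel to.

```ruby
# Pick the axis most nearly parallel to a drag direction, by
# comparing |cos(angle)| between the drag vector and each axis.
AXES = { red: [1, 0, 0], green: [0, 1, 0], blue: [0, 0, 1] }

def nearest_axis(drag)
  mag = Math.sqrt(drag.sum { |c| c * c })
  AXES.max_by do |_, axis|
    dot = drag.zip(axis).sum { |d, a| d * a }
    (dot / mag).abs
  end.first
end

p nearest_axis([0.1, 0.2, 3.0])   # a mostly vertical drag snaps to :blue
```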
As you add more content to the model, the suite of inferences you could have intended becomes more extensive. SketchUp will also look for points on faces, on existing edges, at endpoints of existing edges, etc., based on the notion that you are most likely wanting to add onto your existing content if the cursor looks like it is over something. Where possible, it offers the centers of arcs and circles.
On a complex model, there may be multiple possible inference snaps close to the one you want. In many cases you can use the arrow keys and/or the shift key to lock a particular choice of inference. You can also hover the cursor over one inference point to give the engine a clue about your intent. For example, if you hover over the midpoint of one side of a rectangle and then over the midpoint of the other side, the engine will conclude that you probably want the center point of the rectangle. But sometimes you have to zoom and/or orbit to remove the ambiguity. This last issue is why some users want a feature added to selectively enable/disable specific types of inference snaps.
I’m sure the black one is on the XY plane. I tried different views to ensure that.
Thank you veeeeeery much, slbaumgartner! That’s what I’m longing for!