I am trying to get the camera position (the eye) in the 3D model, which has been set manually. The thing is, since most of my cameras are parallel cameras, the zoom level makes the camera eye position irrelevant.
So I am looking for a way to get the zoom factor of the current view, so that I could hopefully retrieve the camera / view position within the scene. But the zoom function of the View class doesn’t seem to return any value.
Am I missing something here? Any idea of how to do that?
It returns what the docs say it returns, which is a reference to the receiver object (i.e., the model’s singleton view object).
What is throwing you off is that View#zoom looks like a getter method, but is in fact a setter method. (The naming obviously does not follow Ruby convention, nor does the method return the argument value as setter methods usually do in Ruby.)
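To illustrate the pattern in plain Ruby (this `FakeView` class is a hypothetical stand-in, not the SketchUp API, which is only available inside SketchUp): a setter-style method that returns the receiver lets you chain calls, but gives you no way to read a zoom factor back.

```ruby
# Hypothetical stand-in for Sketchup::View, illustrating the setter pattern.
class FakeView
  attr_reader :factor

  def initialize
    @factor = 1.0
  end

  # Mimics View#zoom: applies a factor relative to the current view...
  def zoom(factor)
    @factor *= factor
    self # ...and returns the receiver, not a zoom value
  end
end

view = FakeView.new
result = view.zoom(2.0)
result.equal?(view) # true: you get the view back, not the factor
```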
“A zoom factor” is only relative to the current view. I.e., afterward, any factor used is relative only to the previous view. There is no base (or reference) zoom factor unless you save one yourself.
I would suggest saving a zoom view (camera properties) after calling View#zoom_extents and using that as a reference for later comparison.
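As a sketch of that idea (pure Ruby, no SketchUp needed; the comments note where the real API values would come from): in parallel projection the "zoom level" is fully captured by the camera's height in model units, so a relative zoom factor can be derived by comparing the current height to a saved reference height.

```ruby
# Derive a relative zoom factor from a saved reference height.
# A result > 1.0 means the view is zoomed in relative to the reference.
def zoom_factor(reference_height, current_height)
  reference_height / current_height.to_f
end

reference = 200.0 # e.g. camera.height captured right after view.zoom_extents
current   = 50.0  # e.g. camera.height read later
puts zoom_factor(reference, current) # prints 4.0 (zoomed in 4x vs. reference)
```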
Thanks for the clear and detailed reply. Even if that means my parallel camera positioning won’t suit my needs, at least now I understand why.
But I still don’t really understand why zooming with a parallel camera doesn’t actually move the camera eye; it seems to me that would be easier to handle.
If you ponder parallel projection carefully, you may realize that the distance of the camera from the model has no effect on scale! Move the model away from the camera in a direction perpendicular to the view plane, or move the camera along that direction, and there is no change in the size of the model in the view, because points in the model are projected onto the view plane along that axis. So instead of camera movement, in parallel projection the API uses the height of the viewport in model units to set the scale of the view.
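This can be demonstrated with a few lines of plain Ruby (no SketchUp required; `project_parallel` is an illustrative helper, not an API method): in an orthographic projection along the Z axis, the depth coordinate is simply dropped, so only the X/Y coordinates and the view height determine where a point lands on screen.

```ruby
# Orthographic (parallel) projection of a 3D point onto a viewport.
# The point's depth along the view axis (z) is discarded entirely;
# only the view height in model units sets the scale.
def project_parallel(point, view_height, viewport_px)
  x, y, _z = point                       # depth is dropped
  scale = viewport_px / view_height.to_f # pixels per model unit
  [x * scale, y * scale]
end

near_point = [2.0, 3.0, 10.0]   # close to the camera
far_point  = [2.0, 3.0, 5000.0] # very far from the camera
a = project_parallel(near_point, 100.0, 500)
b = project_parallel(far_point, 100.0, 500)
a == b # true: moving along the view axis changes nothing on screen
```

Halving the view height, on the other hand, doubles `scale`, which is exactly the "zoom" effect parallel projection offers.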
Yeah, it’s crazy, and there’s no real-life camera that can do parallel projection. Sometimes it can get messed up when the camera is a huge distance from the model. If you look at the model in this thread:
The camera’s eye was at (-2017969.173762m, -1535978.792975m, 2503208.202455m), using parallel projection, but the view was almost normal, with some weird artefacts that were worse when rendered in V-Ray.
Another loosely related thought: the lack of connection between view scale and positioning of the model and camera relative to the view plane is why clipping is more of a problem in parallel projection. A model can look like it is far away due to scale yet have something poke through the front clipping plane as you orbit (which moves the camera in a circle).
Well, I am having a different result in my tests: I increase the eye-to-target distance along with the height of the camera eye (so my angles and target point remain constant), and I do get a “zoom” effect just by doing so (always using parallel cameras so far).
I would attach a gif / video if I could. Do you know of an easy way to do that?
This is probably what I don’t fully understand here. I have read the API doc about the #pixels_to_model function, but I don’t get it. Would you have any additional info / examples about this?
Anyway, thank you all for your explanations so far!
I’m not sure I fully understand what you are doing. Am I right that it is entirely via the Ruby API, i.e. not involving the zoom, orbit, or pan tools? It could be that multiple simultaneous changes are interacting. Could you provide a code snippet?
If I add a multiple of the eye-to-target vector to the eye position and invoke camera#set using the same target and up but the new eye, there is no visible change in the view or in the camera’s height value, but the camera position is changed (see snippet below). Other than creating a new camera, there is no method except set by which to modify the eye, target, or up values.
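A reconstruction of the kind of snippet described (hedged: the real `Sketchup::Camera` only exists inside SketchUp, so this uses a minimal hypothetical stand-in `Camera` struct to show the geometry): pushing the eye further back along the eye-to-target vector changes the stored eye position, but not the viewing direction or the height, which is why a parallel view looks identical.

```ruby
# Minimal stand-in for Sketchup::Camera (hypothetical, for illustration only).
Camera = Struct.new(:eye, :target, :up, :height) do
  # Mimics Camera#set: the only way to modify eye/target/up, returns the camera.
  def set(eye, target, up)
    self.eye, self.target, self.up = eye, target, up
    self
  end

  # Unit vector from eye toward target.
  def direction
    v = target.zip(eye).map { |t, e| t - e }
    len = Math.sqrt(v.sum { |c| c * c })
    v.map { |c| c / len }
  end
end

cam = Camera.new([0.0, 0.0, 10.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0], 100.0)
dir_before = cam.direction

# Move the eye 3x further out along the eye-to-target vector.
vec = cam.target.zip(cam.eye).map { |t, e| e - t }
new_eye = cam.target.zip(vec).map { |t, v| t + 3 * v }
cam.set(new_eye, cam.target, cam.up)

cam.eye                     # [0.0, 0.0, 30.0]: the position did change
cam.direction == dir_before # true: same direction, so same parallel view
cam.height                  # 100.0: untouched, so no visible zoom
```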
I use LiceCap, which is free and available for both Windows and Mac. There are alternatives, but I find LiceCap very easy to use.
The value returned by camera#height, and the value you pass to camera#height=, will be in the current model units. Imagine a line in the model that is exactly parallel to the view screen and to the screen’s vertical axis (not the model’s axes), extending from the top to the bottom edge of the screen. Height will be the length, in model units (e.g. cm), of that line. If, for example, you assign