Scaling Edge Towards Camera

Just got one of my extensions approved again and the extension reviewer suggested:

Note: You may want to scale down the preview lines drawn to the view by some percent towards the camera, to have them always drawn on top of coplanar faces with no Z fighting. SketchUp does something similar when drawing edges.

Any suggestions on how I would do that? I keep a reference (@edge) to the edge I highlight while my extension is active.

Something like:

def draw_edge(edge, view = edge.model.active_view)
  # Get the two points that describe the edge:
  points = edge.vertices.map(&:position)
  # Get the view's camera object:
  cam = view.camera
  # Get the camera's eye point:
  eye = cam.eye
  # Calculate a smidgen of the camera's distance to target point:
  distance = cam.target.distance(eye)
  factor   = 0.9996
  smidgen  = distance * ( 1.0 - factor )
  # Get two vectors from the points to the camera eye point:
  vec1 = points.first.vector_to(eye)
  vec2 = points.last.vector_to(eye)
  # Set vector lengths to the smidgen:
  vec1.length = smidgen
  vec2.length = smidgen
  # Offset the drawing points by the vectors:
  points.first.offset!(vec1)
  points.last.offset!(vec2)
  # Set drawing color & draw the line to the view:
  view.drawing_color = 'Purple'
  view.draw_line(points)
end

:question:

This is what I had come up with after some thought (and blowing out the Ruby/SketchUp API cobwebs…):

def scale_to_camera(edge)
	# From endpoints of 'edge' make copy of points a little closer to the Camera
	vertices = edge.vertices
	new_pts = []
	pts_0 = [ vertices[0].position, vertices[1].position ]
	camera_eye = Sketchup.active_model.active_view.camera.eye
	vector0 = camera_eye.vector_to(pts_0[0])
	vector1 = camera_eye.vector_to(pts_0[1])
	length0 = vector0.length * 0.9996
	vector0.normalize!
	vector0.length = length0
	new_pts[0] = camera_eye + vector0
	length1 = vector1.length * 0.9996
	vector1.normalize!
	vector1.length = length1
	new_pts[1] = camera_eye + vector1
	return new_pts
end

I can definitely see that your code is more efficient than mine. Thanks for the advice :slight_smile:

I don’t think you need to normalize the vectors before setting their lengths.

But yes, you are on the right track, and that is to use a point along an imaginary vector from each of the edge’s points toward the camera. (EDIT: I see you went the opposite way, along vectors from the eye to the points.)

The edge’s end points may not be the same distance from the camera eye, so applying different offsets based on those two differing distances can move one end of the draw line closer to the camera (and further from its edge point) than the other.

This is why I suggested using the distance between the camera’s eye and its target point, hoping the draw points would then be offset by the same amount.
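
To put some hypothetical numbers on that (the distances and factor below are made up purely for illustration):

factor = 0.9996
dist_a = 100.0   # inches from the eye to one end point (made-up value)
dist_b = 500.0   # inches from the eye to the other end point (made-up value)

# Proportional offsets: scale each eye-to-point distance by the factor.
# The far end moves five times as much as the near end:
dist_a * (1.0 - factor)   # => 0.04"
dist_b * (1.0 - factor)   # => 0.2"

# Constant offset: a smidgen of the eye-to-target distance (say ~300")
# nudges both ends by the same 0.12", keeping the draw line parallel:
smidgen = 300.0 * (1.0 - factor)   # => 0.12"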

However, OpenGL Z fighting is strange. I seem to remember that there might be more bleed-through for objects farther from the camera than for closer ones. You’ll need to experiment to find out which approach works best.


Agreed. Originally I was going down the path of multiplying each component of the vector by the new length (and thus they would have to be normalized), but then I remembered the ‘length=’ call.

Point taken on moving each point by the same amount toward the eye (along the two converging vectors), thus keeping the new draw line roughly parallel to the original edge.

EDIT: However, since Z fighting is more pronounced farther from the camera, having a larger adjustment at greater distances should actually help the depth test resolve in favor of the preview line.
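
Something along these lines, for instance (the method name is just for illustration; it uses only documented Point3d/Vector3d calls and mirrors the scale_to_camera logic above):

# Nudge a single draw point toward the eye by a fraction of its own
# distance, so the nudge grows with distance from the camera:
def nudge_toward_eye(point, eye, factor = 0.9996)
  vec = point.vector_to(eye)                  # full point-to-eye vector
  vec.length = vec.length * (1.0 - factor)    # keep only a small fraction of it
  point.offset(vec)                           # returns a new, slightly nearer point
end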

Thanks again Dan :+1:

In a view-draw scenario it is important to write fast code, because the draw method gets called many times during a tool’s operation.
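
For context, here is a bare-bones sketch of where such a helper gets called from (the class name and wiring are only illustrative; draw_edge is the method from my earlier post):

class EdgeHighlightTool
  def initialize(edge)
    @edge = edge
  end

  # SketchUp calls this on every view refresh while the tool is active,
  # which can be many times per second during orbits or mouse moves:
  def draw(view)
    draw_edge(@edge, view) if @edge && @edge.valid?
  end
end

# Sketchup.active_model.select_tool(EdgeHighlightTool.new(some_edge))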

Making Ruby reference assignments does take time, although they make for readable and maintainable code.

For example, I could probably have reduced the reference assignments here:

  # Get the camera's eye point:
  eye = view.camera.eye
  # Calculate a smidgen of the camera's distance to target point:
  smidgen = view.camera.target.distance(eye) * 0.0004
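
If in doubt, the point math (e.g. your scale_to_camera method) can be timed from the Ruby Console with the standard Benchmark module (this only measures the geometry, not the actual drawing, and it assumes an edge is currently selected):

require 'benchmark'

edge = Sketchup.active_model.selection.grep(Sketchup::Edge).first

Benchmark.bm(18) do |bm|
  bm.report('scale_to_camera:') { 10_000.times { scale_to_camera(edge) } }
end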

No problem. We like solving puzzles here.

The range bins used by the z-buffer are usually larger when farther from the camera because details far away are not visually separable anyway.
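
As a rough, hedged illustration (plain Ruby, assuming a standard perspective projection and a 24-bit depth buffer; SketchUp’s actual clipping planes will differ), the world-space span of one depth step grows roughly with the square of the distance:

# Approximate world-space span of one depth-buffer step at distance d,
# for near/far clip planes n and f (all values hypothetical):
def depth_bin_size(d, n = 1.0, f = 10_000.0, bits = 24)
  # Window depth z_w = (f + n) / (2 * (f - n)) - (f * n) / ((f - n) * d) + 0.5,
  # so dz_w/dd = (f * n) / ((f - n) * d**2), and one step of 1 / 2**bits
  # covers roughly this much world-space distance:
  (1.0 / 2**bits) * ((f - n) * d**2) / (f * n)
end

depth_bin_size(10)      # ~0.000006 units -- very fine bins near the camera
depth_bin_size(5_000)   # ~1.5 units      -- much coarser bins far away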


The range bins used by the z-buffer are usually larger when farther from the camera because details far away are not visually separable anyway.

Better explanation of what I was trying to say.

All the more reason to make the adjustment for more distant objects more pronounced, such that after the integer round-off the range to the object is sure to fall into a different (in this case nearer) range bin.
