I did a shadow study yesterday and needed to calculate the area left unshaded at each of the hours in question. I knew there was an extension that turns shadows into geometry (this one), but I was under the gun and didn’t have time to download it from SketchUcation and test it, so I eyeballed it, traced the geometry manually, and took the area from that. I’ll explore the extension later, but I just want to understand a few things.
I turned off “Enable length snapping”, changed units from m to mm, and zoomed right in, but I still seemed to get inferencing from points that I could have sworn were off screen. 1) Is that possible, and 2) if so, is it because of SU’s inferencing “touch memory”? To work around it, I dragged out construction lines to cross at the points I was trying to target with the Line tool, since they seemed to have less interplay with the inferencing system. It was pretty laborious, and again, I’ll look at the shadow extension, but I’d like to understand what was going on.
Recapping and adding:
- Can you still get inferencing from stuff off screen?
- If so, does it bias towards SU’s “touch memory”?
- Does making things hidden (either by Hide or turning off their visibility in tags) stop inferencing from those entities?
Thanks in advance.