UV mapping: importing meshes with multiple UVs per point

Hi folks,

I’m trying to solve a problem related to UV mapping (and I’m no expert on the subject). In our SketchUp plugin, we are importing geometry meshes. We have plugins that do this in several other 3D platforms—Cinema 4D, 3ds Max, Maya, etc. Unlike some of these, SketchUp does not allow us to assign multiple UVs to a single point. To handle this, I initially tried making duplicate points for these cases, so they could each be assigned different UVs, but this didn’t work: Geom::PolygonMesh deduplicates the points, so when you try to add a second point at the same coordinates, you get back the index value of the first point. No second point is added.
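The deduplication can be illustrated outside SketchUp with a plain-Ruby mock that mimics how Geom::PolygonMesh#add_point hands back the existing index for a repeated point. The class, tolerance constant, and quantization scheme below are my own stand-ins for illustration, not the real implementation:

```ruby
# Plain-Ruby stand-in mimicking Geom::PolygonMesh#add_point's index reuse.
# The quantization step and tolerance value are illustrative assumptions.
class MockPolygonMesh
  TOLERANCE = 0.001

  def initialize
    @points = []        # index - 1 => [x, y, z]
    @index_by_key = {}  # quantized coordinates => 1-based index
  end

  # Returns the 1-based index of the point, reusing an existing index
  # when the coordinates quantize to a key the mesh has already seen.
  def add_point(x, y, z)
    key = [x, y, z].map { |c| (c / TOLERANCE).round }
    @index_by_key[key] ||= begin
      @points << [x, y, z]
      @points.length
    end
  end
end

mesh = MockPolygonMesh.new
i1 = mesh.add_point(1.0, 2.0, 3.0)
i2 = mesh.add_point(1.0, 2.0, 3.0)  # "duplicate": the first index comes back
i3 = mesh.add_point(1.0, 2.0, 3.5)  # genuinely new point
# i1 == 1, i2 == 1, i3 == 2
```

This is the behavior we ran into: the second add_point call is a no-op that returns the first point's index.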

The problem here is needing to position some textures in specific ways on some faces, to handle where textures meet at wrapped-around seams.

I tried manually creating a simple example in SketchUp, to see how SketchUp normally handles textures that share an edge but have different texture coordinates (since they can’t have multiple UVs at the exact same vertices). I created a simple rectangle, divided it into two faces, applied a single material to both, then chose Make Texture Unique for one of the faces. Then I used Face.uv_tile_at to check what happened with the UV positions at the common edge.

To my surprise, the results indicated that the UV positions at the shared vertices were very slightly different. In other words, it looks like the way SketchUp handles vertices having multiple UVs (in this case, for two different textures, not the same texture) was by very slightly fudging the distances. The UV position values were not quite the same at those shared vertices, where I expected them to be exactly the same.

I realize that things might work differently with Face than with PolygonMesh, and that there may also be something I’m missing here (did I use Face.uv_tile_at correctly?), but this all suggests that the way to deal with our situation, when we need multiple UVs at certain points, is to slightly fudge point positions (and therefore to create pseudo-duplicate points where we had expected to create exact duplicates). We would have to combine this with creating unique textures, rather than using a single texture discontinuously (since that can’t be done). Also, I’m aware that the decimal precision of point positions is limited, so those pseudo-duplicates may have to be fairly far apart in order not to be deduplicated.
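For sizing the fudge: SketchUp merges vertices that fall within its internal tolerance (roughly a thousandth of an inch), so the pseudo-duplicates would need to sit farther apart than that. A minimal sketch of computing such an offset, using plain arrays in place of Geom::Point3d and treating the exact tolerance constant as an assumption:

```ruby
# SketchUp merges points closer together than its internal tolerance
# (roughly 0.001"; treat the exact constant here as an assumption).
MERGE_TOLERANCE = 0.001

# Offset a point along a direction just far enough that it should not
# be merged with the original. `safety` pads the minimum distance.
def pseudo_dupe(point, direction, safety: 2.0)
  length = Math.sqrt(direction.sum { |c| c * c })
  raise ArgumentError, "zero-length direction" if length.zero?
  offset = MERGE_TOLERANCE * safety
  point.zip(direction).map { |p, d| p + (d / length) * offset }
end

original = [10.0, 5.0, 0.0]
dupe = pseudo_dupe(original, [1.0, 0.0, 0.0])
distance = Math.sqrt(original.zip(dupe).sum { |a, b| (a - b)**2 })
# distance is about 0.002, comfortably beyond the merge tolerance
```

The safety factor is a guess; the point is only that the offset has to clear the merge tolerance with some margin.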

Does that sound right? I would love any insight on this. Thank you!

See this tracker issue. It may be related.

Thanks, that does seem related. We are faced with a similar problem: it seems to require a lot of gymnastics around this issue to make our models work in SketchUp, while they work fine on many other platforms. I’m attempting it from the Ruby API, where it looks like the workaround options will distort the model more than from the C SDK. We might have to try from there instead, but neither case is ideal.

By the way, in the last release the API added the EntitiesBuilder class, which will often be better (and faster) than using the old PolygonMesh class.

However, I don’t think it will solve the UV issues because this is due to vertex merging.

Thanks. We’re definitely also looking into migrating to EntitiesBuilder once we can get this figured out.

PolygonMesh only supports contiguous UVs. For non-contiguous UV mapping, EntitiesBuilder is best for performance. In older versions you would have to first generate the mesh, then take a second pass and use face.position_material, though it’s a bit of a pain to cross-reference the faces PolygonMesh creates with your original source data.

Thanks for the guidance. So, to get this non-contiguous situation to work, I’m currently rewriting our mesh import using EntitiesBuilder instead of PolygonMesh. I have a couple of questions:

  1. You mention that face.position_material is necessary for older versions. What is the newer way to set UV coordinates for a point? Or is it the same?

  2. Following the API docs, I am adding the geometry by first creating a Geom::Point3d for all of our mesh points. Then, I iterate through our mesh polygons and add a face for each polygon with builder.add_face. An example mesh I’m working with contains 9180 points and 12240 polygons. After about five minutes, this process eats up all my system memory, crashing my OS. I’m happy to post code, but do you have a sense of what I’m doing wrong?


I think what @tt_su really said was that, in older versions, it would require a second iterative pass.

With EntitiesBuilder you can call face.position_material() immediately within the build block, as soon as a valid face is returned. (This means you should test the result of EntitiesBuilder#add_face for validity.)

Also, since v2021.1 you must do any texture projection in the Face#position_material() method call.
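For reference, Face#position_material takes an interleaved array of pairs: a point on the face followed by the UV point to pin there, i.e. [p1, uv1, p2, uv2, ...] (the classic overload accepts up to four such pairs). Building that interleaved array from parallel position/UV lists can be sketched in plain Ruby; the sample coordinates are made up, and in SketchUp the entries would be Geom::Point3d values:

```ruby
# Hypothetical per-vertex data (plain arrays standing in for Geom::Point3d).
positions = [[0, 0, 0], [10, 0, 0], [10, 10, 0], [0, 10, 0]]
uvs       = [[0, 0],    [1, 0],     [1, 1],      [0, 1]]

# Face#position_material expects [p1, uv1, p2, uv2, ...]: pairs of a
# point on the face and the UV coordinate to pin at that point.
mapping = positions.zip(uvs).flatten(1)
# mapping => [[0, 0, 0], [0, 0], [10, 0, 0], [1, 0], ...]

# In SketchUp this would then be something like:
#   face.position_material(material, mapping, true)  # true => front side
```

The commented call at the end shows the shape of the invocation only; check the current API docs for the projection argument added in v2021.1.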


Posting code is always helpful.

How much memory do you have?

Is this an Intel machine or Apple Silicon (M1)?

Are you first saving the model before this heavy import?

Are you wrapping the geometry generation in an undo operation (with the GUI switched off)?

Thanks for the help. To answer your questions:

I’m working on an Intel machine with 16GB RAM built in (and no other heavy processes running simultaneously).

I am not wrapping in an undo block—and I wasn’t sure the GUI disable option would work, because I do need the results displayed in the model. But this was something I considered trying.

I was previously running this import with PolygonMesh. It took some time to execute, but did not gobble up a ton of system memory. It never seemed to try to update while in progress, only at the end.

The results will display when the operation is complete if the disable_ui flag is set.


This is because a Geom::PolygonMesh is a virtual helper object, not a model object. Its polygons do not get added to the model as faces until a call like Entities#fill_from_mesh is made.

Ah, OK, well maybe that explains why I’m now having this memory problem. Perhaps disabling GUI updates will solve it—I can test that, at least.

This is a (sanitized) version of just the code that adds the faces from an existing mesh containing arrays of points and polygons (among other things). This will crash my system, without even getting into any material positioning.

container.entities.build do |builder|
  # Initialize an array to store the SU versions of all our points.
  su_points = []

  mesh.points.each do |pt|
    # Create a new SU point out of our point and add it to our point array.
    su_points << Geom::Point3d.new(ours_to_su(pt))
  end

  # Create the SU versions of our polygons.
  mesh.polygons.each do |poly|
    idx0, idx1, idx2 = poly[0], poly[1], poly[2]
    # Skip degenerate triangles that reference the same point twice.
    next if idx0 == idx1 || idx0 == idx2 || idx1 == idx2
    face_points = [idx0, idx1, idx2].map { |idx| su_points[idx] }

    # Add the face. (Material positioning is not included here.)
    face = builder.add_face(face_points)
  end
end
Whaddya know, I wrapped this in an undo operation with GUI updating disabled and the memory problem disappeared. I’m still working out the geometry and positioning question, but that’s a big step forward. Thanks!