SUImageRef Transforms Incorrect?

When we try to match the placement of an image in our application, the transformation matrix associated with SUImageRef seems to rotate the image correctly but positions and scales it incorrectly. We assume the image’s local-space corners are (0, 0, 0), (w, 0, 0), (w, h, 0), (0, h, 0), where w and h are the world dimensions of the image object obtained from SUImageGetDimensions. Any help would be really appreciated.

Can you post some sample code and a model to go along with this, please? A bare minimum that is easy to replicate.

Sorry for the slow response. The goal is to:

  1. Create a texture from an image and save it to file if it doesn’t already exist.
  2. Place the new texture (with its unique id) on a quad with the appropriate transform, vertices, uvs, normal and faces.
  3. Repeat 1 and 2 for all images.

I’ve pasted both segments of our code below. Step 1 works, but step 2 does not produce the correct position and scale once the transform is applied to the quad. I’ve attached images of what the scene looks like in our application vs. SketchUp.

// 1: Create texture from image and save to file 
size_t pw, ph;
SUResult res = SUImageGetPixelDimensions(image, &pw, &ph);

if (res == SU_ERROR_NONE) {
	// get bits per pixel and data size
	size_t bitsPixel, dataSize;
	res = SUImageGetDataSize(image, &dataSize, &bitsPixel);

	if (res == SU_ERROR_NONE) {
		// get image data
		SUByte *pixelData = new SUByte[dataSize];
		res = SUImageGetData(image, dataSize, pixelData);

		if (res == SU_ERROR_NONE) {
			// create texture from image data
			SUTextureRef texture = SU_INVALID;
			res = SUTextureCreateFromImageData(&texture, pw, ph, bitsPixel, pixelData);

			if (res == SU_ERROR_NONE) {
				// save texture to disk
				std::string destTexturePath = texturePath + "/" + imageName;
				res = SUTextureWriteToFile(texture, destTexturePath.c_str());
				// release the texture whether or not the write succeeded,
				// otherwise a failed write leaks it
				SUTextureRelease(&texture);
			}
		}
		delete[] pixelData;
	}
}

// 2: Place texture on quad
double w, h;
res = SUImageGetDimensions(image, &w, &h);

if (res == SU_ERROR_NONE) {
	// get image transform			
	SUTransformation t;
	res = SUImageGetTransform(image, &t);

	if (res == SU_ERROR_NONE) {
		// SUTransformation stores its 16 values in column-major order,
		// so transpose here into the row-major matrix our mesh expects
		mesh.transform << t.values[0], t.values[4], t.values[8], t.values[12],
			 	  t.values[1], t.values[5], t.values[9], t.values[13],
			 	  t.values[2], t.values[6], t.values[10], t.values[14],
			 	  t.values[3], t.values[7], t.values[11], t.values[15];

		// set v, uv, n, f
		mesh.vertices.push_back(Vertex(Vector(0, 0, 0))); // Constructor: Vertex(const Vector& position)
		mesh.vertices.push_back(Vertex(Vector(w, 0, 0)));
		mesh.vertices.push_back(Vertex(Vector(w, h, 0)));
		mesh.vertices.push_back(Vertex(Vector(0, h, 0)));

		mesh.uvs.push_back(Vector(0, 0)); 	
		mesh.uvs.push_back(Vector(1, 0));
		mesh.uvs.push_back(Vector(1, 1));
		mesh.uvs.push_back(Vector(0, 1));

		mesh.normals.push_back(cross(mesh.vertices[1].position, mesh.vertices[2].position).normalized());

		// Constructor: Face(const Vector& vIndices, const Vector& uvIndices, const Vector& nIndices, const int& textureId)
		mesh.faces.push_back(Face(Vector(0, 1, 2), Vector(0, 1, 2), Vector(0, 0, 0), textureId)); 
		mesh.faces.push_back(Face(Vector(2, 0, 3), Vector(2, 0, 3), Vector(0, 0, 0), textureId));
	}
}


Looks like the code formatting got lost there - could you edit the post and wrap the code block up as preformatted text?

It should be fixed now!

Sorry for the slow response - I had to convert the snippet into one that could run on my machine. (You might want to provide a complete standalone snippet next time.)

Anyway, I made it write out the Image entity of a test file to an OBJ file, then inspected the results, both the exported values and visually. As it turns out, the image transformation converts from the image’s pixel dimensions to model space.

So this chunk:

mesh.vertices.push_back(Vertex(Vector(0, 0, 0))); // Constructor: Vertex(const Vector& position)
mesh.vertices.push_back(Vertex(Vector(w, 0, 0)));
mesh.vertices.push_back(Vertex(Vector(w, h, 0)));
mesh.vertices.push_back(Vertex(Vector(0, h, 0)));

Should be:

mesh.vertices.push_back(Vertex(Vector(0, 0, 0))); // Constructor: Vertex(const Vector& position)
mesh.vertices.push_back(Vertex(Vector(pw, 0, 0)));
mesh.vertices.push_back(Vertex(Vector(pw, ph, 0)));
mesh.vertices.push_back(Vertex(Vector(0, ph, 0)));

Where pw and ph were obtained earlier by: SUResult res = SUImageGetPixelDimensions(image, &pw, &ph);

There is one thing though - I only saw a difference in scale, not position. Maybe my test file is somewhat different from what you have. If you see a position issue then it would help if you posted a sample SKP file to reproduce it. (In general, complete minimal repro cases help a lot.)

ImageToObj.zip (83.7 KB)
I’m attaching a sample solution where I write an Image entity to OBJ. When I import this OBJ (inch units) it matches the original.


Thanks, this worked!

I have another related question. When I run the code below, count in many cases is less than numImages. Is there a reason for this? I’ve attached a skp file which contains 2 images, but I am unable to extract these images as count is 0.

SUEntitiesRef entities = SU_INVALID;
SUModelGetEntities(model, &entities);

size_t numImages = 0;
SUEntitiesGetNumInstances(entities, &numImages);

if (numImages > 0) {
    size_t count = 0;
    std::vector<SUImageRef> skpImages(numImages);
    SUEntitiesGetImages(entities, numImages, &skpImages[0], &count);
}

missing_images.skp (377.1 KB)

You are asking for component instances, not images. Replace it with SUEntitiesGetNumImages and you should be fine.

SUEntitiesGetNumInstances was a typo in my post - sorry about that. I am using SUEntitiesGetNumImages but still not getting the images.

Have you checked the return results for error codes?

Yes, SUEntitiesGetNumImages(entities, &numImages) returns SU_ERROR_NONE with numImages being set to 0.

Just had a look at the SKP file - you don’t have any images at the root of the model. You have two components which in turn contain the images.

SUModelGetEntities doesn’t yield all the entities in a model; it yields only the entities in the root node. From there you need to traverse the model hierarchy to reach them all.

Please start a new thread for a new question.