SketchUp Diffusion: Share your Feedback

Why is there a watercolour pan, brush and picture frame(?) in the watercolour style output? It should just show the SU model rendered in watercolour. Same with the pencil sketch style where it shows a mangled hand, desk and pencil. Weird.

Inpaint!

Diffusion is a creative freelancer

Hi @rolando and Happy New Year,

This is something that occasionally happens. We will look into addressing it in a future release.

Thx for reporting,
AK

Noted, loud and clear 🙂

Well, it sure is interesting. What concerns me most is that if this gets to the level some show online, rendering services will be a thing of the past. Right now Diffusion is nowhere close to Midjourney from what I can tell. LookX.ai seems advanced?

SketchUp Diffusion is an unfortunate waste of time. The materiality is so mismanaged that it makes my designs look SUPER uncontextualized! A design for the mountains looks as if it belongs downtown, with ridiculous materials compared to the materials within the SketchUp model. Even PromeAI gets this 20 times better!

It's fun to see what the robots come up with, but after a lot of playing around with settings on various models, I'm not getting anything I can really use even in an informal professional setting.

I like the idea of iterative image generation… and obviously it's early days for the tech.
So here's what I think would help:

We need a prompt library or list to draw from, and some working examples of models that generate images we like (a bit like Templates or tutorial models).

The thing about rendering is that we can make a tweak then re-render to see the results.
The formula is very consistent so we can learn quickly through trial and error.

If AI could achieve a good level of consistency and repeatability, then we could start to create 3D models that we know will render in a desired manner (a style we like) with the use of a specific set of prompts (text- and image-based).

Rendering uses a lot of formulaic settings, templates and tutorials to help people get results they like.
AI seems to be in a state of "play around and see what you get", which is fun but not so useful.

It depends on what the human expectation is. It's perhaps not going to usurp current rendering software immediately. However, for clients who might need 3 or 4 quick ideas, this tool is already going to be useful. A work in progress; I'm sure it will improve.

I would go a step further and request a simple sketch tool, whereby one could sketch over a generated image and label areas (i.e. train the AI), then request another iteration of the same image. Thereby (hopefully), by mixing text prompts (as labels directly on the image) and sketched lines, one could create rapid iterations more accurately and sooner.

I like Devine's idea to select parts of previously generated images and build upon them. Also, selecting parts of the model/groups to apply prompts to, or to leave untouched by the AI. For example, only creating a quick background and foreground for the model, leaving its materials intact.

This is what already exists on the iPad.

The iPad version features a markup tool that allows you to sketch/illustrate with simple paint tools on top of your model and save that sketch as an image overlaid on a scene. Once you have that on screen, you can use it as the basis for Diffusion. So you can have simple models, iterate on ideas by sketching, and render new options with Diffusion.

It's a much better workflow than on desktop. That's why I keep asking for a markup tool for desktop too.

Stable Diffusion features a way to control how materials are assigned to areas of a drawing. It can be leveraged from a SketchUp model by assigning colors to faces and objects. It should be implemented in SketchUp Diffusion:

SketchUp & AI - StableDiffusion, Automatic1111, Controlnet and segmentation colors - Corner Bar - SketchUp Community
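As a rough sketch of what that looks like outside SketchUp (using the open-source diffusers library and the public lllyasviel/sd-controlnet-seg checkpoint; the file names and prompt here are made up for illustration, and none of this is SketchUp Diffusion's own API), a flat-colored export drives a segmentation ControlNet like this:

```python
# Minimal sketch: feed a flat-colored segmentation export through a
# segmentation-conditioned ControlNet so each color region maps to a material.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical SketchUp export where faces were painted with flat
# segmentation colors (one color per material: wall, floor, window, ...).
seg_map = Image.open("seg_export.png")

image = pipe(
    "mountain cabin interior, natural wood and stone materials",
    image=seg_map,
    num_inference_steps=30,
).images[0]
image.save("controlled_render.png")
```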

Another way to affect only the areas of an image you want to change is inpainting, like in Photoshop:

Beginner's guide to inpainting (step-by-step examples) - Stable Diffusion Art (stable-diffusion-art.com)
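For reference, the mechanics behind that guide are simple: you pass the original image plus a mask, and only the masked region is regenerated. A minimal sketch with the open-source diffusers library (file names and prompt are placeholders, not a SketchUp Diffusion feature):

```python
# Minimal inpainting sketch with diffusers.
# White pixels in the mask get regenerated; black pixels are preserved.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("render.png")     # a previous generation
mask_image = Image.open("door_mask.png")  # white over the door region only

result = pipe(
    prompt="wooden panel door",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("render_door_fixed.png")
```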

The third way of controlling how an image gets repeated is by using a seed. The seed should allow the AI to repeat the "ideas" from a previous generation. Of course, if it repeats them but the model changes, the effect will not be the same.
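In Stable Diffusion terms, that repeatability comes from seeding the random number generator: the same seed with the same prompt and settings reproduces the same image. A minimal sketch, again with the open-source diffusers library rather than anything SketchUp Diffusion exposes today:

```python
# Minimal sketch: a fixed seed makes a generation repeatable.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "watercolour rendering of a small mountain cabin"

gen = torch.Generator(device="cuda").manual_seed(1234)
image_a = pipe(prompt, generator=gen).images[0]

# Re-seeding with the same value reproduces the exact same image;
# change the model or the prompt and the result drifts, as noted above.
gen = torch.Generator(device="cuda").manual_seed(1234)
image_b = pipe(prompt, generator=gen).images[0]
```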

I've been asking for these 3 features since the beginning, as I think they would allow SketchUp Diffusion to be much more useful for final production, or even just for getting your vision on screen faster.

Until then, it can mostly be used for iterating on random ideas. It's very useful for that.

Ah, thanks, I did not know that.

I think those things would complement my "template/recipe" desire nicely.

I'm thinking that, for 3D modelling, AI could function a bit like an "assisted renderer" that can make a model look realistic (or artistic) but will also reimagine the 3D model through multiple variations (e.g. trying out different roof shapes on a house).

Is there a way to export images at higher quality? Even if I specify the quality as part of the prompt, they're not very good when saved.

Hi @dianavg.96

Currently the export resolution is fixed at a standard size and cannot be increased or decreased.
What kind of resolution would you be looking for?

Best,
Aris

It has some promise, but as it is not working well, I looked at several AI programs and bought some time on them; nothing seems to be much better, with the exception of Midjourney, which I believe is the oldest and most prolific. I tried another one, ArkoAI, and it seems worse than Diffusion, and they charge $30.00 a month? One major hurdle is landscape interpretations of plants, trees, and people!! Long way to go there. I had some great success with IMG2GO, but it works only from descriptions; you can't load drawings or photos into it for art. It does have a photo app version. Meanwhile, it's back to rendering the old-fashioned way, which right now is faster in the long run and much more accurate.

Full HD, 4K, or iPad Pro screen size.

As renders are mostly presented fullscreen, those might be the most common formats.

I have to make two interior renders. The existing place already has a kitchen with materials the owner wishes to keep, as well as the flooring, so I started modeling and adding textures. When I render, some doors are turned into windows, the bot changes textures even after prompting it not to, the TV is turned into another window, and I have a curtain on one wall yet in some renders every wall is made out of curtains. It's a really shitty plugin for projects that already exist and only need a glow-up. PURE TRASH

TV is turned into another window