SketchUp Diffusion Feature Requests

I do have an iPad and have used the markup tool with the Apple Pencil; it is useful. I was just pointing out that Diffusion flattens the screen image anyway, and what you feed it from SketchUp can be 2D.

The request for a markup tool in Pro is different; it would perhaps be better for those using Wacom tablets. It’s the writing “on the screen” that really makes it work.

Desktop kinda has one already: it’s called Annotation, an extension that uses the overlay feature. Wonder if it works with Diffusion…

The annotation tool works, kinda.

For some reason, Diffusion sees double. Here is a poorly drawn circle, yet Diffusion sees it on the side, and with a clone.

And it works. SU sees the annotation markup.

1 Like

Any drawing works, and you can use overlays in styles too; the thing is that markup on the iPad is much easier and more fully featured.

1 Like

Yeah, absolutely. And it doesn’t have double vision either :slight_smile:

1 Like

Early-days comments, as I’ve only seen this today. I’m fresh to SU23 (not to SU), so it’ll be interesting to see how I can utilise Diffusion.

As an illustrator I already use some elements of AI generation, mostly as a form of inspiration for forms and composition, not really adding AI-generated elements to finished products. I make landscaping and architectural visualisations using SU and then illustrate over them by hand.

Often clients need specific feels of planting and particular species in particular spots. The question that arises for me is how specific I can be with Diffusion when prompting for particular areas of the model (‘Carex brizoides under Siberian Birch in the left-hand bed by the garden office’, for example).

At the moment there is not much I can say about the AI images. They are very typically AI-generated, with weird blending of chairs into the ground, trees into the sky into a cloud, etc… None of this is good enough for professional images of my clients’ designs, personally speaking, and it makes me wonder how much time I would need to spend typing and iterating on a prompt list that only gets me perhaps a flower bed with some partially useful shapes and forms, which I would then cut and paste, feathering or masking them into a multi-media visualisation. I can easily see I’d rather draw the thing itself with my own hand and do a job that’s just as good and much more accurate from the get-go.

Please don’t get me wrong, this isn’t a dissing of the quite frankly amazing and exciting prospect of fast AI render-style images generated in quick succession. It definitely serves well the idea of super-quick concepting of scenes, settings and vibes of a design concept. It’s just that my clients ask me to make images to win a pitch with very specific items and furniture. Therefore I welcome the idea put forward by others in this thread of allowing more reference images (it would be interesting if I could add mood-board images sent by my client, for example).

Ideally, I’d like to be able to train our own version of the AI to produce our own styles, and let the model improve recursively on our personal style.

Specifying textures on surfaces via image prompts would be cool…

Interesting about the rights to use the imagery for commercial purposes… Perhaps ironic… I bet the AI has been trained on vast numbers of images scraped from the internet that have not all been cleared for copyright…

2 Likes

Is the SketchUp Diffusion feature using the GPU or the CPU?

It is not a rendering application. It basically uses AI to manipulate a screenshot of what is visible on your SketchUp screen.

Fair enough. I was kind of asking for a friend who asked me. His response is as follows:

“GPU will do massively multi threaded, like thousands at a time as opposed to a max of 24 (mine) on the CPU. And the vector math and floating point math are exponentially faster.
CPUs are still doing GFLOPS whilst your GPU is doing tens of TFLOPS, as in trillions of floating point operations per second. Whilst this doesn’t quite translate to AI performance it is useful as a relative guide to what you may get.”

I’m only here to see the development be as good as it can be. I’m just asking out of general intrigue…

AI has nothing to do with your device, be it an iPad, laptop or desktop machine…
Everything is done on servers far away!

3 Likes

I’m really enjoying Diffusion at the moment, but there’s one thing that bugs me… I used Diffusion to render a SketchUp model with a blue floor (a specific, important blue), but all the renders change the floor colour to beige. I’ve tried adjusting the Geometry and Prompt Influence parameters, and I’ve tried adding ‘Blue floor’ to the prompt box, but I can’t get it to preserve the colours in my model… Is this something that can be user-influenced at all?

The prompt can help; in the prompt I will reiterate the style I want and the materials, like “photorealistic image of a modern house with a blue tile floor and …”. If it does not work the first time, try again!

1 Like

OK, Diffusion is bad at its current stage. Sorry, but considering how much this company charges for a subscription, it essentially just takes a screenshot and runs it through an AI generator using “image to image”. There are lots of sites you can do that on… Also, if you have a newer PC you can just run this locally with way better results, using Automatic1111 and so on (it gives you way more options as well). Hell, I’d rather pay Google Cloud than eventually pay for an extra subscription to use this feature (if that’s ever a requirement). It’s all open source after all, and you’ll always get more features faster elsewhere, especially considering all this does is take screenshots and run them through an AI generator.

Without ControlNet this is rather bad. It causes all sorts of issues, like swapping out objects, materials, etc. There’s not even a way to select part of the image and regenerate just that part.

Question: what resolution does this actually render at, before upscaling?

Missing features for Diffusion (for now), with a rough sketch of what they look like in an open-source pipeline after the list:

  1. Negative prompt: stating what you want it not to do.
  2. ControlNet: makes sure things remain as they are, if someone takes the time to set it up.
  3. Inpainting: say you want to regenerate part of the image, and not all of it, with a mask. Pretty self-explanatory.
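
For anyone who has not met these features elsewhere, here is a minimal sketch of what the first and third (negative prompt and mask-based inpainting) look like in the open-source Hugging Face diffusers library. The model ID, file names and prompt are illustrative assumptions, not anything SketchUp Diffusion exposes, and ControlNet would need its own pipeline and conditioning model on top of this.

```python
# Minimal sketch: negative prompt + inpainting with a mask, using diffusers.
# Everything below (model ID, file names, prompt) is a placeholder assumption.
import torch
from diffusers import StableDiffusionXLInpaintPipeline
from PIL import Image

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",   # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("sketchup_screenshot.png").convert("RGB")  # exported SketchUp view
mask_image = Image.open("flower_bed_mask.png").convert("RGB")      # white = regenerate, black = keep

result = pipe(
    prompt="photorealistic garden, Carex under birch trees",
    negative_prompt="blurry, distorted furniture, floating objects",  # 1. negative prompt
    image=init_image,
    mask_image=mask_image,                                            # 3. inpainting mask
    strength=0.6,   # how far the masked area is allowed to drift from the source
).images[0]
result.save("regenerated_bed.png")
```
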
1 Like

Usually in newer versions of image2image there is a negative prompt, which means stuff you do not want it to do. That would also help pin things down.

However, you could try adding it in () brackets; that usually gives it more emphasis. The more brackets, the more emphasis it gets. So, for example, “(((Yellow))) cat” should give significant emphasis to it being yellow.
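
For what it’s worth, Automatic1111-style web UIs conventionally multiply a token’s attention weight by roughly 1.1 per bracket level; whether SketchUp Diffusion honours this syntax at all is unknown, so treat this only as a sketch of the convention.

```python
# Sketch of the Automatic1111 prompt-emphasis convention (assumed factor of 1.1 per bracket pair).
def emphasis_weight(bracket_levels: int, factor: float = 1.1) -> float:
    """Approximate attention weight for a token wrapped in N pairs of parentheses."""
    return factor ** bracket_levels

print(emphasis_weight(1))  # (yellow)     -> 1.10
print(emphasis_weight(3))  # (((yellow))) -> ~1.33
```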

However, without the option of using a ControlNet, there’s going to be a significant chance of it screwing something up.

@tweenulzeven
This service is being run on remote servers, but image-to-image XL can also be run locally on a laptop.
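
As a rough illustration of that kind of local run, here is a minimal SDXL image-to-image sketch using the open-source diffusers library; the model ID, file names, prompt and parameters are assumptions, and how well it runs depends heavily on the GPU and VRAM available.

```python
# Minimal sketch: local SDXL image-to-image with diffusers.
# Model ID, paths, prompt and parameters are placeholder assumptions.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # shuffles weights between GPU and CPU, useful on low-VRAM laptops

source = Image.open("sketchup_view.png").convert("RGB")  # exported SketchUp view

out = pipe(
    prompt="photorealistic modern house, blue tile floor, soft daylight",
    image=source,
    strength=0.5,        # lower values stay closer to the source geometry
    guidance_scale=7.0,
).images[0]
out.save("render.png")
```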

SketchUp isn’t charging you more for Diffusion, so they are not really related…

1 Like

A bit rude. It is what it is. It’s not bad for me. I actually think it’s a very nice addition.

It could be improved and that’s why this is a feature request thread. Fortunately you made some requests that are very cool.

Where are you seeing that there’s an additional subscription for this feature? As far as I can see, it’s available even on the free online version… Did I miss a note somewhere?

Well, you can access it there because you have a subscription :wink:
But it’s only accessible to Go and Pro subscription holders, not legacy licences, LAB licences, or free users.

Imo, SketchUp is a generalist 3D modeller, meaning that I’m not expecting everything to be 100%:

It can make architectural drawings, but if you want to push it to the level of a Revit or an ArchiCAD, you’ll have to add specific extensions, and pay for some.
It can make terrain and topographic studies, but all the same, you might need to get some extensions to go further.
It can make some good woodworking plans, but here again… extensions.

That’s the whole logic behind SketchUp: provide generic tools, but if you need to go further, look for extensions.

Same here. I don’t expect SketchUp Diffusion to reach the levels of customization and computing power that you can get with some (already existing) AI extensions, or even by running diffusion locally with all your parameters.
Sure, it could do more than it currently does, but it’ll never be as good as doing it in a dedicated extension or the actual tech.

1 Like

Is that actually a 3D model, or just an image that looks like one? If truly 3D, what model format? Generative AI usually works by a sort of paste-up of images it saw in its database but doesn’t really understand what it is showing.

That was just a PDF from a housing site 😉