What is the best AI generator tool that creates the most accurate Ruby script?

Why don’t you ask “them”? :slight_smile:

Do you have experience with Ruby? Are you familiar with the Sketchup Ruby API?
Can you judge which one is better if you ask “them” to generate code? Do you know that there are many “accurate” code solutions for one task?
What does “accurate” mean?
Are SketchUp’s own employees more accurate when creating native extensions, or are there perhaps more accurate developers who publish on EW or elsewhere?

Which is more accurate? (Both give the same result.)
This:

model = Sketchup.active_model
entities = model.entities
center_point = Geom::Point3d.new
normal = Geom::Vector3d.new(0, 0, 1)
edges = entities.add_circle(center_point, normal, 24, 24)

Or This:

Sketchup.active_model.entities.add_circle(ORIGIN, Z_AXIS, 24)

They all fail miserably.

Google claims that if you pay them, their AI Plus edition can write better code. I don’t believe it. Serve me poorly written, erroneous code and then ask me for money to produce good code? No thank you.

The two most common things I see AIs doing:

  • Not using the SketchUp API’s official documentation as the primary reference.
  • Producing code that makes method calls that do not exist in the API.
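One cheap defense against that second failure mode is to check, before running AI-generated code, whether the method names it uses actually exist on the objects they are called on. Here is a minimal sketch in plain Ruby (no SketchUp required; the helper name `hallucinated_calls` is made up for illustration). The same `respond_to?` idea works against SketchUp API objects inside SketchUp.

```ruby
# Sketch: flag method names that do not exist on a given object,
# e.g. calls invented by an AI. Plain Ruby, runs anywhere.

def hallucinated_calls(object, method_names)
  # Keep only the names the object does NOT respond to.
  method_names.reject { |name| object.respond_to?(name) }
end

suspect = %i[push pop shuffle_in_place]  # :shuffle_in_place is made up
missing = hallucinated_calls([], suspect)
puts "Nonexistent methods: #{missing.inspect}"  # => [:shuffle_in_place]
```

It is only a smoke test, of course; it catches invented method names but not wrong arguments or wrong semantics.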

Thanks Dan, that has been my experience. However, the one that was the best of the worst was ChatGPT. I tried several others like Grok, Gemini, Claude, etc., and at least with ChatGPT it was somewhat close to my request. But it was still lacking. Thanks for the confirmation. Joe

Right. ChatGPT. I’ve tried it also. They can write snippets that work, but often fail at well-organized, full SketchUp extensions.

BTW, re your previous post. Please don’t spam the topics. That stuff belongs on your forum profile page.


Bear in mind how LLM AI works: it gobbles up massive amounts of data, analyzes it to find patterns based on correlation, and uses those to “understand” your request and to generate results.

For boilerplate tasks (which are admittedly a significant part of coding) and stock algorithms and code patterns, it does well. But it falls on its face when faced with a request that doesn’t quite match any pattern it found in the source data, i.e. an innovative solution.

My acid test for a SketchUp Ruby AI generator is to ask it to reproduce the native Offset Tool’s action. I’ve yet to have one that works, even ignoring the special cases such as when the selected face has sharp corners that force Offset to generate multiple shapes. I suppose I could spend a lot of time trying to refine the query until AI understands, but in that amount of time I can write my own code that works as required.


Actually, I solved your Offset Ruby issue 14 years ago with this:

https://sketchucation.com/pluginstore?pln=TIG_Smart_offset

It’s so old it’s just a script rather than an extension.
I’m surprised that AI hasn’t scraped its code from somewhere - although it is a bit arcane by today’s standards…


I have wondered where AI is scraping to get examples of SketchUp Ruby code. Seems like a small niche compared to general code.

It certainly isn’t sufficient to suck in the Ruby API docs. The GitHub repository of issues is proof that the API docs are rife with misstatements, errors, omissions, and bad examples.

Aside from the frivolous “anywhere they can”, we don’t know. There is a vast amount of old plugin and extension code on many sites that can be mined. Some of it is obsolete due to Ruby and API changes, but a lot is still valid. But for a number of years now a significant fraction of extensions are encrypted. Are these inaccessible to AI harvest, or could the harvesters have learned from hacker sites how to decrypt extension code? Seems like there could be grounds for suits over theft of proprietary information.

I found that the AI likes to scrape StackOverflow. I had to add a directive to Gemini not to use StackOverflow.

I agree with Dezmo and Dan. From my recent experience with LLMs, coding with AI can be really frustrating if you don’t know how to prompt it correctly.

Regarding the add_circle example: without a human who understands the API to guide and correct it, the AI might just spit out the verbose code.

We shouldn’t expect AI to match the legendary SketchUp masters on this forum; that would underestimate human experience, effort, and critical thinking. However, once you teach the AI the rules of the game (like Ruby and the SketchUp API), it becomes an incredibly useful tool.

Ultimately, the only limit is your own prompting skills.

You could give Claude (chat) a go. I did a few tests last week and I’m fairly impressed. Claude fixed a few issues in my own plugins, added some extra functions to existing plugins of mine, and created a few brand new ones (not too complex, less than 250 lines of code). The only thing I did was explain what I wanted the plugin to achieve.

My setup is as follows: I have a NotebookLM where I upload resources as txt files (SketchUp stubs, various notes, public examples from @DanRathbun, etc.). I set up some design principles, a roadmap, and so on, and regard this as the source for my prompts, which I provide to a Gemini code agent.

Both the agent and NotebookLM can act a bit like a young adolescent (one has to repeatedly tell them what to do, and they still don’t listen :) ). But working together helps me keep control. I use a PowerShell script that wraps all the different .rb and .js files into one big bundle.txt file, so I can work around the per-file upload limit for Gemini, for example…
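For anyone curious about the bundling idea, here is a sketch of the same approach in Ruby rather than PowerShell (the function name and header format are my own invention, not the poster’s script): concatenate every .rb and .js file under a folder into one text file, with a header line per file so the AI can tell where each source file begins.

```ruby
# Sketch: bundle all .rb and .js files under source_dir into one
# text file, separated by "### FILE:" headers. Returns the file count.

def bundle_sources(source_dir, out_file)
  files = Dir.glob(File.join(source_dir, "**", "*.{rb,js}")).sort
  File.open(out_file, "w") do |out|
    files.each do |path|
      out.puts "### FILE: #{path}"   # header so the AI knows the boundary
      out.puts File.read(path)
      out.puts                       # blank line between files
    end
  end
  files.length
end
```

Usage would be something like `bundle_sources("my_extension", "bundle.txt")`; the headers also make it easy for the model to answer “which file does X live in?” questions.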

bundel_extensie.zip (680 Bytes)

LayOut_API_Bundle.txt (325,6 KB)

SketchUp_API_Stubs_Bundle.txt (1,5 MB)

Current state: working on dynamic table fields for PDF creation of drawing lists:

DynamicWidths_Report_1773181806.pdf (30,3 KB)

MW_Suite_versie_5.3.91_Bundle.txt (867,7 KB)


I recently shared a new repo for AI workflows showing .md files on GitHub.

You can check it out here.


For Ruby coding I get the best results with Claude (Sonnet 4.6 Thinking). I’m only using the free plan. I think the results would be even better with the Pro plan; at least you get higher usage limits there.

(I tried Claude, ChatGPT and Gemini (all free))

Your description of how AI works is right on the mark. AI will never be able to write an original work because it is a best-guess algorithm. Any idea of accuracy is out of the question.

Anyone trying to code using any AI tool needs to first understand how AI works before trusting an AI output. As the name says, AI stands for artificial intelligence, or better put, machine-based guessing. The AI program takes your request and compares it to similar cases in a (hopefully) extensive database. Anything that is not precisely in the database is not thrown out; it simply takes a best-case guess to develop an output. The accuracy of that guess depends on how far away similar cases are from your request. Today’s LLM databases are behemoths, but they can still never hold every detail of everything. Bottom line: if you’re looking for accuracy, don’t look to AI for a solution.


Try Cursor. It helped me the most. ChatGPT was just arguing most of the time.

I’ve only ever used ChatGPT, and it is getting remarkably good at generating code. Sometimes it hallucinates, though, so you do need to watch for that. One thing it is really good at is checking for basic logical or syntax errors. I use it almost every day to help me debug my code before I start testing it. It will usually find obvious (stupid) errors with relative ease, like a missing bracket or quotation mark (some of these simple errors have sometimes cost me hours of searching just to find them).
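For the purely syntactic part of that (a missing bracket or unclosed quote), Ruby can also check code locally without executing it. A minimal sketch, using MRI’s `RubyVM::InstructionSequence` (CRuby-specific; the helper name is mine):

```ruby
# Sketch: report whether a Ruby source string parses, without running it.
# Useful for catching a missing `end` or bracket before a long debug hunt.

def syntax_ok?(source)
  RubyVM::InstructionSequence.compile(source)  # parse/compile only
  true
rescue SyntaxError => e
  puts "Syntax error: #{e.message.lines.first}"
  false
end

syntax_ok?("def f(a); a * 2; end")  # => true
syntax_ok?("def f(a); a * 2")       # missing `end` => false
```

The command-line equivalent is `ruby -c file.rb`, which prints `Syntax OK` or the first syntax error. Neither catches logical errors, which is where an AI review can still add value.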

Look at the block of code it generated in this latest post on my Medeek API thread:

I really don’t have any complaints with this. Perhaps it’s a bit heavy on the comments, but that is probably for the better in this instance. This code block was entirely generated by ChatGPT; I did not create or modify any of it. It is clean and well organized; stylistically I might change a few things, but it is certainly highly readable.

I have had mixed experiences. Claude told me an API method didn’t exist when I already knew it did, and tried to invent a convoluted workaround. On the other hand, it gave me very useful advice on code and processes within timer blocks and their effects on updating pages and views and the subsequent export of images. It did take a lot of iterative ‘discussions’, but eventually some useful code was concocted and my issue was resolved in a way I wouldn’t have come up with left to my own devices. It needed ‘us’ working together to get to the solution, so for a new user it wouldn’t be so helpful, as it would produce some pretty impenetrable code. I myself had to ask for clarification of what the parts did a few times!


This is fascinating reading for a numpty like me who has never been able to learn coding and has always dreamt of a “translator” interface that would transform plain language into commands and routines to do stuff. I’ve tried ChatGPT and gave up, so reading you folks who know about these things coming up pretty empty-handed leads me to think that these AI agents are really not going to replace those of you with coding skills anytime soon. That is actually a relief, even to me, who would like a way of having plugins without always relying on others and their generosity. Fundamentally, though, I’m flummoxed as to why these agents fail. I mean, if somebody taught them the “language”, they should be able to code accurately, no? I like the idea that they are like (stubborn) teens as an explanation :wink: Anyway, let’s see what the SU team comes up with in their own Assistant interface; maybe they will cajole their adolescent into becoming mature enough to more or less get the job done?
