I first started using ChatGPT to help me code about six months ago. Prior to that I didn’t think AI was strong enough to be useful; I actually thought it was a gimmick more than anything else, just a more clever auto-complete algorithm, like my cellphone on steroids. I do agree that it is still not perfect and that it cannot completely replace a developer like myself, at least not yet. However, I use ChatGPT almost daily now while I code, specifically for repetitive tasks (e.g. populating an array or hash with key/value pairs based on existing data). I also use it to check my code for obvious errors, both logical and syntactic. For these types of tasks AI is incredibly powerful and very useful.
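To make the “repetitive tasks” point concrete, here is a minimal sketch of the kind of boilerplate I mean (the lumber names and values are just illustrative, not from any plugin):

```ruby
# Hypothetical repetitive task: build a lookup hash of key/value pairs
# from existing parallel data — exactly the kind of tedium AI handles well.
sizes  = ["2x4", "2x6", "2x8", "2x10"]   # nominal lumber sizes
depths = [3.5, 5.5, 7.25, 9.25]          # actual depths in inches

# Zip the parallel arrays into a hash of nominal size => actual depth.
lumber_depths = sizes.zip(depths).to_h
# => {"2x4"=>3.5, "2x6"=>5.5, "2x8"=>7.25, "2x10"=>9.25}
```

Trivial by hand once, tedious when the lists run to dozens of entries; that is where dictating the data and letting the AI emit the structure pays off.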
When I was programming the matrix analysis engine for my new Engineering plugin (Aug.–Dec. 2025) I wrote most of the code myself; however, I did have very specific edge cases that needed to be dealt with. These were very complex problems, and ChatGPT struggled to figure them out (as did I). However, with a few insights and hints from me along the way, and by allowing the AI to do most of the sandboxing, it was eventually able to figure out the right algorithm or technique to deal with these cases, much to my surprise. Essentially the AI was able to reason about the problem and finally arrive at a logical and rational solution. So yes, they can think, just like us.
I think we are still a ways off from being able to set these agents loose on a task and expect reliable results. In my experience it’s like having a really smart grad student standing over your shoulder giving you helpful hints, but you wouldn’t want to put them in charge of everything just yet.
Also, when it comes to understanding language syntax and methods, AI is superior to most of us; they understand HTML, CSS, SVG, Ruby, etc. almost perfectly, at least from my perspective. I no longer have to use online resources (e.g. Stack Overflow) to figure out ways to write my code. If I have a code question, ChatGPT will instantly provide a very comprehensive answer. If I need to adjust some formatting on my webpage, ChatGPT can easily figure it out, usually far quicker than I can.
If these engines keep improving, and their memories and context windows get even larger, then all bets are off. At some point we will outsource most, if not all, development to them. I think it is only a matter of time, maybe 6 to 24 months, before we no longer need to write much of the code that we do. It is coming; the passing of the torch to AI is inevitable in my honest opinion. It is just a matter of when this becomes the norm.
The problem is that AI engines do not THINK. They are pattern matchers.
They match patterns in existing information through iterative processes to arrive at the “best” match answer.
So they need to match against good, primary information sources. The most common problem I found with using the online editions is that (for some reason) the SketchUp Ruby API documentation is never the first source they try to match against. This often leads to them inventing bogus method calls that do not exist in the API.
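One defensive habit that helps when pasting AI-generated Ruby (a general sketch, not tied to any particular API) is to check at runtime that a suggested method actually exists before trusting it:

```ruby
# AI-suggested method names can be hallucinated. Ruby lets you verify
# that an object actually responds to a method before calling it.
def safe_call(obj, method_name, *args)
  unless obj.respond_to?(method_name)
    warn "No such method: #{obj.class}##{method_name} (possible AI hallucination)"
    return nil
  end
  obj.public_send(method_name, *args)
end

safe_call("hello", :upcase)     # => "HELLO"
safe_call("hello", :make_bold)  # warns and returns nil — the method doesn't exist
```

It won’t catch a real method used wrongly, but it turns an invented call into a logged warning instead of a crash deep inside an extension.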
I strongly disagree with you on this point, and I don’t usually dare disagree with you, especially on matters of Ruby, SketchUp and writing code in general.
Earlier versions of ChatGPT did not think, but then they implemented this chain-of-thought reasoning business, and now the darn things are able to rationally and logically grind away at a problem. They are able to “think.” I’m not sure how that compares exactly with what we do when we “think,” but at least it appears that they are able to do this now.
They are not sentient or anything of that sort, but they are able to reason their way through a problem; it is really quite impressive.
Well, I will admit I haven’t tried ChatGPT in quite a while. I’ve been busy getting frustrated with the Gemini and Copilot “flavoured” editions. They constantly ignore the directives I’ve set up (be concise, don’t spew extra text, don’t repeat answers, stay on topic, stop adding suggestions, etc.) and then respond with apologies and excuses when I scold them for it.
They also sometimes get confused and bring in facts from a previous question, muddying the answers, even after I have told them to drop that fact. It is almost comical sometimes.
I have not set up a custom AI like Jack has done, which makes more sense.
Maybe I am a bit late to the topic, but I will give my opinion because it’s free.
I use ChatGPT; it is not perfect. What I did was give it the SketchUp .rb manual as its only source, and I told it that whenever it couldn’t find a solution in that source it should come over here to the Developers forum, search for past solutions, and give me the URL. Only if it could find no solution at all did I tell it to start trying solutions of its own.
It took me two months to get the two scripts I currently have working correctly. The scripts are very simple.
The first one only reads the folders and the tags; from there it creates scenes with X names that turn those tags on and off depending on what I wrote initially.
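For readers curious what such a script looks like, here is a rough sketch of that scene/tag idea — my guess at the approach, using only documented SketchUp Ruby API calls (`Pages#add`, `Page#set_visibility`), with a guard so it is a harmless no-op outside of SketchUp:

```ruby
# Rough sketch (not the poster's actual script): create a scene that
# shows only the named tags. In the Ruby API, "tags" are Layer objects.
# The defined? guard makes this a no-op when run outside SketchUp.
def create_scene_toggling_tags(scene_name, visible_tag_names)
  return nil unless defined?(Sketchup)
  model = Sketchup.active_model
  page  = model.pages.add(scene_name)           # add a new scene
  model.layers.each do |tag|
    # Turn the tag on in this scene only if its name was requested.
    page.set_visibility(tag, visible_tag_names.include?(tag.name))
  end
  page
end

create_scene_toggling_tags("Framing", ["Walls", "Studs"])
```

Looping over folders of tags and generating the scene names would layer on top of this core.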
The other one takes the scenes I created and exports them to LayOut. This last one was very difficult to do, and by telling it to come over here and read past solutions, it found an answer from DanRathbun explaining something very similar.
I believe, like others have said, that it is best for populating data or doing simple scripts, and the best thing you can do is feed it only source material.
Fascinating reading… you guys know what you’re talking about, so I’m just watching, as it were. Half of me wants these AI agents to fail; I hate the idea of them one day doing my job, and ditto any other person’s job. But at the same time, coding is such a mystery to me that I’d love a “coder” at my desk all day who could transform my requests into viable plugins. I guess it’s a case of be careful what you wish for!
I don’t think the large language models actually think; it’s still based on ‘weighing’ possible outcomes. Except they don’t weigh outcomes like humans do (should I do this or that, 2–5 options max) but with millions of possible options to weigh. Their strength is reading through thousands of pages and indexing them; we humans have to do that ‘on the fly’ or with a ‘gut’ feeling.
My journey actually started 35 years ago, when I was programming and it took me three weeks to draw a rectangle on the screen. I had heard about second-, third- and fourth-generation programming languages and was hoping back then that we would soon be able to ‘explain’ to the compiler what we needed.
Fortunately, the company I work for has a strong affiliation with AI, since it (w/c)ould mean disruptive changes are to be expected in the AECO world. So we do have access to AI (although I hit a limit last week).
As I explained earlier in this thread, it’s all about reading masses of text. One can create specialized ‘Gems’ or AI agents and tell them how to behave and how we expect them to answer (check out ‘frameworks’ like ‘RISEN’ or ‘RODES’). In the end, it’s all about control…
I dived into AI last year with some Ruby agents, but now I am creating ‘boilerplates’ in a Vite/React environment to create extensions for Trimble Connect.
Oooh… I love this (both the conversation about AI and the possibility to actually create an extension at some point without learning to code).
On the topic of “does AI think?”, I will drop this definition:
To think means to use one’s mind to reason, judge, conceive ideas, or form opinions. It involves mental processes like reflecting, believing, or remembering.
Now… if we can accept that AI has a “mind” that is made up of the information that it is equipped with, then does it actually think?
It’s not able to think, but it did a plugin for me that evolves; it is doing a new workflow that evolved; it is creating a breakthrough workflow that is evolving; it is changing my work completely as we speak.
It doesn’t think but I’m not hallucinating. The output is real.
I’m going to say that ChatGPT is almost at the point where you can possibly code an entire extension with it. The only caveat is that it will probably take quite a few iterations before it gets it completely right. The trick is to set up a framework of what you want to do and then break these tasks or tools into logical steps (pseudocode). Then give ChatGPT the overall big picture and the logical framework for each task. I guess what I am trying to say is that prompting the AI with exactness is probably the key to making this really work. Nebulous requests give it too much latitude, in my honest opinion.
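To make “prompting with exactness” concrete, here is a hypothetical example of the kind of big-picture-plus-pseudocode framework I mean (the tool and step names are invented for illustration):

```
Big picture: a tool that labels every door in the model.

Task 1 — collect doors:
  for each component instance in the model
    if its definition name starts with "Door"
      record the instance and its bounding box center

Task 2 — place labels:
  for each recorded door
    add a text annotation at the center, offset upward
    set the text to the definition name

Constraints: Ruby, SketchUp API only, no external gems,
wrap all model changes in a single undo operation.
```

Handed a spec like that, one task at a time, the AI has far less room to wander than it does with “make me a door labeler.”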
However, at times you may want to let the AI explore various solutions to a problem; it is actually quite good at that. This morning I wrapped up my update to the baluster upgrade in the Wall plugin. I was excited about the new feature, but I was dismayed by the extra time penalty this additional geometry imposed. I asked ChatGPT to give me some possible solutions, and one of them turned out to be the perfect fit. Not to say that I could not have come up with it eventually after some brainstorming, but it certainly sped up my development process.
I use ChatGPT almost daily now while I’m coding, if for nothing else to check for syntax errors and logical flaws, and often to pick my code apart and make me a better Ruby coder. I can’t say it has completely replaced @DanRathbun, but a lot of the silly “programming” or API questions I used to ask on the forums are now quickly answered by ChatGPT.
I want to thank you all for your responses, input and guidance. As you can see from the multiple comments and responses, this is a hot topic. I recently started building a media cabinet, and I asked ChatGPT to design it for me using a picture as a baseline. It did a good job but needed a lot of edits and help once I used a Ruby script to import it into SketchUp. Then, through this conversation, I tried Claude and Grok. Grok did a horrible job when I imported the file, but Claude did the best of all of them. It not only designed a better rendition of what I was looking for, it also accepted edit commands better than the others when I brought it into SketchUp, not through Ruby but as COLLADA. The import was surprisingly accurate and editable. So far I’m a Claude fan when it comes to building a drawing in AI and then moving it to the SketchUp editor. Thanks again for the lively discussion.
Honestly, this is the perfect recipe to get AI slop.
The proper way to get AI working well with Ruby (or any other programming language) is not: “This is a photo, I have an idea, do it for me.”
You need to:
Understand the general logic about how computers and code work
Understand how Sketchup (or whatever domain you want to apply AI coding) works
Understand how AI works (context window, token consumption, hallucinations and stuff)
Set up a local file system that the AI agents have access to
Create your agents, roles, workflows, rules, skills, architectural documents, error-log, hand-off files and whatnot
Design the architecture of your tool (possibly modular, because it’s easier to maintain)
Figure out how your algorithms should work step by step and describe them in “pseudocode”, possibly creating markdown files yourself
Work surgically on a single feature at a time and do a lot of testing
After testing, find bugs, fix bugs, update skills, design new features, update documents, update workflows and test again… and do it over and over again.
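As a sketch of the pseudocode step above (the file name and feature are invented for illustration, loosely inspired by the quad-modeling work mentioned below), such a markdown file might look like:

```markdown
# feature: edge-loop-select.md

## Goal
Select the full edge loop starting from one picked edge.

## Algorithm (pseudocode)
1. Get the picked edge from the current selection.
2. At each end vertex, list the connected edges.
3. Continue along the edge whose direction deviates least
   from the current edge (below a tolerance angle).
4. Stop when the loop closes or no candidate passes the tolerance.
5. Add all collected edges to the selection inside one undo step.

## Notes for the agent
- Use only documented SketchUp Ruby API calls.
- Log failures to the error-log file with the model state.
```

One file per feature keeps the agent’s context small and makes the “fix, update, test again” loop in step 9 much easier to manage.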
I’m not a programmer, but by following best practices I’m achieving more than decent results with pretty complex plugins that allow quite advanced functionality that I have wanted in SketchUp for years (mostly quad-modeling related).
But there’s a lot of reasoning needed on the human side if you don’t want to fall into AI-slop territory.