I have been playing around with some more structural dynamic components and asked ChatGPT to assist… I just had an interesting conversation regarding the sample results it gave me to validate my DC formulae.
PS… that DC is only a work in progress; there are other errors embedded in it!
As a side note… does anyone know what level of mathematical precision DC formulae calculate to? When dealing with Modulus of Elasticity and Moments of Inertia the values are very extreme and I am triggering errors… Thankfully in metric I can just move the decimal point. I feel sorry for all you Imperialists!
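For anyone hitting the same thing, the extreme magnitudes are largely a units issue. Here is a minimal sketch in plain Ruby (hypothetical beam values, and I'm assuming the DC engine does its arithmetic in double-precision floats, which I haven't verified) comparing the classic mid-span deflection calculation in base SI units against consistent N and mm units:

```ruby
# Same beam, two unit systems: delta = 5*w*L^4 / (384*E*I).
# In base SI the E*I product alone reaches ~1.8e7, with other terms
# spanning ~16 orders of magnitude; in N/mm units everything stays
# within a few orders of magnitude of 1. All numbers are hypothetical.

w_si, l_si = 5_000.0, 6.0        # load N/m, span m
e_si, i_si = 2.1e11, 8.36e-5     # E in N/m^2, I in m^4
delta_si = (5 * w_si * l_si**4) / (384 * e_si * i_si)  # metres

w_mm, l_mm = 5.0, 6_000.0        # load N/mm, span mm
e_mm, i_mm = 210_000.0, 8.36e7   # E in N/mm^2 (MPa), I in mm^4
delta_mm = (5 * w_mm * l_mm**4) / (384 * e_mm * i_mm)  # millimetres

puts delta_si * 1000.0  # ~4.81 mm
puts delta_mm           # ~4.81 mm — same answer, friendlier magnitudes
```

The N/mm² for E and mm⁴ for I pairing is the usual structural convention for exactly this reason: the intermediate products stay well inside any float range before you ever move a decimal point.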
Wait until the insurance companies wake up to these situations… Sorry, but if you didn’t carry out verification or undertake appropriate checks then you’re not covered.
I think the OP gives a prime example of where AI has the ability to clog up the courts. If someone installs a beam because AI says it should do the job and the beam subsequently fails, then who or what is at fault? You can be sure the AI masters won’t be picking up the bill anytime soon if they can avoid it, and the insurers won’t want to take them on. It’s a scale issue: one engineer makes a miscalculation and hopefully won’t make it again, but an AI that learns from a bad example can apply the same error thousands of times where no one checks the result, and the effects could be devastating. AI has as much potential to become a risk as an assistant.
Until such time as an AI service provider offers insurance-backed certified results, I would approach AI cautiously, particularly in areas where physical performance is critical.
I just asked ChatGPT to what extent its answers are prone to error. Here is what it said:
ChatGPT is prone to errors to varying degrees depending on the nature of the task, the information available, and how well the user frames their query. The likelihood and type of errors can be categorized as follows:
Factual Errors
• Extent: Moderate to High for obscure, nuanced, or fast-changing topics; low for well-documented and general knowledge.
• Why It Happens:
  • Outdated Training Data: Its knowledge is based on information available up until its last training cut-off (e.g., September 2021 for some versions), though newer versions with browsing capabilities mitigate this.
  • Confusion in ambiguous or conflicting data sources.
• Mitigation: Real-time browsing (if available) or asking users to verify with trusted sources.
Reasoning Errors
• Extent: Moderate; often depends on the complexity and logical coherence of the problem.
• Why It Happens:
  • GPT models are pattern matchers, not true reasoners. They generate text that “sounds right” but may contain logical flaws.
  • It struggles with multi-step reasoning or highly abstract concepts without clear guidance.
• Mitigation: Structuring queries to break down problems into smaller, well-defined steps.
Context Misunderstanding
• Extent: Moderate, especially in ambiguous or poorly framed queries.
• Why It Happens:
  • Lack of contextual continuity. Without explicit details, ChatGPT may misinterpret a user’s intent.
  • Limitations in capturing subtleties like tone, implied meanings, or cultural references.
• Mitigation: Providing clear, concise, and specific inputs.
Ethical and Moral Reasoning Errors
• Extent: High in contentious or highly subjective scenarios.
• Why It Happens:
  • Models are trained on human text that may contain biases or conflicting views.
  • GPT does not possess moral understanding but reflects patterns in its training data.
• Mitigation: Encouraging fact-based or neutral perspectives.
Misleading Confidence
• Extent: High; ChatGPT can present incorrect answers with authoritative language.
• Why It Happens:
  • GPT doesn’t “know” when it’s incorrect; it is designed to generate plausible text, not to verify correctness.
• Mitigation: Asking for clarification, sources, or double-checking with external references.
How to Minimize Errors as a User
• Clarify Questions: Provide specific, detailed instructions.
• Request Sources: Particularly for factual questions.
• Iterate: Follow up if an answer seems incomplete or incorrect.
• Verify Critical Information: Especially for decisions that have real-world consequences.
Overall, while ChatGPT is an advanced tool with a high degree of accuracy for many tasks, users should approach it as an aid rather than an authoritative source. Its utility is maximized when combined with critical thinking and external verification.
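That last point about verifying critical information is the crux for situations like the OP’s: an independent check can be a handful of lines of your own arithmetic. Here is a minimal sketch in Ruby of such a check on an AI-suggested beam; every number is a hypothetical stand-in, and none of this substitutes for a licensed engineer’s sign-off:

```ruby
# Independent sanity check of an AI-suggested beam (hypothetical numbers):
# max moment for a simply supported beam under uniform load, then bending
# stress against an allowable value. If this disagrees with the AI's
# answer, trust neither until you know why.

w       = 5.0        # uniform load, N/mm
l       = 6_000.0    # span, mm
s       = 557_000.0  # section modulus, mm^3 (from the section tables)
f_allow = 165.0      # allowable bending stress, N/mm^2

m     = w * l**2 / 8.0  # max bending moment, N*mm
sigma = m / s           # bending stress, N/mm^2

puts format('sigma = %.1f N/mm^2 (%s)', sigma,
            sigma <= f_allow ? 'OK' : 'NOT OK')
```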
This seems like a perfect example of why we don’t need to panic about AI “taking our jobs”. As far as something like structural engineering goes, I believe we will always need a licensed engineer to do the final sign-off… AI can do the grunt work that leads up to the final signature, but our society will always need someone to be responsible (and insured) in case there is an issue!
My thoughts about this as an architectural draftsman:
If AI did all the dimensioning automatically on my plans and details, which I of course have to give to the craftsmen on site to build the structure, who would pay for the errors if the plans showed anything incorrect? The company that made the AI? No, I don’t think so; the guy who used the AI (me) would have to pay for the correction and rebuilding.
And this is why I’m not worried about AI programming my plugins into obsolescence, at least not yet. AI is good at fuzzy sorts of calculations; it’s very organic but lacks exactness. This is fine for some things (like art) but not so good for things that require precise answers. Engineering and architecture are two of those fields that require exact and precise answers, and they generally need to be very repeatable.
Imagine if you had an engineering calculator for your beams (the Medeek Beam Calculator) and it gave you slightly different answers every time you used it, even though your inputs had not changed. Certain things need to be predictable, and any randomness or chaos needs to be completely eliminated. The PC you are sitting in front of right now is running an operating system which is very deterministic; if it were not, your PC would tend to crash a lot or do things very unpredictably, which is not desirable in an operating system.
My extensions are like a beam calculator or any other deterministic engineering calculator. You pass them a specific number of parameters; each parameter is of course within a specific range, but you will always get the same output, over and over. There is no grey area, there is no variance, and there is no intelligence; it is just a complicated algorithm (a precise set of instructions) that produces the same result for any given set of inputs. Unless AI can become this predictable and exact, it will not intrude into the engineering discipline very far, in my estimation.
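To put that in concrete terms, here is a minimal sketch of what that determinism looks like in code; the function is a hypothetical stand-in, not the actual Medeek implementation:

```ruby
# A pure function of its parameters with range checks: no hidden state,
# no randomness, so identical inputs give the identical result, forever.
# Units assumed: w in N/mm, l in mm, e in N/mm^2, i in mm^4 -> result in mm.
def beam_deflection(w:, l:, e:, i:)
  raise ArgumentError, 'all inputs must be positive' unless
    [w, l, e, i].all?(&:positive?)

  (5.0 * w * l**4) / (384.0 * e * i)
end

a = beam_deflection(w: 5.0, l: 6_000.0, e: 210_000.0, i: 8.36e7)
b = beam_deflection(w: 5.0, l: 6_000.0, e: 210_000.0, i: 8.36e7)
puts a == b  # => true, always
```

Call it a thousand times with the same inputs and you get the same answer a thousand times, and anyone else running it can reproduce your result exactly.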
The cool thing about AI is that it introduces a level of organic randomness into its output, which allows it not only to learn but to do some very cool and creative stuff. However, if it becomes too random, it will simply degrade into white noise. Our own human intelligence is similar: we are uniquely balanced between our deterministic programming (instincts, reflexes, hormones) and our higher-level decision making, which allows us to be creative and come up with completely new concepts and ideas. However, take the level of randomness a bit too far and you end up with an eccentric genius; further still and you begin to border on what we consider madness.
The discussion “CHATGPT Caution!” revolves around the limitations and potential risks of relying on AI, specifically ChatGPT, in critical fields like engineering and architecture. gsharp shares an experience where ChatGPT provided incorrect sample results for a dynamic component, highlighting the need for human verification.
jQL jokingly comments on the potential consequences of AI mistakes, while DGSketcher raises concerns about liability and insurance in cases where AI-assisted designs fail. DGSketcher emphasizes the importance of approaching AI with caution, especially in areas where physical performance is critical, and suggests that AI service providers should offer insurance-backed certified results.
simoncbevans shares ChatGPT’s own assessment of its error-prone nature, categorizing errors as factual, reasoning, context misunderstanding, ethical and moral reasoning, and misleading confidence. simoncbevans humorously compares ChatGPT’s behavior to that of politicians.
Others, like TheOnlyAaron, Peter_B, and medeek, discuss the limitations of AI in fields requiring precise and repeatable results, such as engineering and architecture. They argue that while AI can be useful for grunt work, human oversight and responsibility are still essential to ensure accuracy and accountability. medeek notes that AI’s lack of exactness and predictability makes it less suitable for certain applications, but its ability to introduce organic randomness can be beneficial for creative tasks.
This is a new functionality on the forum; I discovered it yesterday…