I've not used it in any of my professional work, but I've been experimenting recently with what the kids call "vibe coding" for generating boilerplate code and test functions. It's actually surprisingly capable. You treat it (and talk to it) like it's a very green intern, and you have to check and often correct everything, but I've seen how it can be a timesaver. Sometimes a significant one.

I don't like to feed suits' greed with my personal info, so I've not used any of the online AI "services". I built a system with a 16GB GPU (Nvidia P100) in an HP DL380 server, and I run Ollama and OpenWebUI on it (a rough sketch of that kind of setup is in the P.S. below). Overall it was pretty easy to put together, and while it's surely nowhere near as capable as the online services with tons of memory and giant models, again I won't feed suits' greed with yet more of my personal information.

The key is to always remember that half of the results will be wrong. You have to understand the code it's writing, and the language in which it's writing it. The business world has a massive collective hard-on for using it to get rid of all those troublesome expensive tech people with their lack of ties, lack of golf, long hair, and crazy ideas about being the real value in a corporation, but that won't happen en masse anytime soon without huge, corporation-killing messes...some of which have already started.

You MUST have the discipline to not just take what it writes verbatim. You MUST review it, every single line, and treat it as what it actually is: a starting point. But within those limitations, I've seen with my own eyes how even small models (< 16GB) can write a lot of boilerplate drudge code and test harnesses, to the point of actually saving real time. As with any tool, it must be used within its limitations, which of course means understanding those limitations.

For my own work, it's not likely to be extremely productive soon. It was trained on mainstream trendy stuff, which is sloppy JavaScript and gigantic loads of webby UI code. I primarily write firmware for laboratory instrumentation and communications systems: the sort of stuff that very few dorm room rats are barfing out.

 -Dave
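P.S. For anyone curious about a local setup along these lines: here's only a rough sketch of the general shape of it, not my exact configuration. It assumes the stock ollama CLI on a Linux host and the standard Docker invocation for OpenWebUI; the model name, port, and volume are just examples to adapt.

    # Install Ollama and pull a model that fits in 16GB of VRAM
    curl -fsSL https://ollama.com/install.sh | sh
    ollama pull qwen2.5-coder:14b    # example model choice, not a recommendation
    ollama run qwen2.5-coder:14b     # quick sanity check at the prompt

    # Note: Ollama binds to 127.0.0.1 by default; set OLLAMA_HOST=0.0.0.0
    # if the container below can't reach it.

    # Run OpenWebUI in Docker, pointed at the Ollama instance on the host
    docker run -d --name open-webui -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
      -v open-webui:/app/backend/data \
      --restart always ghcr.io/open-webui/open-webui:main

    # Then browse to http://<server>:3000 and select the pulled model.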
On 12/12/25 14:09, Neil Cherry via vcf-midatlantic wrote:

I've also posted this to the CDL mail list.
Prereq: I'm not so much interested in failures; I've seen a lot of those. I'm interested in successes.
I've been playing with AI to help me program (I haven't quite gotten to using things like LM Notebook). I've used it to quickly write data parsing scripts for testing (I do QA on software-defined networks). For a quick and dirty script it usually gets me about 95% of the way there, quickly. I've attempted to use AI to fix & improve a data tool (in-browser JS). That went really badly, as it would lose huge chunks of code. When I needed to write a complex test suite (Robot Framework/Python), things got even worse: corporate security issues got in the way, and the AI didn't seem to understand what was needed to fix them. This was just before my end-of-year vacation, so I haven't been able to find the correct corp. chat to get help, and none of my coworkers seem to understand (weird).
Now, on to my hobby use. This gets even worse. I normally give the AI a set of requirements, something normal to anyone who does software engineering professionally. I even cover the 'Business As Usual' parts that every SE fails to provide. So for my home project of C code to read, write, create, and modify Flex OS Gotek images under Linux, I gave it a nice set of requirements, as I assumed that the AI would not find the information on the internet. That went very badly, as it had a lot of trouble with off-by-one errors. I've tried Python, C, and 68000 asm, and all come up quite short when I start testing.
Has anyone created real applications with AI? I see tons of YouTube videos that claim they have. So either I'm the problem or the AI is not up to the marketing hype.
And for anyone interested, I can share the requirements of my home project. I can't share my work-related projects.
--
Dave McGuire, AK4HZ
New Kensington, PA