OT: Anyone working/playing with AI professionally?
I've also posted this to the CDL mail list.

Prereq: I'm not so much interested in failures, I've seen a lot. I'm interested in successes.

I've been playing with AI to help me program (I haven't quite gotten to using things like LM Notebook). I've used it to quickly write data-parsing scripts for testing (I do QA on software-defined networks). For the quick-and-dirty script it usually gets me about 95% there, quickly. I've attempted to use AI to fix and improve a data tool (in-browser JS). That went really badly, as it would lose huge chunks of code. When I needed to write a complex test suite (Robot Framework/Python) things got even worse: corporate security issues got in the way and the AI didn't seem to understand what was needed to fix them. This was just before my end-of-year vacation, so I haven't been able to find the correct corporate chat to get help, and none of my coworkers seem to understand (weird).

Now, on to my hobby use. This gets even worse. I normally give the AI a set of requirements, something normal to anyone who does software engineering professionally. I even cover the 'Business As Usual' parts that every SE fails to provide. So for my home project of C code to read, write, create, and modify Flex OS Gotek images under Linux, I gave it a nice set of requirements, as I assumed that the AI would not find the information on the internet. That went very badly, as it had a lot of trouble with off-by-one errors. I've tried Python, C, and 68000 asm, and all come up quite short when I start testing.

Has anyone created real applications with AI? I see tons of YouTube videos that claim they have. So either I'm the problem or the AI is not up to the marketing hype.

And for anyone interested, I can share the requirements of my home project. I can't share my work-related projects.

--
Linux Home Automation    Neil Cherry    ncherry@linuxha.com
http://www.linuxha.com/         Main site
http://linuxha.blogspot.com/    My HA Blog
Author of: Linux Smart Homes For Dummies
KD2ZRQ
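For context on where those off-by-one errors tend to live: the core of a raw disk-image tool is just (track, sector) to byte-offset arithmetic, and 1-based sector numbering is the classic trap. A minimal sketch in Python; the 256-byte sectors and 10-sectors-per-track geometry are illustrative assumptions, not the actual Flex/Gotek layout:

-----------------------------
# Sketch only: (track, sector) -> byte offset in a raw, track-ordered image.
# SECTOR_SIZE and SECTORS_PER_TRACK are assumed values for illustration,
# not the real Flex/Gotek geometry.

SECTOR_SIZE = 256
SECTORS_PER_TRACK = 10

def sector_offset(track: int, sector: int) -> int:
    """Tracks are numbered from 0, sectors from 1; mixing those two
    conventions is exactly where the off-by-one bugs creep in."""
    if not 1 <= sector <= SECTORS_PER_TRACK:
        raise ValueError(f"sector {sector} out of range 1..{SECTORS_PER_TRACK}")
    return (track * SECTORS_PER_TRACK + (sector - 1)) * SECTOR_SIZE

def read_sector(image_path: str, track: int, sector: int) -> bytes:
    """Read one sector from a raw image file."""
    with open(image_path, "rb") as f:
        f.seek(sector_offset(track, sector))
        data = f.read(SECTOR_SIZE)
    if len(data) != SECTOR_SIZE:
        raise IOError("short read: image is smaller than the assumed geometry")
    return data

if __name__ == "__main__":
    # "flex.img" is a placeholder file name for this sketch.
    print(read_sector("flex.img", 0, 3).hex())
-----------------------------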
The project I've been working on (and have talked with several folks in the forums about) is a site that handles people selling / giving away items. Think "a place other than eBay or Craigslist."

This was 99% written using AI tools (Claude in Cursor, to be specific): https://claimit.stonekeep.com/

We're currently using it in our community for people giving away / selling things locally, but I'm working on scaling it up so it can have 'areas' or 'communities' of items so we can share an infrastructure.

So, yeah, I guess I'm doing AI stuff :)

-- Dave Shevett
Cool, so is the AI usually close enough, or are you having to treat it like a junior programmer? How do you 'talk' to the AI?

-- Neil Cherry
This is a full-on vibe coding experiment. I talk to it in English, basically saying "Change the color of that prompt to blue" and "Refactor the authentication mechanism to use OAuth". You do have to treat it like a brilliant idiot with infinite patience. I went through about 20 iterations at one point chasing down a performance problem, asking questions and looking at timings until the AI figured out there was a looping problem on an AWS call that was iterating too many times. It fixed the code, timings were down to normal, and we pushed it into the production pipeline (it also helped set up a CI/CD pipeline via GitHub Actions).

One thing I found very helpful was that I could ask it questions about hosting environments and other things, and it would help out with configuration issues (in my case with Apache SSL certs) and walk through the changes.

Is it perfect? Heck no. It makes mistakes, and occasionally goes completely against what I'm trying to do.

Now, going a little deeper: it's not like I started ChatGPT and said "make an app". I'm pulling on literal decades of developer and operational experience, so I know the pitfalls, I know all the false roads, and I know when a design track won't work (say, for scaling), so my experience is absolutely critical to making this successful. I designed the app, the data store, the metadata storage, the caching strategy. I just told Claude to do it in a certain way, and it did all the coding.

I'm actually enjoying it. And I ended up with a cool app that I can come back to anytime and go "Okay, let's change this" without having to relearn the entire codebase every time.

-- Dave Shevett
I've not had many successes with AI personally. After a while you give up and realize it's faster and easier to just RTFM, or make the design you wanted on your own in Photoshop, or quite literally anything. And if you don't know how to do those things, it's never too late to learn a new skill that will help you later in life.

Thanks,
Andrew Mattera
I've not used it in any of my professional work, but I've been experimenting recently with what the kids call "vibe coding" for generating boilerplate code and test functions. It's actually surprisingly capable. You treat it (and talk to it) like it's a very green intern, and you have to check and often correct everything, but I've seen how it can be a timesaver. Sometimes a significant one.

I don't like to feed suits' greed with my personal info, so I've not used any of the online AI "services". I built a system with a 16GB GPU (Nvidia P100) in an HP DL380 server, and I run Ollama and OpenWebUI on it. Overall it was pretty easy to put together, and while it's surely nowhere near as capable as the online services with tons of memory and giant models, again, I won't feed suits' greed with yet more of my personal information.

The key is to always remember that half of the results will be wrong. You have to understand the code it's writing, and the language in which it's writing it. The business world has a massive collective hard-on for using it to get rid of all those troublesome expensive tech people with their lack of ties, lack of golf, long hair, and crazy ideas about being the real value in a corporation, but that won't happen en masse anytime soon without huge, corporation-killing messes...some of which have already started. You MUST have the discipline to not just take what it writes verbatim. You MUST review it, every single line, and treat it as what it actually is: a starting point.

But within those limitations, I've seen with my own eyes how even small models (< 16GB) can write a lot of boilerplate drudge code and test harnesses, to the point of actually saving real time. As with any tool, it must be used within its limitations, which of course means understanding those limitations.

For my own work, it's not likely to be extremely productive soon. It was trained on mainstream trendy stuff, which is sloppy Javascript and gigantic loads of webby UI code. I primarily write firmware for laboratory instrumentation and communications systems: the sort of stuff that very few dorm room rats are barfing out.

    -Dave
-- Dave McGuire, AK4HZ New Kensington, PA
On 12/12/25 16:38, Dave McGuire via vcf-midatlantic wrote:
> I've not used it in any of my professional work, but I've been experimenting recently with what the kids call "vibe coding" for generating boilerplate code and test functions.

The vibe coding scares me; I don't think I understand it. It is just not sinking in. Might be the tester in me. :-)

I just tried the same 68K asm code query that triggered the initial post with Claude. The code is better. Gemini made the assumption that the DUART had a 3.6864 MHz crystal; Claude doesn't state this assumption but does get matching settings. I didn't state that in my initial (very vague) requirements. So, +1 for Claude. :-)

> It's actually surprisingly capable. You treat it (and talk to it) like it's a very green intern, and you have to check and often correct everything, but I've seen how it can be a timesaver. Sometimes a significant one.

Capable, but it has limits. It is just that AI isn't living up to the marketing hype in my eyes, and I'm trying to figure out if I'm doing something wrong. The Linux Flex disk programs were a terrible mess. Took me a week to figure out how it wasn't doing the math.

> I don't like to feed suits' greed with my personal info, so I've not used any of the online AI "services". I built a system with a 16GB GPU (Nvidia P100) in an HP DL380 server, and I run Ollama and OpenWebUI on it.

Actually that is what a lot of us are doing. We're 'training' the AIs for the big corporations. Scares me. I'd love to set up a home AI. Just have too many other things to do.

> won't feed suits' greed with yet more of my personal information.

+1

> The key is to always remember that half of the results will be wrong. You have to understand the code it's writing, and the language in which it's writing it. You MUST have the discipline to not just take what it writes verbatim. You MUST review it, every single line, and treat it as what it actually is: a starting point.

I don't trust it, that's why I have tests.

> But within those limitations, I've seen with my own eyes how even small models (< 16GB) can write a lot of boilerplate drudge code and test harnesses, to the point of actually saving real time.

That's where I may need to change, although I haven't liked a lot of the software design (functions et al). I tend to think a certain way (high level to nitty-gritty details). But I do agree that I can use it for the boilerplate parts. I may need to revise my thinking in that direction.

> As with any tool, it must be used within its limitations, which of course means understanding those limitations.

I'm trying to do this, but they keep upgrading the capabilities.

> For my own work, it's not likely to be extremely productive soon. It was trained on mainstream trendy stuff, which is sloppy Javascript and gigantic loads of webby UI code. I primarily write firmware for laboratory instrumentation and communications systems: the sort of stuff that very few dorm room rats are barfing out.

I can't tell you if my code is professional. :-)

-- Neil Cherry
[quick reply as I have to run out]

On 12/12/25 17:21, Neil Cherry wrote:
> I just tried the same 68K asm code query that triggered the initial post with Claude. The code is better. Gemini made the assumption that the DUART had a 3.6864 MHz crystal; Claude doesn't state this assumption but does get matching settings. I didn't state that in my initial (very vague) requirements. So, +1 for Claude. :-)
If you think about it, it would be surprising if it hadn't guessed that. Why? Because the thousands of conversations and manuals ingested by Claude that talk about DUART designs all mention the use of a 3.6864MHz crystal. That's how the whole thing works. It's not hype, it's real. It's just in its infancy.
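To put numbers on it: 3.6864 MHz divided by a 16x UART sampling clock gives exact integer divisors for every classic baud rate, which is why that crystal saturates the training data. A quick sketch of the arithmetic, using the generic 16x-divisor view rather than the 68681's fixed baud-rate-generator tables:

-----------------------------
# Why 3.6864 MHz is the "obvious" DUART crystal: with the usual 16x
# sampling clock, every standard baud rate divides it exactly.
CRYSTAL_HZ = 3_686_400

for baud in (300, 1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200):
    divisor = CRYSTAL_HZ / (16 * baud)
    print(f"{baud:>6} baud -> divisor {divisor:g}")

# Every divisor comes out as a whole number, so the baud-rate error is zero.
-----------------------------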
> Actually that is what a lot of us are doing. We're 'training' the AIs for the big corporations. Scares me.
As well it should. And you know that blank stare that you get when you explain to someone "if you receive value from an online service and you haven't paid for it, YOU are the product"? They (99.44% of the public...if not more) are every bit as clueless and oblivious about this situation. Bread and circuses.
> I'd love to set up a home AI. Just have too many other things to do.
I understand. I only took the time to do this because, at the time, I was desperate for a break from what I was doing. If you'd like to duplicate what I've built, I can provide guidance; it can be done in an afternoon after the (cheap) hardware arrives.

Then you can do stuff like this:

-----------------------------
ra$ ollama run llama3.2
Explain the relationship between semiconductor band gap and valence band electrons. Answer as Valley Girl.
Oh my gosh, like, fer shure! So, you wanna know about the relationship between the semiconductor band gap and valence band electrons? Like, it's totes important!

Okay, so like, when a material is in its pure form, it's got these energy levels, right? And one of those energy levels is called the valence band. It's where all the valence band electrons are, like, hanging out and stuff.

Now, when we add impurities to this material, like, say, phosphorus or boron, they can replace some of the original atoms in the lattice. This creates these defects, like, holes in the structure. And because of that, some of those valence band electrons get excited and jump up into, like, a higher energy level called the conduction band.

But here's the thing: there's this energy gap between the valence band and the conduction band, right? It's like, a forbidden zone where no electrons can go. This is called the band gap, fer shure!

So, when we add these impurities and create those defects, they kind of "push" some of those valence band electrons into the band gap. Like, it's not possible for them to go any further up, 'cause there's this huge energy barrier in the way.

Now, when you're trying to conduct electricity, like, what you want is for those conduction band electrons to flow freely through the material. But if they can't get into the conduction band because of that band gap, it's like, super hard to get anything done!

But don't worry, dude! Because we can control the band gap by adjusting the amount and type of impurities added. It's like, a delicate balance between getting enough electrons excited to conduct electricity and not so many that they just, like, overheat and stuff.

So, in summary, the semiconductor band gap is like, this energy barrier between the valence band and conduction band that affects how easily those conduction band electrons can flow. And by controlling the impurities and defects, we can kinda tweak that band gap to get the desired conductivity, fer shure!
Send a message (/? for help)
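And if you'd rather script it than sit at the interactive prompt, the local Ollama server also answers over HTTP; a minimal sketch, assuming the default port and an already-pulled llama3.2 model:

-----------------------------
# Minimal sketch: query a local Ollama server over its HTTP API.
# Assumes the default port (11434) and that llama3.2 is already pulled.
import json
import urllib.request

def ask(prompt, model="llama3.2", host="http://localhost:11434"):
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,   # one complete JSON reply instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Explain the relationship between semiconductor band gap "
              "and valence band electrons. Answer as Valley Girl."))
-----------------------------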
    -Dave

-- Dave McGuire, AK4HZ  New Kensington, PA
That's absolutely the best thing I read today.

-- Dean Notarnicola
I can share some thoughts on this, although I'm boring compared to a Valley Girl :)

I started out using chat side-by-side with an IDE, working with single files and code snippets. That was GPT-3.5? It was pretty good for simple things but frustrating. It would rename all the vars every time and forget stuff. The "free" version was really bad, it would go in circles. The paid version solved that. GPT-4 was better. 4o even more so.

It became tiresome going back and forth between the web browser and the IDE, and single-file context was limiting. The Codex extension in VS Code running GPT-5.1-Codex-Max solves this, as it can see all the project files and modify them directly, so no more cut & paste. The code quality gets better each release.

Recently I realized I was holding it back too much, explaining too much. It can really go once it gets the hang of what direction your framework is going in. Of course chat is farming everything put in, but I don't work on anything very important.

p.s. They're starting to iron out the memory issues where a chat runs out of memory and gets dumb, although starting a new chat context is often more productive, because the AI can get tripped up over what it knows in that session.
-- Bart Hirst "This signature will now be eaten up and spit out by Velociraptors."
Professionally... on occasion. Personally... more often than I expected.

"Free" AI models can do something useful. They aren't designed for in-depth programming projects because they are general-purpose information models. I subscribe to Copilot (initially for the extra 1TB OneDrive space over my standard Office 365 1TB allotment), Gemini (due to the 2TB cloud space that comes with it), and now GitHub Copilot (because I really wanted to try out dedicated programming models).

Copilot and Gemini do OK. But as I said before, they aren't completely dedicated to programming if you want project-level work. I found more power in my GitHub Copilot subscription in the Claude models (particularly Claude Sonnet) than I found in any of my other subscriptions. For $100 per year, I found it a worthwhile investment for myself.

I have seen some very interesting and useful results from Claude. What's more, if you use it within a suitable IDE, like VS Code, you can manage and edit a specifications file within your project for the model to reference. This saves you tokens if you use the metered models like Claude. For each reference it doesn't have to look up again, that translates to tokens saved. It can also update the specifications file on its own with knowledge it acquired and retained.

As for using it to code, as others have implied in their responses, treat the model as a junior programmer where you role-play as its supervisor. The model you choose will have a certain skill level, and you must understand its level of competence to use it effectively, or it will be running itself, and you, in circles. Be prepared to look at what it generates. It still isn't at a level for people with no coding experience at all thinking they could whip up a masterpiece app.

As Bart mentioned, some of the basic models have limited "memory." It's like after a time, it forgets how it even started on the project. I found myself having to remind a few of them of what was created after I'd been working the chat for a while. Claude in an IDE, from my experience, eliminates that problem, because such information is available in the project specifications file.

And to follow up on the original question, I use AI more for personal projects because the professional models cost money, and they won't foot the bill at work for a paid, professional AI subscription. All that's provided to me at work is the basic M365 Copilot, and I may use that to clean up or streamline existing code.

Jeff Salzman
SOTA paid models can do 6809 and 6502 assembly, plus C for those CPU targets as well, using and learning various compilers and linking semantics (cc65, cmoc). And that is the tip of the iceberg.

Selling them short and laughing isn't the intelligent way. I'm sure people laughed at Gutenberg with his useless printed books with ugly type! Printing presses give power to the state! Now anyone can just 'print' propaganda!

And yes, I use it professionally, and it's like having at least 5 devs that can prototype things in 1 hour for us to evaluate. It loves C# just a little less than Node, so it's not just trendy web things at all.

If you use free models then you get exactly what you paid for lol.

-andy
On 12/13/25 08:56, Andrew Diller via vcf-midatlantic wrote:

Andy, huge thanks. Don't take anything in this message as a criticism. I'm a bit frustrated with the AI situation and trying to figure out how to make better use of it. Your information is helping.

> SOTA paid models can do 6809 and 6502 assembly, plus C for those CPU targets as well, using and learning various compilers and linking semantics (cc65, cmoc). And that is the tip of the iceberg. Selling them short and laughing isn't the intelligent way. I'm sure people laughed at Gutenberg with his useless printed books with ugly type! Printing presses give power to the state! Now anyone can just 'print' propaganda!

I'm laughing (and crying) at that last part, as that's a pretty cool analogy. There are folks manipulating the AI into what they want, so they are 'printing' propaganda. But then again the 'media' has been doing that all along.

> And yes, I use it professionally and it's like having at least 5 devs that can prototype things in 1 hour for us to evaluate. It loves C# just a little less than Node, so it's not just trendy web things at all.

I'm trying to figure out how to use this tool. I realize that AI is a huge boon towards speeding things up. I've just not figured out how to do that properly.

I've used AI for Python, asm (6800 & 68K; 68K went a lot better), C, JS (browser app & node.js), and Robot Framework. It's the last one that is giving me fits, as it doesn't seem to support the security aspect of reaching APIs. It only seems to give simple curl functionality for the APIs. I have to use 2 different access methods to set up API calls. I may need to 'ask' that outside the company's AI.

> If you use free models then you get exactly what you paid for lol.

Actually, that's part of the problem I'm having. I know the free AIs are limited, but I've used several paid-for models and I'm not seeing a great difference. *Although* I just put Claude (free) to the Flex disk test and its code is *MUCH* better than Gemini's (Google's). It's not nice and tidy with pretty functions (I can fix that), but its total C code is a lot shorter. Readability is a mess, but the code is 99% correct. The one minor error: it shows sector 255 as the 255th sector when displaying the information. The same mistake a Perl programmer made. :-D

My customer's internal AI is not really that good for code. Weird, as they should have a lot of code to put in there. Remember, I professionally code for my customer. When I get back in January I'll attack the security issues. I can manually pull the data from the input and output to compare the flow. And that's the weird part: I can't call the same methods from inside the Robot Framework. I get security violations.

-- Neil Cherry
I can help you with one simple link: https://antigravity.google/

Don't try to work on code with AI 'side loading.' Work with it in an IDE. I've been in love with Windsurf (google it) for all of this year. Google hired away the people that made that and then produced AG. Download it, open up a project, and try it out. Talk to the AI on the side. It's a fork of VSC, so it won't be something crazy to learn.

Accept that the printing press is here, and books are coming. Code and enjoy the actual results of what you want and need, not the arcane creation of it. If you say "I enjoy the creation of it," then stop reading here and go futz around and enjoy your own thing. I want the results of the programming, i.e. the program!

-andy
On Dec 13, 2025, at 9:44 AM, Neil Cherry <ncherry@linuxha.com> wrote:
> Andy, huge thanks. Don't take anything in this message as a criticism. I'm a bit frustrated with the AI situation and trying to figure out how to make better use of it. Your information is helping.

I will never take anything from you as a criticism! Fire away; if I didn't want to help I would have just not replied!

> I'm laughing (and crying) at that last part, as that's a pretty cool analogy. There are folks manipulating the AI into what they want, so they are 'printing' propaganda. But then again the 'media' has been doing that all along.

Yes, ignoring it is not useful.

> I'm trying to figure out how to use this tool. I realize that AI is a huge boon towards speeding things up. I've just not figured out how to do that properly.

See above, use AG.

> I've used AI for Python, asm (6800 & 68K; 68K went a lot better), C, JS (browser app & node.js), and Robot Framework. It's the last one that is giving me fits, as it doesn't seem to support the security aspect of reaching APIs. It only seems to give simple curl functionality for the APIs. I have to use 2 different access methods to set up API calls. I may need to 'ask' that outside the company's AI.

Your company's policies can't be helped. There are ways around everything, however.

> Actually, that's part of the problem I'm having. I know the free AIs are limited, but I've used several paid-for models and I'm not seeing a great difference. *Although* I just put Claude (free) to the Flex disk test and its code is *MUCH* better than Gemini's (Google's). It's not nice and tidy with pretty functions (I can fix that), but its total C code is a lot shorter. Readability is a mess, but the code is 99% correct.

Free AI is just something to tell stories to. It will never code effectively in the short term. I don't use AI for stories or writing; I use it 100% for code, because I never learned to code effectively, and now I can code almost anything in 1/100 the time. Win.

> My customer's internal AI is not really that good for code. Weird, as they should have a lot of code to put in there.

If you are not using AI to help you, then at some point your customers will use/find someone that does. This is reality. If the company you work for, doing code, doesn't realize this, then it will be passed up by a company that does. Show them how it's effective (once you see it on your own).

Imagine a company that delivers milk. It's 1909. They have 100 horses to do deliveries. There is another company that just got a Model T. They start to use it. What happens then? You are that driver still using a horse in 1910. If you can't get your company to move to something that does not shit all over the road, then you are working for a dead company. It's just a matter of time.

Finally, look at this image and ponder it for a long moment: https://preview.redd.it/rfxlhjb7ple51.jpg?width=768&auto=webp&s=aa0035f275f8...

We had ~28 MILLION horses in the US in 1910. All the people that laughed at cars (besides being dead) were not acting in a reality-based lifestyle.

-andy
(Update)

I've taken all I've learned (my brain aches) and started applying it to using AI, in Code under Linux, with a paid GitHub account. The level of improvement is quite noticeable. This was Jeff's suggestion when he was demoing the MOBIDIC.

I also stopped trying to give it a full requirements document and expecting a done product. :-) I've broken it down into more manageable chunks. I've been experimenting with Git repos I control, so screw-ups are easy to fix (as long as I use the repos correctly). I've learned that it doesn't pay to give the AI 2 small things at the same time to fix or upgrade; just one thing at a time. This has gone much better. Getting the AI to undo its changes has been 'fun'. I went round and round in a circular conversation until I could find the correct phrasing it understands. So yes, a Jr. Programmer still. But my coding speed has gone way up.

Now I need to learn about copyright issues with AI code. Seems all the open source communities are very nervous about that. For my simple home projects this is not an issue. Also not an issue at work (so far). I still have much to learn, and I need to learn about the different AI tools other than the coding LLMs.

Thanks

-- Neil Cherry
I got an early holiday present: the Hugging Face/Pollen Robotics AI robot. https://www.pollen-robotics.com/reachy-mini/

The robot is open source, and all the programming is in Python. I built it over the weekend, and it has a reasonable chatbot built in. As a fun exhibit, I would like it to be an assistant docent in one room of our computer museum. As is, I can ask it about many of the artifacts in the museum, and it already can give a one-sentence summary. But it is not yet up to the task.

1) I can get more details, but it takes multiple prompts. I would like 2-3 paragraph answers.
2) I can "train" it to learn additional facts, but the training is not persistent. Each time it boots, it starts back at the base.
3) You don't need the actual robot. There is an emulator! https://github.com/pollen-robotics/reachy_mini/blob/develop/docs/platforms/s...

Anyone interested in building/helping build it?

Bob Roswell
broswell@syssrc.com
410-771-5544 ext 4336
https://museum.syssrc.com
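For point 2, one low-tech workaround is to keep the museum facts in a file on disk and rebuild the chatbot's system prompt from it at every boot; that also nudges it toward the longer answers wanted in point 1. A rough sketch in Python; the function names are placeholders, not the Reachy conversation app's actual hooks:

-----------------------------
# Hypothetical sketch: persist docent facts across reboots by keeping them
# in a JSON file and folding them into the system prompt at startup.
# build_system_prompt() is a placeholder, not the Reachy app's real API.
import json
from pathlib import Path

FACTS_FILE = Path("museum_facts.json")  # e.g. {"IBM 1130": "Introduced in 1965, ..."}

def load_facts():
    return json.loads(FACTS_FILE.read_text()) if FACTS_FILE.exists() else {}

def remember(artifact, fact):
    """Add or update a fact and write it straight back to disk."""
    facts = load_facts()
    facts[artifact] = fact
    FACTS_FILE.write_text(json.dumps(facts, indent=2))

def build_system_prompt():
    """Placeholder for wherever the app assembles its system prompt."""
    facts = "\n".join(f"- {name}: {fact}" for name, fact in load_facts().items())
    return ("You are a museum docent. Answer in two to three paragraphs.\n"
            "Known artifacts:\n" + facts)
-----------------------------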
Bob: Got it as well (of course lol). https://huggingface.co/spaces/dillera/reachy_mini_conversation_app/tree/main

I've forked the conversation app and added a whole bunch of things (my fork is above), porting it over to Swift for iPad fun right now as I read your email. This got me thinking about AI 'embodiment' in a whole different light; this is a very cool tool.

Definitely interested in the project. I can already imagine connecting it up to your existing inventory database, then using the image recognition it already has for lots of content. I've used it for long-form storytelling already, to test the realtime voice API and fix streaming and jitter issues.

I'll send you an email.... Hop on the Discord for Pollen Robotics.

-andy
One of our guys has been talking about these, but hasn't gotten one yet. Using one for docent duty is a fantastic idea; that is seriously cool, and it sounds like it'd be a neat learning project, whether it works out in the end or not.

    -Dave
On 12/22/25 12:15, Bob Roswell wrote:
> I got an early holiday present: the Hugging Face/Pollen Robotics AI robot. https://www.pollen-robotics.com/reachy-mini/ The robot is open source, and all the programming is in Python.

Well, you're nothing but trouble. ;-) I do have 2 Hanson Einstein robots. I really need some time to play with those. This will be later, thanks.

-- Neil Cherry
I may just super-glue my Reachy onto my Hero Jr and wire it into the motors (via the Pi's GPIOs) to give it some mobility this holiday break. The Hero has some cool sensors but basically no brain.

-andy
On 12/22/25 12:53, Andrew Diller wrote:
> I may just super-glue my Reachy onto my Hero Jr and wire it into the motors (via the Pi's GPIOs) to give it some mobility this holiday break. The Hero has some cool sensors but basically no brain.

You gave a Reachy Mini legs? I don't see this going well! ;-)

-- Neil Cherry
Participants (9):
- Andrew Diller
- Andrew Mattera
- Bart Hirst
- Bob Roswell
- Dave McGuire
- Dave Shevett
- Dean Notarnicola
- Jeff S
- Neil Cherry