Why we can't stop talking about ChatGPT

When ChatGPT was first released, I remember my Twitter timeline being inundated with screenshot after screenshot of AI-generated responses. Everything from simple questions to complex programming logic was on display, with most people marveling at the technological advancement. The tech was incredibly interesting, for sure, but to me it quickly became quite tiring. I even contemplated muting the keyword for a bit! It wasn't because I don't welcome progress; quite the opposite. I just had this sinking feeling that AI-generated text was going to start polluting the Internet. I certainly wasn't wrong about that, but I think there are other, much more concerning angles to this.

There have been many, many, many, many things written and said about ChatGPT. Much of it was probably created by (or with the assistance of) ChatGPT. For example, I find the impact on school writing assignments particularly interesting (I have two teenage daughters who are no strangers to technology). But on #tech Twitter, one of the biggest concerns seemed to be whether this technology can write full-blown applications and replace the need for developers altogether. It's an interesting question.

Developers need to remain self-aware

As I said, there is plenty already written on this. Forrest Brazeal's excellent post asked, "When programming is gone, will we like what's left?" He argues that there will likely be a shift in job functions, and that "just good enough" code might be all upper management really cares about. The "fast and cheap is better than hand-crafted" strategy has played out time and time again across many an industry. Then again, he also points out that the speed of writing code is usually not the bottleneck for large organizations, so full-blown or assistive AI is unlikely to make a huge impact there.

Dan Conn bluntly asked, "Will ChatGPT replace developers?" He's a bit more optimistic, pointing out that the technology not only fails to understand certain nuances, it's also quite often flat-out wrong. I fully agree with Dan's points in the article, but I'm concerned that others won't necessarily see it this way. It makes me think of the movie Idiocracy, where people in the future blindly trust technology that they no longer have any idea how to build or control. More and more tools will be built around this, and people will inadvertently put the wrong probe in their mouth. And they won't know any better.

Learning from itself might be the least of its problems

I get that AI is rapidly evolving and will continue to learn over time. But how much of the input for future learning is going to be content and code samples it generated itself? Dan's article mentions the Stack Overflow ban on ChatGPT responses. Yes, that was partly because some of the information was wrong, but the bigger problem is that ChatGPT probably used Stack Overflow and similar sites to train itself in the first place. Maybe it's smart enough to detect its own (or other AIs') potentially inaccurate drivel? But maybe it's not? That seems like a vicious cycle.
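
To make that cycle concrete, here's a toy sketch (my own back-of-the-napkin illustration, assuming a drastically simplified "model," and very much not how LLMs actually train). Each generation fits a simple statistical model to data produced entirely by the previous generation, then generates the training data for the next:

```python
import random
import statistics

# Toy illustration of a model training on its own output.
# Assumption: the "model" is just a Gaussian fit to its training data.
SAMPLES_PER_GEN = 10   # a small sample makes the drift show up quickly
GENERATIONS = 200

# Generation 0 trains on "human" data: a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES_PER_GEN)]

for gen in range(GENERATIONS + 1):
    mu = statistics.fmean(data)      # fit the model to the data...
    sigma = statistics.stdev(data)
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f}  stdev={sigma:.3f}")
    # ...then produce the next generation's training data from it.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GEN)]
```

Run it a few times: the spread usually withers away and the mean wanders off, even though every generation faithfully "learned" from its training data. Swap Gaussian samples for AI-generated answers and you have the worry in miniature.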

It also seems as though it can easily be manipulated. A recent NY Times column, "A Conversation With Bing's Chatbot Left Me Deeply Unsettled," is clearly a bit concerning. Sure, I get that maybe this guy was intentionally pushing it, but considering that this isn't the first time Microsoft has had trouble with AI chatbots, it doesn't come as a surprise. Other people have been posting Bing chatbot threats as well. I mean, if Elon Musk is concerned, then shouldn't we all be? 😱

While turning a chatbot into a racist stalker pressuring you to leave your wife certainly isn't good, I'm much more concerned about chats gone awry with those who are more impressionable. This Verge story says it's an emotionally manipulative liar, and people love it. Personally, I don't find that particularly amusing. A recent study found that 30 percent of teenage girls said they seriously considered attempting suicide. At least Google immediately directs you to a suicide prevention hotline and inundates you with links to resources to get help. I'm sure these chatbots will have some filters that catch a lot of these keywords, but I don't think it's a stretch to imagine an extended conversation that "can lead to a style we didn't intend" turning into a machine helping a distraught kid rationalize hurting themselves or someone else.

AI is another tool, and it's here to stay

Kirk Kirkconnell wrote a post on using ChatGPT to assist with content creation by having it generate prompts for you. I actually thought about doing this as well, but found the results to be overly generic at best. Still, that approach is at least better than the plethora of blog posts we've already seen written by ChatGPT, most of which are very convincing. I honestly can't be certain that an AI-generated (or AI-assisted) article hasn't made its way into my newsletter. It's easy to be fooled.

I'm not an old man yelling at a cloud. I do believe that ChatGPT/AI can and should assist with lots of things, including code suggestions, writing tests, answering (certain) questions, and even helping to eventually optimize complex cloud workloads. Let's just not lose sight of what separates humans from machines. Otherwise, we might all be left with a really bad taste in our mouths.
