I’d successfully been ignoring all the hype around ChatGPT and similar AI tools all through this year.
I was initially amused by some of what folks were posting to Twitter when ChatGPT and DALL-E and other tools were made publicly available. There was a lot of funny stuff out there, with folks getting oddball results out of the chatbots and using the image generators to make some really crazy images. It all seemed pretty harmless, but also fairly useless.
Then came the op-eds and think pieces from people worried about the impact that these things could have on the world. Everything from worry about AI causing human extinction, to ChatGPT replacing writers and programmers, to the environmental cost of running all this stuff. A lot of that was overblown, I think.
But recently, something pushed me over the edge and I decided I had to start learning some of this stuff. I’m not even sure what did it, exactly. Whatever it was, I’ve been digging in, and I thought I’d write up some notes.
First, I’ve been looking at two primary categories of “AI” here: the LLM chatbots, and the image generators. I like playing around with the image generators, but I haven’t found much practical use for them, and they’re not that interesting to me, so I’m going to skip talking about those. I’ll just say that the Bing image creator is pretty fun to play with.
As to the LLM chatbots, I’ve started playing around with ChatGPT and a few others. I registered for a free account with ChatGPT, which gets me access to GPT-3.5. Upgrading to ChatGPT Plus for $20/month would get me access to GPT-4, which is supposed to be much better. I don’t think I’ll be doing that, but a number of people seem to think it’s worth it.
At work, we have our own chatbot called “Mindspark”, which is powered by Azure OpenAI, which in turn uses GPT-4 and/or GPT-3.5, if I’m understanding it correctly. It’s internal-facing, and at this point really just an experiment, I think. I’m not sure if there are any long-term plans for it. Anyway, it’s reasonably good, and also one of the only options available from my work computer. For some reason, we block access to ChatGPT’s web interface, so I can’t use that directly at work. (Which is one of the reasons why I probably wouldn’t pay $20/month for ChatGPT Plus. If I were paying for it, I’d want access to it at work and not just at home.) I’ve also noticed that we block Perplexity, and I expect some of the other popular tools. (I’m not sure why, though I’d guess it has something to do with distrust of those tools’ privacy policies, and worry that proprietary corporate info will get into them and then maybe leak back out.)
I’ve also played around with Poe, which is a tool that gives you access to a bunch of different AI tools, including ChatGPT. They also have a $20/month plan that gets you access to more advanced models, and lets you use it more. I’m not sure how worthwhile that is, vs. using ChatGPT directly. I guess there’s some utility in having access to multiple sources through a single interface. I definitely want to play around with it some more.
And I’ve tried out the new Bing chat. It’s also powered by GPT under the hood, I think. The nice thing about Bing chat is that, unlike the free version of ChatGPT, it combines web search with GPT, so that it can return more recent information than using ChatGPT alone. (And my company doesn’t block Bing chat, so I can use it at work.)
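That “combine web search with GPT” idea is worth a quick illustration. As far as I can tell, the basic pattern is: run a search first, then paste the results into the prompt so the model can answer from current information. Here’s a toy sketch of that pattern; the search function and result format are made up for illustration, not Bing’s actual internals:

```python
# Toy sketch of the search-augmented chat pattern: do a web search,
# then inline the results above the user's question so the model
# can ground its answer in up-to-date information.
# fake_web_search is a stand-in for a real search API; a real version
# would also send the final prompt to an LLM API instead of printing it.

def fake_web_search(query):
    # Stand-in for a real search API call.
    return [
        {"title": "Example result", "snippet": "Some recent fact about " + query},
    ]

def build_augmented_prompt(question, results):
    # Inline the search snippets above the user's question.
    context = "\n".join(f"- {r['title']}: {r['snippet']}" for r in results)
    return (
        "Answer the question using these search results:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

prompt = build_augmented_prompt(
    "the latest .NET release",
    fake_web_search("latest .NET release"),
)
print(prompt)
```

The point is just that the model itself isn’t searching anything; the freshness comes from whatever gets stuffed into the prompt before the model ever sees it.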
So that’s my brief overview of the front-end interfaces for LLM back-ends that I’ve tried out. I haven’t found one that is noticeably better than the others, at this point, but I haven’t done much with them yet.
I should also mention that all of these things, for a lot of the use cases I’ve tried, are spectacularly bad at returning correct and/or useful information. In general, I’m not sure they’re that useful as general research assistants. If you can find an answer to a question with a regular web search or a quick Wikipedia check, that’s way better than asking ChatGPT.
Aside from just playing around with these things, I’ve also been reading some articles and listening to some podcasts. I thought I’d include some podcast links here, for reference.
- Here’s an episode of the New Yorker Radio Hour from a few months ago, where they did an interview with Sam Altman, CEO of OpenAI. It’s somewhat interesting, at a high level.
- Ezra Klein has done a few shows talking about AI and LLMs and stuff. Some of it is pretty interesting to me, but it’s mostly high-level philosophical stuff, and I’m not sure what I think about some of it.
- On the more practical side, Scott Hanselman did an episode of his podcast recently where he interviewed a guy who wrote a book on “prompt engineering”. That’s the kind of thing that made me roll my eyes, until I started digging into it a bit. I still think the whole prompt engineering thing is a bit overblown, and I don’t want to read a whole book about it, but I’ll admit that some of it is useful, and I have now watched a couple of LinkedIn Learning videos on the subject.
- Also on the practical side of things, I’ve queued up a few episodes of .NET Rocks related to AI. This one, from August, looks interesting.
- And there’s a recent episode of RunAs Radio that got into some good no-nonsense explanations for how LLMs work. I think that episode has a better explanation of the tech involved than anything else I’ve read or listened to. (I’m sure there are other good explanations out there, of course, but this is the best one I’ve stumbled across so far.)
- And, finally, related to .NET Rocks, I see that Carl has a video series called The AI Bot Show that covers this stuff. I guess I’m going to have to watch some of those.
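Since prompt engineering came up above: one concrete trick from that world that I’ve found genuinely useful is few-shot prompting, where you show the model a couple of worked examples before your real question, so it picks up the task and output format from context. A toy sketch (the task, examples, and format here are my own invention, not from the book Hanselman’s guest wrote):

```python
# Few-shot prompting: prepend a couple of example input/output pairs
# so the model infers the task and answer format from context.
examples = [
    ("The food was cold and the service was slow.", "negative"),
    ("Absolutely loved it, would go back tomorrow.", "positive"),
]

def few_shot_prompt(review):
    lines = ["Classify the sentiment of each review as positive or negative.\n"]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    # The model is expected to complete the final "Sentiment:" line.
    lines.append(f"Review: {review}\nSentiment:")
    return "\n".join(lines)

print(few_shot_prompt("The staff were friendly and helpful."))
```

Nothing magic going on; it’s just arranging the text so the most likely continuation is the answer you want.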
So, in conclusion, I guess I’m doing a little less eye-rolling at this stuff now. I see some utility in it, and I’m getting a better idea of what it’s good for and what it’s not good for.