All right, I think it’s time for me to throw my 2 cents in here. I’ve been watching this thread with interest, and while some interesting points have been raised, there are a few things I should point out as a technophile who watches this space.
(Please note that my views are my own and I’m not expecting anybody to necessarily agree; I just want to make some observations. Anyway, here goes!)
Should AI scare you? Maybe. The big problem right now is that world+dog sees it as a quick way to get definitive answers on everything from “how do I write this code?” to “should I commit suicide?” (not kidding, that actually happened recently; the company behind the AI in that case had to scramble to patch it immediately). That’s not what it is. If we start viewing it as a tool to help us sift through data, while keeping in mind that it is inherently incapable of intentionally producing an intelligent answer (see the collapsed block below), AI stops being scary.
AI can't intentionally produce an intelligent answer?
AI is based around pattern matching. Basically, the AI has a giant dataset of text, images, video, or whatever else it needs to operate on (for GPT and similar AIs, the model is based around text) that it analyzes. It then uses that analysis to predict the likelihood of a specific word appearing after the words it already has:
In this case, it predicts that “you” will likely come after “how are”, “feeling” will follow “how are you”, and “today?” will follow “how are you feeling”.
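The core idea is simple enough to sketch in a few lines of Python. This is my own toy illustration, not how GPT actually works internally (real models use neural networks trained on enormous datasets), but the principle is the same: count which words tend to follow which, then predict the most likely next word.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (made up for this example).
corpus = (
    "how are you feeling today ? "
    "how are you doing today ? "
    "how are you feeling now ?"
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the corpus.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("are"))   # -> "you" (follows "are" every time)
print(predict_next("you"))   # -> "feeling" (seen twice vs. "doing" once)
```

No understanding, no intent; just statistics over whatever text it was fed.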
This all goes to show that AI isn’t being intelligent: it’s just mimicking human intelligence, which on the internet is often not as intelligent as it is offline.
Well, obviously they need to get a dataset somewhere to make it better, and why not train it on the massive amounts of text people have been feeding into it? Discretion about what you tell an AI is definitely wise, though; Samsung recently had trade secrets shared with ChatGPT by employees who didn’t consider the ramifications of doing so.
Oh no, the crypto wars are starting! Seriously, though, I think cryptocurrency is an interesting concept that has met with undue controversy. I agree that government-run cryptocurrency is probably not the way to go, since community-driven crypto is fairly easy to use anonymously (you can create a wallet and simply not publicize that it’s associated with you), while government crypto is going to end up with tracking built in. Realistically speaking, though, the government can already get your purchase history by subpoenaing your credit card company or your bank[citation needed], so government crypto isn’t all that earth-shattering in terms of privacy.
Back to AI!
As I pointed out above, this is because people look at AI, see that its responses seem alive, human, and intelligent, and therefore place a very high degree of trust in it. But AI cannot be trusted, by virtue of its design.
Spoken like a true geek. GIGO is definitely one of the top offenders with regard to generative AI issues. Like I said earlier, the average human IQ seems to drop a few points while connected to the internet. I have no idea why, but it’s easily demonstrated by checking out Twitter or Reddit.
Yep. AI can be very good at this sort of thing. Another example is removing satellite streaks from astronomy photography (e.g. the smear of light produced when one of the thousands of Starlink satellites passes through your telescope’s field of view).
I think it’s essential to take into consideration what the preview for the linked article says: ChatGPT’s dataset dates to 2021. Does ChatGPT know about Biden vs. Trump? Absolutely (and more so if you are paying for access to the newer GPT-4 backend). Do the developers have a hand in restricting output about Trump? Undoubtedly. But I think we need to cut OpenAI some slack here (even though this forum tends toward right-wing views, hear me out on this one). They gave themselves the monumental job of creating an AI trained on practically everything on the internet, whether good, bad, or downright nasty, and then filtering out any and all offensive material, whatever that may be. They obviously can’t let the AI be completely unfiltered on high-profile and highly controversial politicians (from either party); they had to make a choice and I believe that they are trying to do their best to filter the output in a fair manner. And as I implied at the beginning of this paragraph, ChatGPT’s model has a dataset that’s skewed towards Trump in terms of how much data it has, so there’s more data for the AI to look at and decide on its own that it needs to restrict its response about Trump, whereas Biden has only a year’s worth of campaign data to be judged by instead of a full presidency.
If he wrote an AI in Python, I’m calling him a coder. Ain’t nobody gonna learn Python and write an AI without becoming some form of hobbyist programmer in the meantime.
More to the point, keep in mind that we’ve already had the tech to recognize printed text for 20 years; recognizing handwriting, while undoubtedly more difficult, is not really scary.
This is precisely the great part about generative AI. For example, at one point I wanted to replace a certain bit of text with another in every file in one folder on my laptop. I asked Bing Chat to help me out, and it gave me a working command to do just that, along with an accurate explanation. Here’s a recreation with ChatGPT:
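For reference, here’s the kind of thing such a request produces, sketched in Python rather than the exact command I was given. The folder name and the strings are placeholders I made up for this example (the demo folder is created in the script itself so it’s self-contained):

```python
from pathlib import Path

# Set up a throwaway demo folder with a couple of files to work on.
folder = Path("demo_folder")
folder.mkdir(exist_ok=True)
(folder / "a.txt").write_text("hello OLD world")
(folder / "b.txt").write_text("OLD and OLD again")

# Replace every occurrence of `old` with `new` in each file in the folder.
old, new = "OLD", "NEW"
for path in folder.iterdir():
    if path.is_file():
        path.write_text(path.read_text().replace(old, new))

print((folder / "a.txt").read_text())  # -> "hello NEW world"
```

Trivial for a programmer, sure, but the point is that the AI handed a non-expert a working solution plus an explanation of how it works.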
This concept of having an AI help you with technical tasks was actually brought to life a few years ago by GitHub with the introduction of their Copilot AI, which generates code on demand for developers. I’ve never used Copilot myself, but by all accounts it can be pretty useful.
Here’s Bing Chat expounding on Headin’ Home…
You put yourself on the internet, so you’re going to end up in an AI dataset. It’s just a matter of time. Even if Bing’s answer here was based mainly on web searches, that just reinforces the point: AI is being put to work on as large a dataset as possible, and the internet is the single largest body of text available to AI researchers.
Now now, Libby, I would hardly call him foolish for asking ChatGPT to write a poem.
In conclusion: Should AI scare you? Maybe. Will it take your job? Maybe (though the chances are probably fairly low for most people). Will it give you a job? Possibly; people are being hired as prompt engineers. Is it perfect? Absolutely not.
Is it exciting?
(image generated by AI)
Boy oh boy oh boy, yes it’s exciting!