I Prefer My Kids Interacting with Generative AI to Scrolling Through Social Media Platforms: Here's Why

Generative AI is as real as it gets in terms of revolutionizing work.

Mumbai: For the past few months, practically everyone has been grappling with what Generative Artificial Intelligence means for the future of creativity and jobs — even the future of the human race. Is Generative AI going to make us more productive, save us time, and help us be healthier, smarter, and happier? Or will it eliminate most jobs and build a Terminator-style Skynet that will control us?

You can’t really stop innovation, and Generative AI is as real as it gets in terms of revolutionizing work, culture, and the nature of creativity. It’s transformative for many industries, and it will certainly become as ubiquitous in our homes as Siri and Alexa. Star Wars director George Lucas saw it clearly, and if you ask me, our kids will unwrap one heck of a Christmas gift later this year: talking robots powered by generative AI, whether a small R2-D2 or a sleek golden C-3PO, placed under the tree.

As the world begins to ask these systems questions about important topics like science, healthcare, and politics, technology like ChatGPT can become a real threat. So the question is not whether our kids will be talking to R2-D2 on Christmas; the question is: what was that robot trained on?

If you want to get a bit more technical, the language engines that power AI are not that interesting. The only thing that matters is the “unique data” the AI is trained on. That is the difference between good and bad, safe and dangerous. For map navigation companies, it could mean the difference between getting a good route and driving off a cliff. This is why Waze was a cool startup: it used unique, user-generated data from people actually driving. The same is true here; this technology will really shine when it is trained on a unique dataset and fine-tuned to learn specific verticals.

Garbage In, Garbage Out

Put simply, Large Language Models, which are what Generative AI is built on, learn to produce conversational text by scraping the web’s near-infinite sources of data and predicting which word should come next based on the data they have seen; it’s like a perfect rear-view mirror. Add techniques like RLHF (reinforcement learning from human feedback), and they can carry on a dialogue, predicting word after word until a whole sentence, and then a paragraph, is created. It’s a bit like Google’s “auto-complete” when we search, or Google’s “did you mean,” only on steroids.
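
For readers who want to see that “predict the next word” idea in action, here is a minimal, purely illustrative Python sketch. It is a toy bigram counter, nothing remotely close to a real Large Language Model, and the tiny corpus and function names are invented for the example:

from collections import Counter, defaultdict

# Toy "training data": the model only ever knows what is in this text.
corpus = (
    "the robot answered the question and the robot told a story "
    "and the story made the kids smile"
).split()

# Count which word tends to follow which word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start_word, length=8):
    """Greedily append the most likely next word, one step at a time."""
    words = [start_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # nothing in the corpus ever followed this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # echoes patterns from the corpus, the "rear-view mirror"

Change the corpus and the output changes with it, which is the whole point: a model like this can only reflect whatever data it was trained on.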

If you spend some time with Generative AI, you’ll see it’s pretty good already. Is it perfect? No, but it can definitely talk to you about many different things: from hip-hop trends and what really happened between 2Pac and Biggie, all the way to drafting Java code for an idea that you have. It’s not perfect, but it’s very, very good.

There is also a fundamental question about the business model: how do the sources being indexed by Generative AI get credit, and get paid? In computing and other spheres, there is a well-known principle that incorrect or poor-quality input will always produce faulty output. It’s called “Garbage In, Garbage Out.” This means it is critical for Generative AI to be trained on highly valued, widely trusted sources of information, in a win/win arrangement, so that we can trust what the AI is telling us.

One of the risks we’ve seen with social media is the overwhelming spread of misinformation; social platforms have lately become the center of a lot of information that is simply not true. And kids believe it. They believe it all.

That is why the open web, publishers, professional editors, and reliable journalism sources are critical to our kids’ future. The future of humanity really is at stake, but the risk is not Generative AI becoming Skynet; it is that our kids will be fed manipulated information on social networks about crucial topics such as healthcare, science, and politics.

News Publishers are the heroes Generative AI needs — and they will lend legitimacy to our new AI-powered assistants

Generative AI does some really cool things. It can provide stimulating, even inspiring dialogue about practically any subject. It can write passable poems and screenplays. It can produce fantastic art and mimic the rap stylings of any popular artist. It can offer sound advice on occasion.

But because generative AI has been trained on content produced by fallible human beings, it is also littering these channels with misinformation. Some of this bad information can be humorous, like asking Bard “Did Anakin Skywalker fight Darth Vader?” and getting a “Yes, they fought 3 times” (humorous, when everyone knows they are the same person). Or it can be harmful, like asking the AI “Is sunscreen good for you?” and getting a “maybe” because it was trained on inputs that emerged after a popular social media misinformation campaign.

That’s where publishers come in. When these smart systems are trained on credible information and high-quality media, Generative AI reflects the better nature of what the world has to offer.

News publishers have checks and balances in place to report on news accurately. News editors dedicate their entire careers and lives to this. I would trust a journalist’s assessment of breaking news over a social media influencer’s hot take any day. Yes, I said it.

I’m optimistic, and I’m long on the open web and publishers because of the role they are about to play in this LLM revolution. Our kids are about to spend so much more time with robots powered by Generative AI, all trained on awesome publishers from all over the world. And the best part: they’ll spend less time on social media.

The author of this article is Taboola CEO & founder Adam Singolda.