In these wild times, where every dinner table conversation is likely to veer into a spirited debate about AI (artificial intelligence) regulation and whether we'll be waking up to a real-life Terminator
sequel, one thing is crystal clear: we've got an epic challenge on our hands. The biggest problem is that no two people share the same definition and underlying understanding of what AI actually is today and how it works. Don't believe me? Ask everyone in your house this question and you'll soon agree with me. So, the first key step is to define it and get everyone on the same page. AI is software code combined with advanced mathematics that is capable of improving its output by taking feedback. That's it! And no, it doesn't have consciousness or independent thought (yet!), the things that make us human. So, think of AI like a big ol' collection of math formulas, designed to get a job done based on what we humans tell it to do. That job could be anything from making a self-driving car safe on the road to having a friendly chat.
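That "math that improves its output by taking feedback" idea can be sketched in a few lines. The snippet below is a toy illustration only (a single-parameter version of gradient descent, not any real AI system): a number is repeatedly nudged toward a target, using the error as the feedback signal.

```python
# Toy illustration of "code + math that improves its output via feedback".
# A single parameter is repeatedly nudged toward a target value using the
# error as the feedback signal; this is gradient descent in miniature.

def improve_with_feedback(target: float, steps: int = 100, lr: float = 0.1) -> float:
    guess = 0.0
    for _ in range(steps):
        error = guess - target      # feedback: how wrong is the output?
        guess -= lr * error         # adjust the output to shrink the error
    return guess
```

Real systems do this with billions of parameters instead of one, but the loop is the same: produce an output, measure how wrong it is, adjust.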
When it comes to regulations, we've got to make sure we keep our hands on the steering wheel of this AI ride. Rather than getting lost in the nuts and bolts of the code, it's about asking, "Who's telling this AI what to do, and why?"
AI is getting more complicated every day. But that's not always a bad thing. Just look at Google's AI that played the game of Go: it made some wild moves no human could have guessed, but it sure did win the game. What's crucial is keeping an eye on these AI systems, focusing governance on outcomes rather than on the procedures used to achieve those outcomes.
And now that we're heading into the world of generative AI, we've got to watch out for AI making stuff up. Regulators have to make sure AI isn't pulling answers out of thin air. At Fluid AI, we've seen this first-hand when we ingest an organisation's data to provide an AI interactive layer on top. We're dealing with this by adding 'Anti-Hallucination' layers: our way of keeping our AI honest by having it provide references for its answers. But the only way to think about regulation is on the output of AI and where it's being used for decision-making.
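The grounding idea behind such a layer can be sketched simply. This is a minimal illustrative sketch, not Fluid AI's actual implementation: an answer sentence is kept only if enough of its words can be traced back to a retrieved source passage, and the matching source is attached as a reference.

```python
# Minimal sketch of an anti-hallucination-style check (illustrative only,
# not Fluid AI's actual implementation). Answer sentences that cannot be
# traced back to a source passage are dropped as possible hallucinations.

def grounded(sentence: str, sources: list[str], threshold: float = 0.5):
    """Return the index of a source sharing enough words with the
    sentence, or None if nothing supports it."""
    words = {w.lower().strip(".,") for w in sentence.split()}
    for i, src in enumerate(sources):
        src_words = {w.lower().strip(".,") for w in src.split()}
        if words and len(words & src_words) / len(words) >= threshold:
            return i
    return None

def answer_with_references(answer_sentences: list[str], sources: list[str]):
    """Keep only sentences traceable to a source, citing that source."""
    out = []
    for s in answer_sentences:
        idx = grounded(s, sources)
        if idx is not None:
            out.append(f"{s} [source {idx + 1}]")
    return out
```

Production systems use far richer semantic matching than word overlap, but the principle is the same: every claim in the output must point back to something in the ingested data.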
Outsmarting the Villains
AI doesn’t introduce a new way of doing mischievous things, at least none that we know of. It simply increases the speed of doing both good and bad things as they exist today. As such, the existing legal framework of what’s wrong and what’s right largely protects us. For example, an AI that creates fake videos and images to defraud and trick people still falls under the same laws that prevent people from publishing photoshopped renders of folks in an indecorous fashion. Regulation preventing AI development and proliferation isn’t going to stop the bad that AI brings, simply because AI (a combination of math and code) is too easily available and distributable to be controlled.
So the only way to protect against its bad effects is to use AI itself as a defence mechanism against bad actors. Having AIs that screen incoming content and information for fake news and fake imagery, or that improve our cyber defences, is going to be a far more scalable and realistic approach to solving this problem.
Another key consideration to ensure AI remains a force for good is to actually encourage its development globally, so that this powerful technology isn’t monopolistically held for misuse in the hands of a few individuals. Ironically, that is one of the reasons OpenAI was formed in the first place. The more we encourage its development among startups and the open-source world, the more we ensure an equal, level playing field. What we want to ensure is that any form of government regulation doesn’t create a complex moat that only allows the big boys to build and deploy complex AI. Otherwise, you will have created the very thing the regulation was supposed to prevent: too much power, wielded in the hands of a few!
Who Holds the Reins?
Another interesting aspect of AI regulation is the content that these systems learn from and the rights of the creators of that content. You see, AIs learn in a way that's kind of similar to us humans, by soaking up information from stuff we've created, like books, articles, images and more. But they do it way faster, like in turbo mode.
What rights do the folks who've created this content, which AI is feasting on, actually have? No, they won't own the final tricks the AI has learnt. The massive amount of data an AI learns from, sometimes over a trillion data points, won’t give any one creator rights over its output, just as the author of a book doesn't have rights over the work of a human student inspired by her writing. What regulation should allow is for content creators to say, "Hey AI, off my lawn!" if they don't want their work used for AI training. Or maybe they might want to say, "Sure, AI, learn away, but show me the money!" Either way, it's all about giving creators the power to decide if, and how, their work gets used in the wild world of AI training. AI holds immense potential for positive change, but like any potent tech, it's not without its risks.
As we embark on this thrilling journey, the key objective of regulation should be to foster democratisation of this dynamic tech. We need a flexible framework that can mature and adapt as the technology does, rather than one that inhibits innovation due to fears of the unknown.

(The writers are founders, Fluid AI)