Regulating AI without stifling innovation

The need for AI regulation

Most people are probably unaware of the everyday, simple use cases of Artificial Intelligence (AI) in action. Smart home speakers, the Siri and Cortana phone assistants, personalized social media feeds and website adverts are all driven by AI. But at what point does a seamless customer experience stop being enough, and at what point do privacy worries or "creepy" AI-driven ads start to put people off?

It’s not just consumers and end-users that AI will benefit. Many see AI as a driver of business innovation and want any regulation kept light enough not to disrupt the technology's growth. Others, however, have concerns around privacy, discriminatory AI algorithms, unchecked growth and potentially unwarranted use, so AI has become an area of concern for many. As AI adoption increases and it becomes more ubiquitous in both our personal and professional lives, it will come under increasing scrutiny to ensure it is used as a force for good.

With global AI investment expected to reach $232 billion by 2025, and current investment standing at nearly $12.5 billion, the AI market is set to grow rapidly over the next few years. As it does, can the technology continue to grow unchecked, or does it warrant regulation to ensure it is used for good?

Hot button privacy issues

Privacy has always been a hot button issue and that’s not going to change anytime soon. We’re seeing discussions around AI and data privacy on a global scale — the European Commission is considering a five-year ban on facial recognition technology because of potential “big brother” type implications. Last year, IBM declared it would stop offering facial recognition software and claimed that AI systems used by law enforcement departments needed to be tested for bias issues. Amazon and Microsoft soon followed suit.

With AI-generated deepfakes spreading across social media platforms, the bias problems that AI can create are prominent in the media, and there is no shortage of stories highlighting racial or gender bias in various AI systems. This is a major obstacle to public acceptance of AI and one that needs to be fixed sooner rather than later.

It’s important to remember that AI holds tremendous promise, impacting everything from helping cities plan transit routes during peak times to chatbots facilitating greater customer satisfaction, so there need to be clearly defined privacy boundaries. Requiring users to opt in to sharing their data for analysis and processing by AI, much as GDPR does, defines that boundary clearly.

Striking the balance

We have only scratched the surface of AI, with almost endless possibilities for the growth and adoption of the technology, spanning everything from the biggest questions we face today to mundane everyday life hacks. As AI gathers more data, at the personal level but also at the local government, national and global levels, balanced regulation that both protects privacy and gives industry the opportunity to innovate is key.


Regulation forces technologists to think about the long-term side effects of AI and to consider the problems that could arise in a year, a decade or a century. With AI still in its early days, there will be many more proposed plans, initiatives and regulations before we get it right. In theory, the “Global Partnership on AI” is a good idea, because global coalitions can work — just look at the Paris Climate Agreement as an example. But it’s important to strike a balance, and governments will need to remain mindful of regulatory overreach. A global coalition enables a long-term conversation as the technology develops. It’s not just one and done.

For AI to continue to deliver innovative services to end-users, businesses need to work within a framework of regulation without having their hands tied. They must remain cognizant of how a particular AI application is being developed and ensure it does not raise societal concerns, such as gender and racial bias, security compromises, or mass surveillance that increases inequality. A delicate path forward should be taken, with nuanced legislation that serves the greater good without hamstringing innovation. Companies need to take a holistic approach to addressing privacy concerns, rather than a piecemeal one.

Regulate to innovate

In the absence of any regulations or standards, many tech businesses have sought to create their own codes of ethics, or regulations, to guide their development of AI. In 2018, Google published its own AI principles to help guide the ethical development and use of the technology. But without a wider regulatory framework, businesses are free to choose how they develop their AI systems.

Regulation delivers a broad framework to work within, without overstepping its boundaries and becoming restrictive. If we work together to deliver a framework that works for everybody, is developed responsibly and leaves nobody behind, then AI has the power to truly transform our lives for the better.

Prasad Ramakrishnan, CIO, Freshworks.