
Suresh Babu

 

Group Product Manager - NVIDIA


Artificial Intelligence: What Lies Ahead?

Shweta Singh

Artificial Intelligence (AI) is the most disruptive technological innovation of our lifetime, says Suresh Rajasekaran. AI will become as much a part of everyday life as the Internet and social media did before it.

In doing so, AI will not only impact our personal lives but also fundamentally transform how enterprises make decisions and interact with their stakeholders (e.g., employees, vendors, and customers).

The question is not whether AI will play a role but which role it will play and, more importantly, how AI systems and humans can (peacefully) coexist.

Which decisions should be made by AI, which by humans, and which in collaboration is an issue every company will need to deal with in the future.

Experts have been predicting that it will take only a few years until we reach Artificial General Intelligence, the next evolution of AI, in which systems will exhibit behaviour indistinguishable from humans in all respects and possess cognitive, emotional, and social intelligence. Only time will tell whether this will indeed be the case.

The Future Need For Regulation

The fact that AI systems will increasingly be part of our day-to-day lives in the near future raises the question of whether regulation is needed and, if so, in what form.

Although AI is, in essence, objective and without prejudice, that does not mean systems built on AI cannot be biased. Rather than trying to regulate AI itself, the best way to avoid errors such as bias is probably to develop commonly accepted requirements for the training and testing of AI algorithms, similar to the consumer and safety testing protocols used for physical products.

This would allow for stable regulation even if the technical aspects of AI systems evolve over time.
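To make that idea concrete, below is a minimal, hypothetical sketch (not from the article) of what one such pre-deployment test might look like in Python: comparing a model's positive-prediction rates across demographic groups and flagging any gap above an agreed threshold. The data, group labels, and 10% threshold are illustrative assumptions, not a real regulatory standard.

  from collections import defaultdict

  def positive_rate_by_group(predictions, groups):
      # Share of positive (approve/accept) predictions for each group.
      counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
      for pred, group in zip(predictions, groups):
          counts[group][0] += int(pred == 1)
          counts[group][1] += 1
      return {g: pos / total for g, (pos, total) in counts.items()}

  def passes_parity_check(predictions, groups, max_gap=0.10):
      # Fail the check if positive rates across groups differ by more than max_gap.
      rates = positive_rate_by_group(predictions, groups)
      return max(rates.values()) - min(rates.values()) <= max_gap

  # Hypothetical loan-approval outputs, tagged by applicant group.
  preds = [1, 0, 1, 1, 0, 1, 0, 0]
  groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

  print(positive_rate_by_group(preds, groups))  # {'A': 0.75, 'B': 0.25}
  print(passes_parity_check(preds, groups))     # False: the 0.5 gap exceeds 0.10

A real protocol would cover far more than this one criterion (error rates, calibration, data provenance, and so on), but the point stands: it is the test, not the underlying model, that gets standardised, which is what keeps the regulation stable as the technology evolves.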

Just as the automation of manufacturing processes resulted in the loss of blue-collar jobs, the increasing use of AI will reduce the need for white-collar employees and even highly qualified professionals.

One way to regulate and minimise the impact on employment would be to mandate that firms spend a certain percentage of the money saved through automation on retraining employees for jobs that cannot be automated. Governments and enterprises may also decide to limit the use of automation itself.

In France, for example, self-service systems used by public administration bodies can only be accessed during regular working hours. Firms might also restrict the number of hours worked per day to distribute the remaining work more evenly across the workforce.

All this need for regulation inevitably leads to the question: who will guard the guards themselves? AI can be used not only by firms or private individuals but also by states themselves.

China is currently working on a social credit system that combines surveillance, Big Data, and AI to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”

In an opposite move, San Francisco recently decided to ban facial recognition technology. In the end, international coordination in regulation will be needed, similar to what has been done regarding issues such as money laundering or weapons trade.

The nature of AI makes it unlikely that a localised solution that only affects some countries but not others will be effective in the long run.

Nobody knows whether AI will allow us to enhance our own intelligence or whether it will eventually lead us into World War III. However, everyone agrees that it will result in unique legal, ethical, and philosophical challenges that will need to be addressed.

Similar to the Trolley Problem, where an imaginary person must choose between inaction, which leads to the death of many, and action, which leads to the death of few, in a world of self-driving cars these issues will become actual choices that machines and, by extension, their human programmers will need to make.

How do we regulate a technology that is constantly evolving by itself, and one that few experts, let alone politicians, fully understand? How do we write rules that are sufficiently broad to allow for future evolutions in this fast-moving field yet sufficiently precise to avoid everything being considered AI? This is precisely what we should strive to answer.

Suresh’s Top 5 Predictions For AI In 2023

  • Generalist AI Agents will proliferate and solve complex and open-ended problems.

  • Generative AI will move from hype to reality.

  • Digital Twins’ creation and usage will explode in solving complex problems and simulations.

  • AI adoption will be streamlined across enterprises and will become cost-effective.

  • AI/ML Engineers and Tools will become mainstream.

Suresh is an experienced product leader with over sixteen years of industry experience and is currently the Group Product Manager at NVIDIA.

In his current role, he is focused on helping enterprises adopt, accelerate, and scale AI solutions in their businesses. Before joining NVIDIA, Suresh worked at companies such as Samsung, Adobe, and Autodesk. He is passionate about Artificial Intelligence and its profound impact on humans.

Suresh holds a Master’s in Software Engineering from Carnegie Mellon University and an MBA from The University of Chicago Booth School of Business. He currently lives in the San Francisco Bay Area with his wife and two kids.
