Top risks related to Artificial Intelligence projects and how to overcome them

Before adopting AI or an AI-based project, it's important to answer this question - Does AI need the business or does the business need AI to grow, prosper, and compete?

Faisal Husain
Published: 19, Dec 2017

Faisal Husain is an entrepreneur and proven business leader with over 20 years’ financial services and technology expertise. As a CEO of Synechron, Husain is responsible for providing the vision and strategy that brought the company from a self-funded start-up to $500M in revenue. Under Husain’s leadership, Synechron has invested heavily in R&D, launching Synechron’s global FinLabs and six blockchain accelerators. Prior to Synechron, Husain was responsible for developing enterprise-level applications for Merrill Lynch and Dun & Bradstreet. He holds a Bachelor’s in Aeronautical Engineering and a Master’s in Computer Science.


Remember Skynet, the fictional Artificial Intelligence (AI) system that threatened to take over the world in the Terminator franchise? For many of us, that was our first introduction to AI and to the “horrors” it might unleash. That was our imagination in hyper-drive, and over the decades we’ve played out similar scenarios in other major blockbusters, from The Matrix to I, Robot and everything in between. Yet, despite continued fear-mongering, the scenario playing out in real life is a very different picture. AI is being used to address major business challenges such as anti-money laundering and fraud, where large volumes of data make manual resolution by humans impractical. Predictive analytics in financial services, healthcare and other industries is making it easier to plan around negative outcomes and create safer situations for workers and citizens. However, AI does have its risks if it is not managed and governed well.

Today, it seems every second corporation is trying to use AI in some way to achieve one or more business objectives – becoming more efficient, effective and leaner, achieving faster turnaround times, delivering better customer service, reducing costs, and so on. AI is now seen as an answer to all the ills plaguing the world of business, and just about everyone is either already on the bandwagon or ready to jump.

However, it is important to take a step back and think about how to use AI rationally to help our businesses, not to use AI simply because everyone else is doing so. AI has to be understood and used as a technology that helps solve a business problem. There has to be a plan and an operating model for using AI, and it has to be implemented properly after thorough due diligence. AI needs training through data modelling and pilots, and ongoing training and intervention after that, so that humans remain involved in setting and maintaining proper governance and controls.

AI is just another technology
AI will soon become an integral part of everything that we do. We are already seeing a number of businesses across manufacturing, banking, healthcare and other sectors using AI for innovative solutions. However, while AI has helped us remove human error and create more predictable outcomes, it has also raised questions about what the future job market will look like and how AI-powered decisions fit within our existing legal and ethical frameworks.

Given this, businesses should treat AI as they would any other new technology. They should test its benefits, understand its risks, and put proper controls and governance in place to reap the rewards while minimising the limitations. After all, AI is just one more tool in the toolbox: it may be the right solution to a problem, or a better technology may exist. If you are planning to use automation such as Robotics and Natural Language Processing (NLP) for efficiency gains, how are you adapting your workforce to future job needs within the business? When AI models are being trained, what version of the truth are you using to judge their accuracy, and where is the dividing line between what AI can execute automatically and what a human needs to intervene in, or at least review? These are common questions a business would address when implementing any new technology, such as blockchain or cloud, and proper governance should be put in place around AI just as it would be for any other technology.

A business plan and due diligence  
Does AI need the business or does the business need AI to grow, prosper, and compete? This is the fundamental question that every company wanting to implement or use AI needs to answer. If it’s about promoting the technology first, an AI-led approach risks failure if it does not fit into your larger business strategy. CTOs should not be asking, “what cool AI project can I work on?” but rather, “what business challenge do I have, and what approaches are available to address it?” AI may be one of them.

I reiterate that, like any technology – digital, virtual, online – AI needs a strategy and a business plan. Corporations should be realistic about their current capabilities and not pursue AI at the expense of other, more feasible technologies. They have to understand how AI can help, followed by the why, where, and what.

AI needs training
AI, machine learning and deep learning are real, and they will be required unless you want to be left behind. However, before embarking on an ambitious and expensive AI implementation, it is imperative to first have a data storage capability in place, and then to properly train a workforce that can extract data, read it, make sense of it and use the insights to create a distinct advantage. Firms that do not think about the operations and infrastructure that may be prerequisites for AI initiatives may not get very far. That said, start small. If it’s too big a hurdle to get approval for a massive data lake to conduct more advanced cognitive learning, first prove value within the business around automation (RPA and NLP).

Given that AI has created a sense of fear amongst the workforce, firms can also run a small pilot that helps employees understand the technology and how the two can co-exist. Companies, especially those in financial services and business process outsourcing, are already using AI through chatbots. What has happened is that the nature of the jobs has changed – employees in these industries have been re-trained, re-deployed, and re-focused to exist symbiotically with the technology.

Governance and control
Coming back to Skynet, the fictional AI system that took over the world: there is still fear and concern around AI that comes with the unknown, and though that future is imaginary, the lesson is that AI needs proper governance and control. Like any other business tool, the goals for AI and the overall goals of the business need to be in sync. When the association between them is only casual, AI goals often crop up that have no bearing on business goals – and when this happens, you wonder what is really being accomplished.

AI, like any other disruptive technology, has its pros and cons. What is important is to use AI to solve real business challenges and take the responsibility to minimise trade-offs. Remember, AI is a technology, like any other, that will only be beneficial if used sensibly and wisely.

- By Faisal Husain, Co-founder and CEO, Synechron
