AI: Understanding the technology beyond the hype

Any use of AI to augment or automate human work is not an inherent feature of the technology but a decision made by human managers. As such, it's incumbent on all of us to develop a better understanding of AI and how to leverage it ethically. Citing ongoing research and case studies, IESE Business School Prof. Sampsa Samila elaborates in this interview.

IESE Business School
Published: Nov 17, 2023 10:55:14 AM IST
Updated: Nov 17, 2023 12:19:20 PM IST

Image: Shutterstock

Mention artificial intelligence (AI), and the first thing that pops into your head is likely “it’s coming for my job.” Or “it’s going to wipe out humanity.” Or maybe it’s the open letter signed by AI developers demanding that we put the brakes on this thing until we understand it better.

As academic director of the AI and the Future of Management Initiative, IESE Prof. Sampsa Samila has been trying to do just that — understand AI better, coordinating several ongoing research projects on AI and the future of work.

He urges everyone to remain calm as he walks us through the issues. As he reminds us, AI is a tool not to be feared, and it’s up to us to use it well based on solid business concepts.

Q. Tell us about the aims of the AI and the Future of Management Initiative.
It’s a multidisciplinary research agenda drawing together different IESE faculty members to conduct qualitative and quantitative studies on AI across the whole spectrum of business, from labour markets to strategy to organisations to leadership to human-machine collaboration to ethics. We’re also producing case studies for use across all our programmes. The aim is to help business leaders develop their knowledge and skills related to AI so they can manage it ethically and in a socially responsible way.

One of the things we’re studying is the effect of automation on labour markets. MIT professor Daron Acemoglu has researched the phenomenon of automation technologies displacing workers without delivering real gains in productivity, cost savings or quality of service; in other words, companies are automating even when it is inefficient. In our research, we’re identifying a mechanism that may explain why that happens.

Part of the explanation may be that automation, or technological development in general, is perceived as inevitable. Trade, by contrast, is perceived more as a policy choice: people whose jobs are directly affected by rising imports are more likely to vote for protectionism, for example. Yet people respond far more passively to labour automation. Especially where companies have sufficient labour market power, such as when they are a large local employer with very few competitors in the labour market, the threat of unemployment makes people more willing to accept automation even when it goes against their interests; it may even push down the wages of those employees who remain in jobs. We have some empirical evidence in this direction.

Q. Regarding the “inevitability” of AI, there has been growing chatter about the “inevitable” threats of AI to our very human existence. Indeed, as we’re doing this interview, a newspaper headline reads: “Five ways AI might destroy the world: Everyone on Earth could fall over dead in the same second.” How much sleep should we be losing over this?
I think the extinction risk is not something we should be worried about. Other concerns are more realistic and pressing: the adjustment process, income inequality and the concentration of economic power.

Going back to our research, the concern in that study is not AI itself; the real concern is labour market concentration. AI or automation simply hands powerful companies a new weapon. Technologies do not create or destroy jobs by themselves. That is done by companies led by managers who make specific choices. Any use of AI to augment or automate human work is not an inherent feature of the technology but a decision made by human managers.

I’m more worried that the high-profile lobbying to regulate or pause further AI development will do more to block new market entrants, limit competition, and increase concentration until we end up with a global economy dominated by one, two or three large US companies. As our research highlights, concentrated economic power is a far more real and tangible risk to welfare than our hypothetical extinction by AI.

Q. So, if we as companies, managers and employees are ultimately the determinants of AI’s future, what should we do?
Since the challenges posed by AI lean more toward management than toward the technology itself, the role of managers becomes pivotal. If AI transforms core business processes, that isn’t something you can delegate to third parties or just let happen. Each of us is responsible for developing a sound conceptual understanding of AI, its application within the business framework, and how to leverage it. And given CEOs’ larger responsibilities and decision-making power, it is even more incumbent on them to be fully up to date on the technology and where it might go next. Their capacity to lead and motivate is crucial in propelling the entire organisation through this transformative journey.

Q. What about recruiting more junior-level employees with the new AI skill sets you’re looking for?
One approach used by a company we’re studying: as certain people come up for retirement, you look at ways to automate some of their old tasks and then recruit new profiles with the skills and capabilities you require, without the radical restructuring of the entire organisation that raises so many of the worries related to AI. Whenever I talk with executives, their concerns are often about the treatment of existing employees: What do we do with the people who don’t have the requisite skills? How do we teach them? What if they’re not interested in learning and using these new tools?

Q. Good question: what do you do?
If you want your employees to be willing and interested in learning and using AI, one of the critical things is designing AI tools that benefit them. This idea is relatively obvious and well supported by evidence, but it is not always easy to implement. New technologies create new opportunities, and that will energise some employees. But in any transition there will always be some resistance, and hence challenges in managing it humanely.

Q. What other cases are you working on?
We have another new case on OpenAI, considering the ethical and business implications of large language models like GPT-4. We discuss, among other things, how to ensure that AI benefits all humanity. We are currently working with companies actively using generative AI at work.

Q. What are some of the dilemmas emerging?
Intellectual property is an important one. Every day, we hear of new lawsuits filed by authors, artists and other content creators who say their copyrighted materials were used without permission to train AI algorithms that now reproduce their content in whatever they generate. Does training an algorithm fall under “fair use,” under which a limited amount of copyrighted material may be used without consent? That’s an ethical question but also a legal one for regulators and courts to hash out. To encourage more AI development, legislators in Japan declared that training an algorithm on any material does not violate copyright.

Another dilemma is whether non-human-generated content can be copyrighted or patented. In the US, the law is very clear that it can’t, but some of those laws are over 200 years old, dating back to when only humans could be inventors. How much must AI-generated content be modified before a person can claim copyright over it? What percentage of an AI invention must involve human intelligence before it qualifies for protection? South Africa became the first country in the world to allow AI-generated inventions to receive patent protection, a move some say went too far, too soon.

Q. It sounds like there will be different regulatory regimes on AI.
That wouldn’t necessarily be a bad thing. It’s not that different from what we have now. At some point, India blocked Chinese apps, and China blocked American ones, which led to the development of domestic Chinese apps. Meta couldn’t launch its Threads app in Europe because of the EU’s stricter privacy laws. If having all these different regimes actually encourages a strengthening of antitrust regulation, then we may start to see the benefits of market competition rather than the negative situation we have now, where all the economic power is concentrated in a few big corporations, leading to higher prices for consumers and lower wages for workers.

As things stand, we’re all dependent on Microsoft. We use Google for searching. All my devices are Apple, and I would have a hard time switching to anything else. Will ChatGPT become the next Big Tech player that dominates the field and locks everyone else out?

This brings me to my larger point about AI: It’s a technological tool, and while technology changes, the laws of economics and the fundamentals of competitive advantage do not. Just as the internet changed certain economic features but didn’t change the underlying economic laws, I don’t see AI changing the laws of economics or strategy. In our OpenAI case, we look at barriers to entry; the concept is the same as it has always been, but we try to understand what it means now in this new context of large language models. Managers need to approach AI with this same kind of conceptual thinking.

Q. Along these lines of sticking to the basics, should we keep learning programming then?
We need to keep learning programming because, as programming becomes more efficient and thus cheaper, we will do more of it. In line with the Jevons paradox in economics, as the productivity of programming increases, we may actually need more programmers because we will have many more things to program. So it’s not a foregone conclusion that we will need fewer programmers. There’s some evidence that people who used ChatGPT to write code did so faster, but the code was of lower quality and less secure, with more flaws and bugs, than purely human-written code. Hence, programming is unlikely to be entirely automated anytime soon.
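To make the Jevons logic concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a purely illustrative assumption, not a figure from the research: the point is only that whether AI-driven productivity gains shrink or grow the number of programmers needed depends on how much the demand for software rises once software gets cheaper.

    # Illustrative Jevons-paradox arithmetic; all figures are hypothetical.
    def programmers_needed(projects_demanded: float, output_per_programmer: float) -> float:
        """Programmers required to deliver the demanded volume of software."""
        return projects_demanded / output_per_programmer

    # Before AI assistance: 1,000 projects demanded, 1 project per programmer.
    before = programmers_needed(1_000, 1.0)           # 1000 programmers

    # AI assistance doubles productivity, so software gets cheaper.
    # If cheaper software triggers 2.5x more demand (elastic demand)...
    after_elastic = programmers_needed(2_500, 2.0)    # 1250 programmers: more jobs

    # ...but if demand barely grows (inelastic demand), fewer are needed.
    after_inelastic = programmers_needed(1_200, 2.0)  # 600 programmers

    print(before, after_elastic, after_inelastic)

The crossover is the price elasticity of demand for software: the Jevons paradox bites precisely when demand grows faster than productivity.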

Another issue is that programmers no longer write code from scratch; they put together existing code libraries. If the AI puts together the libraries for you, and a mistake is made, then everybody who uses that program is going to be affected by the same embedded flaw. If you know programming, you understand how this works.

But if you don’t, and you start using these AI tools to help you think and reason, and everyone is using the same tool for their thinking, then this could shift the entire distribution of thinking on a topic in one (potentially negative) direction. This is more than bias. Many people have many different biases. I’m talking about an algorithm with one particular bias that everyone adopts, so that we all assume the same bias in everything we do and don’t even realise it.

Furthermore, I think programming is useful for learning conceptual logical thinking. It’s how large language models learned their “reasoning” abilities, so understanding programming helps to understand how the AI “reasons.”

Q. As AI is evolving so quickly, is it possible to make any future predictions that won’t be out of date when this interview appears?
All I dare say is that, so far, the big predictions made about AI have failed to materialise, whether it’s the complete eradication of some employment category or AI progressing much faster than it has. For example, because of early studies suggesting AI could detect diseases in radiographs better than humans, AI was predicted to replace radiologists. Obviously, that hasn’t happened; if anything, we have a shortage of radiologists. For most radiologists, reading scans is only one task out of their entire job. Despite those early studies, we still don’t have an actual AI system that reliably diagnoses images better than a human doctor.

So, what will the future hold for large language models and generative AI? Progress will depend on computational costs, training data availability and model architecture improvements. There has been a consistent finding that more is better: more computation, more training data and bigger models. Will this continue to hold? Quite possibly, but it is certainly not guaranteed. And as the models get bigger, the computational costs rise considerably.
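That “more is better” pattern is usually summarised as a power-law scaling curve. The short sketch below uses the generic functional form with made-up constants (the coefficient and exponent are illustrative assumptions, not values from any published scaling study) to show why returns diminish even as the compute bill keeps climbing.

    # Generic power-law scaling curve; the constants are purely illustrative.
    def model_loss(compute_flops: float, coefficient: float = 10.0, exponent: float = 0.05) -> float:
        """Loss falls as a power law in training compute: coefficient * compute**(-exponent)."""
        return coefficient * compute_flops ** (-exponent)

    # Training compute spanning six orders of magnitude (in FLOPs).
    for compute in (1e18, 1e20, 1e22, 1e24):
        print(f"compute={compute:.0e}  loss={model_loss(compute):.3f}")

    # Each 100x jump in compute buys roughly the same modest multiplicative
    # drop in loss, while the compute cost itself grows 100x.

In other words, the curve keeps going down, but every additional step along it gets disproportionately more expensive, which is exactly why it is an open question how long “bigger is better” continues to pay off.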

We are also facing memory limits on the current GPU chips that do the computation, limiting the practical size of the models. Technological progress will ameliorate these issues, but how fast and at what cost remains to be seen. Nvidia is launching a new generation of GPUs, and the practical impact should be felt soon.

The availability of useful training data is also a factor. Could it be that developers have already fed all the useful training material into these systems, and that there is now a shortage of complex “reasoning” data and materials? Simply feeding in, say, novels may not help the models get any better, because novels don’t contain fundamentally new, conceptually strong content that can be used for additional training.

All that being said, maybe the next generation will surprise us!

Tips for engaging with AI

  • Ignore the hype. Instead of worrying about your eventual extinction, focus on how AI can be useful for your work.
  • Separate the signal from the noise. Focus on the core properties of the tool.
  • Experiment with it:
    1) Write short texts. I find it good for writing abstracts or an outline for a paper I’m working on.
    2) Ask it for ideas. Even if you never use them all, there may be one or two gems.
    3) Exchange ideas. Having a back-and-forth chat can challenge and hone your thinking on a subject.
  • Understand AI at a conceptual level. Make sure you have a deep understanding of how your own business works — the core idea of what you do for your customers and the core value proposition that’s hard for others to replicate — before considering how AI might provide additional value, and how you might capture part of that value.
  • Customise it. It has to make sense within your context and according to your values.

Sampsa Samila acknowledges funding from the Spanish Ministry of Science and Innovation/State Research Agency, the European Commission Horizon 2020 program, the Social Trends Institute and AGAUR (Government of Catalonia) in support of ongoing research with IESE colleagues.
 
This interview was first published in IESE Business School Insight #165.

[This article has been reproduced with permission from IESE Business School. www.iese.edu/ Views expressed are personal.]
