LinkedIn ran experiments on more than 20 million users over five years that, while intended to improve how the platform worked for members, could have affected some people’s livelihoods, according to a new study.
In experiments conducted around the world from 2015 to 2019, LinkedIn randomly varied the proportion of weak and strong contacts suggested by its “People You May Know” algorithm — the company’s automated system for recommending new connections to its users. The tests were detailed in a study published this month in the journal Science and co-authored by researchers at LinkedIn, the Massachusetts Institute of Technology, Stanford University and Harvard Business School.
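To make the study’s manipulation concrete: the sketch below shows one way a recommender could vary the share of weak-tie suggestions a user sees. Everything here — the function name, the weak/strong pools and the per-user seeding — is an illustrative assumption for exposition; LinkedIn’s actual “People You May Know” system is not public.

```python
import random

def recommend(user_id, weak_pool, strong_pool, weak_share, k=10):
    """Illustrative sketch only (not LinkedIn's code): fill up to k
    recommendation slots, drawing each slot from the weak-tie pool
    with probability `weak_share`, otherwise from the strong-tie pool."""
    rng = random.Random(user_id)  # deterministic per user, so repeat visits match
    weak, strong = list(weak_pool), list(strong_pool)  # don't mutate caller's lists
    recs = []
    while len(recs) < k and (weak or strong):
        use_weak = rng.random() < weak_share
        # Fall back to whichever pool still has candidates.
        pool = weak if (use_weak and weak) or not strong else strong
        recs.append(pool.pop(rng.randrange(len(pool))))
    return recs

# A hypothetical treatment arm might set weak_share=0.7 while a control
# arm uses 0.3; comparing job outcomes across arms is what turns a
# product tweak into an experiment.
weak = [f"weak-{i}" for i in range(20)]
strong = [f"strong-{i}" for i in range(20)]
print(recommend(42, weak, strong, weak_share=0.7, k=5))
```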
LinkedIn’s algorithmic experiments may come as a surprise to millions of people because the company did not inform users that the tests were underway.
Tech giants like LinkedIn, the world’s largest professional network, routinely run large-scale experiments in which they try out different versions of app features, web designs and algorithms on different people. The long-standing practice, called A/B testing, is intended to improve consumers’ experiences and keep them engaged, which helps the companies make money through premium membership fees or advertising. Users often have no idea that companies are running the tests on them. (The New York Times uses such tests to assess the wording of headlines and to make decisions about the products and features the company releases.)
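For readers unfamiliar with the mechanics, A/B assignment is commonly done by hashing a stable user identifier together with an experiment name, so each user consistently lands in one variant without any stored assignment state. The sketch below shows that generic pattern; the function and experiment names are hypothetical and do not describe any particular company’s system.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")):
    """Generic A/B bucketing sketch: hash (experiment, user) so the same
    user always sees the same variant of a given test."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user gets the same bucket every time this runs.
print(assign_variant("user-12345", "headline-test"))
```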
But the changes made by LinkedIn are indicative of how such tweaks to widely used algorithms can become social engineering experiments with potentially life-altering consequences for many people. Experts who study the societal effects of computing said conducting long, large-scale experiments on people that could affect their job prospects, in ways that are invisible to them, raised questions about industry transparency and research oversight.
©2022 New York Times News Service