Ten interesting things we read this week

Some of the most interesting topics covered in this week's iteration are related to 'Meaning of time', 'Future belongs to polymaths', and 'how to learn'

Published: May 12, 2018

Image: Shutterstock

At Ambit, we spend a lot of time reading articles that cover a wide gamut of topics, including investment analysis, psychology, science, technology and philosophy. We have been sharing our favourite reads with clients under our weekly ‘Ten Interesting Things’ product.
Here are the ten most interesting pieces that we read in the week ended May 11, 2018.

1) The meaning of time [Source: Financial Times]
Time is not uniform: it flows at a different speed depending on where you are and how you move. In the 2014 film Interstellar, the hero travels to the vicinity of a black hole. On his return to Earth, he finds his daughter older than himself: she is an elderly lady, he is still middle-aged. This is not Hollywood fantasy, it is how the world truly works. If we do not experience similar time distortions in our daily life, it is only because here on Earth they are too small for us to notice. Also, it is not the first time science has shown our intuitions to be mistaken. Science is constantly learning something new, and more often than not such learning means jettisoning wrong ideas. That is why science often clashes with common sense — our intuition wants Earth to be flat and still, and time to be the same everywhere. None of this is true.

The uniform nature of time is scarcely the only feature where our intuition has turned out to be mistaken. To everybody’s surprise, the difference between past and future fails to show up in the elementary equations that govern the physical world. But the discovery that has been the most disconcerting of all has been finding out that the notion of “present” does not make sense in the larger universe; it only makes sense in the vicinity of us slow humans. “Here-and-now” is a well-defined notion in science, but “now” on its own is not. Philosophers are struggling greatly with this discovery: if we call reality that which exists now, what is reality after we have realized that there isn’t a well-defined “now” all over the universe? The very grammar we use to talk about things, in which verbs have only a past, future and present tense, is inadequate to describe the ways of nature.

In physics, a great debate on the nature of physical time has developed in an arc spanning from Aristotle to Newton and from Newton to Einstein. Our understanding has repeatedly deepened in the course of this process. For Aristotle, time is just a way to count what happens, but it becomes an autonomous flowing variable for Newton and is then reinterpreted by Einstein as a feature of the gravitational field. Einstein’s great theory, general relativity, keeps receiving spectacular empirical support — confirming such counter-intuitive predictions as black holes, expansion of the universe, time dilation, gravitational waves and more. These give strong credibility to Einstein’s picture and continue to rule out alternative conjectures. But there is certainly more to learn. We do not yet know the fate of matter falling into the black holes we see in the sky, where Einstein’s theory says that time comes to an end. And the full complexity of what we perceive as the flow of time is not accounted for by general relativity alone: it involves quantum theory, thermodynamics and probably more, including neuroscience and the cognitive sciences.

According to the author, there is a rather large consensus in the scientific community around the expectation that the concept of physical time is likely to move even further away from our intuition, when its quantum properties are taken into account. Tentative theories of quantum spacetime have little that resembles time as we experience it. The basic equations of the quantum theory describe processes in which things change but no single time variable can track all possible changes. Does all this suggest our perception of time is illusory? It does not. But it indicates that what we perceive as time may not be a simple and elementary aspect of nature, but rather a complex phenomenon with many layers, each needing to be addressed by a different chapter of science. This is the true question about the nature of time: which aspects of our sense of it pertain to which domain of science? The author suspects that what we call the ‘flowing’ of time has to be understood by studying the structure of our brain rather than by studying physics.

Evolution has shaped our brain into a machine that feeds off memory in order to anticipate the future. This is what we are listening to when we listen to the passing of time. Our understanding depends on the neural structures shaped by the peculiar environment in which we live. They capture a very imprecise version of the actual temporal structure of reality. Our experience is an approximation of an approximation of a description of the world from our particular perspective as beings dependent on the growth of entropy, anchored to the thermodynamical arrow of time. For us, there is, as Ecclesiastes has it, a time to be born and a time to die. The time we experience is actually a multilayered, complex concept with distinct properties rooted in distinct layers of reality. Many parts of the full story are still far from being clear. Time remains, to a large extent, a mystery, perhaps the greatest one. A mystery that relates to issues ranging from the fate of black holes to the enigma of our individual identity and consciousness.

2) How much is a word worth? [Source: Medium.com]
What hurts today is not just the quality of prose but the declining pay for freelance writers. Freelance writers have long tolerated a wide range of rates. Nearly a century ago, a writer named Ring Lardner declared that he would “rather write for the New Yorker at five cents a word than for Cosmopolitan at one dollar a word.” It’s hard to think of another profession in which pay for comparable work can vary so much from assignment to assignment. Freelance writers have no collective with which to bargain, they are not subject to minimum wage laws, and their pay fluctuates all the time. For those reasons, it’s hard to keep track of the averages (and few organizations are compelled to try). But back in 2001, the National Writers Union published a report on pay rates for freelance writers. The report figured that to earn the median wage for college grads, $50,000 per year, writers needed to pitch, sell, report, write, edit, publish, and be paid an average of $1 per word for 3,000 to 5,000 words a month. Adjusted for inflation, that’s about $1.40 per word today.

During the past 52 years, a single dollar has lost nearly 87% of its value, and so have the words of professional freelance writers. That has meant, unavoidably, a big change in the quality of the job. It’s hard to understand how it happened. Ring Lardner was an elite writer of his time, but even his charity rate doesn’t look bad these days. Adjusted for inflation, five cents per word (from his generation) is now worth about 70 cents, which is considered a respectable fee at legacy publications and well-funded start-ups. The $1 per word Lardner got from Cosmo, on the other hand, is worth over $14 now. Twelve of Lardner’s stories — let’s call that a year’s worth of work for a feature writer — would earn him $600,000 in 2018.
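The article’s inflation arithmetic can be sanity-checked with a short sketch. The ~14× factor between Lardner’s 1920s dollars and 2018 dollars, and the 3,500-word story length, are assumptions inferred from the article’s own figures, not an official CPI lookup:

```python
# Illustrative check of the article's inflation arithmetic.
# ASSUMPTION: a ~14x factor between Lardner-era (1920s) dollars and
# 2018 dollars, implied by the article's "$1 then is worth over $14 now".
INFLATION_FACTOR = 14.0

def adjust(rate_then: float) -> float:
    """Convert a 1920s per-word rate into approximate 2018 dollars."""
    return rate_then * INFLATION_FACTOR

charity_rate = adjust(0.05)  # Lardner's five-cents-a-word New Yorker rate -> ~$0.70
cosmo_rate = adjust(1.00)    # his $1-a-word Cosmopolitan rate -> ~$14.00

# Twelve stories of ~3,500 words each (an assumed feature length) at the
# Cosmo rate lands near the article's "$600,000 a year" figure.
annual_earnings = 12 * 3500 * cosmo_rate
```

At these assumed figures the annual total comes out just under $600,000, consistent with the article’s round number.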

The first account of a publication offering $1 per word comes from 1908. It was for a type of story that remains the single most expensive genre in writing: anything “post-presidential.” The Fourth Estate, an early 20th-century weekly newspaper about the media, reported that Theodore Roosevelt was fielding multiple offers at the unheard-of fee (plus expenses!) to write about the hunting trip he planned to take after he left office. “At the rate things are going Mr. Roosevelt will find it far more profitable to shoot game in Africa than to be President of the United States,” the Fourth Estate joked. The press buzzed: $1 a word! After that, $1 per word became a sort of celebrity rate, a way to indicate epic importance.

In the expanding post-war years, $1 a word also spread beyond the province of imperialists and dictators. A 1952 issue of the communications journal Printers’ Ink cited $1 as the top standard word rate for big names at large-circulation women’s magazines. Ten years later, Time reported that it was the rate for excellent unsolicited submissions to Reader’s Digest. By the mid-1960s, $1 was standard at the highest-circulation national magazines. It’s what Playboy paid, and, looking through old copies, it seems like the magazine could get anyone in the world to write for it. J.G. Ballard’s 1967 story “The Dead Astronaut” commanded $4,000; that’s somewhere around $30,000 in today’s buying power. It was an exceptionally well-paid time to be a professional freelance writer. Over time, however, the rate declined — or, rather, it stayed the same; it wasn’t adjusted for inflation.

There are consequences to the declining value of the written freelance word. The most obvious is that skilled and insightful writers will ditch the profession for greener (but arguably less pro-social) pastures. Even the author of this article doesn’t know what a word is worth. The historical record certainly suggests it used to be worth more, but long-form writers also know that their work can be inefficient. They are people who care too much about their subjects, whose depth of interest defies the rational allocation of labour time. The rational thing for individual publications is almost certainly to continue tightening the screws, hold the nominal rates as close as possible to where they were in the 1960s, increase annual output from full-time staffers (who are facing more competition for their jobs), and find writers who are used to writing a lot for a little.

3) Climate change and automation threaten economic convergence [Source: Financial Times]
Economic convergence, the process by which poorer countries have been catching up with their richer counterparts, has been the hallmark of the global economy since the 1970s. In this piece, Arvind Subramanian, India’s Chief Economic Adviser, asks whether this golden age may now be behind us. Some will argue that the relentless march of technology means the lives of people will continue to improve. But there are reasons to worry. The late convergers face three challenges that largely spared their predecessors in East Asia in the previous four decades: the backlash against globalisation, climate change and automation.

The early convergers like China posted annual rates of export growth of more than 20%, without which their economic miracles could not have occurred. But the political repudiation of globalisation means that the late convergers of sub-Saharan Africa face a more hostile trading environment. As a result, the late convergers as a group, and large individual countries such as India, cannot hope to export their way to growth without risking a protectionist backlash in advanced markets. Furthermore, successful development requires people to leave agriculture for more dynamic and more productive sectors. The move away from farming must happen against the background of rising agricultural productivity so that the population can still be fed by the remaining farmers while the “leavers” are equipped with decent education and skills that enable them to find opportunities elsewhere. But this is not happening, at least not in India, where agricultural productivity has remained stagnant. Climate change threatens to keep agriculture vulnerable and locked in low productivity in large swaths of Asia and sub-Saharan Africa. The recent Indian Economic Survey indicates that climate change could depress yields by between 20 and 30 per cent, especially in water-scarce regions.

The attraction of east Asian-style growth led by labour-intensive manufacturing is its potential to employ a substantial portion of the workforce. Pushing back against worries that the late convergers might face “premature de-industrialisation”, research by the IMF claims that service sectors can substitute for manufacturing as engines of growth (it identifies finance and business services as key). But these employ only a small fraction of the workforce. Moreover, these sectors require better skills than manufacturing did for east Asians all those years ago - skills that a large part of the labour force lacks. In rural India, for example, about 20 per cent of 13- to 14-year-olds cannot attain the basic reading and arithmetic levels that are required of much younger children. Services-led growth can only help the skilled few.

This skills deficit is compounded by a third challenge. Automation and robotics will demand better skills that the late convergers cannot dream of providing. And if robots can start cutting soft cloth, even labour-intensive manufacturing will become vulnerable. Already, there are worrying signs: per capita growth rates for the late convergers have declined in the past decade. They face the bleak prospect that the development transition will be about large-scale migration of labour from low productivity agriculture to not-so-high productivity services, from rural informality to urban informality. This is a locational change, not a structural transformation. So policymakers need to be asking if these global challenges will lead to a “late converger stall” and bring an end to the golden age of economic convergence.

4) The expert generalist: Why the future belongs to polymaths [Source: Medium.com]
Some of history’s greatest contributions have come from polymaths. Aristotle practically invented half a dozen fields of study across philosophy. Galileo was as much a physicist as he was an engineer when he helped kick-start the scientific revolution. Da Vinci might have been even more famous as an inventor than an artist had his notebooks ever been published. Even in the last 100 years, we have had people like John von Neumann and Herbert Simon who have made breakthrough advances across fields as diverse as computer science, economics, and psychology. That is, of course, not to detract from the specialists who have pushed our progress forward. In fact, until now, these specialists have far outnumbered the polymaths in both their historical ranks and their contributions.

It takes a lot of time to master the depths of a specific field so that you can eventually add something that pushes it ahead. From this point of view, it makes sense that polymaths have been as scarce as they have been. Still, it’s clear that whenever we have had giants like Aristotle, Galileo, and da Vinci, the contributions they made even in specialised fields may not have been made in the same way if they hadn’t attacked a problem with a diverse inventory of knowledge and understanding. Polymaths see the world differently. They make connections that are otherwise ignored, and they have the advantage of a unique perspective. One of the reasons Aristotle created so many sub-fields of philosophy and early forms of science is that these fields were so young back then. They were branches of the same underlying tree trunk, and Aristotle had a deep enough understanding of what was contained in that trunk to then divide it into different parts and make his early contributions.

The tree trunk is reality, and the branches are the different disciplines, which then become their own trunks of knowledge with branches. What polymaths realize by studying the different branches is that many of them have the same foundation, and if this foundation is deeply understood then all they need to do is apply that ingrained knowledge to a different context rather than do the work of surface-level specialisation. The big difference between the approaches of a polymath and a specialist is that the specialist picks a spot and then goes deep, whereas the polymath is on a lane that continuously gets wider.

You learn how to learn by continuously challenging yourself to grasp concepts of a broad variety. This ironically then allows you to specialise in something else faster if you choose. This is an incredibly valuable advantage. It explains how some of history’s polymaths were able to contribute in such a specialised way even though they were primarily focused on going broad. Now, in a world where Artificial Narrow Intelligence systems are going to displace most routine, specialised work, it isn’t too much of a stretch to assume that this skill of learning to learn across disciplines may just be the difference between those who reinvent themselves and those who don’t. Traditionally, the idea of having a single career over the course of a life wasn’t unreasonable. The future, however, looks different. People will have multiple careers that differ significantly. Even if they don’t, we will see more and more project-based work, which will require similar skills. In such a world, the learning ability of a polymath may just be the difference.

Further, the internet has not only democratised knowledge, but made it so accessible that those who are curious enough can’t help but embrace the approach of a polymath. As such, we’re going to see more and more people playing at the intersection of different disciplines. While specialisation will still have its place, the boundaries between the many aspects of reality are going to continue to be blurred, and those who can comfortably embrace such blurring will thrive. Although this may appear to many as unfamiliar, the truth is that it’s actually a far more accurate representation of what is going on. We’ve just been conditioned to think otherwise.

5) How the best restaurants in the world balance innovation and consistency [Source: HBR]
Despite being able to charge hundreds of dollars for a meal and being fully booked months in advance, top restaurants often still have a hard time turning a profit. And they face an even greater challenge: maintaining flawless consistency, while simultaneously being innovative and cutting-edge. While cooking is seen as creative, high-end cooking is mainly about constant, rigorous repetition, in a highly controlled and hierarchical environment. To receive three Michelin stars, the highest rating given by the prestigious Michelin Guide, restaurants must deliver a consistently flawless experience over many visits. This means achieving precise standardisation and strong quality control.

At The Fat Duck in the UK (which has had three Michelin stars since 2004, except in 2016 when it closed for refurbishment), cooking temperatures are systematically controlled to within 0.1°C, and most recipes are specified with up to 40 steps for a single component on a plate. Each cook is highly trained and selectively recruited, yet he or she will only be tasked with producing a few components, and will practice hundreds of times under direct supervision before achieving the necessary level of craftsmanship. The preparations, produced by small teams or individual cooks, are progressively assembled, with sous-chefs (akin to middle managers) controlling the quality at every step. Before the final dishes are served, the head chef personally tastes a sample from each batch, maintaining control over every single aspect.

Noma obtained the top spot in the 50 Best for its reinvention of Nordic cuisine, while it was only granted two Michelin stars; and Paul Bocuse’s restaurant, the oldest restaurant with three Michelin stars (keeping the ranking for over 40 years), has served virtually the same menu for decades and has never made the 50 Best list. A handful of extraordinary restaurants have managed to deliver both the flawless standards of three Michelin stars and the innovation demanded by the 50 Best list – and they’ve managed to leverage this acclaim to achieve growth. The first restaurant to achieve both was El Bulli in Spain. With only one Michelin star in 1987, the restaurant decided to try something new. Since the business was particularly slow during the winter, its owners, Ferran Adrià and Juli Soler, decided to close shop 2-5 months a year to travel and search for new dish ideas. In 1990, they gained a second Michelin star, and in 1994, they became the first high-end restaurant to invest in a development team and a lab.

Other restaurants, like the Fat Duck and El Celler de Can Roca in Spain, also set up fully fledged test kitchens before attaining the top ranking in both guides. While a dedicated lab expands a restaurant’s capacity for R&D, innovation more importantly has to be embedded in the DNA of the organisation. High-end restaurants that cannot afford a team and space solely devoted to R&D still make innovation a key value alongside consistency. Whether or not they have a lab, all the top spots in both the Michelin and 50 Best lists implement processes to encourage creativity and learning beyond the leadership or lab team, as well as processes to generate, prioritise, refine, and standardise ideas.

The most highly acclaimed restaurants embed creativity and learning across the organisation by creating spaces and processes for both collective input and focused development. They show that a culture of precision and attention to detail can co-exist with constant re-invention, and by leveraging this core competence to achieve prestigious rankings, partnerships, and associated businesses, generate growth.

6) The world is not as gloomy, or as wonderful, as you think [Source: Financial Times]
There exists a gap between the world as we intuitively perceive it, and the world as described in spreadsheets. Nowhere is this gap more obvious than when we are invited to reflect on whether things are going well, or badly. According to Tim Harford, the British economist, with some telling exceptions, the situation is this: the world is getting better in many of the ways that matter, but we simply don’t realise that this is true. Population growth has slowed dramatically. Most of the world’s children have been vaccinated against at least one disease. Girls are rapidly catching up with boys in their access to education. The world is full of flaws, but progress is not only possible — it is happening.

A new book, Factfulness, by Anna Rosling Rönnlund, Ola Rosling and the late Hans Rosling, describes this knowledge gap, which is at times grotesque: two-thirds of US citizens believe the global proportion of people living in extreme poverty has doubled in the past couple of decades; it has halved. Nor are our misperceptions limited to global development. Surveys by the polling company Ipsos Mori show that citizens of the developed world are also ignorant about their own countries. Most people vastly overestimate the prevalence of crime (which in the UK is dramatically down since the 1990s) and teenage pregnancy (which affects fewer than 1 per cent of 13- to 15-year-old girls). They also seriously overestimate the size of the Muslim population in the west, which suggests that the concerns of tabloid newspapers loom large in our imaginations. Mr. Harford says that this is not just a statistical phenomenon — it’s a political and psychological puzzle. How worried should we really be about unemployment, vandalism, immigration, litter, bad hospitals, or drug dealing?

There is no objective answer, but he says that there is a strong tendency for people to be concerned about these issues for their nation, but more relaxed about their local area. We don’t see a serious problem where we live, but we feel strongly that trouble is all around us, just over the horizon. The economist Max Roser — creator of Our World in Data — calls this “local optimism and national pessimism”. The mismatch is particularly stark when people are asked about their own happiness. Almost all of us are reasonably content: in the UK, 92% of us are “rather happy” or “very happy” with our lives. But we believe that fewer than half of our fellow citizens are in the same cheery situation. The UK is typical in this respect: full of happy people who believe they are surrounded by misery. This generalised pessimism seems powerful. The one global question that people reliably get right, despite ferocious misinformation campaigns, is the one where the news is bad: do climate experts believe the planet will get warmer over the next century?

So it would be tempting to conclude that we are all systematically too pessimistic about everything except our own experience. That is not quite true. Saudi Arabians, for instance, are far too sanguine about the prevalence of obesity: they think a quarter of the nation is overweight or obese, but the true figure is closer to three-quarters. Most people in most countries also underestimate wealth inequality; it’s worse than we think. The optimists are not right about everything. Angus Deaton, Nobel laureate in economics, has found that we are too optimistic about our own futures: almost everywhere, people tend to feel that they will be living a strikingly better life in five years’ time. We are doomed to disappointment. Life satisfaction is already high, does not tend to move much, and if anything tends to fall as mid-life approaches. This misplaced optimism about ourselves is a striking contrast to an equally misplaced despair about our children: across Europe and North America, according to the Pew Research Center, twice as many people believe their children will be worse off financially than they are, rather than better off.

What should we conclude from all this? One plausible hypothesis is that we form many of our impressions about the world from the priorities of the mass media. That would explain why we are pessimistic about most things, but not about obesity, since television loves skinny people. A second conclusion is that many of us — citizens, the media and mainstream politicians — need to take more interest in the way the world really is. Political movements have travelled from the lunatic fringe to positions of power by reinforcing people’s worst fears. But when your policy platform is built on misperceptions, little good is likely to come of it. Optimism and pessimism both have their merits, but right now the world needs a dose of realism.

7) Meet the car mechanics of the future [Source: Financial Times]
In a community college outside Detroit, students are learning skills that could become commonplace in roadside garages around the world in the decades to come: repairing sensors for self-driving cars. Washtenaw Community College in Michigan has started a new course for mobility technicians — the car mechanics of the future — as the state tries to pre-empt new jobs that will emerge from the developments in road transport. This kind of training will form part of the information shared between Michigan and the UK under a new deal signed on Monday. Under a memorandum of understanding signed by Michigan governor Rick Snyder and UK business minister Richard Harrington, agencies and businesses from around Detroit and parts of the UK such as Warwick and London will share technology and ideas for the future of road transport. The areas of collaboration range from insuring self-driving cars to technology for smart motorways. Michigan has signed similar agreements with Austria, the Netherlands and parts of Canada, and is exploring similar deals in Asia, said Mr. Snyder.

The advent of electric cars, self-driving technology and connectivity brings huge implications for jobs, as well as the need to develop test facilities and insurance models that can cope with the new technologies. In one example, Michigan is experimenting with how to create new jobs as technology evolves, such as technicians to clean and repair sensors for self-driving vehicles. “These are the people who are going to be in every major repair place in the world,” said Mr. Snyder. “They have already started the programme because the industry is going to hire the first 5-10 years of graduates. It’s a specialised need, and the traditional auto technician is not really kitted out to do that.” Under the agreement, bodies such as the UK’s Centre for Connected and Autonomous Vehicles, and Transport Systems Catapult, will work with Michigan centres such as the driverless car test ground the American Centre of Mobility — built on the site of a former bomber factory — and MCity.

The public will not accept self-driving technology in vehicles unless they see the benefits from it — and using data from internet-connected cars to inform other road users of delays or poor weather ahead is one possibility. In other words, the governor wants people who do not have cars with advanced technology to understand and appreciate their value. “Otherwise you have a serious problem with the ‘haves’ and the ‘have nots’,” he says. Such a system requires large road upgrades with information boards, such as smart motorways. “You can tear that road up once and do all the work at once. We looked around the world and the best model we found was actually in London,” he said.

8) Free shipping isn’t hurting Amazon [Source: The Atlantic]
Shipping is a costly proposition for ecommerce companies, and it will only get costlier in the near future. In 2017, Amazon spent $21.7 billion on shipping, nearly double the amount it spent in 2015. Some of those costs are undoubtedly because Amazon sends packages for free to its 100 million Prime members around the world. This is, in some ways, a smart strategy: Prime customers get so accustomed to free shipping that they just start buying everything on Amazon. But many of those customers could be unprofitable. Even though they pay the annual Prime membership fee, they return so many orders that the fee likely does not cover their shipping costs. As the costs of shipping rise, Amazon may find more and more of its customers similarly unprofitable.

This may be why Amazon seems to be coming up with more ways to get its customers and sellers to help subsidize the cost of free shipping. During the company’s quarterly earnings call, Amazon said it would raise the cost of annual Prime memberships to $119, from $99, effective May 11. The increase in costs comes as 100 million items are now available for two-day shipping, up from 20 million in 2014, said Brian Olsavsky, Amazon’s chief financial officer. In January, Amazon also raised seller fees for various apparel categories; it raised fees for book and video sellers last year. The cost of shipping a package, not counting the expenses of moving it around in a warehouse and getting it ready for the post office, varies depending on the size of the package. Small items cost around $2 a package, while medium-sized boxes cost around $3 to $4. It’s possible that Amazon recoups some of these costs because the company’s mark-up on items is higher than the cost of shipping.

So far, however, those rising expenses haven’t hurt the company’s profits. Amazon had a huge first quarter in 2018. The company said it made $1.6 billion in profit for the first three months of the year, more than double what it made during the same time period last year. Amazon reported earnings per share of $3.27, much higher than the $1.26 analysts were expecting. These numbers speak partly to the success of Amazon businesses that don’t include shipping packages to customers’ doors, including Amazon’s advertising business and Amazon Web Services, the company’s hugely profitable cloud-computing platform. Yet analysts say Amazon is gaining something essential when it hooks customers on Prime, even though it may be losing money on many of those customers for now. Also, Amazon has many options for offsetting these rising shipping costs. It is already charging higher fees for its third-party sellers, who use Amazon’s fulfillment network to send items to Prime and other customers.

That analysts and investors don’t seem particularly worried about rising shipping costs speaks to the peculiar nature of the Amazon investor. They are more likely than investors at many other retail companies to stomach high spending if they think it will pay off in the long term. After all, the company has made big investments before that didn’t necessarily look like they were going to pay off, and then did. It’s too late for Amazon to walk back its commitment to free two-day shipping for Prime members. But it may not have to. Customers might be so accustomed to the convenience that they’re willing to pay more and more for it, even if the company isn’t.

9) Learning is a learned behaviour. Here’s how to get better at it [Source: HBR]
A growing body of research is making it clear that learners are made, not born. Through the deliberate use of practice and dedicated strategies to improve our ability to learn, we can all develop expertise faster and more effectively. In short, we can all get better at getting better. Here's one example of a study that shows how learning strategies can be more important than raw smarts when it comes to gaining expertise. Marcel Veenman found that people who closely track their thinking will outscore others who have sky-high IQ levels when it comes to learning something new. His research suggests that in terms of developing mastery, focusing on how we understand is some 15 percentage points more important than innate intelligence. Three practical, research-based ways to build your learning skills are: 1) organise your goals; 2) think about thinking; and 3) reflect on your learning.

Organise your goals: Effective learning often boils down to a type of project management. In order to develop an area of expertise, we first have to set achievable goals about what we want to learn. Then we have to develop strategies to help us reach those goals. A targeted approach to learning helps us cope with all the nagging feelings associated with gaining expertise: Am I good enough? Will I fail? What if I’m wrong? Isn’t there something else that I’d rather be doing? These sorts of negative emotions can quickly rob us of our ability to learn something new. Plus, we’re more committed if we develop a plan with clear objectives.

Think about thinking: Metacognition is crucial to the talent of learning. Psychologists define metacognition as "thinking about thinking," and broadly speaking, metacognition is about being more introspective about how you know what you know. It's a matter of asking ourselves questions like: Do I really get this idea? Could I explain it to a friend? What are my goals? Do I need more background knowledge? Or do I need more practice? Metacognition comes easily to many trained experts. When a specialist works through an issue, they'll often think a lot about how the problem is framed, and they'll often have a good sense of whether or not their answer seems reasonable. When it comes to learning, one of the biggest issues is that people don't engage in metacognition enough. They don't stop to ask themselves if they really get a skill or concept.

Reflect on your learning: There is something of a contradiction in learning. It turns out that we need to let go of our learning in order to understand our learning. For example, when we step away from a problem, we often learn more about it. Get into a discussion with a colleague, for instance, and often your best arguments arrive while you're washing the dishes later. Read a software manual, and a good amount of your comprehension can come after you shut the pages. In short, learning benefits from reflection. This type of reflection requires a moment of calm. Maybe we're quietly writing an essay in a corner, or talking to ourselves in the shower. But it usually takes a bit of cognitive quiet, a moment of silent introspection, for us to engage in any sort of focused deliberation.

10) The founding father of neuroscience on solitude [Source: brainpickings.com]
Half a century before Rachel Carson and a century before Carl Sagan, neuroscience founding father Santiago Ramón y Cajal (May 1, 1852–October 17, 1934) considered the crucial role of science in a nation’s welfare and greatness in his book, “Advice for a Young Investigator” — the science counterpart to Rilke’s Letters to a Young Poet and Anna Deavere Smith’s Letters to a Young Artist, and the source of Cajal’s insightful taxonomy of the six “diseases of the will” that keep the talented from achieving greatness. Cajal writes that today’s statesmen undoubtedly have limitations, one of which is not realizing (or at least not advocating) that the greatness and might of nations are products of science, and that justice, order, and good laws are important but secondary factors in prosperity.

But science, of course, only thrives when scientists thrive. For science to steer a society toward greatness, Cajal cautions, that society must nurture an optimal intellectual and moral environment for its inhabitants. Cajal considers what conditions create such an opportunity for the blossoming of noble brilliance. With the spirited conviction of one whose own life is a testament to this truth, he points to solitude, that supreme fertilizer of creative work, as chief among them.

He adds: how satisfying and rewarding are the long winter evenings spent in the private laboratory, at the very time when educational centers are closed to their workers! Such evenings free us from poorly thought out improvisations, strengthen our patience, and refine our powers of observation. What care we zealously lavish on our own instruments, each one representing a vanity disowned or a bad habit unindulged! Because we love them we appreciate their fine points, we are aware of their defects, and we avoid the traps they occasionally set for us. In short, we understand their friendly soul, which always responds humbly and quickly to our needs. From the point of view of actual success, it is not the costly instruments that require the most time, work, and patience that matter; it is the development and maturing of talent.

-Saurabh Mukherjea is CEO, and Prashant Mittal is Strategist, at Ambit Capital. Views expressed are personal
