Some of the most interesting topics covered in this week's iteration are related to 'how body clock affects genes', 'economics of night markets' and 'how Amazon will help you listen to music through your skull'.
At Ambit, we spend a lot of time reading articles that cover a wide gamut of topics, including investment analysis, psychology, science, technology, philosophy, etc. We have been sharing our favourite reads with clients under our weekly ‘Ten Interesting Things’ product. Here are the ten most interesting pieces that we read this week, ended October 6, 2017:

1) Nobel Watch: To Dream, Perchance to Sleep
[Source: Inside Science]
Researchers at Brandeis University worked in a dark basement to collect fruit flies from a single incubator. The instrument looked like a refrigerator, except that it kept the critters at a constant, cozy 25 degrees Celsius and cycled them in and out of total darkness, 12 hours on, 12 hours off. The researchers cycled in and out as well, coming and going from "the pit". As the lights went on and off, they often camped out. In the process they lost hours of sleep. "Working in the pit was not a joy," said Paul Hardin, a molecular geneticist at Texas A&M University who said he spent "innumerable" hours there as a young researcher. In 1990, Hardin was an author on one of the key papers from a body of research that was recognised this week with the 2017 Nobel Prize in physiology or medicine, which went to three American researchers: Michael Rosbash and Jeffrey Hall of Brandeis, and Michael Young of Rockefeller University in New York.
In an apparent ironic twist to this year's prize, the quarry claimed by countless hours of sleepless toil is itself fundamental to a good night's sleep: the molecular basis of circadian rhythms - those genes and proteins whose interactions underlie the ability of living organisms to keep their internal body clocks entrained to a 24-hour cycle. The research behind the Nobel Prize has allowed scientists to begin to untangle the clock and uncover the myriad ways in which it influences genetic expression in the body. Just as an adult may have regular routines throughout the day, so, too, do our genes follow a regular 24-hour clock. A dizzying number of genetic transcripts, proteins and other molecular players alternately glom together or fall apart, turning on or tuning down countless patterns of gene expression that keep our cells, organs and bodies humming along with the Earth's daily rotation. Some genes turn on at certain times. Others turn off. Some genes are like master control switches that turn down (or off) others at certain times. This happens not just in humans but in all living organisms. Early work in plants and insects had, in fact, established the existence of such circadian clocks, but it wasn't until the mid-1980s that the researchers uncovered the inner molecular workings behind this basic process.
In 1984, the Brandeis and Rockefeller groups, working separately, isolated a gene called “period”, which had been discovered a decade earlier after a genetic mutation in a fruit fly caused the creature to lose its ability to entrain its insect clock to a 24-hour rhythm. Hall and Rosbash then discovered a protein, named PER, that was made by the period gene, and figured out how it worked through a feedback mechanism. Its own expression in the cell would cause PER to build up in the nucleus and would eventually effectively shut off the period gene, allowing it to cycle on and off on a daily schedule. But nobody understood how PER was getting into the nuclei of the cells where it would act on the period gene. Young then discovered another gene, called “timeless”, that expressed a protein called TIM, which works in conjunction with PER to build up and control expression of the period gene. This uncovered the mechanism by which these genes work together throughout the day to first build up and turn down the expression of the period gene.
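The mechanism described above is a delayed negative feedback loop: PER builds up, shuts off its own gene, decays away, and the cycle restarts. A toy simulation can make the intuition concrete. This is an illustrative sketch only, not a biological model; the production rate, decay rate, repression threshold and delay below are invented numbers chosen to produce visible oscillation.

```python
# Toy delayed negative-feedback loop, loosely inspired by the
# PER/period mechanism described above. Production stops once the
# protein level from `delay` hours ago crosses a threshold; the
# built-in delay makes the level oscillate instead of settling.
# All constants are invented for illustration.

def simulate(hours=96, delay=6, production=1.0, decay=0.15, threshold=4.0):
    """Return simulated protein levels, one value per hour."""
    levels = [0.0]
    for t in range(1, hours + 1):
        past = levels[max(0, t - delay)]      # level `delay` hours ago
        rate = production if past < threshold else 0.0
        levels.append(max(0.0, levels[-1] + rate - decay * levels[-1]))
    return levels

levels = simulate()
# The trace rises and falls repeatedly rather than flattening out.
print(round(min(levels[24:]), 2), round(max(levels[24:]), 2))
```

With the repression removed (a very high threshold), the same loop would simply saturate at production/decay; it is the delayed self-repression that turns a steady state into a clock.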
The significance of the discovery is that the clock controls almost all aspects of nutrient metabolism, which means that it has profound implications for human health and disease. When it's out of sync, health problems can arise -- from weight gain to sleep loss or worse. "Even cancer is impacted by shift work and circadian timing," says Rosbash. More fundamentally, Hardin added, the discoveries were really one of the earliest examples of genetic control of behaviour. People who do shift work may fall prey to the unforgiving cycling of the body's fickle clock, and it also plays an outsized role in the annoying jet lag anyone who has taken a long flight will experience. Some sleep disorders may also be tied to specific genetic variants in the genes underlying the circadian rhythms, which may lead to new ways of treating those disorders. For instance, earlier this year, Young's group identified an inherited genetic variant dubbed "night owl" that may affect as many as 1 in 75 people and is linked to a condition called delayed sleep phase disorder. People who suffer from this condition have difficulty going to sleep at a normal hour because their bodies stay energised far too long into the night. They are pumped up when they should be winding down, something that Young said is equivalent to suffering jet lag every single day.

2) The economics of the night markets
Night markets are hardly new. Asia is stuffed with them. From Singapore’s hawker centres to the jostling markets of Bangkok, cheap, excellent food is available late into the night. Until recently, Europe and America had little to compare with them. Markets are common. A weekend trip to the farmers’ market to pick up some artisanal cheese is a well-cooked cliché. Many serve food to eat on the spot, but few are places where people can comfortably linger for hours, and booze and entertainment are in limited supply. Crucially, most pack up by mid-afternoon. Thereafter, restaurants are the main option, but they commit a party to a single cuisine. For those who want a choice there are food courts – depressing sections of malls, where choices are mostly limited to greasy fast food and alcohol is rarely offered.
Enter the new night market. Open until the small hours, packed with a variety of delicious street food, enlivened by music and other entertainments, lubricated by cocktails and beer, they cater to people with diverse tastes in one place. At the Queen’s Night Market in New York vendors serve their food from tents surrounding picnic tables in a car park. In the Markthalle Neun in Berlin on Thursday nights, locals and visitors cram around wooden tables under strings of bunting, eating improbable food fusions. Once a feature of summer nights, such markets are so popular that they are increasingly open all year round. The quality of the food is largely down to the specialisation of each trader. Shorter menus are easier to do well. The fact that each stall is selling something different means no diner feels short-changed by a lack of choice. As diets and eating habits have become faddier – meat-free, meat-full, gluten-free, raw, clean, dirty – it has become harder for any restaurant to cater to the needs of a group dining together. Markets sidestep that problem. Fancy dinners still have their place but a meal that costs £100 and might still not impress feels a greater risk than a plate of fried chicken for £7 ($9). Even better, at a night market you can avoid the awkward divvying up of the bill at the end of the night.
The model of a night out has changed. Young people no longer expect drinks at a bar, followed by dinner at a restaurant and dancing at a club. With drinks and music on offer alongside the food, night markets are an entire evening’s entertainment in one package. Nobody need make a reservation; people can arrive and depart as early or late as they please. Communal tables mean the chances of meeting new people rise. Even children, not always welcome in fancy restaurants, fit in. Also, as youngsters in Western countries are taking fewer drugs and drinking less, food has become more important. It is more photogenic than gin or heroin and the explosion of social media has helped such markets grow.
The ease of publicity is just one of the reasons new chefs love such markets. The start-up costs are tiny compared with those associated with a restaurant. Staff numbers are lower and limited menus mean fewer wasted ingredients, so it is easier to make profits. For ambitious chefs, who hope one day to set up a more permanent venue, they are brilliant test kitchens; the feedback is instant so food can be tweaked constantly. But established chefs are beginning to recognise their potential too. At the London Food Month Night Market in June Angela Hartnett, whose reputation sparkles with a Michelin star, served pork belly, honey-soused tomatoes, peas, girolles and baby gem. For such luminaries, night markets are an opportunity to reach a new kind of customer. Those who might not be willing to spend £70 on three courses might well take a chance on the pork belly for £8 at a market. And, if they like it, they might even be willing to fork out for a full restaurant meal at a later date.
Long-accepted economic principles are at work. In the 19th century, Alfred Marshall, an economist, identified the benefits of clustering. Proximity, he argued, created “something in the air”. One small taco stall, no matter how good, will only ever attract a limited number of customers, but put it next to 25 other food stalls and numbers swell. A cluster also allows small businesses to take advantage of economies of scale. Everyone benefits from the fairy lights and fire-pits, DJs and disco-balls which illuminate such sites, and the costs are shared.

3) Bruce Springsteen, artful leadership and what rock star bosses do
Artists are managers and leaders, too. It takes good management to keep a band together and an act on the road. But their leadership is different from and often at odds with the leadership found at the top of corporations, countries, or armies. The work of art, as Bruce Springsteen puts it, is “natural subversion.” It is through art that the unspeakable and the unheard find a voice. Establishment leaders might praise and pay for art, but they cannot control it. That is why the artist’s leadership is usually trustworthy: It either speaks to and for people, or it has no power at all. Springsteen, whose staying power has rested on chronicling what he calls “America’s post-industrial trauma,” is a prime example. Long before economists documented the American dream’s demise, his lyrics mourned it. “Is a dream a lie if it don’t come true, or is it something worse?” he has sung for decades. And yet, while singing many disillusioned lines, he has kept that very dream alive. In his life’s work, and the book is no exception, people lose homes, jobs, loves — but never lust and pride. No wonder his autobiography “Born to Run” is a textbook on a virtue that the best managers have and the best leaders spread: resilient hope. The kind of hope borne of staring at the truth — especially the truth of loss and fear — without losing faith. The author shares three key takeaways from the book.
To Hold People’s Attention, Serve Their Imagination: “Rock ’n’ roll bands that last have to come to one basic human realisation: ‘The guy standing next to you is more important than you think he is. And that man or woman must come to the same realisation about the man or woman standing next to him or her, about you.’” The best business schools’ curricula recommend a similar blend of empathy and incentives nowadays. When it comes to leadership, however, there is plenty that is new. “In my line of work,” Springsteen writes, offering a superb definition of leadership, “you serve at the behest of your audience’s imagination.” And if you are fortunate enough to be entrusted with leadership — that is, with imagination on others’ behalf — he is clear on what you are meant to do: “I am here to provide proof of life to that ever elusive, never completely believable ‘us.’” A leader’s job is to embody identity for a community — to give words and flesh to elusive ideals.
Let Purpose Find Your Craft: While the young Springsteen honed his craft every night in bars on the Jersey Shore, he enjoyed his growing popularity but felt that something was missing. What worked for him, the book suggests, is a combination of taking a stance, making it last, and having freedom to run. Holding on to what is precious without losing the open road. But getting there takes hard work. You can hone your craft and let purpose find you. But you can’t hone your purpose and hope that craft will find you. He says purpose has a long gestation, and is borne of actions and encounters, not just ambition and doubts. When you do find it though, he cautions, it does not spare you torment. There is plenty throughout his life and work: the torment of depression, a struggle with his inner demons; the torment of talent, a struggle with the sense that he could always do more; the torment of service, a struggle with shouldering others’ pain.
Love Will Make You Better. Reflection Will Make You Last: You must cultivate self-awareness to become a better leader. There is plenty of self-reflection in the book, but little is conclusive, and it seldom helps much. Torment remains a puzzle in his career and his life. What really helps him is love and songs. He seeks help — the love of friends, family, therapists — so that torment might find its way into a tune that can be shared. Because, he notes, “You can sing about your misery…but there is something in the gathering of souls that blows the blues away.” Self-reflection, Springsteen seems to say, echoing Hamlet’s lesson, is not simply meant to help. Reflection tortures you with doubts. It slows you down. It is not meant to make you a better act. It is meant to make your act last. How? By forcing you to sit still when it would be easier to act out. By making you stay present to your questions so that your dream does not turn into obsession.

4) Why does Sweden have so many start-ups?
[Source: The Atlantic]
Sweden is a high-tax, high-spend country, where employees receive generous social benefits and ample amounts of vacation time. Economic orthodoxy would suggest the dynamics of such a welfare state would be detrimental to entrepreneurship. Studies suggest the more a country’s government spends per capita, the smaller the number of start-ups it tends to have per worker—the idea being that high income taxes reduce entrepreneurs’ expected gains and thus their incentive to launch new companies. And yet Sweden excels in promoting the formation of ambitious new businesses. Stockholm produces the second-highest number of billion-dollar tech companies per capita, after Silicon Valley. Producing start-ups matters for any economy that strives for efficiency, job creation, and all-around dynamism, but it is especially relevant for countries, such as the U.S., where new-business creation has slowed. Despite the current cultural fascination with start-ups, only 8% of all firms in the U.S. meet that definition today, compared to 15% in 1978. In Sweden, the trend is reversed: The pace of new-business creation has been accelerating since the 1990s. While US GDP growth has remained sluggish, Sweden’s economy grew at a rate of 4% in 2015 and 3% in 2016. Sweden’s GDP has also outperformed that of other major European countries since the mid-1990s. So, what has Sweden been doing right?
There are several dimensions to answering that question, many of which involve changes that took place in the past 30 years. Since 1990, Sweden has made it easier for upstarts to compete with big, established firms. The 20th-century economist Joseph Schumpeter theorised that economies thrive when “creative destruction” occurs, meaning new entrants are able to replace established companies. Sweden used to have a heavily regulated economy in which public monopolies dominated the market, which made it difficult for such replacements to occur, but regulations have since been eased. While Sweden was making it harder for monopolies to dominate the market, the US was changing its regulatory landscape to favour big companies and established firms. Sweden’s reforms were a response to a financial crisis in the 1990s, after which the government, in an effort to jump-start economic growth, deregulated industries including taxis, electricity, telecommunications, railways, and domestic air travel. Deregulation helped lower prices in industries, such as telecommunications, which attracted more customers. So-called “product market reforms” made it easier to license new companies, and helped force inefficient legacy firms out of the market. A new Competition Act in 1993 sought to block big mergers and anti-competitive practices.
In the 2000s, Sweden also got rid of its inheritance tax and its wealth tax, which further incentivised people to earn large sums of money and, often, invest it back into the economy. Today, there are significant tax breaks for starting and owning a business; for example, entrepreneurs can now have a larger share of their income taxed as capital income, which has a lower tax rate. Sweden has a reputation for having high income taxes, but today, taxes are much lower than they used to be, and are lower for corporations than they are in other developed countries. Before the 1990s, there was also little foreign competition in Sweden. Protectionist legislation prohibited foreigners from taking substantial ownership in Swedish companies, and fewer than 5% of private-sector employees worked in foreign-owned companies in the 1980s. Then, Sweden opened its market to foreign competition in the 1990s, which helped in a few ways. Instantly, there were more companies that could acquire mature start-ups, which added to the incentives to start new businesses. And inefficient domestic firms that weren’t able to compete with foreign firms tended to go out of business, creating a vacuum in which new companies could arise. The share of foreign ownership of Swedish companies shot up from 7% in 1989 to 40% in 1999.
All this deregulation coincided with the rise of the internet, which meant that more people were creating businesses at the same time that they were experimenting with new technology. Though Sweden’s computer adoption rate was similar to that of the US, it primed entrepreneurs to think digitally when the reforms of the 1990s opened the country to development. Every inhabitant of Sweden under 40 basically grew up with a PC in their home. Also, because Sweden’s size means there is a limited market for its companies’ products, those companies often plan to sell internationally from the outset, and so are subject to a great deal of international competition, which tends to make them nimbler. Sweden’s impressive start-up record can also be attributed to some broader aspects of how the country is set up. Its social safety net, for instance, helps entrepreneurs feel safe to take risks. In Sweden, university is free, and students can get loans for living expenses, which allows anyone to pursue higher education. Health care is free too, and childcare is heavily subsidized. None of these benefits are contingent on having a job, which means people know that they can take entrepreneurial risks and still know many of their necessities will be covered.

5) Is AI riding a one-trick pony?
[Source: MIT Technology Review]
In the 1980s, Geoffrey Hinton was, as he is now, an expert on neural networks, a much-simplified model of the network of neurons and synapses in our brains. However, at that time it had been firmly decided that neural networks were a dead end in AI research. Although the earliest neural net, the Perceptron, which began to be developed in the 1950s, had been hailed as a first step toward human-level machine intelligence, a 1969 book by MIT’s Marvin Minsky and Seymour Papert, called Perceptrons, proved mathematically that such networks could perform only the most basic functions. These networks had just two layers of neurons, an input layer and an output layer. Nets with more layers between the input and output neurons could in theory solve a great variety of problems, but nobody knew how to train them, and so in practice they were useless. Hinton’s breakthrough, in 1986, was to show that a technique called ‘backpropagation’, or backprop, could train a deep neural net, meaning one with more than two or three layers.
Backprop is remarkably simple, though it works best with huge amounts of data. That’s why big data is so important in AI—why Facebook and Google are so hungry for it. In this case, the data takes the form of millions of pictures, let’s say some with hot dogs and some without. When you first create your neural net, the connections between neurons might have random weights—random numbers that say how much excitement to pass along each connection. The goal of backprop is to change those weights so that they make the network work. Suppose you take your first training image, and it’s a picture of a piano. You convert the pixel intensities of the 100x100 picture into 10,000 numbers, one for each neuron in the bottom layer of the network. As the excitement spreads up the network according to the connection strengths between neurons in adjacent layers, it’ll eventually end up in that last layer, the one with the two neurons that say whether there’s a hot dog in the picture. Since the picture is of a piano, ideally the “hot dog” neuron should have a zero on it, while the “not hot dog” neuron should have a high number. But let’s say it doesn’t work out that way. Backprop is a procedure for rejiggering the strength of every connection in the network so as to fix the error for a given training example.
The way it works is that you start with the last two neurons, and figure out just how wrong they were: how much of a difference is there between what the excitement numbers should have been and what they actually were? When that’s done, you take a look at each of the connections leading into those neurons—the ones in the next lower layer—and figure out their contribution to the error. You keep doing this until you’ve gone all the way to the first set of connections, at the very bottom of the network. At that point you know how much each individual connection contributed to the overall error, and in a final step, you change each of the weights in the direction that best reduces the error overall. The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output. The incredible thing is that when you do this with millions or billions of images, the individual layers of these image-recognition nets start being able to “see” images in sort of the same way our own visual system does. That is, the first layer might end up detecting edges, in the sense that its neurons get excited when there are edges and don’t get excited when there aren’t; the layer above that one might be able to detect sets of edges, like corners; the layer above that one might start to see shapes; and the layer above that one might start finding stuff like “open bun” or “closed bun”.
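The procedure just described (a forward pass, an output error, then weight-by-weight blame assignment working backwards) can be sketched in a few dozen lines. The network below is a minimal illustration, not any production system; the layer size, learning rate and epoch count are arbitrary choices. It learns XOR, a function that Minsky and Papert's two-layer perceptron provably cannot represent, by propagating the output error back through one hidden layer.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

HIDDEN = 8  # hidden-layer width; arbitrary choice

# Random starting weights: "random numbers that say how much
# excitement to pass along each connection".
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

# XOR: beyond a two-layer perceptron, learnable with one hidden layer.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w1, b1)]
    y = sigmoid(sum(wj * hj for wj, hj in zip(w2, h)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = total_loss()
lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)          # how wrong the output was
        for j in range(HIDDEN):
            # Each hidden neuron's share of the blame...
            dh = dy * w2[j] * h[j] * (1 - h[j])
            # ...then nudge every weight to reduce the error.
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
        b2 -= lr * dy

print(before, total_loss())  # the squared error shrinks as weights are rejiggered
```

Delete the hidden layer and no amount of training makes this net fit XOR; it is the extra layer, made trainable by backprop, that does the work.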
This is the thing that has everybody enthralled. It’s not just that neural nets are good at classifying pictures of hot dogs or whatever: they seem able to build representations of ideas. Neural nets can be thought of as trying to take things (images, words, recordings of someone talking, medical data) and put them into what mathematicians call a high-dimensional vector space, where the closeness or distance of the things reflects some important feature of the actual world. Hinton believes this is what the brain itself does. That said, these “deep learning” systems are still pretty dumb, in spite of how smart they sometimes seem. Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled. A deep neural net that recognizes images can be totally stymied when you change a single pixel, or add visual noise that’s imperceptible to a human. Indeed, almost as often as we’re finding new ways to apply deep learning, we’re finding more of its limits.
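The vector-space idea can be made concrete with hand-made vectors. The three numbers per item below are invented for illustration (real systems learn vectors with hundreds of dimensions from data); the point is only that geometric closeness, here measured by cosine similarity, stands in for real-world similarity.

```python
import math

# Hand-made toy "embeddings": nearby vectors mean similar things.
# The coordinates are invented purely for illustration.
vectors = {
    "hot dog":   [0.90, 0.80, 0.10],
    "hamburger": [0.85, 0.75, 0.15],
    "piano":     [0.05, 0.10, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def nearest(item):
    """The other item whose vector points in the closest direction."""
    return max((w for w in vectors if w != item),
               key=lambda w: cosine(vectors[item], vectors[w]))

print(nearest("hot dog"))  # hamburger sits close; piano points elsewhere
```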
Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way—which perhaps explains why its intelligence can sometimes seem so shallow. Indeed, backprop wasn’t discovered by probing deep into the brain, decoding thought itself; it grew out of models of how animals learn by trial and error in old classical-conditioning experiments. And most of the big leaps that came about as it developed didn’t involve some new insight about neuroscience; they were technical improvements, reached by years of mathematics and engineering. Hinton himself says, “Most conferences consist of making minor variations … as opposed to thinking hard and saying, ‘What is it about what we’re doing now that’s really deficient? What does it have difficulty with? Let’s focus on that.’” It can be hard to appreciate this from the outside, when all you see is one great advance touted after another. But the latest sweep of progress in AI has been less science than engineering, even tinkering. And though we’ve started to get a better handle on what kinds of changes will improve deep-learning systems, we’re still largely in the dark about how those systems work, or whether they could ever add up to something as powerful as the human mind.

6) Ford’s partnership with Lyft finally gives it a clear plan for self-driving cars
In the wild west of self-driving, everybody is competing with everyone and simultaneously working with them. In an unsurprising turn of events, Ford and Lyft became the latest pairing in an increasingly muddled network of self-driving alliances. The partnership means that eventually Lyft users will be able to hail self-driving Ford vehicles from the app. That’s pretty much all the information available about the terms of this relationship. Few companies have given many details — if they themselves have figured it out — on how they will divide profits or who will own the cars. It’s an obvious move for both companies but perhaps a more necessary one for Ford. Lyft has already partnered with five other companies - automakers like GM and Jaguar as well as self-driving software developers - and plans to build out its own self-driving system. Until now, Ford hasn’t had a clear business plan for how to get people in its self-driving cars beyond saying it will work with partners.
Lyft, last valued at $7.5 billion, has made its strategy clear: It wants to be the network of choice for consumers of self-driving cars while also working to develop its own self-driving system so it isn’t beholden to the timeline of its partners. Ford’s strategy was less clear — a deficiency that some say led to the departure of its former CEO Mark Fields. Under Fields, sources say the company was slow to move on its advanced technology efforts. Most people in the industry agree that consumers won’t be buying self-driving cars, they’ll be using them in either a ride-hailing or car-sharing network. Without a deal with Google’s self-driving arm, now called Waymo, Ford was left to build the autonomous brain in-house with an unestablished path to market.
Eventually, the company acquired on-demand shuttle service Chariot in September 2016 and then bought a majority stake in self-driving start-up Argo.AI for $1 billion. Still, while the Chariot acquisition provided one version of what the automaker’s path to market for self-driving cars could be, there was little chance that it would be able to sustain the company’s self-driving business on its own. Now Ford is working with Lyft alongside some of its major competitors like General Motors and Jaguar and other companies that pose a threat to its autonomous efforts like Alphabet’s Waymo, nuTonomy and Drive.AI. With Uber working quietly on its technology as it endures the attention on its legal battle with Alphabet, Lyft has scooped up a series of major partners. These partners not only give Lyft a better chance of rolling out driverless cars first, they give consumers more options. That’s not to say that these companies couldn’t also eventually work with Uber — in fact some of them, like GM’s Cruise and Alphabet’s Waymo, are working on their own ride-hail networks. But Lyft is undoubtedly making more progress publicly on creating a robust network of self-driving cars.

7) Brexit is Britain’s gift to the world
[Source: Financial Times]
The UK, it seems, is experimenting on itself for the benefit of humanity. Advanced societies rarely do anything so reckless, which is why the Brexit experiment is so valuable. It keeps producing discoveries that surprise both Leavers and Remainers. Here are some early lessons for other countries: When you focus on a wedge issue, you divide society. The Brexit vote has introduced unprecedented rancour into a traditionally apolitical country. Insults such as “enemies of the people”, “saboteurs”, “racists” and “go home to where you came from” are now daily British fare. Brexit rows split generations at family weddings and Christmas.
All this was avoidable. Until the referendum, few Britons had strong views on the EU, just as few Americans thought about transgender bathroom habits until their politicians discovered the issue. If you have to address wedge issues, it’s best to aim for compromise rather than a winner-take-all solution such as a referendum. All countries need real-time election regulators. There have always been people who lied to win votes. But now they have social media. Every slow, understaffed, 20th-century election regulator must therefore retool itself into a kind of courtroom judge who can call out falsehoods instantly. The model is the UK Statistics Authority’s reprimand of Boris Johnson, after he repeated the nonsense that leaving the EU would free up £350m a week for the National Health Service. Revolutionaries invariably underestimate transition costs. Maybe if you have a blank slate, being out of the EU is better than being in it. But the calculation changes once you’ve been in the EU for 43 years. All your arrangements are then predicated on being in, and suddenly they become redundant.
Almost every system is more complex than it looks. Most people can’t describe the workings of a toilet, writes Steven Sloman, cognitive scientist at Brown University. The EU is even more complicated, and so leaving it has countless unforeseen ramifications. Most Britons had no idea last year that voting Leave could mean closing the Irish border, or giving ministers dictatorial powers to rewrite law. Because of complexity, so-called common sense is a bad guide to policy making. Complexity is also an argument against direct democracy. Immigrants fulfil a role. Any society in which they live comes to depend on them. You may calculate that your distaste for immigrants is worth some lost functioning, but you have to acknowledge the trade-off. You have to choose who to surrender your sovereignty to. Brexiters are right to say that the EU has usurped some of British sovereignty. But as John Major, former British prime minister, remarks, in a connected world the only fully sovereign state is North Korea.
Carrying out Brexit means not fixing what Johnson in February 2016 called “the real problems of this country — low skills, low social mobility, low investment, etc — that have nothing to do with Europe”. Negotiations get harder when you lose your counter-party’s trust. That’s what Greece discovered during its negotiations with the EU. Mocking the other side in public — as Greece’s Yanis Varoufakis did, and as British politicians now do regularly — is therefore a losing tactic. There is no reset button in human affairs. The UK cannot return to its imagined pre-EU idyll, because the world has changed since 1973. Nor can Britons simply discard the Brexit experiment if it goes wrong, and revert to June 22 2016. The past is over, so it’s a poor guide to policymaking.

8) Banking remains far too undercapitalised for comfort
[Source: Financial Times]
Just over 10 years ago, the UK experienced, with Northern Rock, its first visible bank run in one-and-a-half centuries. That turned out to be a small event in a huge crisis. The simplest question this anniversary raises is whether we now have a safe financial system. Alas, the answer is no. Banking remains less safe than it could reasonably be. That is a deliberate decision. Banks create money as a byproduct of their lending activities. The latter are inherently risky. That is the purpose of lending. But banks’ liabilities are mostly money. The most important purpose of money is to serve as a safe source of purchasing power in an uncertain world. Unimpeachable liquidity is money’s point. Yet bank money is least reliable when finance becomes most fragile. Banks cannot deliver what the public wants from money when the public most wants them to do so. Martin Wolf believes that this system is designed to fail.
To deal with this difficulty, a source of so much instability over the centuries, governments have provided ever-increasing quantities of insurance and offsetting regulation. The insurance encourages banks to take ever-larger risks. Regulators find it very hard to keep up, since bankers outweigh them in motivation, resources and influence. A number of serious people have proposed radical reforms. Economists from the Chicago School recommended the elimination of fractional reserve banking in the 1930s. Mervyn King, former governor of the Bank of England, has argued that central banks should become "pawnbrokers for all seasons": banks' liquid liabilities could not then exceed the specified collateral value of their assets. One thought-provoking book, "The End of Banking" by Jonathan McMillan, recommends the comprehensive disintermediation of finance. All these proposals try to separate risk-taking from the public's holdings of unimpeachably safe liquid assets. Combining these two functions in one class of institutions is a recipe for disaster, because the first function compromises the second, and so demands huge and complex interventions by the state.
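King's "pawnbroker for all seasons" rule boils down to a single inequality: a bank's runnable (liquid) liabilities must not exceed the collateral value a central bank would assign to its assets after haircuts. A minimal sketch of that check follows; the asset classes, haircut percentages and balance-sheet figures are purely illustrative assumptions, not the Bank of England's actual schedule.

```python
# Hedged sketch of the "pawnbroker for all seasons" constraint:
# liquid liabilities <= haircut-adjusted collateral value of assets.

def effective_liquidity(assets, haircuts):
    """Collateral value of the asset book after central-bank haircuts."""
    return sum(value * (1 - haircuts[kind]) for kind, value in assets.items())

def satisfies_pawnbroker_rule(liquid_liabilities, assets, haircuts):
    """True if runnable funding is fully backed by pledgeable collateral."""
    return liquid_liabilities <= effective_liquidity(assets, haircuts)

# Illustrative numbers only (figures in £bn, haircuts as fractions).
haircuts = {"gilts": 0.02, "mortgages": 0.35, "corporate_loans": 0.50}
assets = {"gilts": 30.0, "mortgages": 50.0, "corporate_loans": 20.0}

# £70bn of deposits against ~£71.9bn of pledgeable collateral value:
# the rule is (just) satisfied.
print(satisfies_pawnbroker_rule(70.0, assets, haircuts))  # → True
```

The point of the rule is visible in the arithmetic: risky assets earn large haircuts, so a bank that wants more runnable funding must hold safer, more pledgeable assets.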
Wolf was involved in the recommendations from the UK’s Independent Commission on Banking for higher loss-absorbing capacity and the ring-fencing of the UK retail banks. Both are steps in the right direction. Even so, as Sir John Vickers, chairman of the ICB, noted in a recent speech, the reforms have not yet made the banks’ role as risk-taking intermediaries consistent with their role as providers of safe liabilities. That is largely because they remain highly undercapitalised, relative to the risks they bear. Senior officials argue that capital requirements have increased 10-fold. Yet this is true only if one relies on the alchemy of risk-weighting. In the UK, actual leverage has merely halved, to around 25 to one. In brief, it has gone from the insane to the merely ridiculous. The smaller the equity funding of a bank, the less it can afford to lose before it becomes insolvent. A bank near insolvency must not be allowed to operate, since shareholders have nothing left to lose from taking huge bets.
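The leverage figures translate directly into loss-absorbing capacity: equity as a share of total assets is simply the inverse of the leverage ratio. A quick illustrative calculation — the 25-to-one and five-to-one ratios come from the article; everything else is arithmetic:

```python
# Equity buffer implied by a leverage ratio (total assets / equity):
# the fraction of asset value a bank can lose before it is insolvent.

def loss_absorbing_share(leverage):
    """Equity as a fraction of total assets at a given leverage ratio."""
    return 1.0 / leverage

for leverage in (50, 25, 5):
    share = loss_absorbing_share(leverage)
    print(f"{leverage}:1 leverage -> equity absorbs a {share:.0%} fall in asset values")
```

At 25-to-one leverage, a 4 per cent fall in asset values wipes out equity; at the five-to-one advocated by Admati and Hellwig, the buffer rises to 20 per cent.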
There is, however, a simple way of increasing the confidence of a bank's creditors in the value of its liabilities (without relying on government support). It is to reduce its leverage from 25 to one to, say, five to one, as argued by Anat Admati and Martin Hellwig in The Bankers' New Clothes. As Sir John notes, this would impose private costs on bankers, which is why they hate the idea. But it would not impose significant costs on society at large. Yes, there would be a modest increase in the cost of bank credit, but bank credit has arguably been too cheap. Yes, the growth of bank-created money might slow, but there exist excellent alternative ways of creating money, especially via the balance sheets of central banks. Yes, shareholders would not like it. But banking is far too dangerous to be left to them alone. And yes, one can invent debt liabilities intended to convert into equity in crises. But these are likely to prove difficult to operate in a crisis and are, in any case, an unnecessary substitute for equity.

9) The missed opportunity to reimagine Mumbai
Through the post-independence decades, Mumbai’s industrial strength had been built on the back of its textile mill heartland. This ended in 1982 with the year-long mill workers’ strike led by Datta Samant. Of the 58 mills in the city, 26 were deemed sick and taken over by the Government. Unsurprisingly, what should be done with this vast swathe of centrally located high-value land proved to be a contentious question. In 1996, Maharashtra’s Bharatiya Janata Party-Shiv Sena Government set up a study group, headed by Charles Correa, to come up with an integrated development plan. The plan proposed a three-way split: one-third of the land to be used for public spaces that could accommodate gardens, schools and hospitals, one-third to be developed by the Government for affordable housing, and one-third to be given to the erstwhile owners for residential or commercial purposes. The plan, incidentally, included the redevelopment of the Elphinstone Road station, with a broader overbridge allowing exit into a large plaza—measures that would have left it far better situated to deal with the rush of commuters that ended in tragedy last week.
Nothing came of this. "Open land" was redefined so that the land reserved for public use shrank from 166 acres to 32 acres. The bulk of the land was used for private development without any semblance of planning. This resulted in, as Correa puts it in his essay "The Tragedy Of Tulsi Pipe Road", "the absence of enabling sub-structure of roads, of engineering services, of a rational decision-making system". Incidentally, this was Correa's second great disappointment. The first came decades earlier, in the 1960s, when his plan for the creation and development of New Bombay was implemented half-heartedly by the Government. If venality was the cause of his mill land plan's downfall, it was apathy that did the job this time. Instead of a well-connected sister city built around commercial districts, relieving the strain of Mumbai's exploding population, New Bombay remained a dormitory town for decades. The suburban railway was extended to it only in the 1990s.
At the heart of India’s urban planning failure is a failure to understand what urban planning truly is. Correa held that “market forces do not make a city, they destroy them”. He was wrong in this. While regulation and planning are necessary, the inhabitants of a city will naturally organise themselves in a manner that allows them to best participate in economic activity. This dictates the urban space around them. At the other end of the spectrum, Jane Jacobs, who deconstructed urban planning in 1961 with “The Death And Life Of Great American Cities”, believed that urban growth should be organic and central planning, broadly, was an evil. She was vastly overstating the case as well. But where both schools of thought converged was in their concern for the inhabitants of a city—the belief that cities should be organized in a manner that enhances the well-being and economic participation of all strata of urban society.
Indian urban planners have rarely observed this, defaulting instead to a largely mechanistic vision focused more on infrastructure than on the interaction of citizens and infrastructure. Inevitably, this has resulted in static, non-participative master plans that are outdated even as they are made, or just plain don't work. Witness the endless attempts to redevelop slum land in Mumbai and resettle its inhabitants. Or the crores of public funds poured into cleaning Mumbai's Mithi river—a Sisyphean task when the slums and shanties along stretches of the river and its numerous feeder drains don't have access to sewage or garbage disposal systems. Or the fact that it took until the Metro rail policy released last month to recognise the need for nodal transportation bodies that coordinate the development of discrete public transportation systems in a city. Last year, Prakash Javadekar, then minister of state for environment, forest and climate change, said the greatest failure "after independence is in our urban and town and country planning". Recognising the problem is only the first step.

10) Amazon bones up on history with eye on 'smart glasses'
[Source: Financial Times]
Silicon Valley seems to have an obsession with high-tech eyewear. From bulky contraptions, such as Facebook's Oculus Rift virtual reality headset and Microsoft's "mixed reality" HoloLens, to more lightweight smartphone accessories such as Google Glass and Snap's Spectacles, the tech industry seems convinced that bringing screens, speakers and sensors as close as possible to our eyes and ears will lead to greater productivity, more immersive entertainment or hands-free computing. So far, consumers are not buying it: none of these head-worn devices has sold in significant volumes. But that does not seem to be stopping tech companies from trying new designs and applications for face computers. Unlike most headsets that have been launched to date, Amazon's Alexa-powered glasses are expected to focus on audio features rather than graphics or video.
According to people familiar with its plans, Amazon is working on Alexa-powered glasses that will look almost identical to a normal pair of spectacles, yet allow the wearer to speak to and hear its virtual assistant from anywhere. There will be no headphones on this device. Instead, the sounds of Alexa’s responses will be transmitted directly from the frame of the glasses, through the skull. That way, wearers will be able to hear the world around them clearly without having to pop earphones in and out, as they might with Apple’s AirPods to summon Siri. People nearby will not be able to hear sounds from the headset. The hope seems to be that calling on Alexa using the glasses when out in the street will be as easy as speaking to one of its Echo speakers in the kitchen.
The bone conduction audio technique enabling Amazon's ambitious design has been around for hundreds of years. While some scholars say it originated much earlier, the Washington University School of Medicine says bone conduction was discovered in 1551 by Italian physician Girolamo Cardano, who transmitted sounds to his ear by holding the shaft of a spear between his teeth. His research was forgotten until the technique was rediscovered 200 years later in Germany, but it was not until the first electric bone conduction vibrator appeared in 1923 that it really started to help hearing-impaired people. In the 1970s, bone-anchored hearing aids, which require surgery to install, were pioneered in Sweden.
From these medical origins, bone conduction has started to make its way into consumer electronics over the past decade, with no surgery required. An early adopter was Jawbone, which before its move into fitness-tracking wristbands was best known for its noise-cancelling Bluetooth earpieces. The San Francisco-based company's brand originated from its ability to use vibrations from the wearer's jawbone to filter out background noise from phone calls, so that a caller's voice came through more clearly. Getting this to work properly, though, meant ensuring the right fit, so that a sensor was in contact with the jaw at all times — something some customers had problems with. Since then, several companies have experimented with using bone conduction in a range of products. However, doubts remain about the efficacy of the technique: bone-conducting audio systems still do not sound quite as rich and clear as normal headphones, and while the technology can be good enough for the spoken word, it is less suitable for listening to music.

- Saurabh Mukherjea is CEO (Institutional Equities) and Prashant Mittal is Analyst (Strategy and Derivatives) at Ambit Capital Pvt Ltd. Views expressed are personal.