Forbes India 15th Anniversary Special

Ten interesting things we read this week

Some of the most interesting topics covered in this week's iteration are related to 'How spam works', 'How disagreement enhances creativity', and 'Bushman's affluent society'.

Published: Nov 11, 2017 02:28:52 AM IST
Updated: Nov 11, 2017 10:55:54 AM IST

Image: Shutterstock (For illustrative purposes only)

At Ambit, we spend a lot of time reading articles that cover a wide gamut of topics, including investment analysis, psychology, science, technology, philosophy, etc. We have been sharing our favourite reads with clients under our weekly ‘Ten Interesting Things’ product. Some of the most interesting topics covered in this week’s iteration are related to ‘How spam works’, ‘How disagreement enhances creativity’, and ‘Bushman’s affluent society’.

Here are the ten most interesting pieces that we read this week, ended November 10, 2017:

1) You’ve got (unwanted) mail [Source: The Ken]
This piece from The Ken highlights the mechanism through which ‘spam’ works. Data from Talos—the security intelligence and research group of Cisco—shows that of the total email volume sent worldwide in September 2017, 85% was spam. If one were to look specifically at India, data from cybersecurity firm Trend Micro shows that the spam rate in the country stands at 84%. Further, data from Kaspersky suggests that of the total spam sent globally, 8.4% comes from India. India also ranks eighth in the list of ‘the world’s worst spam haven countries’ for enabling spamming. Some of the other interesting statistics around spam: the inflow of emails started picking up pace after 8:00am and peaked around noon, after which there was a gradual fall in volume; around 35% of the emails did not have an opt-out link; the opt-out process failed for around 31% of the mails and was not available at all for another 17%.

On 11 October, 2017, the UK’s Information Commissioner’s Office (ICO) fined a bank and an advertising agency for spamming customers. Vanquis Bank was fined £75,000 (~Rs6.4 million) and the ad firm Xerpla £50,000 (~Rs4.3 million) for sending almost three million spam emails and messages to their customers. This would not have been possible in India because, under the existing law, penalties which may apply to such kinds of communication are not enforceable. Experts suggest the regulations are deficient in two distinct respects: a) the absence of a comprehensive provision dealing with data security or informational privacy. So, for example, if you write to a service provider asking what data they store about you, or whether they will delete particular data such as your email id on request, there is no provision under the regulations for such remedies or requests. b) a deficiency in the enforceability and regulatory framework under the law, i.e. India does not have a regulator and enforcement entity to ensure that these kinds of violations are penalised.

Personal data has emerged as an asset class. This new commodity is in huge demand but, unlike oil, it does not deplete, is reusable and is very easily accessible. While trying to figure out how spammers got their hands on personal data, the author came across data brokers like Bulk Database Solutions and Parkdata. Both sites offer huge amounts of data dirt cheap (see the chart on the front page). For a price of Rs2,500 (~$40), Parkdata offers a database containing 500 million Indian email addresses and phone numbers. These vendors declined to share details on how they acquired this data; the salesperson from Parkdata said it is a ‘trade secret’ which cannot be disclosed.

The sorry state of affairs doesn’t end here. In 2016, 1 in 131 emails contained malware, the highest rate in five years. Global-scale ransomware attacks like WannaCry spread because of successful spam campaigns. On the condition of anonymity, a source who used to work for a telephone marketing process selling herbal products to people in the USA explains how his company acquired data. “For our process, the company would buy fresh leads from an American data broker called Bizedata,” he says. “After running these leads a couple of times on the floor, when the managers felt they had churned out as much profit as they could, the data was then sold off to others in the market. When the new buyer feels the same, that this data does not have much to offer at present, he would then sell it to someone else,” he adds. Since personal data is not an exhaustible commodity and is reusable, this reselling of a data set leads to a cycle in which each player involved keeps accumulating more information while sharing the same with others.

2) AI advances that will render us obsolete [Source: Financial Times]
The AI company DeepMind announced last week it had developed an algorithm capable of excelling at Go, the ancient Chinese board game. The big deal is that this algorithm, called AlphaGo Zero, is completely self-taught. It was armed only with the rules of the game — and zero human input. AlphaGo, its predecessor, was trained on data from thousands of games played by human competitors. The two algorithms went to war, and AGZ triumphed 100-nil. In other words, disregarding human intellect allowed AGZ to become a supreme exponent of its art. In January, researchers at Carnegie Mellon University unveiled an algorithm capable of beating the best human poker players. The machine, called Libratus, racked up nearly $2mn in chips against top-ranked professionals of Heads-Up No-Limit Texas Hold ‘em, a challenging version of the card game. Flesh-and-blood rivals described being outbluffed by a machine as “demoralising”. Again, Libratus improved its game by detecting and patching its own weaknesses, rather than borrowing from human intuition.

AGZ and Libratus are one-trick ponies but technologists dream of machines with broader capabilities. DeepMind, for example, declares it wants to create “algorithms that achieve superhuman performance in the most challenging domains with no human input”. Once fast, deep algorithms are unshackled from the slow, shallow human intellect, they can begin crunching problems that our own lacklustre species has not confronted. Rather than emulating human intelligence, the top tech thinkers toil daily to render it unnecessary. For that reason, we might one day look back on AGZ and Libratus as baby steps towards the Singularity, the much-debated point at which AI becomes super-intelligent, able to control its own destiny without recourse to human intervention. The most dystopian scenario is that AI becomes an existential risk.

Suppose that super-intelligent machines calculate, in pursuit of their programmed goals, that the best course of action is to build even cleverer successors. A runaway iteration takes hold, racing exponentially into fantastical realms of calculation. One day, these goal-driven paragons of productivity might also calculate, without menace, that they can best fulfill their tasks by taking humans out of the picture. As others have quipped, the most coldly logical way to beat cancer is to eliminate the organisms that develop it. Ditto for global hunger and climate change. These are riffs on the paper-clip thought experiment dreamt up by philosopher Nick Bostrom, now at the Future of Humanity Institute at Oxford University. If a hyper-intelligent machine, devoid of moral agency, was programmed solely to maximise the production of paper clips, it might end up commandeering all available atoms to this end. There is surely no sadder demise for humanity than being turned into office supplies.

Professor Bostrom’s warning articulates the capability caution principle, a well-subscribed idea in robotics that we should avoid strong assumptions about the upper limits of future AI capabilities. It is of course pragmatic to worry about nearer-term issues such as job displacement, since many roles are ripe for automation. But only fools contemplate the more distant future without anxiety — when machines may out-think us in ways we do not have the capacity to imagine.

3) Kids, would you please start fighting? [Source: NY Times]
Noted author and Wharton professor Adam Grant discusses in this piece how legendary partnerships like the Wright brothers, The Beatles, Elizabeth Cady Stanton and Susan B. Anthony, and Steve Jobs and Steve Wozniak flourished. He says it’s because they criticised, fought and argued, just to come up with a better solution! None of these people succeeded in spite of the drama; they flourished because of it. The Wright brothers had grown up playing together, they had been in the newspaper business together, they had built an airplane together. They even said they “thought together.” We tend to assume creativity requires harmony, which is why one of the cardinal rules of brainstorming is “withhold criticism”: you want people to build on one another’s ideas, not shoot them down. But that’s not how creativity really happens.

When the Wright brothers said they thought together, what they really meant is that they argued together. They often squabbled for weeks over the design of a propeller for their plane, a pivotal decision. “After long arguments we often found ourselves in the ludicrous position of each having been converted to the other’s side,” Orville reflected, “with no more agreement than when the discussion began.” Only after thoroughly decimating each other’s arguments did it dawn on them that they were both wrong. They needed not one but two propellers, which could be spun in opposite directions to create a kind of rotating wing. The skill to get hot without getting mad, to have a good argument that doesn’t become personal, is critical in life. But it’s one that few parents teach to their children. We want to give kids a stable home, so we stop siblings from quarreling and we have our own arguments behind closed doors. Yet if kids never get exposed to disagreement, we’ll end up limiting their creativity.

Teaching kids to argue is more important than ever. We now live in a time when voices that might offend are silenced on college campuses, when politics has become an untouchable topic in many circles, even more fraught than religion or race. Yet our legal system is based on the idea that arguments are necessary for justice. For our society to remain free and open, kids need to learn the value of open disagreement. It turns out that highly creative adults often grow up in families full of tension. Not fistfights or personal insults, but real disagreements. As the psychologist Robert Albert put it, “the creative person-to-be comes from a family that is anything but harmonious, one with a ‘wobble’.” Wilbur and Orville Wright came from a wobbly family. Their father, a preacher, believed so much in embracing arguments that, despite being a bishop, he had multiple books by atheists in his library and encouraged his children to read them.

If we rarely see a spat, we learn to shy away from the threat of conflict. Witnessing arguments, and participating in them, helps us grow a thicker skin. We develop the will to fight uphill battles and the skill to win those battles, and the resilience to lose a battle today without losing our resolve tomorrow. For the Wright brothers, argument was the family trade and a fierce one was something to be savored. Conflict was something to embrace and resolve. “I like scrapping with Orv,” Wilbur said. Brainstorming groups generate 16 percent more ideas when the members are encouraged to criticize one another. Disagreement is the antidote to groupthink. We’re at our most imaginative when we’re out of sync. There’s no better time than childhood to learn how to dish it out — and to take it.

Grant believes that children need to learn the value of thoughtful disagreement. Sadly, many parents teach kids that if they disagree with someone, it’s polite to hold their tongues. What if we taught kids that silence is bad manners? It disrespects the other person’s ability to have a civil argument and it disrespects the value of your own viewpoint and your own voice. It’s a sign of respect to care enough about someone’s opinion that you’re willing to challenge it. We can also help by having disagreements openly in front of our kids. Most parents hide their conflicts: They want to present a united front, and they don’t want kids to worry. But when parents disagree with each other, kids learn to think for themselves. They discover that no authority has a monopoly on truth. They become more tolerant of ambiguity. Rather than conforming to others’ opinions, they come to rely on their own independent judgment.

Instead of trying to prevent arguments, we should be modeling courteous conflict and teaching kids how to have healthy disagreements. Grant provides four rules: 1) Frame it as a debate, rather than a conflict; 2) Argue as if you’re right but listen as if you’re wrong; 3) Make the most respectful interpretation of the other person’s perspective; and 4) Acknowledge where you agree with your critics and what you’ve learned from them. Good arguments are wobbly: a team or family might rock back and forth but it never tips over. If kids don’t learn to wobble, they never learn to walk; they end up standing still.

4) Why ‘Bushman Banter’ was crucial to hunter-gatherers’ evolutionary success [Source: The Guardian]
The Ju/’hoansi people of the Kalahari have always been fiercely egalitarian. They hate inequality or showing off, and shun formal leadership institutions. It’s what made them part of the most successful, sustainable civilisation in human history. In the 1960s, the Ju/’hoansi “Bushmen” of the Kalahari desert became famous for turning established views of social evolution on their head. Until then, it had been widely believed that hunter-gatherers endured a near-constant battle against starvation. But when a young Canadian anthropologist, Richard B Lee, conducted a series of simple economic input-output analyses of the Ju/’hoansi as they went about their daily lives, he found not only did they make a good living from hunting and gathering, but they did so on the basis of only 15 hours’ work per week. On the strength of this, anthropologists redubbed hunter-gatherers “the original affluent society”.

The importance of understanding how hunter-gatherers made such a good living has only recently come to light, thanks to a sequence of genomic studies and archaeological discoveries. These show that the broader Bushmen population are far older than we had ever imagined, and have been hunting and gathering continuously in southern Africa for well over 150,000 years. The speed of the Ju/’hoansi’s transformation from an isolated group of hunter-gatherers to a marginalised minority in a rapidly developing nation state is without parallel in modern history. Taken together with new archaeological and genomic evidence, this brings fascinating insights into how to respond to some of the most pressing social, economic and environmental sustainability challenges we face today.

Ju/’hoansi still make use of well over 150 different plant species, and have the knowledge to hunt and trap pretty much any animal they choose to. As a result, they only ever worked to meet their immediate needs, did not store surpluses, and never harvested more than they could eat in the short term. For them, that fundamental axiom of modern economics, “the problem of scarcity”, simply did not apply. This was possible because, above all, they were – and still are – “fiercely egalitarian”. Men and women enjoyed equal decision making powers, children played largely non-competitive games in mixed age groups, and the elderly, while treated with great affection, were not afforded any special privileges. But, how did a society like the Ju/’hoansi with no formalised leaders maintain this egalitarianism?

In Ju/’hoan society, envy functioned like the “invisible hand” famously imagined by the economist Adam Smith. How this worked is best exemplified in the customary “insulting” of a hunter’s meat. While a spectacular kill was always cause for celebration, the hunter responsible would not be praised – instead, he was insulted. Everyone knew the difference between a scrawny kill and a good one, of course, but nonetheless continued to pass insults even while they were busy filling their bellies with meat, the most highly prized of all foods. Half a century ago, a Ju/’hoan man told Lee, “When a young man kills much meat, he comes to think of himself as a chief or a big man – and thinks of the rest of us as his servants or inferiors. We can’t accept this ... so we always speak of his meat as worthless. This way, we cool his heart and make him gentle.” This behaviour was not limited to hunting. Similar insults were meted out to anyone who assumed airs and graces, encountered a windfall or got too big for their leather sandals. Under constant scrutiny, the atmosphere was generally harmonious and co-operative, one in which even those with the natural charisma and character to lead did so only with great circumspection.

5) Can Blockchain save us from the internet’s original sin? [Source:]
The front page of The Wall Street Journal on October 23, 2017 said: "Amazon Lures 238 Bids for its Second Home." What’s wrong with this picture? It's not a good thing that a single company can get the political leaders of so many American cities and states to scramble over each other to try to lure $5 billion in spending on some new buildings. This story shows that Amazon's influence over American urban life is far more than one company deserves: over tax policies, over city planning decisions, over the aesthetics and culture of our communities. Society's interests lie in sustaining a dynamic, innovative and evolving economy, not one in which hegemonic companies have oversized sway over everyone's decision-making. This is the core problem of centralisation in the internet age – a pet topic for those who believe the ideas behind blockchain technology can point us toward a better economic model. Amazon is not alone, of course. But it's in a very select group. An acronym has emerged to define the small club of digital behemoths to which it belongs: GAFA (Google, Amazon, Facebook and Apple).

The other members of this club wield similarly outsized influence on our society. Facebook's "master algorithm," in determining what we see and read, effectively dictates how we think, while Google's lead in the quantum computing race is likely to confer upon it unimaginable competitive advantages in data-processing capabilities. So how did GAFA get to be so powerful? It comes down to an original sin in the first design of the internet. The inventors of packet switching and of the basic protocols on which the modern web is built did a masterful job figuring out how to move information seamlessly across a distributed network. What they didn't do was resolve the problem of trust. Since information is power, it is often highly sensitive. So when people share it with each other, they need to know that the data can be trusted. But since there was no truly decentralised trust mediation system in place in the 1990s, an asymmetric solution was found.

On the one hand, the distribution of public information was disintermediated, which put all centralised providers of that information, especially newspapers and other media outlets, under intense business pressure from blogs and other new information competitors. But on the other, all valuable information – particularly money itself, an especially valuable form of information – was still intermediated by trusted third parties. It was a centralised solution bolted onto a decentralized information infrastructure. So, we got website hosting services to manage each site's files. We got certificate authorities to authenticate reliable addresses. We got banks and credit card providers to run the payment system. And since we craved the network that Facebook's community offered and that Amazon's marketplace could reach and Google's search engine could tap, we fed ever more valuable information into the hands of these entities – those that won the early, defining battles to establish dominance of those services. As a result, a new internet version of the trusted third party was born, and it was just as powerful, if not more so, than those archetypal trusted third parties of the pre-internet era: banks. Only these newcomers' currency isn't dollars, it's data.

Lately, problems like Facebook's "fake news" dilemma and Equifax's cyberbreach have finally begun shining a light on the fundamental flaws of a centralised system for controlling sensitive information. Since producers now depend on Amazon to reach their customers, their entire business model – from production processes to their planning strategies – is determined by whatever information is generated by the Seattle company's algorithm. That's an inherent impediment to effective innovation and creates a dependency that limits competitive capabilities. If you think this level of domination is bad, consider what will happen when we arrive at a world in which artificial intelligence, machine learning and the Internet of Things have combined to ensure that virtually every decision we make is automated by some algorithm. The question "who owns the data?" is going to become a much bigger problem.

Blockchain might offer some answers. The blockchain space still has unsolved challenges relating to how to scale permissionless blockchains such as bitcoin, as well as questions about how much autonomy people want, or should have, over their own money and their data. But its core concept of a decentralised trust mechanism is where a solution may lie. Within the model that Satoshi Nakamoto's invention produced – a system for agreeing on the validity of information shared by strangers in an environment of mistrust – we have a new framework for thinking about who gets to manage data in the internet age. The idea that the global economy of the future will be one in which individuals and small businesses have direct control over their data, and yet can still operate in open markets and generate network effects, is an exciting prospect. It's a future in which a more level playing field gives rise to true competition and unleashes the kind of open-source innovation that's needed to solve many of the problems we face.
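To make the decentralised-trust idea concrete, here is a minimal, purely illustrative sketch of the hash-chaining at the heart of a blockchain: each block commits to the hash of the previous one, so tampering with any earlier record invalidates every link after it. (The data and function names are hypothetical; real systems such as bitcoin add proof-of-work, digital signatures and peer-to-peer consensus on top of this mechanism.)

```python
import hashlib
import json

def block_hash(block):
    # Deterministic SHA-256 hash of a block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    # Each new block commits to the hash of the previous block
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})
    return chain

def verify(chain):
    # Tampering with any earlier block breaks every later link
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
assert verify(chain)

chain[0]["data"] = "alice pays bob 500"   # retroactive tampering
assert not verify(chain)
```

Because no single party can rewrite history without being detected by everyone else checking the chain, the record can be trusted without a central intermediary.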

6) Fashion, Maslow and Facebook’s control of social [Source:]
There's a common idea that in some way fashion designers get together in a room and decide what the fashion will be next year. That's a pretty fundamental misunderstanding. Rather, they propose what might fit the zeitgeist. Sometimes that's incremental and sometimes it's a radical break - sometimes the pendulum needs to swing from one extreme to another. Sometimes they get it wrong, but when they get it right it captures an age. The “New Look” presented by Christian Dior in 1947 proposed that people wanted to move on from the clothes of wartime austerity, and from austerity itself, and that this was a good way to do it, and Dior was right. You can see the same dynamic with punk, which unlike the New Look wasn't proposed by designers at all but came from the bottom up, but which served the same purpose - here was a look and an attitude that expressed how people felt, and, again, was a reaction against a very different kind of look that went before. Punk was picked up and perhaps popularised or accelerated by designers but the point is the same - no-one sits in a room to decide what the fashion is going to be. Fashions express what people themselves want.

Ben Evans, the author of this piece, says that he’s reminded of these shifts when he thinks of Facebook and how much it can change behaviours - about how much it can decide what the new thing will be. There was a time when instant messaging or the asymmetric feed were simply better person-to-person mechanics than email. Now, though, we're shifting around at the top of Maslow's hierarchy of needs, experimenting with different ways to explore and express our personality and our needs, and so, in a sense, of the zeitgeist. Many of these trends have also expressed the same sense of a pendulum - we swung from the chaos of MySpace to the structured order of Facebook, and then swung again to the fun and exuberance and creativity of Snap. But Snap of course is not the only one, there are dozens of apps and experiences, from GIF keyboards to live streaming apps to animoji, all trying to capture a little piece of Maslow. Social is pop culture.

This is why he says it's wrong to say, deterministically, that Facebook gets to decide what we do on its platform or what we see in the newsfeed. This is like saying that a fashion designer gets to decide what we'll wear. A designer can decide what colours to put in the shop, change the cut and employ all the industry's marketing tools of influence, and fashion collections still fail. The looks that work reflect what people actually want: not because they knew they wanted that, or because you could ask them in any mechanistic way, or because you're tracking metrics, but because they're a proposal that turns out to capture how people feel. You can optimise a product, and measure it, but people still have to want it. Similarly, Facebook can fill the home page with a feature, and a retailer can fill a shop with a look, but that doesn't mean you can make people take it.

Indeed, when something becomes fashionable, it will inevitably become unfashionable. So the very fact that any social media company has found a behaviour that people want means that at some point they'll stop wanting it. People stopped wearing the New Look, they stopped wearing miniskirts, and they stopped wearing punk. There is always a pendulum. A good designer can feel this before it happens, not after, and so for Facebook: when Facebook says "games have great metrics and make us lots of money, but we think they make the Facebook experience worse so we'll kill them", and, more recently, when Snap says "we think the algorithmic linear feed is bad", this is also a proposal, and, again, it might be wrong. It might be suggested by detailed daily metrics, or a vague instinctive sense of, again, the zeitgeist, but the crucial point is that whether this is right - whether people like it - is never fundamentally determined by the company. This applies at every level of scale - whether it's creating an entirely new product or tuning some small feature based on a daily or hourly feedback loop.

Hence, one of the ways he describes Facebook is that it is extremely good at surfing user behaviour: it tries to work out where users are going and goes with them, whereas Snap tries much harder to get ahead of this. In different ways, both of these are also what a great designer does. Of course, for Facebook that sometimes means creating things, even things that users say they don't like. Dior's New Look made people angry, and so did the original newsfeed itself. But then, fashion designers create looks all the time; that doesn't mean they can make anyone wear them. You can shape things, sometimes. You can ride and channel the trend. But he believes we attribute vastly too much power to a handful of product managers in Menlo Park, and vastly too little power to the billions of people who look at their phone screen and wonder which app to open. Facebook writes algorithms, and designers cut the cloth, but that doesn't mean they control what people look at or what people wear.

7) China harnesses big data to buttress the power of the state [Source: Financial Times]
Over the period of “reform and opening” since the late 1970s, China has generally sought to “bide its time and hide its strength”. But no longer. At the congress, Xi Jinping, the president, presented “socialism with Chinese characteristics” as “a new choice” for developing nations to follow. But what lends heft to this globalist intent are technological advances that are already invigorating the Chinese economy and may also help to address some of the historic failings of the country’s polity. The data revolution is fusing with China’s party-state to create a potential “techno-tatorship”; a hybrid strain in which rigid political control can coexist with ample free-market flexibility. Duncan Clark, chairman of BDA China, a consultancy focused on digital technology, sees a sharp difference between the current flowering of China’s digital economy and previous attempts by Beijing to coax “indigenous innovation” out of its state-owned companies by pumping public funds into technology projects championed by the government.

His point is highlighted by a boom that is making China the centre of the global digital economy revolution. China accounts for more than 40% of the world’s ecommerce transactions, up from less than 1% a decade ago, according to the McKinsey Global Institute. It is the world leader in payments made by mobile devices, with 11 times the transaction value of the US. The country is at the head of sharing economy technology advances, with its bike-sharing and ride-sharing markets eclipsing all others in size and growth. Anyone who doubts the entrepreneurial energy behind such trends should consider the following statistic, again from the McKinsey Global Institute. One in three of the world’s 262 unicorns — start-ups that are valued at more than $1bn — is Chinese. Such is the economic dynamism unleashed by digital technology.

But China is also enlisting data to help resolve systemic inadequacies as ancient as its authoritarian state. All Chinese dynasties have struggled with a basic law of power dynamics: that centralised authority tends to atrophy and corrupt along with distance from the court. But Beijing is hoping data can have evolutionary applications. A senior financial official explained how the mass of personal information collected through digital transactions allows China to check individuals’ behaviour and penalise those who step out of line. First of all, he said, the big ecommerce companies, such as Alibaba, Tencent and others, are obliged to share their data with central authorities such as the People’s Bank of China (PBoC), the central bank. Then the PBoC shares the data with about 50 state-owned banks, creating a database that covers about 400m people, detailing their payment history, creditworthiness and even networks of social contacts. As a result, he says, the number of bad debts being built up by households has come down sharply since this system was launched.

A similar system for corporations is also under development at the PBoC’s Credit Reference Centre. It takes into account information on companies’ social security payments, housing provident fund payments, administrative penalties and awards, tax arrears and court judgments to assess whether a company is a good corporate citizen and a sound credit risk. This database will be used to help resolve one of China’s biggest frailties: a debt addiction that requires four units of credit to create one unit of growth in gross domestic product, officials said. That said, data-centric approaches to governance can have shortcomings. The data can be ignored or manipulated by humans, or privileged institutions can lobby for special treatment using old fashioned political leverage.

8) The disappearing American grad student [Source: NY Times]
The Tandon School — a consolidation of N.Y.U.’s science, technology, engineering and math (STEM) programmes on its Brooklyn campus — is an extreme example of how scarce Americans are in graduate STEM programmes. While at the undergraduate level 80% of students are United States residents, at the graduate level 80% hail from India, China, Korea, Turkey and other foreign countries. Overall, these programmes have the highest percentage of international students of any broad academic field. In the fall of 2015, about 55% of all graduate students in mathematics, computer sciences and engineering were from abroad. In arts and humanities, the figure was about 16%; in business, a little more than 18%. The dearth of Americans is even more pronounced in hot STEM fields like computer science, which serve as talent pipelines for Google, Amazon, Facebook and Microsoft: about 64% of doctoral candidates and almost 68% of students in master’s programmes last year were international students. In comparison, only about 9% of undergraduates in computer science were international students.

For the most part, Americans don’t see the need for an advanced degree when there are so many professional opportunities waiting for them. Dan Spaulding, who oversees human resources at Zillow Group, the online real estate company, said that in specialised areas like machine learning and artificial intelligence, his company favours graduate degrees, but for the vast majority of its technical jobs, a bachelor’s degree in computer science is adequate. He said he has heard concerns from students and managers about an international chill, but for now the supply of students with computer science skills hasn’t been affected. “A great many of them are coming in with programming skills first and looking to radiate out into other business disciplines, product management, product design,” he said. “I just think going deep academically is not a priority for as many computer science students today.” In 1994, only about 40% of students who were enrolled in computer science PhD programmes were from outside the country.

As the economy improved, the percentage of Americans in graduate programmes dropped. “Going to grad school became less of a priority for so many students,” said Stuart Zweben, professor emeritus of computer science and engineering at Ohio State University. “You had to really be interested in research or something special.” The balance of computer science graduate programmes began to tilt toward so-called non-resident aliens in the late 1990s, when well-capitalised dot-coms began scouring for programmers, sometimes encouraging summer interns to drop out of school, Dr. Zweben said. Students from other countries have long seen graduate school as their best path to employment and residency in the United States, and as a source of the industry connections they are unlikely to find in their home countries.

9) The atomic theory of origami [Source:]
In 1970, an astrophysicist named Koryo Miura conceived what would become one of the most well-known and well-studied folds in origami: the Miura-ori. The pattern of creases forms a tessellation of parallelograms, and the whole structure collapses and unfolds in a single motion—providing an elegant way to fold a map. It also proved an efficient way to pack a solar panel for a spacecraft, an idea Miura proposed in 1985 and then launched into reality on Japan’s Space Flyer Unit satellite in 1995. Back on Earth, the Miura-ori has continued to find more uses. The fold imbues a floppy sheet with form and stiffness, making it a promising metamaterial—a material whose properties depend not on its composition but on its structure. The Miura-ori is also unique in having what’s called a negative Poisson’s ratio. When you push on its sides, the top and bottom will contract. But that’s not the case for most objects. Try squeezing a banana, for example, and a mess will squirt out from its ends. Researchers have explored how to use Miura-ori to build tubes, curves and other structures, which they say could have applications in robotics, aerospace and architecture. Even fashion designers have been inspired to incorporate Miura-ori into dresses and scarves.
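The negative Poisson’s ratio mentioned above can be made concrete with a little arithmetic: Poisson’s ratio is the negative of the transverse strain divided by the axial strain, so an ordinary material that bulges sideways when compressed has a positive ratio, while an auxetic sheet like the Miura-ori, which contracts sideways when compressed, has a negative one. The strain values in this sketch are invented purely for illustration:

```python
def poisson_ratio(transverse_strain, axial_strain):
    """Poisson's ratio: nu = -(transverse strain) / (axial strain)."""
    return -transverse_strain / axial_strain

# Ordinary material: axial compression (negative axial strain) makes it
# bulge outward (positive transverse strain) -> positive Poisson's ratio.
ordinary = poisson_ratio(transverse_strain=0.01, axial_strain=-0.03)

# Auxetic, Miura-ori-like sheet: axial compression makes it contract in
# the transverse direction as well -> negative Poisson's ratio.
auxetic = poisson_ratio(transverse_strain=-0.01, axial_strain=-0.03)

print("ordinary:", ordinary)  # positive
print("auxetic:", auxetic)    # negative
```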

Now Michael Assis, a physicist at the University of Newcastle in Australia, is taking a seemingly unusual approach to understanding Miura-ori and related folds: by viewing them through the lens of statistical mechanics. Traditionally, statistical mechanics tries to make sense of emergent properties and behaviours arising from a collection of particles, like a gas or the water molecules in an ice cube. But crease patterns are also networks—not of particles, but of folds. Using these conceptual tools normally reserved for gases and crystals, Assis is gaining some intriguing insights. For instance, research shows that by inverting a few creases—pushing a convex segment to make it concave, or vice versa—researchers could make the structure stiffer. Instead of being a flaw, they found, defects could be a feature. Just by adding or subtracting defects, you can configure—and reconfigure—a Miura-ori to be as stiff as you want. This drew the attention of Assis. His expertise is in statistical mechanics, which applies naturally to a lattice pattern like Miura-ori. In a crystal, atoms are linked by chemical bonds. In origami, vertices are linked by creases.

In Assis’ analysis of origami, a higher temperature causes defects to appear. But in this case, temperature doesn’t refer to how hot or cold the lattice is; instead, it represents the energy of the system. For example, by repeatedly opening and closing a Miura-ori, you’re injecting energy into the lattice and, in the language of statistical mechanics, increasing its temperature. At relatively low temperatures, the defects behave in an orderly fashion. And at high enough temperatures, when defects cover the entire lattice, the origami structure becomes relatively uniform. But in the middle, both the Miura-ori and another trapezoidal origami pattern appear to go through an abrupt shift from one state to another—what physicists would call a phase transition. This shows origami is complex; it has all the complexities of real-world metamaterials.
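The temperature-versus-defects relationship described above can be sketched with a toy two-state Boltzmann model. This is not Assis’ actual calculation—the energy scale and functional form here are assumptions chosen only to illustrate how defects go from vanishingly rare at low “temperature” to saturated at high temperature:

```python
import math

def defect_fraction(defect_energy, temperature):
    """Toy two-state Boltzmann estimate of the fraction of lattice
    vertices carrying a defect (an inverted crease), given the energy
    cost of one defect and an effective 'temperature' (energy injected
    into the lattice). Tends to 0 at low T and to 0.5 as T grows."""
    boltzmann = math.exp(-defect_energy / temperature)
    return boltzmann / (1.0 + boltzmann)

# Defects are rare when little energy has been injected, and approach
# half the lattice as the effective temperature rises.
for t in (0.1, 1.0, 10.0):
    print(t, defect_fraction(defect_energy=1.0, temperature=t))
```

Note that a smooth curve like this one cannot by itself produce the abrupt phase transition the article describes; that behaviour emerges only when interactions between neighbouring vertices are included, which is exactly where the full statistical-mechanics machinery comes in.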

Whether these conclusions actually apply to real-world origami is up for debate. Robert Lang, a physicist and origami artist, thinks that Assis’ models are too idealised to be of much use. For example, Lang said, the model assumes the origami can be made to fold flat even with defects, but in reality, defects can prevent the sheet from flattening. But the assumptions in the model are reasonable and necessary, especially if we want exact solutions, Assis said. In many engineering applications, such as the folding of a solar panel, you want the sheet to fold flat. The act of folding can also force defects to flatten. Unfortunately, the question of global flat-foldability is one of the hardest mathematics problems around and the gap between theory and designing real metamaterials and structures remains wide. To find out, researchers will need to carry out experiments to test Assis’ ideas and gauge whether the models can actually inform the design of origami structures.

10) From climate change to robots: What politicians aren’t telling us [Source: Financial Times]
Climate change is making natural disasters more frequent, and more Americans now live in at-risk areas. But meanwhile, Donald Trump argues on Twitter about what he supposedly said to a soldier’s widow. So far, Trump is dangerous less because of what he says or does than because of the issues he ignores. He’s not alone: politics in many western countries has become a displacement activity. Most politicians bang on about identity while ignoring automation, climate change and the imminent revolution in medicine. They talk more about the 1950s than the 2020s. According to the author, this is partly because they want to distract voters from real problems, and partly because today’s politicians tend to be lawyers, entertainers and ex-journalists who know less about tech than the average 14-year-old.

But the new forces are already transforming politics. Each new natural disaster in the USA will prompt political squabbles over whether Washington should bail out the stricken region. At-risk cities, such as Miami and New Orleans, will gradually lose appeal as the risks become uninsurable. Miami could fade as Detroit did. American climate denial may fade too, as tech companies displace Big Oil as the country’s chief lobbyists. Already in the first half of this year, Amazon outspent Exxon and Walmart on lobbying. Meanwhile, northern Europe, for some years at least, will benefit from its historical unique selling point: its mild and rainy climate. Its problem will be that millions of Africans will try to move there. As Africa gets hotter, drier and more overpopulated, people will struggle to feed themselves, says the United Nations University. So they will head north, in much greater numbers than Syrians have, becoming the new bogeymen for European populists. Also, everywhere, automation will continue to eat low-skilled jobs. While younger people might take up empathy-related work as care-givers or yoga teachers, older working-class men will probably embrace politicians like Trump.

The concept of working age will take a beating too. Many poorer people will work into their seventies, then die, skipping the now standard phase of retirement. Meanwhile, from the 2020s the rich will live ever longer as they start buying precision medicine. They will fix their faulty DNA and edit their embryos. Even if governments want to redress inequality, they won’t be able to, given that paying tax has become almost voluntary for global companies. The country hit hardest by automation could be China. China’s model of exploiting cheap factory labour without environmental regulations has run its course. Even if China’s economy keeps growing, low-skilled men won’t find appealing careers, and they won’t even have the option of electing a head of state like Trump. The most likely outcome: China’s regime joins the populist trend and runs with aggressive nationalism.

Troubled regimes will also ratchet up surveillance. Now they barely know what you say. In 10 years, thanks to your devices, they will know your next move even before you do. Already, satellites are monitoring Egypt’s wheat fields, so as to predict the harvest, which predicts the chance of social strife. Meanwhile, western politicians will probably keep obsessing over newsy identity issues.

- Saurabh Mukherjea is CEO (Institutional Equities) and Prashant Mittal is Analyst (Strategy and Derivatives) at Ambit Capital Pvt Ltd. Views expressed are personal.