
The Adaptive Benefits of High Trust

People who take seemingly irrational risks in the absence of a long trust-development history – known as ‘high trusters’ – often obtain superior outcomes. Here’s why

Published: Sep 27, 2010 06:34:07 AM IST
Updated: Sep 27, 2010 07:37:55 AM IST

People vary in their propensity to trust. Some assume that others are generally trustworthy and make themselves vulnerable to their counterparts until evidence challenges their assumptions. Others assume that people are generally untrustworthy and act accordingly until their counterparts demonstrate their trustworthiness gradually over time. This latter, low-trust orientation reflects the dominant ‘rational’ model of trust development. However, newer models allow for greater risk-taking early in relationships and argue that such risks often yield positive results for ‘high trusters’.

Most economic models of decision-making suggest that, to avoid exploitation, people should generally be defensive ‘low trusters’. Rational choice theories, for example, assume that all actors will seek to maximize their own personal utility in social interactions (i.e., behave self-interestedly): decision makers will seek their own advantage as they guard against the effects of others’ self-serving pursuits. Two parties in a prisoners’ dilemma who act according to the precepts of game theory, for instance, will both choose self-interestedly, and both will suffer relative to other, more mutually-rewarding outcomes. In a commons dilemma, mutually defensive, self-serving choices lead to the famous ‘tragedy of the commons’.
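To make this logic concrete, here is a minimal sketch in Python. The payoff numbers are purely illustrative – they are not drawn from any study discussed here – but any payoffs in which temptation beats mutual reward, which in turn beats mutual punishment and the ‘sucker’ outcome, produce the same result.

# Illustrative prisoners' dilemma payoffs (higher is better for each player).
# These numbers are hypothetical; any matrix with T > R > P > S behaves the same way.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual reward (R)
    ("cooperate", "defect"):    (0, 5),  # sucker (S) vs. temptation (T)
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual punishment (P)
}

def best_response(opponent_move):
    """Return the purely self-interested (utility-maximizing) reply to a given move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is the best response to either move, so two 'rational' players both defect...
assert best_response("cooperate") == "defect" and best_response("defect") == "defect"
# ...and each earns 1, even though mutual cooperation would have paid each of them 3.
print(PAYOFFS[("defect", "defect")], "versus", PAYOFFS[("cooperate", "cooperate")])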

These dominant expected-utility models of choice are inherently risk-averse and socially defensive in orientation. Low trusters seem socially savvy in light of such models, and it is tempting to see high trusters as gullible ‘Pollyannas’. However, rather than being naïve, high trusters might instead be more sensitive to information that predicts whether those with whom they interact are trustworthy or untrustworthy.

Toshio Yamagishi of Japan’s Hokkaido University has theorized that generalized trust is a form of social intelligence that, counter to game theory’s predictions, can be highly adaptive. His theory suggests that high trusters, who take more social risks and are therefore more vulnerable to exploitation, obtain more differentiating social data and learn more – for example, “Ah, this is what someone who will deceive me does…” In contrast, by vigilantly defending themselves from all possible exploitation, low trusters seem to be suspicious of everyone: they send signals that limit the development of potentially-beneficial relationships and, therefore, in the absence of differentiating social data, they learn less about distinguishing trustworthy from untrustworthy others. Thus, by defending themselves from the costs associated with exploitation, low trusters can incur potentially massive opportunity costs.

In a recent study, we tested whether high trusters – who may have learned to be more sensitive to negative social information than low trusters – are also better at lie detection. If they are, their lie-detection ability may be one of the key reasons why high trusters achieve the kinds of social-interaction successes that Yamagishi has documented.

Psychologists have long studied the ability to detect deception, with rather bleak conclusions: human beings are surprisingly poor lie detectors. A recent meta-analysis concluded that people achieve an average of 54 per cent correct lie-truth judgments, correctly classifying 47 per cent of lies as deceptive and 61 per cent of truths as non-deceptive. Although research finds variation across groups, this meta-analysis concluded that even professionals who work to detect lies – psychiatrists, judges, police, etc. – do no better than the general public at detecting lies of the sort studied, and the mean performance of several of these professional groups was actually lower than that of the general public (though not significantly).
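As a rough arithmetic check – and assuming, as is typical in such studies, an even mix of lie and truth trials – the overall figure quoted above is simply the average of the two classification rates.

# Quick check of the meta-analytic figures quoted above, assuming an even
# split of lie and truth trials (a common design choice in this literature).
lie_accuracy = 0.47    # lies correctly classified as deceptive
truth_accuracy = 0.61  # truths correctly classified as non-deceptive
overall = (lie_accuracy + truth_accuracy) / 2
print(f"Overall lie-truth accuracy: {overall:.0%}")  # prints 54%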

In the necessarily messy and ambiguous context of social interactions and perceptions, this logic also suggests that, although high trusters should be better lie detectors, they might not be better truth detectors. High trusters (who take more social risks, according to Yamagishi’s argument) will experience more betrayals and lies over time than more risk-averse low trusters, and therefore will have the opportunity to learn hard lessons from their errors. However, the error of assessing a truthful counterpart to be deceitful does not offer the same immediacy or clarity of developmental feedback; it just doesn’t hurt as much and isn’t as memorable. Thus, although high trusters may have become attuned – consciously or unconsciously – to signals of potential betrayal, the rest of their experience is unlikely to be filled with perfect predictions of others’ trustworthiness or even particularly good feedback. When judging largely honest counterparts, then, high trusters should be neither more nor less error-prone than low trusters.

Our research
Our research investigated this potential pattern. We predicted that high trusters would be better lie detectors but not better truth detectors than low trusters – i.e., that generalized trust would be positively related to lie-detection ability but not to truth-detection ability. Before testing these hypotheses, we wanted to know whether laypeople hold the same beliefs, since, on their face, the predictions seem counterintuitive. We therefore surveyed 46 MBA students before our main experiment to assess lay expectations about lie-detection ability.

The participants, who had several years of full-time work experience on average, each read a scenario about a recent spate of dishonesty in their organization’s recruitment and employment interviews. The problem had cost the organization “… dearly in terms of employee time, divisional productivity, and frankly, morale.” Participants had to choose one of two senior managers, who were “comparable in terms of both their experience and job-relevant capabilities”, to interview new job applicants. As described, the only difference between the two managers was that one was a high truster and the other a low truster.

A great majority of the participants (39 of 46, or 85 per cent) chose the low truster, confirming our expectation that people generally believe that generalized trust and lie detection ability are negatively correlated – that low trusters make better lie detectors than high trusters. Asked why they chose the low truster, the most common answers indicated a belief in the general gullibility, and to a lesser degree the inferior intelligence, of high trusters. These results indicated that our main hypotheses ran exactly counter to typical beliefs about the relationship between generalized trust and lie detection ability.

In our main experiment, 29 participants, ranging in age from 19 to 36, were recruited through on-campus invitations. Stimulus materials were videos of second-year MBA students in simulated employment interviews regarding a real job. The interviewees were provided with the job description and told that an expert in lie detection would interview them. The instructions explained that they would be randomly assigned to a “truth” condition in which they should respond to all questions in an entirely truthful fashion, or a “lie” condition in which they should lie about at least three significant things during the interview. Interviewees in the “lie” condition were told to create their own lies to make them appear to be more attractive job applicants. Interviewees dressed as they normally would for a real job interview.

All of the interviewees were instructed to do their best to ‘get the job’. In the truth condition, the instructions emphasized that they should not lie under any circumstances; in the lie condition, instructions emphasized that they should tell at least three substantial lies that they thought would significantly increase their chances of getting the job. All interviewees had a chance to review the kinds of standard interview questions they could expect in advance. Interviewees were guaranteed payment of $20; they were told that they would receive an additional $20 if the interviewer, a lie detection expert, believed that they were telling the truth. The financial incentive for being believed applied in both conditions. Past lie-detection research has made it clear that the targets of lie detection judgments must have significant incentives if they are to produce worthwhile stimuli. In reality, the interviewer had no special lie detection expertise, but it was clear that the interviewees believed that he did.


Of 16 videos initially created, eight were selected for the final study – four from the ‘lie’ condition and four from the ‘truth’ condition – based on gender balance, appropriateness of attire, comprehensibility, and the number and substance of lies told, with a preference for interviews that contained more, and more-substantial, lies. Several days in advance, participants completed a web survey that included Yamagishi’s measure of generalized trust. Each participant viewed the eight videos, one at a time, in random order, and made a series of judgments about each interviewee immediately after each video. Their judgments included (1) whether the interviewee had lied or not; (2) how confident they were about this conclusion; and Likert-scaled evaluations of interviewees’ (3) overall truthfulness (“this person was truthful in response to the interview questions”); (4) global honesty (“in general I think this person is honest”); and (5) their hiring intentions (“I would hire this person for this position”). At the end of the study, participants were also asked to describe the aspects of interviewees’ behavior to which they had attended when making their truthfulness judgments.

Given that detecting liars amid truth-tellers is an exercise in correctly discerning ‘signal’ from ‘noise’, we used Signal Detection Theory (SDT) to analyze the data. Regression analyses using the SDT-relevant dependent variables indicated that, contrary to lay expectations, high trusters were more accurate in detecting liars than low trusters. Although people seem to believe that low trusters are better lie detectors and less gullible than high trusters, our results suggest that the reverse is true: high trusters were better lie detectors than low trusters; they also formed more appropriate impressions and hiring intentions, and reported attending to more helpful diagnostic information.
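For readers unfamiliar with SDT, the sketch below illustrates the general approach: a judge’s lie-truth calls are decomposed into a sensitivity index (d′, the ability to tell liars from truth-tellers) and a response bias (c, the overall tendency to call people liars). The function name and the example counts are hypothetical, and the equal-variance Gaussian model shown here is a standard textbook formulation rather than the authors’ exact analysis.

from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity (d') and response bias (c).

    A 'hit' is calling a lying interviewee a liar; a 'false alarm' is calling
    a truthful interviewee a liar. Rates are nudged away from 0 and 1 (a common
    convention) so the inverse-normal transform stays finite.
    """
    n_lie = hits + misses
    n_truth = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_lie, 0.5 / n_lie), 1 - 0.5 / n_lie)
    fa_rate = min(max(false_alarms / n_truth, 0.5 / n_truth), 1 - 0.5 / n_truth)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # discrimination ability
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # bias toward 'liar' vs. 'truthful' calls
    return d_prime, criterion

# Hypothetical judge facing four liars and four truth-tellers:
# catches three of the four liars and wrongly accuses one truth-teller.
print(sdt_measures(hits=3, misses=1, false_alarms=1, correct_rejections=3))

In an analysis like the one described above, participant-level indices of this kind would then be entered as the SDT-relevant dependent variables in the regressions.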

Yamagishi presented three potential adaptive explanations for a positive relationship between generalized trust and ‘social intelligence’ – the ability to understand one’s own and other people’s internal states and to use that understanding in social relations. First, high generalized trust drives social risk-taking, and the possibility of exploitation pushes high trusters to invest in learning how to identify people who are not trustworthy; low trusters need no such skills, since a social posture of defensiveness is a reliable (if costly) prophylactic against exploitation. Second, advanced sensitivity to trustworthiness cues reduces a person’s vulnerability to detrimental consequences. Those who are less sensitive are better off assuming that unknown others are generally untrustworthy – leading to less generalized trust among the less socially intelligent – because assuming that people are liars prevents a person from being duped. In contrast, being effectively sensitive makes it safe to assume that others generally tell the truth, as this sensitivity will help detect a lie before a person falls victim to it. Finally, other, as-yet-unidentified factors might also contribute to these effects.

Our studies cannot determine which of these causal forces is most powerful; all three accounts are plausible. Undoubtedly, some people are more natural lie detectors, just as some people have higher general intelligence, allowing them to act with greater confidence and less risk, and to learn more rapidly along the way. It is also plausible that some people take risks and learn from their mistakes – and in so doing develop the skills that facilitate and encourage high trust.

Other individual differences – e.g., Machiavellianism (‘the employment of cunning and duplicity in general conduct’, derived from the Italian Renaissance diplomat and writer Niccolò Machiavelli) and pro-social orientation – have also been associated with successful social adaptation and social perception. Machiavellianism scores tend to be correlated with emotional detachment, low concern for ethics, a general lack of sincerity in interpersonal relations, and a willingness to exploit others. Research has also found that ‘High Machs’ are convincing liars and more successful at social manipulation when environmental constraints are low, suggesting a high degree of social intelligence coupled with extremely low generalized trust. This seems to contradict both the idea of ‘generalized trust as social intelligence’ and our findings.

However, Machiavellianism seems to be primarily related to social success that relies on convincing and exploiting others, rather than on accurately perceiving them, as in the case of lie detection. In one study, researchers found no difference in lie detection accuracy between High and Low Machs, and Machiavellianism has been found to be negatively related to emotional intelligence. Further, the Machiavellian social strategy has been characterized as a ‘defect’ strategy that is successful in only a limited range of contexts. Data also suggest that Machiavellianism hurts performance in marketing careers, and that there is no relationship between Machiavellianism and adaptiveness. One study also found that Low rather than High Machs employ more subtle, variable, and adaptive social strategies.
 
Although we did not collect Machiavellianism data in this study, we found generalized trust and Machiavellianism to be negatively related in a different, as yet unpublished study. Thus, although Machiavellianism may be positively related to certain kinds of social success in specific social contexts, it seems less clear that it reflects social intelligence, and its negative relationship to generalized trust is not surprising.

In a number of ways, the current research aligns with the literature on pro-social orientations. In a seminal paper, Harold Kelley and Anthony Stahelski observed that cooperative actors accurately perceived the world as a heterogeneous mix of cooperators and competitors, whereas competitors perceived the world as homogeneously competitive. They noted that, as a result, competitors create social dynamics that elicit competition. Cooperators, in contrast, had more accurate social perceptions and did not create negatively self-fulfilling prophecies. Kelley and Stahelski also observed that competitors were competitive regardless of their counterparts’ behavior, whereas cooperators tended to match their behavior to their counterparts’. Subsequent research has shown that pro-socially oriented people are better predictors of others’ choices than individualistically- or competitively-oriented people. This suggests that pro-social actors are more behaviorally flexible and responsive, and have more accurate social perceptions, than their more competitive counterparts. This logic, and these findings, clearly align with our own.

Our research has several implications for research on lie detection, generalized trust and trust development; it also offers a potential mechanism by which seemingly irrational, risky behaviors can lead to socially adaptive advantage. First, few previous studies have documented such a strong relationship between a personality variable and lie detection performance. This opens up many interesting possibilities for future research.

Second, seemingly irrational risk-taking in the absence of a long trust-development history between parties has been an important puzzle in the social sciences. The dominant rational models of choice and trust development cannot easily accommodate such behaviour, yet such risk-taking often leads to superior outcomes. The current findings suggest that high trusters may be able to take more social risks early in their relationships than low trusters because they are better at detecting deceit in their exchange partners. This could reduce not only the potential costs associated with exploitation but also the economic and social opportunity costs incurred by low trusters who forgo potentially-worthwhile relationships.

In sum, the tendency to view the world’s high trusters as pie-in-the-sky Pollyannas deserves some rethinking. High trusters may deserve more credit than they normally receive.

Nancy Carter is a doctoral candidate in the Organizational Behaviour group at the Rotman School of Management.  J. Mark Weber is an assistant professor of Organizational Behaviour at the Rotman School. The paper on which this is based, “Not Pollyannas: Higher Generalized Trust Predicts Lie Detection Ability”, was published in the July 2010 edition of Social Psychological and Personality Science. Rotman faculty research is ranked #11 in the world by The Financial Times.

[This article has been reprinted, with permission, from Rotman Management, the magazine of the University of Toronto's Rotman School of Management]
