People vary in their propensity to trust. Some assume that others are generally trustworthy and make themselves vulnerable to their counterparts until evidence challenges that assumption. Others assume that people are generally untrustworthy and act accordingly until their counterparts demonstrate their trustworthiness gradually over time. The latter, low-trust orientation reflects the dominant ‘rational’ model of trust development. Newer models, however, allow for greater risk-taking early in relationships and argue that such risks often yield positive results for ‘high trusters’.
Most economic models of decision-making suggest that, to avoid exploitation, people should generally be defensive ‘low trusters’. Rational choice theories, for example, assume that all actors seek to maximize their own personal utility in social interactions (i.e., behave self-interestedly): decision makers pursue their own advantage while guarding against the effects of others’ self-serving pursuits. Two parties in a prisoner’s dilemma who act according to the precepts of game theory, for instance, will both choose self-interestedly, and both will suffer relative to other, more mutually rewarding outcomes. In a commons dilemma, mutual defensively self-serving choices lead to the famous ‘tragedy of the commons’.
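The prisoner's dilemma logic can be sketched with the standard textbook payoff values; the specific numbers below are illustrative assumptions, not figures from the article:

```python
# Illustrative prisoner's dilemma payoffs as (my payoff, opponent's payoff).
# These are conventional textbook values; the article specifies no matrix.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """A purely self-interested player's best reply to a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is the dominant strategy against either opponent move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (1, 1) leaves both players worse off than
# mutual cooperation (3, 3) -- the dilemma the article describes.
assert PAYOFFS[("defect", "defect")][0] < PAYOFFS[("cooperate", "cooperate")][0]
```

The same structure underlies the commons dilemma: each actor's individually rational, defensive choice produces a collectively inferior outcome.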
These dominant expected-utility models of choice are inherently risk-averse and socially defensive in orientation. Low trusters seem socially savvy in light of such models, and it is tempting to see high trusters as gullible ‘Pollyannas’. However, rather than being naïve, high trusters might instead be more sensitive to information that predicts whether those with whom they interact are trustworthy or untrustworthy.
Toshio Yamagishi of Japan’s Hokkaido University has theorized that generalized trust is a form of social intelligence that, counter to game theory’s predictions, can be highly adaptive. His theory suggests that high trusters, who take more social risks and are therefore more vulnerable to exploitation, obtain more differentiating social data and learn more – for example, “Ah, this is what someone who will deceive me does…” In contrast, by vigilantly defending themselves from all possible exploitation, low trusters seem to be suspicious of everyone: they send signals that limit the development of potentially beneficial relationships and, therefore, in the absence of differentiating social data, they learn less about distinguishing trustworthy from untrustworthy others. Thus, by defending themselves from the costs associated with exploitation, low trusters can incur potentially massive opportunity costs.
In a recent study we tested whether high trusters – who may have learned to be more sensitive to negative social information than low trusters – are also better at lie detection. If they are, this ability may be one of the key reasons why high trusters achieve the kinds of social interaction successes that Yamagishi has documented.
Psychologists have long studied the ability to detect deception, with rather bleak conclusions: human beings are surprisingly poor lie detectors. A recent meta-analysis concluded that people achieve an average of 54 per cent correct lie-truth judgments, correctly classifying 47 per cent of lies as deceptive and 61 per cent of truths as non-deceptive. Although research finds variation across groups, this meta-analysis concluded that even professionals who work to detect lies – psychiatrists, judges, police, etc. – do no better than the general public at detecting lies of the sort studied, and the mean performance of several of these professional groups was actually lower than the general public (though not significantly).
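The meta-analysis figures are internally consistent: assuming the judged stimuli were split evenly between lies and truths (an assumption on our part, though a common design), overall accuracy is simply the mean of the two rates:

```python
# Rates reported by the meta-analysis cited in the article.
lie_accuracy = 0.47    # lies correctly classified as deceptive
truth_accuracy = 0.61  # truths correctly classified as non-deceptive

# With a half-lies, half-truths stimulus set, overall accuracy is the
# simple average of the two rates -- matching the reported 54 per cent.
overall = (lie_accuracy + truth_accuracy) / 2
assert abs(overall - 0.54) < 1e-9
```

Note the asymmetry the averages conceal: people are worse than chance at catching lies but better than chance at recognizing truths, a pattern sometimes called the truth bias.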
In the necessarily messy and ambiguous context of social interactions and perceptions, this logic also suggests that, although high trusters should be better lie detectors, they might not be better truth detectors. High trusters, who take more social risks on Yamagishi's argument, will experience more betrayals and lies over time than more risk-averse low trusters, and will therefore have the opportunity to learn hard lessons from their errors. The opposite error – assessing a truthful counterpart to be deceitful – does not offer the same immediacy or clarity of developmental feedback; it simply doesn't hurt as much and isn't as memorable. Thus, although high trusters may have become attuned, consciously or unconsciously, to signals of potential betrayal, the rest of their experience is unlikely to supply perfect predictions of others' trustworthiness or even particularly good feedback; in a primarily deceit-free domain, their likelihood of error should be no better or worse than that of low trusters.
Our research investigated this potential pattern. We predicted that high trusters would be better lie detectors but not better truth detectors than low trusters, i.e., that generalized trust would be positively related to lie detection but not to truth detection ability. Prior to testing these hypotheses, we wanted to know whether people normally hold the same beliefs, since, on their face, the predictions seem counterintuitive. Thus, before our main experiment we surveyed 46 MBA students to assess lay expectations about lie detection abilities.
The participants, who had several years of full-time work experience on average, each read a scenario about a recent spate of dishonesty in their organization’s recruitment and employment interviews. The problem had cost the organization “… dearly in terms of employee time, divisional productivity, and frankly, morale.” Participants had to choose one of two senior managers, who were “comparable in terms of both their experience and job-relevant capabilities”, to interview new job applicants. As described, the only difference between managers was that one was a high truster and the other a low truster.
A large majority of the participants (39 of 46; 85 per cent) chose the low truster, confirming our expectation that people generally believe that generalized trust and lie detection ability are negatively correlated – that low trusters make better lie detectors than high trusters. Asked why they chose the low truster, the most common answers indicated a belief in the general gullibility, and to a lesser degree the inferior intelligence, of high trusters. These results indicated that our main hypotheses ran exactly counter to typical beliefs about the relationship between generalized trust and lie detection ability.
In our main experiment, 29 participants, ranging in age from 19 to 36, were recruited through on-campus invitations. Stimulus materials were videos of second-year MBA students in simulated employment interviews regarding a real job. The interviewees were provided with the job description and told that an expert in lie detection would interview them. The instructions explained that they would be randomly assigned to a “truth” condition in which they should respond to all questions in an entirely truthful fashion, or a “lie” condition in which they should lie about at least three significant things during the interview. Interviewees in the “lie” condition were told to create their own lies to make them appear to be more attractive job applicants. Interviewees dressed as they normally would for a real job interview.
All of the interviewees were instructed to do their best to ‘get the job’. In the truth condition, the instructions emphasized that they should not lie under any circumstances; in the lie condition, instructions emphasized that they should tell at least three substantial lies that they thought would significantly increase their chances of getting the job. All interviewees had a chance to review the kinds of standard interview questions they could expect in advance. Interviewees were guaranteed payment of $20; they were told that they would receive an additional $20 if the interviewer, a lie detection expert, believed that they were telling the truth. The financial incentive for being believed applied in both conditions. Past lie-detection research has made it clear that the targets of lie detection judgments must have significant incentives to create worthwhile stimuli. In reality, the interviewer had no special lie detection expertise, but it was clear that the interviewees believed that he did.
Of 16 videos initially created, eight were selected for the final study – four from the ‘lie’ condition and four from the ‘truth’ condition – based on gender balance, appropriateness of attire, comprehensibility, and the number and substance of lies told, with a preference for interviews that contained more, and more substantial, lies. Several days in advance, participants completed a web survey that included Yamagishi’s measure of generalized trust. Each participant viewed the eight videos, one at a time, in random order, and made a series of judgments about each interviewee immediately after each video. Their judgments included (1) whether the interviewee had lied or not; (2) how confident they were about this conclusion; and Likert-scaled evaluations of interviewees’ (3) overall truthfulness (“this person was truthful in response to the interview questions”); (4) global honesty (“in general I think this person is honest”); and (5) their hiring intentions (“I would hire this person for this position”). At the end of the study, participants were also asked to describe the aspects of interviewees’ behavior to which they had attended when making their truthfulness judgments.
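Because the design crosses four ‘lie’ videos with four ‘truth’ videos, lie detection and truth detection can be scored as two separate accuracy rates per participant. The sketch below shows how; the sample judgments are hypothetical, not the study's data:

```python
# Hypothetical judgments for one participant across the eight videos:
# each entry is (actual condition, participant's lie/truth judgment).
judgments = [
    ("lie", "lie"), ("lie", "truth"), ("lie", "lie"), ("lie", "lie"),
    ("truth", "truth"), ("truth", "lie"), ("truth", "truth"), ("truth", "truth"),
]

def accuracy(judgments, condition):
    """Proportion of videos from `condition` that were classified correctly."""
    relevant = [(actual, judged) for actual, judged in judgments
                if actual == condition]
    correct = sum(1 for actual, judged in relevant if judged == actual)
    return correct / len(relevant)

# Scored separately, as the hypotheses require: trust is predicted to
# relate to the first rate but not the second.
lie_detection = accuracy(judgments, "lie")      # correct calls on lying interviewees
truth_detection = accuracy(judgments, "truth")  # correct calls on truthful interviewees
```

Keeping the two rates separate is what lets the design test the asymmetric prediction: an overall accuracy score would blur any trust-related advantage in catching lies with indifferent performance on truths.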
[This article has been reprinted, with permission, from Rotman Management, the magazine of the University of Toronto's Rotman School of Management]