Self-selection into laboratory experiments: pro-social motives versus monetary incentives

Laboratory experiments have become a widespread tool in economic research. Yet, there is still doubt about how well the results from lab experiments generalize to other settings. In this paper, we investigate the self-selection process of potential subjects into the subject pool. We alter the recruitment email sent to first-year students, either mentioning the monetary reward associated with participation in experiments, appealing to the importance of helping research, or both. We find that the sign-up rate drops by two-thirds if we do not mention monetary rewards. Appealing to subjects' willingness to help research has no effect on sign-up. We then invite the so-recruited subjects to the laboratory to measure their pro-social and approval motivations using incentivized experiments. We do not find any differences between the groups, suggesting that neither adding an appeal to help research nor mentioning monetary incentives affects the level of social preferences and approval seeking of experimental subjects.


Introduction
Laboratory experiments have become a widespread tool in economic research. They have delivered many new insights about what preferences people hold, how people act on those preferences, and how people interact. Yet, there is still doubt about how well behavior in lab experiments generalizes to non-laboratory settings, and how well the behavior of lab subjects generalizes to other groups of decision makers. In this paper, we focus on the second issue and investigate what drives the self-selection process into economics lab experiments.
Research on self-selection into psychology experiments (e.g., Rosnow 1969, 1975) has often concluded that subjects participate for pro-social reasons, wanting to help the researchers, or because of their need for social approval. This has led some economists to conjecture that lab experiments exaggerate the extent of social preferences: if subjects come to the lab to help researchers, one would expect these subjects to be more pro-social (Levitt and List 2007). Economics lab experiments, however, differ from most psychology experiments in that participants are paid according to their decisions. It could thus be that the self-selection process into economics lab experiments differs and is at least partly based on the expected monetary payment. In fact, one may be concerned that subjects in economics lab experiments are less pro-social than the overall student population. This concern may be further compounded by the overrepresentation of economics students in the typical experimental subject pool, who, it has been argued, are less pro-social because of indoctrination or self-selection (Frank et al. 1993; Frey and Meier 2003; Bauman and Rose 2011).
To investigate the selection process into economics experiments, we conducted a field experiment in which we altered the recruitment message sent to first-year university students inviting them to join the experimental subject pool. 1 Students were assigned to one of three treatments: in the Money&Appeal treatment the recruitment email stated that subjects can earn money and contained an appeal for their help. To check whether volunteering for lab experiments is about earning money, we dropped any mention of money from the email in the second treatment and focused only on the appeal for help (AppealOnly treatment). To check whether volunteering is about pro-social motivation and the need for approval, we changed the recruitment email such that participants were only informed about the monetary payment (MoneyOnly treatment). We compare the sign-up rates in the three treatment groups to understand the self-selection mechanism into lab experiments. To inform our treatment comparisons, we use a simple model of self-selection in which subjects sign up to experiments because of monetary and/or pro-social or approval-need motives. We show that the comparison between Money&Appeal and AppealOnly can be used to gauge the importance of monetary reasons for participating in experiments; the comparison between Money&Appeal and MoneyOnly measures the importance of pro-social reasons.
We find that the sign-up rate drops by about two-thirds if we do not mention monetary rewards. Appealing to subjects' willingness to help research has no effect on sign-up. In the Money&Appeal and MoneyOnly treatments, 13.8 and 14.6 percent of contacted students sign up respectively. These two rates are not statistically significantly different from each other. In the AppealOnly treatment, only 5.0 percent sign up, significantly less than in either of the other treatments. These results suggest that the expected monetary payments are the main driver of selection into economics lab experiments.
While we do not find an effect of the appeal on net sign-up rates, it could still be that including the appeal in the recruitment email attracts different types of subjects in terms of their pro-social motivation. For example, it could be that the appeal decreases subjects' beliefs about the amount they may be paid for participating in the experiments, reducing sign-up and offsetting a possible positive direct effect of the appeal. We thus invited the so-recruited subjects to a lab experiment to verify whether the null effect of the appeal on net sign-up also translates into a similar null effect on the pro-social and approval motivations of the selected subjects. In a series of incentivized experiments, we measured subjects' social values, cooperativeness, and sensitivity to experimenter demand, using tasks commonly employed in the literature on pro-social and approval motives. If one is concerned about the external validity of lab experiments, our test of experimenter demand should be of particular interest: subjects who attend lab experiments to help the researchers will also alter their behavior during the experiment in light of what they expect helps the researchers most.
We do not find any significant differences in the distribution of pro-social and approval motives across treatments. Adding an appeal to help research not only leaves the sign-up rate unchanged, it also does not affect the level of social preferences and approval seeking in the so-recruited population. This result strengthens the finding from the field experiment that pro-social motivations play only a minor role in the process of selection into economics experiments.
Our results suggest that it is unlikely that the self-selection of participants into experiments leads to an over-estimation of social preferences. Our paper also addresses a second concern that has not been as prominent in the literature: assume that selection into lab experiments is indeed driven by pro-social motives. Then self-selected participants should have higher measures of social preferences. If studies find no such difference between subjects and non-subjects (as is the case, for example, for Cleave et al. 2013 and Falk et al. 2013), then the conclusion must be that lab measurements of social preferences are poorly correlated with true pro-social motives. Viewed this way, if selection is driven by pro-social motives, we face a Catch-22 and the use of lab experiments to gauge the importance of social preferences is never a good idea: either they exaggerate the prevalence of social preferences (because of the selection) or they are not able to predict pro-social behavior outside the lab. The jury is still out on how well lab-experimental measures of social preferences predict behavior outside the lab (compare, for example, Rustagi et al. 2010 with Stoop et al. 2012; see Camerer 2011; Coppock and Green 2013 for overviews); our results, however, suggest that a simple story of self-selection via social preferences cannot be used to argue against these lab-experimental measures.
We contribute to the fast-growing literature that examines the generalizability of lab experimental results, part of which focuses on the effect of selection. Most of these studies take the selection process as given and examine whether the selected sample differs from the total population (e.g., Cleave et al. 2013; Falk et al. 2013; Slonim et al. 2013), from other non-selected samples (e.g., Eckel and Grossman 2000; Gaudecker et al. 2012; Anderson et al. 2013), or from other selected samples (e.g., Burks et al. 2009; Belot et al. 2010; Anderson et al. 2013). In contrast, our paper tries to influence the selection process directly to understand what brings subjects to the lab. 2 The paper most closely related to ours is Krawczyk (2011), who also changes the invitation to lab experiments and observes, like we do, that subjects enter the pool more often when monetary rewards are prominent. 3 However, our study extends Krawczyk's design in several important dimensions: (i) Since Krawczyk compares only two treatments (emphasizing either the monetary or non-monetary rewards of participation), he can only state that the monetary motive is more important than non-monetary motivations. With our three treatments, we can separately identify the effects of monetary and non-monetary motives. (ii) The AppealOnly and MoneyOnly treatments suppress all information about monetary or non-monetary rewards respectively, while Krawczyk only changes the emphasis and always mentions money. We also made sure that no information about monetary payments for experiments at our university was available on the web while we sent out the recruitment emails. This strengthens our finding that subjects' behavior does not differ across treatments even under this starker treatment variation. (iii) Finally, and most importantly, we run a whole battery of incentivized preference elicitation experiments with our differently-recruited subjects to test different nuances of pro-social and approval-seeking behavior.
Krawczyk focuses instead on subjects' willingness to participate in an unpaid survey and an un-incentivized altruism measure derived from the survey as outcome variables.
The paper is structured as follows. In the next section we present the design and results of our field experiment. We also present a very simple model of self-selection into lab experiments to provide for a clearer interpretation of design and results. Section 3 reports the design and results of the lab experiment. Section 4 concludes.

2 Other studies have investigated how differences in recruitment procedures may affect participation rates and subsequent behavior. Harrison et al. (2009), for instance, study how information about a guaranteed show-up fee for participating in experiments affects selection on risk attitudes. There has also been research on the effects of various forms of recruitment outside economics. Tomporowski et al. (1993), for example, compare the performance on laboratory tests of attention and memory of subjects who received monetary incentives, course-credit incentives, or for whom participation was a course requirement. Tishler and Bartholomae (2002) review the literature on the role of financial incentives in healthy volunteers' decisions to participate in clinical trials.

3 Relatedly, Cubitt et al. (2011) report that monetary incentives increase rates of survey participation. They also find that the different participation incentives do not affect responses in the survey. Also related is the study by Guillén and Veszteg (2012), who find that the decision to repeatedly participate in laboratory experiments is positively related to subjects' financial performance in previous experiments.

Design
In October 2010 we sent an email to 5,725 first-year undergraduate students at the University of Nottingham inviting them to volunteer for research studies involving laboratory and internet experiments conducted at the School of Economics Centre for Decision Research and Experimental Economics (CeDEx). Students had only recently arrived at the university and were thus not accustomed to economics lab experiments. Students were randomly assigned to one of three treatments, which differed in the content of the recruitment email message (reproduced in Online Appendix A).
In all treatments the email contained a brief description of what an economics experiment is, and included a link to a website where students could complete the registration. The treatments differed in whether or not the email contained several sentences (1) informing students that participation in experiments is usually rewarded with a cash payment; and (2) appealing to students' willingness to help scientific research.
This information is typically provided in invitation letters used to recruit students to volunteer for experimental research, and this was also the case for invitations sent in previous years at Nottingham. In our Money&Appeal treatment the email message contained both the information about cash payments (e.g., "You will typically receive some reward (usually cash) in return for your participation") and the appeal for students to volunteer (e.g., "Your participation is crucial to the success of our research, and we will highly appreciate your contribution and be really grateful for your collaboration"). In the MoneyOnly treatment we used the same email as in Money&Appeal but deleted all sentences emphasizing the value of participation. Finally, in the AppealOnly treatment students received the same email as in Money&Appeal but without references to the existence of financial incentives. 4

Students were randomly assigned to treatments depending on the last digit of their student ID number. With this procedure we assigned 1,722 students to Money&Appeal, 1,734 students to MoneyOnly and 2,269 students to AppealOnly. 5 The emails were sent out on October 7th 2010. We focus on responses to the invitation emails between October 7th and October 24th. On October 24th, in fact, all subjects who had by then agreed to volunteer were sent an email inviting them to a laboratory experiment, described in detail in Sect. 3.

4 Students completed the registration via a website (http://www.nottingham.ac.uk/~lezorsee/orsee/public/) hosting the web-based online recruitment system ORSEE (Greiner 2004). The website also contains additional details about the Research Centre, a statement about our privacy policy, and a description of the rules and practices that participants must agree to abide by. Students can also use the website to browse through a list of frequently asked questions. During the whole period of the experiment (October 7th 2010 to November 15th 2010) we temporarily disabled any feature of the website (and of any University webpage linking to it) that could void our treatment manipulations (e.g., we temporarily removed the FAQ: "Do you pay me for participating in experiments? How much?").

5 Since each student ID number can end with 1 of 10 possible digits and there are only 3 treatments, we had to assign 4 possible end-digits to one treatment.
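The digit-based assignment described in footnote 5 can be sketched as follows. The actual digit-to-treatment mapping is not reported, so the mapping below is hypothetical; it gives AppealOnly four of the ten digits, consistent with that group's larger size (2,269 of 5,725 invited students).

```python
# Illustrative sketch of randomization by the last digit of the student ID
# (footnote 5). The digit-to-treatment mapping below is a hypothetical
# assignment, not the one actually used.

DIGIT_TO_TREATMENT = {
    0: "Money&Appeal", 1: "Money&Appeal", 2: "Money&Appeal",
    3: "MoneyOnly", 4: "MoneyOnly", 5: "MoneyOnly",
    6: "AppealOnly", 7: "AppealOnly", 8: "AppealOnly", 9: "AppealOnly",
}

def assign_treatment(student_id: str) -> str:
    """Assign a student to a treatment based on the last digit of the ID."""
    return DIGIT_TO_TREATMENT[int(student_id[-1])]
```

Because ID digits are effectively unrelated to student characteristics, such a rule delivers (approximately) random assignment without needing a separate randomization device.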

As highlighted by Levitt and List (2007) and List (2007), a long tradition of research in social psychology has focused on the characteristics of subjects volunteering for psychological experiments. This research suggests that the decision to participate in an experiment may be driven by pro-social motives, such as altruism or cooperativeness towards the experimenter, and by a desire to seek social approval. 6 The recruitment email used in Money&Appeal (and AppealOnly) appealed to these motives. Students were told that their participation was of great value for academic research, and that researchers would highly appreciate students' contribution and be grateful for their collaboration. The appeal plays on students' pro-social and approval motives in two ways. First, the explicit reference to the researchers as beneficiaries of the volunteering activity aims to reduce the psychological distance between the donor (the subject) and the donee (the researchers), and thus trigger a stronger empathic response from the subject. Previous research has shown that being able to identify the beneficiary of a pro-social act increases pro-social behavior (the "identifiable victim effect"; see, e.g., Jenni and Loewenstein 1997; Small and Loewenstein 2003). More generally, there is evidence that reducing the social distance between interacting parties increases pro-social behavior (Hoffman et al. 1996; Bohnet and Frey 1999; Charness and Gneezy 2008). Second, the email highlights the potential rewards of participating in terms of approval and appreciation, and thus makes these motives more salient.
On the other hand, the email received by students in the MoneyOnly treatment did not contain any appeal to volunteer. We can thus assess the strength of the pro-social and approval-need motives for volunteering by comparing the recruitment response rates in Money&Appeal and MoneyOnly: if pro-social and approval-need motives are important determinants of the decision to volunteer, Money&Appeal should yield a higher response rate than MoneyOnly.
As already noted by Kagel et al. (1979), an important difference between psychology and economics experiments is that in the latter participants typically receive non-trivial financial incentives for participation. The decision to volunteer for economics experiments could thus be predominantly driven by the prospect of financial reward, and this may reduce the role of the pro-social and approval-need motives relative to psychology experiments. In fact, the use of financial incentives in economics experiments may lead to an over-representation of subjects who are mainly interested in maximizing their own material gain.
To examine the role of financial incentives in the decision to volunteer for experiments we compare recruitment response rates in Money&Appeal and AppealOnly. In both treatments students received the same appeal to volunteer for experimental research. Students in Money&Appeal were also informed about the existence of financial incentives for participating in experiments, whereas students in AppealOnly did not receive this information. Thus, if the decision to volunteer for experimental research is driven by financial incentives, Money&Appeal should yield a higher response rate than AppealOnly.

Model
To further formalize our research strategy, we suggest the following stylized model of self-selection, which captures the intuition that subjects might participate because of monetary incentives and/or because of pro-social or approval-need motives.
In the model, subjects decide whether or not to participate in lab experiments; we take the sign-up decision to the subject pool as a proxy for the willingness to participate. The utility of not participating is normalized to zero. The utility of participation (written here for Money&Appeal; the treatments vary the α- and β-terms as described below) is

u = a(α + 1)m + b(β + 1)s − c

where c is the cost of participation, which includes the time spent at the lab, the expected cognitive effort to be expended, potential boredom, etc. We assume that this cost is heterogeneous in the population to allow for interior solutions. m is the prior belief about the expected monetary payoff. The posterior belief (after the invitation emails) is (α + 1)m. We assume that the two Money treatments (Money&Appeal and MoneyOnly) increase the posterior by αm while AppealOnly leaves the posterior at m. The field experimental data will inform us on the weight a subjects place on monetary incentives for participation. If the sign-up rate in Money&Appeal is higher than in AppealOnly, we would conclude that a > 0. If the sign-up rate is the same across these two treatments, we would conclude that monetary payments do not play a big role for the decision to sign up (i.e., a = 0). As discussed earlier, subjects could also sign up for pro-social and approval-need motives. In our utility function, we capture this by the term b(β + 1)s, where s is the baseline strength of subjects' social preferences/approval need, i.e., the level of altruistic action that they are willing to engage in without appeal. The laboratory experiment described in Sect. 3 will allow us to measure s for each subject. To preview part of our results, we will find that s > 0 for many subjects, replicating the previous literature on social preferences. We assume that in MoneyOnly only the baseline level of s is present, and that the two Appeal treatments increase this term to (β + 1)s for the reasons we discussed in the previous sub-section.
We will use the field and lab experimental data to estimate the weight b subjects place on the pro-social part of their utility function when they decide whether to participate in lab experiments. 7 If the sign-up rate in Money&Appeal is higher than in MoneyOnly, we would conclude that b > 0; if they are the same, we would conclude that b = 0.

7 There are several reasons why people may not act on their pro-social motivations in certain situations. For example, people may be altruistic towards others only if these are in their reference group. Chen and Li (2009), for instance, find that subjects are more altruistic when matched with an 'ingroup' member than with an 'outgroup' member. In the context of our study, students may perceive the researchers/experimenters as an 'outgroup' and may therefore not act altruistically towards them. The laboratory experiment by Frank (1998) is consistent with this view as it shows that subjects do not seem to care about the experimenter's welfare. Heyman and Ariely (2004) propose that social relationships can be based on economic or social exchanges, where the former are regulated by monetary and market considerations while the latter rely on non-monetary exchanges. The extent to which subjects are willing to act on their pro-social motivation may thus depend on whether they view participation in laboratory experiments as an economic or social exchange.
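To make the model's comparative statics concrete, the following sketch simulates the sign-up decision under the three treatments. All functional forms and parameter values (the uniform distributions of s and c, and the values of a, b, α, β, and m) are illustrative assumptions, not part of the paper's model or estimates.

```python
# Minimal simulation of the self-selection model: a subject signs up when
# u = a * (posterior belief about m) + b * (salient level of s) - c > 0.
# All distributions and parameter values below are illustrative assumptions.
import random

random.seed(1)

def signup_rate(a, b, alpha, beta, m=5.0, n=100_000):
    """Simulated sign-up rate for each of the three treatments."""
    rates = {}
    for name, money, appeal in [("Money&Appeal", True, True),
                                ("MoneyOnly", True, False),
                                ("AppealOnly", False, True)]:
        signups = 0
        for _ in range(n):
            s = random.uniform(0.0, 2.0)   # heterogeneous pro-social strength
            c = random.uniform(0.0, 20.0)  # heterogeneous participation cost
            posterior_m = (alpha + 1) * m if money else m
            salient_s = (beta + 1) * s if appeal else s
            if a * posterior_m + b * salient_s - c > 0:
                signups += 1
        rates[name] = signups / n
    return rates

# With a > 0 and b = 0, the two Money treatments coincide (up to sampling
# noise) and AppealOnly falls behind -- the qualitative pattern in the data.
rates = signup_rate(a=1.0, b=0.0, alpha=1.0, beta=1.0)
```

Setting b > 0 and β > 0 instead raises the Money&Appeal rate above MoneyOnly, which is the pattern the field experiment tests for and rejects.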

One could also imagine that subjects do not interpret our Appeal treatments as an appeal, but rather as mere information that lab experiments are useful for researchers. 8 We would, however, expect that most subjects already believed this to be the case before our emails (by a simple revealed-preference argument: why would researchers conduct experiments if they were not useful for them?). If this is the case, our treatment manipulation would not affect subjects' baseline level of altruism, and we would need to assume that β = 0, so that the pro-social term of the utility function is bs in all treatments. As a consequence, the two Money treatments would be identical. We can nevertheless use our design to draw conclusions about b. If the Money treatments are able to elicit a higher sign-up rate than AppealOnly (i.e., a > 0), then we would expect a higher average level of s in AppealOnly compared to the Money treatments: since the expected monetary payoff in AppealOnly is lower, only very altruistic subjects will sign up there, while also subjects with low s will sign up in the Money treatments. Note that this reasoning rests on the assumption that b > 0. If b = 0, then AppealOnly could lead to a lower sign-up rate, but the average level of s will be the same across all three treatments. We can thus use the lab experimental measures of social preferences to (indirectly) test whether b > 0, even if β = 0. 9

Results
We focus mainly on the comparisons of Money&Appeal to MoneyOnly and of Money&Appeal to AppealOnly, as these comparisons involve only one treatment manipulation each. We summarize our main findings in the following results:

RESULT 1: Including an appeal to subjects' pro-social and approval motives in the invitation email does not affect response behavior.

RESULT 2: Not informing subjects of the existence of financial incentives for participating in experiments reduces the response rate significantly.
We document these treatment effects in Figs. 1 and 2. Figure 1 shows the proportions of invited students who registered to our database of volunteers for experiments in the three treatments. Figure 2 shows how registration rates developed over time.
About 13.8 percent of the invited students completed the registration process in Money&Appeal. The proportion of registered students in the MoneyOnly treatment is 14.6 percent, whereas only 5.0 percent of the students in AppealOnly registered to our database. As shown in Fig. 2, these treatment differences are almost entirely attributable to differences in response behavior in the first forty-eight hours of the field experiment. Overall, we do not find a statistically significant difference in registration rates between MoneyOnly and Money&Appeal (χ²(1) = 0.49, p = 0.486). The difference in registration rates between AppealOnly and Money&Appeal is instead highly significant (χ²(1) = 93.21, p < 0.001). 10

8 We thank an anonymous referee for pointing this out.

9 There may also be a potential composition effect between Money&Appeal and MoneyOnly that we can investigate using the lab experiment. Assume that subjects do not know the level of (α + 1)m. After all, the emails only mentioned that there will be a payment but did not specify the level. Subjects may then try to infer (α + 1)m from the invitation email. Subjects may conclude that (α + 1)m is lower in Money&Appeal than in MoneyOnly, since the experimenters apparently believe they need to offer both a monetary incentive and an appeal to make subjects sign up (this idea is similar to Bénabou and Tirole 2003). This could lead to different types of subjects selecting into the two Money treatments, even if the overall sign-up rate is the same. This channel relies on the assumption that subjects in Money&Appeal believe that the experimenters believe that bβ > 0, which is not an unreasonable belief given the email we sent, but does not rely on the true values of b and β.
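The proportion tests above can be approximately reproduced from the reported group sizes and registration rates. The sketch below reconstructs counts from the rounded percentages, so the statistics come out near, rather than exactly at, the reported values.

```python
# Approximate replication of the reported two-proportion chi-square tests.
# Sign-up counts are reconstructed from rounded percentages, so the
# statistics are close to, not exactly, the values quoted in the text.

def chi2_2x2(yes1, n1, yes2, n2):
    """Pearson chi-square statistic (1 df) for two independent proportions."""
    no1, no2 = n1 - yes1, n2 - yes2
    total = n1 + n2
    chi2 = 0.0
    for obs, row, col in [(yes1, n1, yes1 + yes2), (no1, n1, no1 + no2),
                          (yes2, n2, yes1 + yes2), (no2, n2, no1 + no2)]:
        expected = row * col / total
        chi2 += (obs - expected) ** 2 / expected
    return chi2

money_appeal = (round(0.138 * 1722), 1722)   # ~238 of 1,722 registered
money_only   = (round(0.146 * 1734), 1734)   # ~253 of 1,734
appeal_only  = (round(0.050 * 2269), 2269)   # ~113 of 2,269

# Small and insignificant, same order as the reported chi2(1) = 0.49
chi2_money = chi2_2x2(*money_appeal, *money_only)
# Very large, in line with the reported chi2(1) = 93.21
chi2_appeal = chi2_2x2(*money_appeal, *appeal_only)
```

The first statistic falls well below the 5-percent critical value of 3.84, while the second exceeds it by a wide margin, matching the conclusions drawn in the text.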
We further explore these treatment effects using regression analysis that allows us to control for observable differences across students assigned to the different conditions. 11 We use a logit regression model where the dependent variable assumes value 1 if a student registered to our database during the treatment period and value 0 otherwise. We use two treatment dummies as regressors (MoneyOnly and AppealOnly), with students in Money&Appeal representing the benchmark category. We include in the regression a dummy for gender (1 if the student is female), a dummy indicating whether a student is classified as overseas for fees purposes, and field of study dummies (we include a separate dummy for students majoring in Economics, and use as benchmark category the Faculty of Sciences, which had the largest number of first-year undergraduate students in 2010). Finally, we also include a set of dummies for students taking courses at Schools whose teaching facilities are (partially) based at campuses located at different distances from the main campus where the experiments are normally run (used as benchmark category). The regression results are reported in Table 1 (expressed as changes in the odds of registering to the database of volunteers for economics experiments). For subjects in the Money&Appeal treatment the odds of volunteering for economics experiments are 0.17, ceteris paribus. The regression shows that being in the AppealOnly treatment reduces the odds of volunteering by a factor of 0.33. The effect is significant at the 1 percent level. Being in the MoneyOnly treatment has instead a positive effect on the odds of volunteering. However, the odds of registering only increase slightly (by a factor of 1.08) and the effect is statistically insignificant (p = 0.434).

10 The difference between MoneyOnly and AppealOnly is also highly significant (χ²(1) = 108.01, p < 0.001).
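As a quick consistency check, the reported odds and odds ratios can be converted back into predicted sign-up probabilities via p = odds / (1 + odds). A sketch using the Table 1 figures quoted above:

```python
# Converting the reported logit odds into predicted sign-up probabilities.
# Baseline odds (Money&Appeal) and the treatment odds ratios are those
# quoted in the text; p = odds / (1 + odds).

def odds_to_prob(odds: float) -> float:
    return odds / (1.0 + odds)

baseline_odds = 0.17                                     # Money&Appeal
p_money_appeal = odds_to_prob(baseline_odds)             # ~0.145
p_appeal_only  = odds_to_prob(baseline_odds * 0.33)      # ~0.053
p_money_only   = odds_to_prob(baseline_odds * 1.08)      # ~0.155
```

The implied probabilities (roughly 14.5, 5.3, and 15.5 percent, holding the controls at their benchmark values) are broadly in line with the raw registration rates of 13.8, 5.0, and 14.6 percent.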
Among the controls, we find a strong positive effect of majoring in economics. This result is in line with Krawczyk (2011), who also reports that economics students are significantly more likely to volunteer for experimental economics research. This effect could be explained by economics students' greater interest in and familiarity with experimental economics relative to students from other disciplines. 12 We also find that taking courses at Schools which are not located at the main campus (where the experiments are conducted) decreases the odds of volunteering. The size of this negative effect increases with the distance from the main campus. This seems reasonable, as students registered at Schools not located at the main campus may face higher costs of participation, e.g., transportation costs. This sensitivity to distance to the lab could also partly explain why economics students are more likely to register, since the lab is located in the same building as the School of Economics (the campus dummies in Table 1 do not control for intra-campus distance and familiarity with different parts of the campus).
Overall, these results suggest that financial incentives play an important role in the decision to volunteer for economics experiments. Removing any reference to financial incentives from the invitation email reduces the registration rate by about two-thirds. In contrast, removing the appeal to subjects' pro-social and approval motives has a small and insignificant effect on registrations, suggesting that these motives play a minor role in the decision to volunteer. 13 In terms of the model presented in Sect. 2.2, we would thus conclude that a > 0 and b = 0. This rests on the assumption that the Appeal emails were indeed interpreted as appeals, i.e., β > 0; we will address the case of β = 0 in the next section.

11 For each first-year undergraduate student we contacted via email we have information on gender, field of study and fee status (Home/EU or Overseas).

12 Given the discussion in the literature about economics students behaving more selfishly than other students in experiments (e.g., Marwell and Ames 1981; Frank et al. 1993; Frey and Meier 2003), one may wonder whether economics students are also more sensitive to the email containing information about financial incentives. To test for this, we ran an additional regression where we interact the treatment dummies with the Economics dummy in Table 1. We find that neither interaction term is significantly different from zero (all p > 0.712).

Laboratory experiment
The main purpose of the lab experiment was to obtain laboratory measurements of pro-social and approval-need motives for the volunteers recruited through our field experiment (we also collected a range of measurements on subjects' risk preferences and cognitive skills). In terms of our model, we can thus estimate the level of s for each subject. Moreover, while the results of our field experiment show that appealing to subjects' pro-social motivations has an insignificant effect on the net sign-up rate between Money&Appeal and MoneyOnly, it is possible that the differences in recruitment messages may have had an impact on the composition of the subject pool, i.e., on the types of subjects who sign up. We can use the lab experiment to ascertain whether this is the case. Finally, as discussed above, we can use the lab experiment to test whether subjects sign up for pro-social reasons (i.e., whether b > 0 in our model) even if the appeal manipulation was unsuccessful (i.e., if β = 0).

13 One referee pointed out that the drop in registrations in AppealOnly may reflect a "social learning" effect: if subjects in AppealOnly had somehow become aware that economics experiments usually involve monetary incentives (e.g., by word-of-mouth from students from previous years), they may have decided to decline the invitation to sign up for the unpaid experiments as they waited to receive an invitation for the paid studies. While we think that this is not very likely (the recruitment took place at the very beginning of the academic year, when first-year students had only limited opportunities to engage with students from previous years), we view such a mechanism as in line with our conclusion that selection into experiments is mostly driven by the desire to earn money.

Design
Subjects were invited to participate on October 24th, and the experiment took place between October 29th and November 9th. All students received the same invitation email (reproduced in Online Appendix B), regardless of the treatment they were assigned to in the field experiment. In total, 67 subjects recruited via Money&Appeal and 78 recruited via MoneyOnly participated in the lab experiment. Since only a few subjects reacted to the AppealOnly recruitment email, we were only able to sign up 29 AppealOnly subjects for the lab experiment. Each session consisted of six parts. Subjects were informed at the beginning of the experiment of the existence of the six different parts, but detailed instructions about each part were only given upon completion of the previous one (all instructions are reproduced in Online Appendix B). Any information about earnings from any part of the experiment was only given at the end of part 6. Subjects were paid according to the sum of the earnings they made in each part of the experiment. Earnings were computed in points during the experiment and converted into British Pounds at a rate of £0.10 per point. Table 2 shows the timeline of a session.
In the first two parts of the experiment we collected measurements of subjects' pro-social motivation using two tasks commonly used in the social preference literature: the Decomposed Game Technique (part 1) and a Public Goods Game (part 2). The Decomposed Game Technique elicits subjects' 'social value orientation', i.e., the goal that individuals pursue in social interactions involving trade-offs between their own and others' material well-being (see, e.g., Liebrand 1984; Offerman et al. 1996; Park 2000; Brosig 2002; Levati et al. 2011). Subjects were randomly paired with another participant and made a series of 24 choices between two allocations, each specifying different amounts of money for the decision-maker and the opponent. Subjects' payoffs in a choice situation depended on the choices they and their opponent made in that situation. The total earnings from part 1 were the sum of the earnings made in the 24 choice situations (which were taken from Park 2000 and are reproduced in Online Appendix C). Subjects were matched with a different person after part 1.
In part 2, we measured subjects' cooperativeness using the one-shot Public Goods Game introduced by Fischbacher et al. (2001). Subjects were matched in groups of 4 and endowed with 20 tokens each, which they could keep or contribute to a public good. Earnings were computed as π_i = 20 − c_i + 0.4 × (c_1 + c_2 + c_3 + c_4), where c_j denotes the contribution of group member j. Following Fischbacher et al. (2001), we elicited two types of contribution decisions. First, subjects submitted an unconditional contribution c_i to the public good. Subjects then had to specify a conditional contribution for each of the 21 possible (rounded) average contributions (from 0 to 20) of the three other players in their group. The two types of decisions were elicited in an incentive-compatible way: at the end of the experiment, a random mechanism selected for each group one member for whom the conditional contribution was relevant and three members for whom the unconditional contributions were relevant. The (rounded) average of the three unconditional contributions determined which of the 21 conditional contribution decisions of the selected member was relevant. The sum of these four contributions was the total amount contributed by the group to the public good.
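The random payment mechanism for part 2 can be sketched as follows. The 0.4 marginal per-capita return is the value used by Fischbacher et al. (2001) and is an assumption here, as are the function and variable names.

```python
import random

ENDOWMENT = 20   # tokens per subject
MPCR = 0.4       # marginal per-capita return; Fischbacher et al. (2001) value, assumed here

def resolve_group(unconditional, schedules, rng=random):
    """Resolve one 4-person group: one randomly selected member is paid
    by their conditional schedule (indexed by the rounded average of
    the other three unconditional contributions); the other three are
    paid by their unconditional contributions. Returns point earnings."""
    members = list(range(4))
    cond = rng.choice(members)
    others = [i for i in members if i != cond]
    avg = round(sum(unconditional[i] for i in others) / 3)
    contrib = {i: unconditional[i] for i in others}
    contrib[cond] = schedules[cond][avg]    # the one relevant conditional decision
    total = sum(contrib.values())
    return {i: ENDOWMENT - contrib[i] + MPCR * total for i in members}
```

For example, passing each member a schedule equal to `list(range(21))` models perfect conditional cooperators, whose conditional contribution matches the others' rounded average.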
In part 3 of the experiment we elicited subjects' risk attitudes using the lottery choice task also used by Cleave et al. (2013). Subjects were presented with the six 50-50 lotteries shown in Table 3, and had to choose one to be played out at the end of the experiment. Moving from Lottery 1 to Lottery 6, the risk associated with each lottery decreases, and the expected payoffs of Lotteries 2-6 also decrease. More risk-averse subjects should thus choose lotteries further down the table.
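The ordering described above can be illustrated with a short sketch: for a 50-50 lottery, the expected value is the midpoint of the two payoffs and the standard deviation is half their spread. The payoff values below are hypothetical placeholders, not the actual lotteries from Table 3.

```python
def lottery_stats(low, high):
    """Expected value and standard deviation of a 50-50 lottery
    paying `low` or `high` with equal probability."""
    ev = (low + high) / 2
    sd = (high - low) / 2
    return ev, sd

# Hypothetical menu with the ordering described in the text:
# risk falls from Lottery 1 to 6; expected value falls from Lottery 2 to 6.
menu = [(0, 100), (15, 85), (25, 70), (30, 60), (35, 50), (40, 40)]
stats = [lottery_stats(lo, hi) for lo, hi in menu]
```

A risk-neutral subject would pick the lottery with the highest expected value; the further down the menu a subject chooses, the more expected value they give up for lower risk.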
In parts 4 and 5 we elicited subjects' need for social approval. In part 4 subjects completed two unincentivized questionnaires, the social desirability scale (SDS-17; Stöber 2001) and Gudjonsson's compliance scale (GCS; Gudjonsson 1989), both reproduced in Online Appendix D. The SDS-17 is (somewhat confusingly) a 16-item scale measuring the extent to which subjects over-report socially desirable behaviors and attitudes (e.g., ''I always eat a healthy diet'') and under-report socially undesirable ones (e.g., ''I occasionally speak badly of others behind their back''). The GCS is a 20-item scale that measures subjects' propensity to comply with requests made by others, especially by those in a position of authority. Subjects indicate their agreement with statements such as ''I find it very difficult to tell people when I disagree with them''. Answers to both questionnaires were recorded on a true/false scale.14

In part 5 we measured subjects' approval need using the task proposed by Tsutsui and Zizzo (2013) to assess subjects' sensitivity to experimenter demand (Zizzo 2010). Subjects were presented with six pairs of 50-50 lotteries (shown in Table 4), and for each pair they had to select one lottery, A or B. Lotteries A and B were identical in the first row. In subsequent rows, A became more attractive and B less attractive. At the end of the experiment, a random mechanism selected one pair for each subject and played out the lottery the subject had chosen from that pair. Subjects were 'nudged' to select the (weakly) dominated lottery B: a smiley face was placed in the column corresponding to lottery B, and a sentence in the instructions read ''It would be nice if at least some of you were to choose B at least some of the time''. We take the number of times a subject selects lottery B over lottery A as a measure of their sensitivity to experimenter demand.
In part 6 of the experiment, subjects completed two cognitive skills tests: the 3-question cognitive reflection test (CRT; Frederick 2005) and the 5-question financial literacy test developed for the English Longitudinal Study of Ageing (e.g., Banks and Oldfield 2007), both reproduced in Online Appendix E. For each correct answer subjects received £0.20.
Upon completion of part 6, subjects received feedback about the earnings accumulated through the six parts of the experiment. A short questionnaire followed, collecting basic socio-demographic information, risk attitudes (using the SOEP general risk question, e.g., Dohmen et al. 2011), and trust attitudes (using the General Social Survey trust question, e.g., Glaeser et al. 2000). Once subjects had finished the questionnaire they were paid their earnings and left the laboratory. Sessions lasted about 90 min, and average earnings (inclusive of a £1.50 show-up fee) were £12.50 (s.d. £2.40). The experiment was programmed in zTree (Fischbacher 2007).

Results
We summarize our main findings in the following result:

RESULT 3. We do not find any significant treatment differences in subjects' pro-social and approval-seeking behavior between Money&Appeal and MoneyOnly, or between Money&Appeal and AppealOnly.

Starting with the Decomposed Game, we follow a standard technique developed by social psychologists (see, e.g., Liebrand 1984 and Brosig 2002) to classify subjects according to the predominant pattern of their choices in the game. Subjects are classified as: (i) 'aggressive', if they (mainly) make choices that minimize the opponent's payoff; (ii) 'competitive', if they maximize the difference between their payoff and the opponent's payoff; (iii) 'individualistic', if they maximize their own payoff; (iv) 'cooperative', if they maximize aggregate payoffs; and (v) 'altruistic', if they maximize their opponent's payoff.15 Figure 3 shows the distribution of types across treatments. The distributions are not significantly different (Fisher's exact tests; Money&Appeal vs. MoneyOnly: p = 0.544; Money&Appeal vs. AppealOnly: p = 0.937).
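The classification logic can be illustrated with a simplified consistency count: label each subject with the orientation whose goal their chosen allocations maximize most often across the 24 choices. This is a sketch, not the exact Liebrand (1984) procedure (which computes a motivational vector); all names here are illustrative.

```python
from collections import Counter

# The five orientations and the quantity each one maximizes,
# written as functions of (own payoff s, other's payoff o).
GOALS = {
    "aggressive":      lambda s, o: -o,
    "competitive":     lambda s, o: s - o,
    "individualistic": lambda s, o: s,
    "cooperative":     lambda s, o: s + o,
    "altruistic":      lambda s, o: o,
}

def classify(choices):
    """choices: list of ((s_A, o_A), (s_B, o_B), picked) triples with
    picked in {0, 1}. A choice is 'consistent' with an orientation if
    the picked allocation scores strictly higher on that orientation's
    goal. The subject is labelled with the orientation consistent
    with the largest number of choices."""
    tally = Counter()
    for alloc_a, alloc_b, picked in choices:
        chosen, other = (alloc_a, alloc_b) if picked == 0 else (alloc_b, alloc_a)
        for name, goal in GOALS.items():
            if goal(*chosen) > goal(*other):
                tally[name] += 1
    return tally.most_common(1)[0][0]
```

A subject who always picks the allocation with the higher own payoff, regardless of the opponent's payoff, would come out as 'individualistic' under this rule.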
Turning to the public goods game, we follow Fischbacher et al. (2001) and use the conditional contributions to classify subjects into four different types: 'free riders', 'conditional cooperators', 'triangle contributors' and 'others'. Figure 4 shows that the distribution of types is extremely similar across treatments, and we cannot reject the null hypothesis of equal distributions (Fisher's exact tests; Money&Appeal vs. MoneyOnly: p = 1.000; Money&Appeal vs. AppealOnly: p = 0.887).16

We now turn to our measures of social approval need. Table 5 shows the descriptive statistics of the SDS-17 and GCS scales, as well as the number of times subjects selected the weakly dominated lottery in Tsutsui and Zizzo's experimenter demand task. In all cases, higher values indicate a higher need for social approval.
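The Fischbacher et al. (2001) typology used above can be sketched in simplified form as below; the published rule classifies conditional cooperators via a significantly positive Spearman correlation between own and others' contributions, which is replaced here by plain monotonicity checks for illustration.

```python
def classify_contributor(schedule):
    """schedule: 21 conditional contributions, one for each possible
    (rounded) average contribution 0..20 of the other group members.
    Returns a simplified Fischbacher et al. (2001) contributor type."""
    if all(c == 0 for c in schedule):
        return "free rider"
    diffs = [b - a for a, b in zip(schedule, schedule[1:])]
    # weakly increasing (and not flat) schedule
    if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
        return "conditional cooperator"
    # rises up to an interior peak, then falls
    peak = schedule.index(max(schedule))
    if (0 < peak < 20
            and all(d >= 0 for d in diffs[:peak])
            and all(d <= 0 for d in diffs[peak:])):
        return "triangle contributor"
    return "others"
```

A schedule that exactly matches the others' average contribution is the textbook conditional cooperator; a schedule of all zeros is the textbook free rider.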
We do not find marked differences across treatments in any of our measurements, and we cannot reject the hypothesis that the samples are drawn from the same distribution using Mann-Whitney tests on the SDS-17 (Money&Appeal vs. MoneyOnly: p = 0.588; Money&Appeal vs. AppealOnly: p = 0.686) or the GCS (Money&Appeal vs. MoneyOnly: p = 0.465; Money&Appeal vs. AppealOnly: p = 0.217). We also do not find a significant difference in Tsutsui and Zizzo's task between Money&Appeal and AppealOnly (p = 0.368), but we do find a marginally significant difference between Money&Appeal and MoneyOnly (p = 0.083). However, the point estimate goes in the opposite direction of what one would expect: appealing to social approval needs (as in Money&Appeal) selects subjects who are a little less prone to experimenter demand.

While the main aim of our lab experiment was to collect measurements of pro-social and approval motives, we also measured subjects' risk preferences (part 3) and cognitive skills (part 6). We again do not find significant treatment differences in these measures.17

Overall, we do not find evidence of systematic differences in our lab measurements of pro-social motivation and social approval need across volunteers attracted by different recruitment messages. This reinforces our result from the field experiment that pro-social and approval motives play a small role in the decision to take part in economics experiments.
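The pairwise comparisons above use two-sample Mann-Whitney tests. As a sketch of what such a test computes, here is a minimal self-contained implementation based on the normal approximation with midranks and a tie correction (in practice one would use a statistics package; the exact small-sample procedure differs).

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test, normal approximation.
    Returns (U statistic for x, two-sided p-value)."""
    n1, n2 = len(x), len(y)
    n = n1 + n2
    combined = sorted((v, 0 if i < n1 else 1) for i, v in enumerate(list(x) + list(y)))
    vals = [v for v, _ in combined]
    ranks = [0.0] * n
    tie_term = 0.0
    i = 0
    while i < n:                       # assign midranks to runs of ties
        j = i
        while j < n and vals[j] == vals[i]:
            j += 1
        mid = (i + j + 1) / 2          # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = mid
        t = j - i
        tie_term += t ** 3 - t
        i = j
    r1 = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    u1 = r1 - n1 * (n1 + 1) / 2
    mean_u = n1 * n2 / 2
    var_u = n1 * n2 / 12 * ((n + 1) - tie_term / (n * (n - 1)))
    z = (u1 - mean_u) / math.sqrt(var_u)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p
```

With completely separated samples U hits its extreme value and the p-value is small; with interleaved samples U sits near n1·n2/2 and the p-value is large.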

In terms of the model presented in Sect. 2.2, we conclude from the lab experiment that s > 0 for most subjects. Moreover, the lab experiment also shows that the distribution of pro-social and approval motives is not significantly different across the two Money treatments, suggesting that the invitation emails do not have an impact on the composition of the subject pool.
Finally, as noted in Sect. 2.2, we can also use the lab experimental measures to test indirectly whether subjects sign up for pro-social reasons (i.e., whether b > 0). Assume for the moment that our Appeal treatments were not successful in increasing the salience of pro-social motives, but that subjects do care about money (i.e., a > 0). Then, b > 0 would predict that subjects in AppealOnly are more pro-social than subjects in the two Money treatments (since the monetary payoff convinces also subjects with low pro-social motives to sign up), while b = 0 would predict no such difference. Since none of our various measures of pro-social motives differs significantly between AppealOnly and either of the Money treatments, we conclude as before that b = 0, i.e., that pro-social motives play no big role in the decision to sign up for lab experiments. It has to be noted, though, that these tests are not very powerful, since we have only a few subjects in the AppealOnly treatment; we can, however, use a more powerful procedure. If the appeal manipulation was indeed ineffective, the two Money treatments are identical, as they only differ in the (non-effective) appeal. We can thus pool the data from these two treatments and compare them to AppealOnly, thereby increasing power. Reassuringly, these tests also show no significant differences across treatments, underlining our conclusion that b = 0.18
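The pooling step can be illustrated with a simple permutation test on a difference in means: pool the two Money samples into one larger sample and compare it against AppealOnly. This is an illustrative nonparametric sketch, not the authors' exact rank-based procedure; the function name and defaults are assumptions.

```python
import random

def perm_test(x, y, reps=5000, seed=0):
    """Two-sided permutation test for a difference in means between
    samples x and y: repeatedly shuffle the pooled data, re-split it
    at len(x), and count how often the shuffled gap in means is at
    least as large as the observed gap."""
    rng = random.Random(seed)
    obs = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= obs:
            hits += 1
    return hits / reps
```

Here `x` would hold the pooled Money&Appeal and MoneyOnly scores and `y` the AppealOnly scores; the larger pooled sample is what buys the extra statistical power.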

Conclusions
In this paper we have studied the self-selection process of subjects into laboratory experiments. We sent differently worded emails to first-year university students inviting them to sign up for the subject pool. In three treatments, we either mentioned the monetary payments that go along with participation in experiments; or appealed to subjects' willingness to help research; or both. We find that the sign-up rate drops by about two-thirds if we do not mention monetary rewards. Appealing to subjects' willingness to help research does not have a statistically discernible effect on sign-up. In a second step, we invited the so-recruited subjects to the lab to test whether they differ in their social values, cooperativeness, social approval need or sensitivity to experimenter demand. We do not find any significant differences across the three treatments. We conclude that the main reason for students to self-select into becoming lab-experimental subjects is to earn money. This is in line with the observation that many subjects repeatedly participate in lab experiments, often more than a dozen times, treating participation almost like a small side job. Given that money is the driver of selection, it is unlikely that the selection aspect of lab experiments leads to an over-estimation of social preferences in the population from which participants are sampled. Of course, other factors could lead to distortions in the estimation of social preferences and affect the external validity of lab experiments, e.g., the fact that in a lab experiment participants are aware that they are being scrutinized (see Levitt and List 2007; Falk and Heckman 2009; Camerer 2011 for discussions).
Typical lab-experimental subjects differ from the general population along two margins of selection. We examined the first: the self-selection process from the group of all students into the subject pool. We cannot say anything about the second margin of selection: students differ from non-students by, for example, being younger and smarter, and by having a better socioeconomic background and higher lifetime earnings. Anderson et al. (2013) and Falk et al. (2013), among many others, study this margin of selection and usually find only small differences; if anything, students seem to be less pro-social than non-students.