Unlocking the Statistics of Slavery
Kevin Bales

Statistically, the history of slavery in our world can be divided neatly into two. In the first half, from the very earliest of human writing and records, slaves made up a part of that record. From Sumerian cuneiform to Egyptian hieroglyphs to Greek and Latin scripts and the incorporation of the useful Arabic zero, slavery was a blatant and measurable part of human existence. The clay counting tablets of Mesopotamia, for example, recorded slaves amongst the cattle and grain, and while the papyrus records of the pharaohs rarely survive, the great stone carvings along the Nile enumerate slave captures and clearly assign ownership.
Consider the "Battlefield Palette" (see page 20), thought to originate around 5,200 years ago. Carved into a soft, sedimentary stone, it is considered an important link to the distant past. While it tells most of its story through pictures, it is thought to be both the earliest depiction of a battle scene and one of the first appearances of the glyphs that, in time, would become the Egyptian hieroglyphic writing system.
Two of these glyphs are important. One represents the standard or totem that denotes power and authority, and the other is the "man-prisoner," or "captive," glyph. While the meanings of these first written "words" are potent, the picture itself is clear enough on its own. Note the bound men being marched away at the top left, the hands that control them emerging from the "standard glyph" of power and authority. Below, their slaughtered compatriots have been stripped naked and are being feasted upon by vultures, crows, and a lion. One bound captive has been killed and a bird is pecking out his eyes. Just above them to the right, another captive (seen only from the waist down) is being marched away, his hands tied behind his back. Driving him along is the only fully clothed figure on the stone.
When Rome grew into an empire, its economy running on slavery the way the United States today runs on oil, the counting, buying, selling, transferring, giving, and inheritance of slaves must have filled entire record halls. When David Eltis and David Richardson began their project to illustrate the entire trans-Atlantic slave trade over a 366-year period (1501-1867), they found surviving records covering four-fifths of all voyages made: 34,934 deliveries in a trade that carried some 12.5 million slaves to the New World.
Possibly the last truly accurate measurement of slavery occurred in 1860, when the United States Census enumerated those held in legal bondage. In that year, the total number of slaves was 3,950,529, accounting for 13% of the U.S. population.
A precise count of slaves was crucial, since the Constitution of the United States apportioned each state's representatives in Congress according to population. Although they could not vote and were essentially items of property, each slave was included in the population count as three-fifths of a person, greatly amplifying the voting power of each slave owner and ensuring that the numerically smaller South could, through its human property, come closer to matching the representation of the more-populous North.
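The arithmetic of the three-fifths clause is easy to make concrete. In this sketch the state populations are hypothetical round numbers chosen for illustration, not actual 1860 census figures:

```python
def apportionment_population(free: int, enslaved: int) -> float:
    """Population counted for congressional apportionment under the
    three-fifths clause: each enslaved person counted as 3/5 of a person."""
    return free + enslaved * 3 / 5

# A hypothetical slave state: 600,000 free residents, 400,000 enslaved.
southern = apportionment_population(600_000, 400_000)

# A hypothetical free state with the same number of free residents.
northern = apportionment_population(600_000, 0)

print(southern, northern)
# The slave state gains 240,000 "persons" of apportionment weight
# from people who could not vote.
```

The point the arithmetic makes is the one in the text: the clause converted human property directly into congressional power.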
The second half of the statistical history of slavery begins when slavery becomes illegal. From that time and through the rest of the 20th century, however, there has been no reliable measurement of the extent of slavery in any country, with the possible exception of the records of slave laborers kept by the Nazi regime during the Second World War (Allen, 2004). While legal slavery meant formal records were created, the criminalization of slavery (even when antislavery laws were rarely enforced) meant that few if any accounts of this now-hidden slavery were kept. Notable limited exceptions include files kept by social service agencies on escaped slaves, or the very few legal records linked to the arrest of slaveholders.
At the beginning of the 21st century, just as interest in and awareness of modern slavery and its supporting conduit activity of human trafficking were growing rapidly, no reliable information existed on the prevalence of slavery. It is worth noting, however, that baseless estimates circulated widely, ranging from no slaves in the world, to a few million, to a much-quoted total of 100 million.
Within this context, I carried out a systematic collection of information from 1994 until 1999 to construct a global estimate (Bales, 1999), with a revised version (Bales, 2002) and a detailed explanation of my methodology (Bales, 2004). This estimate put the number of slaves in the global population at 27 million. This was an estimate, as I made clear, built from secondary source information, processed by a team that assessed each source for validity as far as possible, with care taken to treat each source with skepticism and to record only the conservative end of any range of estimates.
From the beginning, I was highly critical of my data and keenly aware of its shortcomings. I noted, for example, that I was "potentially building upon bad estimates to construct worse ordinal or ranking estimates. Even worse…there was no way to know if this was the case or not." With that and other provisos, I made my data freely available, leading to its use by the statistician and methodologist Robert Smith and others.
Also at the beginning of the 21st century, a number of other groups and individual scholars began attempting to measure slavery in local areas, nations, regions, and globally. In doing so, they quickly divided into two groups and then, in parallel, proceeded through four stages of methodological development.
These two groups were divided by their approach to data transparency, reproducibility, and replication. Some researchers, primarily social scientists in academic appointments and some nongovernmental organizations (NGOs), operated on the basis that it was important to make their data freely available in ways that would allow other researchers to test, replicate, and potentially reproduce their results in commensurate studies, thus adhering to one of the fundamental principles of the scientific method.
The other group, for a number of reasons, did not feel able to share their data freely. This was sometimes due to political sensitivities, or notions or requirements of proprietary interest in the data collection, concerns about the data itself, or the methods of collection or analysis. Government sources, in particular, were loath to make data public. So were commercial organizations whose business model was to use the freely available data to construct indices and synthetic reports that they then sold to clients, but which were not transparent about data origins.
It is worth noting that transparency, replication, and reproducibility are issues of increasing concern more broadly across the sciences. A recent article in Nature and a report on biomedical research both point to growing unease over the lack of data sharing and replication.
While there is general disquiet over this issue, there is clear consensus about its remedies: 1) openly sharing results and the underlying data with other scientists; 2) collaborating with other research groups, both formally and informally; 3) publicly publishing the details of study protocols; and 4) adopting reporting guidelines and checklists that help researchers meet certain criteria when publishing studies. At the time of writing, no consensus has arisen concerning the practice of data transparency in the field of slavery-prevalence measurement, nor have reporting guidelines been agreed upon and set for slavery researchers.
Within this context, the first stage in the measurement of contemporary slavery, exemplified by my work, relied upon secondary sources, including governmental records, NGO and service provider tallies, and reports in the media; in short, any source that might shed light on the extent of slavery. Even when sources were systematically assessed for reliability, these estimates (Bales, 2004; ILO, 2005) could only be seen, at best, as an approximation of the global situation.
One expansion of this method (Hidden Slaves) in the United States was an attempt to triangulate secondary sources with surveys of service providers and government and law enforcement records. While the estimates derived in this first methodological stage were not widely different from each other, it was impossible to ensure their comparability or validity.
The second stage was set in motion by the pioneering work of Pennington, Ball, Hampton, and Soulakova in 2009. This team introduced a series of questions concerning human trafficking into a random sample health survey of five Eastern European countries (Belarus, Ukraine, Moldova, Romania, and Bulgaria). Employing random sample surveys, they were able to build the first representative estimate of the proportion of each country's population that had been caught up in human trafficking.
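The basic estimator behind any such survey is straightforward: the sample proportion reporting the experience, with a sampling-error interval, scaled up to the population. The sketch below uses purely hypothetical counts and the simple normal-approximation interval for a simple random sample; it is not a reconstruction of the Pennington team's actual (more complex) survey design:

```python
import math

def prevalence_estimate(positives: int, sample_size: int, z: float = 1.96):
    """Point estimate and approximate 95% confidence interval for a
    proportion from a simple random sample (normal approximation)."""
    p = positives / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)  # standard error of p
    return p, (p - z * se, p + z * se)

# Hypothetical: 42 of 3,000 respondents report a trafficking experience.
p, (lo, hi) = prevalence_estimate(42, 3000)
print(f"prevalence {p:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")

# Multiplying the rate by the national adult population converts it
# into an estimated count of people affected.
```

Complex survey designs (stratification, clustering, weighting) require adjusted variance estimates, but the logic of moving from sample proportion to national estimate is the same.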
It is worth noting that while the terms "modern slavery" and "human trafficking" are sometimes used interchangeably, trafficking is simply one of many processes by which a person might be brought into a state of enslavement. While the human trafficking process suffers from being defined in different ways in a number of legal instruments and operational definitions, it is normally understood to mean the recruitment and then movement of a person into a situation of enslavement and exploitation. The work of Pennington, et al., was critical to advancing the measurement of the prevalence of slavery for two reasons. Firstly, it demonstrated that, at least in some countries and circumstances, enslavement could be measured through random sample surveys of the full population. Secondly, by fixing valid data points for these five countries, it became possible to begin estimating the range of modern slavery in other countries by using these, and other emerging survey results, to extrapolate prevalence elsewhere (Datta and Bales).
In addressing the question of range, it had become clear by 2009 that cases of slavery (although not measures of slavery prevalence) were being reported in virtually all countries with populations over 100,000 (Bales, 2004; UN-GIFT, 2009). For that reason, the low end of the global range of prevalence, for countries in which measurement was possible, could be assumed to be greater than zero.
In the same year, a U.S. Agency for International Development (US-AID) and Pan American Development Foundation report included a random sample survey of child restavek trafficking and slavery in Haiti. This form of the enslavement of children into domestic service and other types of exploitation had been widely investigated (Cadet and Skinner), in part because of its ubiquity in urban settings, but never estimated through surveys.
This US-AID survey estimated that 225,000 children were enslaved in Haitian cities, equaling 2.3% of the national population. This estimated proportion of the Haitian population was assumed to be in the upper range of the global distribution of slavery prevalence for two reasons. The first was that most investigators had noted the pervasive nature of this form of slavery as compared to slavery in other countries; the second was that of the few existing representative sample measures of slavery by country, Haitian restavek slavery was, by far, the largest.
The culmination of this second stage came with an emerging sense of the range of prevalence across countries and an increase in the amount of data available from random sample surveys. In addition to data from the Pennington, et al., and Haiti surveys, random sample surveys of slavery were also identified in three more countries (Niger, Namibia, and the Democratic Republic of Congo; "Namibia Child Activities Survey" and Johnson, et al., 2010). The combination of these disparate surveys, and their use in building an extrapolation estimation process, generated the 2013 global estimate of 29.8 million slaves in the first edition of the Global Slavery Index.
The third stage in the estimation of the prevalence of slavery came with the introduction of systematic and comparable representative random samples in a number of countries. In late 2013, the Walk Free Foundation commissioned seven national surveys (Pakistan, Indonesia, Brazil, Nigeria, Ethiopia, Nepal, and Russia), using the Gallup International World Poll. These comparable surveys were rolled into the iterative extrapolation process that generated a global estimate of 35.8 million people in slavery worldwide, published in the 2014 Global Slavery Index.
The World Poll survey data are representative of 95 percent of the world's adult population. The World Poll uses face-to-face or telephone surveys, conducted through households (where a household is defined as any abode with its own cooking facilities, which could be anything from a standing stove in the kitchen to a small fire in the courtyard) in more than 160 countries and more than 140 languages. The target sample is the entire civilian, non-institutionalized population, aged 15 and older.
With the exception of areas that are scarcely populated or present a threat to the safety of interviewers, samples are probability-based and nationally representative. The questionnaire is translated into the major languages of each country, and field staff undergo in-depth training and receive a standardized training manual. Quality control procedures ensure that correct samples are selected and that the correct person is randomly selected in each household. A detailed description of the World Poll methodology is available online: http://bit.ly/2u9v6Iz.

This mixture of comparable representative surveys using the same format and wording, and the "found" surveys, each unique in design and sampling, was used to build an extrapolation process that also included a series of variables measuring a range of factors that might predict vulnerability or propensity to slavery within a country. In many ways, this introduction of an extrapolation process for estimating slavery in countries without direct surveys, linked to a number of predictors of enslavement, was the platform for building the fourth stage of prevalence estimation.
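The logic of such an extrapolation can be sketched very simply: fit measured prevalence against a vulnerability score in the surveyed countries, then predict prevalence where no survey exists. The single-predictor least-squares fit and all figures below are hypothetical illustrations, not the Global Slavery Index's actual multi-variable algorithm:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single predictor: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical vulnerability scores (0-1) and surveyed prevalence (%)
# for countries where direct random-sample surveys exist.
surveyed_vuln = [0.2, 0.4, 0.5, 0.7, 0.9]
surveyed_prev = [0.1, 0.3, 0.4, 0.6, 0.8]

slope, intercept = fit_line(surveyed_vuln, surveyed_prev)

# Predict the prevalence rate for an unsurveyed country with
# vulnerability score 0.6, then convert the rate into a count.
predicted_rate = slope * 0.6 + intercept        # percent of population
population = 50_000_000                          # hypothetical
print(round(predicted_rate * population / 100))  # estimated people in slavery
```

The real extrapolation uses many vulnerability predictors and iterative refinement, but the structure is the same: anchor the model on directly measured countries, then project outward.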
It is important to contextualize this fourth stage, since all previous stages led to this system of longitudinal and iterative testing of prevalence measures. Slavery estimation had moved from secondary source "guesstimation" to comparable random sample surveys to an algorithmic process ensuring comparability and the potential for replication, reproducibility, and further "ground-truthing" research. Because this fourth stage of testing can continue to be elaborated over many iterations, it is unlikely a fifth stage will emerge in the near future.
Given, as well, that this technique can also be combined with Multiple Systems Estimation (MSE) (van Dijk and van der Heijden provide a full discussion of MSE in this issue) to generate prevalence measures for highly developed nations for which surveys are not appropriate, it is possible to imagine a global estimate in which most country estimates rest on a firm quantitative methodological foundation.
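The intuition behind MSE can be shown with its simplest two-list case, the Chapman variant of the Lincoln-Petersen capture-recapture estimator: the degree of overlap between two independent lists of known victims implies how many victims appear on neither list. All counts below are hypothetical, and real MSE applications use three or more lists with log-linear models to relax the independence assumption:

```python
def chapman_estimate(list_a: int, list_b: int, overlap: int) -> float:
    """Chapman two-list capture-recapture estimate of total population size.
    list_a, list_b: cases appearing on each administrative list;
    overlap: cases appearing on both lists."""
    return (list_a + 1) * (list_b + 1) / (overlap + 1) - 1

# Hypothetical: 300 victims known to police, 200 known to NGOs,
# 40 appearing on both lists.
print(round(chapman_estimate(300, 200, 40)))
```

The smaller the overlap between lists, the larger the implied hidden population, which is what makes the method attractive for a crime whose victims rarely reach official registers.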
If there is a fly in this optimistic ointment, it is that while measurement issues are slowly being resolved, little progress has been made in arriving at a shared operational definition for the object of study: slavery.

The Definitional Challenge
It is worth noting that, for most of human history, slavery was both ubiquitous and undefined; slavery was so common that defining it was not necessary. Over time, laws did set out who might be enslaved or manumitted, but it was an activity so well understood that it was rarely given a precise definition. In a number of historical contexts, detailed criteria set out who might be enslaved, such as the Slave Codes of the U.S. Deep South, Roman slave laws, or even the Nazi Nuremberg (Reich Citizenship) laws, which allowed the separation within the population of people without rights.
These, however, are not definitions in a human rights framework, but tools designed by slaveholders to specify and control the enslaved and/or enslavable. Nor was slavery normally defined in the early treaties and laws that regulated and then abolished legal slavery in the 18th and 19th centuries.
For example, the 13th Amendment to the U.S. Constitution simply reads, "Neither slavery nor involuntary servitude…shall exist within the United States." It was only in the 20th century, when virtually all slavery was ostensibly illegal, that it was felt that the human activity known as "slavery" needed a specific definition. This perceived need was exacerbated in the early 1990s, when a mushrooming of human trafficking paralleled an equally growing traffic in arms and drugs across the same borders.
In response to this suddenly visible movement of trafficked people into developed countries, especially into commercial sexual exploitation, a number of groups and political actors pressed for new regulation. Some commentators describe a "moral panic" in this period, pushed by diverse groups. A key outcome of this sudden interest and energy was a number of new international conventions and national laws, all of which tended to define slavery or human trafficking differently. This is not the place to review all these varying definitions, but it is worth noting, as an example of the mix of definitional frameworks, that some activities, such as forced or compelled marriage or organ trafficking, were defined as subsets within slavery and others were not. Still other new legal definitions defined slavery itself as a subset of another activity, such as human trafficking. The lack of agreement between these legal instruments has created confusion across jurisdictions and generated a lack of conceptual clarity when confronting activities that may or may not be considered within the wider category of slavery.
A second result is that courts have issued rulings that either set down divergent definitions or interpret the same definition very differently. Remarkably, international law also says that the prohibition of slavery is jus cogens, an internationally applicable peremptory norm from which no derogation is ever permitted. Thus, we find ourselves with a universally and comprehensively forbidden crime, but one that is defined in different, often even contradictory, ways.
These disparities in legal definitions create difficulties in developing an operational definition for another reason: The voices and views of those who have been enslaved have been excluded in their construction. After all, slavery is, first and foremost, a lived experience-not a legal definition, an analytical framework, or a philosophical construct. At the moment it is occurring, slavery is first the experience of an individual person, and second a relationship between at least two people: the slave and the slaveholder. Slavery also carries cultural, political, and social meanings; meanings that are important to understand if we are to grasp the context of slavery and the factors that might predict its occurrence.
Within these different dimensions of enslavement, the lived experience of slaves is of primary importance, not least because the way in which slavery is classified and defined, in law and in public opinion, determines who is eligible for relief and who is not; who may live with some measure of personal autonomy and who may die in bondage.
It has been necessary to discuss the legal definitions of slavery as a preamble to understanding the lack of a generally accepted operational definition of slavery because of controversy, and misunderstanding, within the larger anti-slavery field concerning how slavery is defined. Many actors in the larger academic, as well as the applied antislavery movement, have argued that, because slavery is now an illegal activity, legal definitions must be paramount. But legal definitions are written for a specific purpose: to guide the implementation of law and to make clear, within the legal framework, when a specific crime has been committed. That is not the aim of an operational definition.
An operational definition aims to identify, in a precise way, the nature and characteristics of an object of scientific research. It is fundamentally a definition that sets out clearly what is, and what is not, the subject of inquiry and measurement. Attempts to use any of the widely disparate legal definitions to guide research into the social activity known as slavery have not been illuminating or successful-with one exception.
Over a three-year period (2010-2012), a group of legal, social science, and other experts met to resolve the definitional confusion, asking whether there were a definition that might both apply within the law and be useful operationally to guide social science, and especially quantitative, research. The consensus of this group was that the definition providing the necessary clarity and usefulness within the existing international legal framework was that of the 1926 Slavery Convention of the League of Nations: "Slavery is the status or condition of a person over whom any or all of the powers attaching to the right of ownership are exercised." The committee of experts added explanatory guidelines to clarify the application of the 1926 definition. The aim was to elucidate the "powers attaching to the right of ownership" so the attributes of any instance of suspected enslavement might be compared to the criteria inherent in the 1926 Convention. To accomplish that, it is necessary, firstly, to locate the legal definition in the lived reality of enslavement and, secondly, to specify more clearly the attributes of ownership that apply in the law of property and make clear how these attributes apply to the situation of enslavement.
The core of this adaptable definition is the powers attaching to ownership. The most central of these is the right to possess; according to Honoré, this is "the foundation on which the whole superstructure of ownership rests" (1961). Possession is demonstrated by control, normally exclusive control. This is best demonstrated in what Hickey describes as the "maintenance of effective control" (2010), meaning exercising control over time and likely to include other instances or indicators of ownership.
These other instances are the right to use, right to manage, right to income-"use" being the right to enjoy the benefit of the possession; "manage," the right to make decisions about how a possession is used; and "income," the right to profits generated by a possession. In addition to these central rights of ownership is the right to capital, which refers to the right to dispose of the possession by transfer, consumption, or destruction.
These "instances of ownership" (control, use, management, and profit) may be regarded as the central rights of ownership. It is their presence and exercise that can be applied and tested within a situation, such as slavery, where actual legal possession is not permitted. Given the illegality of slavery in all countries, they provide a critical power of definition and identification of the crime of slavery: these "instances" can be treated as measurable indicators in an operational definition of slavery.
The other attributes of possession, as normally expressed, pertain primarily to ownership that is sanctioned by law and so are less useful in understanding modern forms of illegal enslavement. That is not to say, however, that modern slaveholders do not seek to exercise these "rights" when they can. These other attributes of possession include the right of security (protection against illegal appropriation of a possession); transmissibility (the right to transfer legal ownership); and two indicators of the permanence of possession: absence of term (the lack of a time restriction on ownership, an attribute in that slavery is a relationship of control that exists for an indeterminate period of time) and the residual character of ownership (a possession may be loaned or rented, but will return to its owner and never cease to be property).
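Treated as measurable indicators, the central instances of ownership could be coded as a simple checklist when assessing a suspected case. The field names and structure below are purely illustrative assumptions; the Bellagio-Harvard Guidelines do not prescribe any particular coding scheme:

```python
from dataclasses import dataclass

@dataclass
class OwnershipIndicators:
    """Hypothetical boolean coding of the central 'instances of ownership'."""
    control: bool     # possession: exclusive, effective control of the person
    use: bool         # benefit taken from the person's labor or body
    management: bool  # decisions about how the person is used made by another
    profit: bool      # income generated by the person taken by another

    def indicator_count(self) -> int:
        """Number of ownership instances present in this case."""
        return sum([self.control, self.use, self.management, self.profit])

# Hypothetical case record: control, use, and management present; profit absent.
case = OwnershipIndicators(control=True, use=True, management=True, profit=False)
print(case.indicator_count())
```

How many indicators, and which combinations, constitute enslavement is exactly the kind of operational question researchers would need to agree upon.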
The key product of the committee of experts was the Bellagio-Harvard Guidelines on the Legal Parameters of Slavery. This short document sets out how the definition of slavery used in the 1926 Convention is coherent and useful in both legal and social science contexts.
It is unlikely that there will be a consensus on a shared operational definition among researchers into slavery any time soon, but at the very least, a discussion and exploration of such potential operational definitions should occur. The arguments offered for the "exceptionalism" of certain types or methods of slavery tend to create studies and reports that are noncomparable; that is, they "compare apples and oranges." The fundamental result is a growing body of literature that is much less useful in addressing the crime of slavery than it might be.
If the definitional problem were not sufficiently challenging, it is exacerbated by the lack of transparency and reproducibility in research methods and data.

The Need for Transparency and Reproducibility
If the social sciences have achieved a basic set of methodological tools with which to measure slavery, and an operational definition that might guide comparable research on contemporary slavery, a serious challenge remains: a lack of data transparency, which makes the fundamental scientific requirement of reproducibility impossible. In many ways, it is surprising that such a lack of transparency exists, given the nature of the phenomenon being studied.
Slavery and human trafficking are serious crimes, with terrible repercussions on the lives of the enslaved. The deaths, diseases, injuries, and mental health impacts of slavery are well-known. Slavery is a threat to life, health, well-being, and the social stability of communities, as well as a known facilitator of conflict, rape, violence in many forms, and brutal treatment of children. Both the immediate effects of slavery and its sequelae across generations are also well known.
Given those demonstrated and widely known facts, the ongoing lack of transparency and data sharing in the study of slavery is not just a threat to good science. It prevents comparable analyses that might reduce suffering and the extreme human cost of slavery.
Shared data have the potential of leading to the amelioration and reduction of this horrific crime. For that reason, a quick review of the nature of why science operates through open dialogue and the sharing of data and results, and why that practice is critically needed today in the study of slavery, is necessary.
Within the sciences, including the social sciences, the internal political economy-the measurement of worth and meaning-is not financial. It is much closer to what anthropologists call a "gift economy." Spufford provided a wonderful explanation of the gift economy in the medical sciences: "In a gift economy, status is not determined by what you have, but by what you give away. The more generous you are, the more you are respected; and in turn, your generosity lays an obligation on other people to behave generously themselves, to try to match your generosity and so claim equal or greater status.…When scientists practice [their gift economy], the gift they give away is information" (2003).
While there are informal expectations within the academic gift economy, it is also rigorously and formally governed by the rules of scientific publishing. These rules include requirements that published articles must make data freely available for re-analysis, and that sources of data and ideas are clearly acknowledged and cited.
It is important to note that these rules do not hamper competition; in fact, they increase it and foster it, since giving everyone access to the same shared information and data doesn't just level the playing field; it opens the field to any and all comers. This competition can be harsh, energetic, even bruising, but that is also a reflection of the fact that the reward for competing successfully is nothing as mundane as money. It is a much more powerful motivator: respect.
Of course, if the only reason for transparency and reproducibility were to gain respect in a circular game of academic one-upmanship, there would be little point in observing such rules, but the highly productive scientific gift economy is only a foundation for a much more important and pragmatic activity.
Science is based upon the accretion of ideas and findings. Every scholar may believe their ideas and findings are important, but more widely, in society as a whole, certain ideas and findings are considered critically important and valuable in their power to transform or protect human life. Medical research is a clear example, and the hoarding of a new idea or data with the potential to save lives or reduce suffering would be seen as not just unacceptable, but shameful.
So, too, it must be argued, would be withholding ideas, findings, or data pertaining to a locus of suffering, a crime as monstrous as slavery. When businesses seek to monetize information about slavery as a condition of their business plans, they have to lock away ideas and data, since free data cannot be monetized. When non-governmental organizations seek to lock away and control data, for whatever reason, they place themselves in the same category of selfish negligence as such businesses, since ideas and data withheld cannot be used to solve pressing problems, reduce suffering or, even, free slaves.
In many areas of research with a direct impact on lives and well-being, shared systems for information and data exchange are common. The open and freely searchable European Bioinformatics Database, for example, hosts a whole series of separate specialist databases. One of these alone, the Malaria Data site, holds records of 371,255 compounds and 25,726 publications.
The systematic study of contemporary slavery is relatively recent, but the destructive potential of the object of study suggests that a system of information and data exchange is overdue. In the same way that the scientific study of slavery is hampered by definitional confusion, it is also held back by a failure to respect the rules of science. In some arcane areas of academic endeavor, that might not matter, but slavery, for obvious reasons, is not one of them.

This is why [the special] issue of CHANCE [was] not simply useful to scholars, but important in the wider sense. The articles in [that] issue seek to achieve two key goals. The first is to make clear the current state of play in the field of measuring slavery; the second is to demonstrate what can be achieved when researchers in this field operate by the shared rules of scientific endeavor. All of the authors are keenly aware that they are working in a new field; that they are, at times, setting out new ideas, procedures, and methods, and most of all, that to make progress, they must do so in a way that is transparent and open to critique and improvement.
The work presented in this special edition on developments in vulnerability modeling, and how that might be used in an extrapolation process to estimate the prevalence of slavery, is both groundbreaking and a work in progress. The explanation of the use of the methodologically sophisticated Gallup World Poll surveys to better measure slavery prevalence explores what happens when a trusted and refined tool is brought to bear on a hidden human activity.
Since much of the information available globally is the product of governments, it is crucial to assess the reliability of such data and to consider what tools might be used to resolve questions of data integrity. The exploration of the innovative application of the technique of MSE to measuring the prevalence of slavery appears to offer a solution to the problem of estimating the extent of slavery in well-developed countries.
The final article [in the special issue] suggests not just the way forward, but the tools and practices that will be needed to move forward expeditiously; ideally into a world where the metrics of slavery are used to guide the eradication of slavery.