Questions for Providers of Expert Opinion on Logged GNSS Evidence

This is the third in a series of papers with the twin aims of providing comprehensive guidance for expert witnesses asked for opinion on GNSS data evidence and of developing a standard for GNSS data logging. This paper examines the admission of GNSS evidence, then draws on a series of questions posed in an earlier work and attempts to show how an expert can go about answering those questions to the satisfaction of a court. The paper concludes by recommending a “checklist” that an expert witness can work through in order to establish the level of trust that can be assigned to logged GNSS data.


INTRODUCTION
The use of satellites to generate evidence is on the rise. Data generated by satellites is used in an ever-expanding constellation of legal, quasi-legal and political contexts. Understanding the data produced by satellites, together with interpretations of those data and their limitations, is a serious issue confronting an increasing array of users. Apart from issues raised through controversies around photographs and their meaning (e.g. whether chemical weapons were produced in Iraq), there is little evidence of serious engagement with the generation of satellite data and its interpretation. As the equipment associated with GNSS becomes more portable and accessible, and derivative evidence more ubiquitous, it seems essential that those producing and presenting this potentially powerful evidence in legal settings should have a clear understanding of both its value and its limitations. Data, and interpretations (or opinions) based on the data, should be reported and presented in ways that are consistent with known abilities.
This article endeavours to direct attention to factors that may affect the validity and reliability of evidence generated through GNSS. Our concern is that GNSS-based evidence is generated, interpreted and reported in ways that fairly present its known value. For reasons developed below, we are not confident that conventional rules and safeguards, legal personnel, investigators and jurors are appropriately resourced or generally capable of exploring and evaluating this evidence. In consequence, we recommend that engineers and other experts should pro-actively develop guidelines and standards to ensure the production and presentation of reliable GNSS evidence.
This article begins by introducing the reader to GNSS and some of its potential evidentiary uses. It then explains the importance, for those seeking to adduce, rely upon or present GNSS evidence, of addressing reliability issues; and particularly the recent and authoritative advice from peak scientific organisations on the production and presentation of forensic science evidence. We then turn to consider a range of factors that might impact upon the validity and reliability of GNSS evidence before concluding with a brief discussion.
GNSS data might be used as evidence in matters such as border incursions, burglary locations, vehicle velocity, and placing people or vehicles at a particular place at a particular time.
Examples include logged speed data in a vehicle accident where speed is thought to have played a part, corroboration of the evidence of a witness or defendant who claimed a certain set of movements, placing a defendant at the scene of a crime, and so on. If such data is contested, it falls to the expert to comment on the degree to which the evidence can be trusted.

ADMISSION AND USE OF GNSS EVIDENCE IN LEGAL PROCEEDINGS
The technical and engineering frameworks that led to the generation of GNSS systems provide an important foundation for the use of GNSS data and interpretations of data as evidence in legal proceedings. These systems place GNSS in a favourable position to address prevailing admissibility standards for expert evidence and enable those responsibly presenting GNSS evidence to explain the data and legitimate interpretations of the data.
In the following sections we will explore issues that may threaten the reliability of data and interpretations generated through GNSS. In this section we introduce a range of influential legal expectations associated with the adduction of expert evidence in some of the most influential adversarial jurisdictions. These rules and procedures are intended to provide some guarantee about the probative value of the evidence and, through reporting obligations and cross-examination, provide basic means of identifying limitations.
It is our general contention that those building and managing GNSS systems (i.e. relevant experts) should develop frameworks and standards that govern how data is generated and interpreted. To assist that goal, in this section we identify three factors that should bear on the way engineers (and others) approach the production of GNSS evidence. 1 They are: (i) the emergence of reliability-based admissibility and procedural rules; (ii) interventions and advice from peak scientific organisations (e.g. the NAS and PCAST); and, relatedly, (iii) the importance of incorporating limitations and uncertainties in the presentation of GNSS data and the interpretation of data to facilitate rational evaluation (and to mitigate resource asymmetries). We begin with admissibility.
Most adversarial legal systems require those proffering expert opinion evidence in reports or proceedings to satisfy procedural rules and admissibility standards. The most important and influential of these is undoubtedly the Daubert standard. In Daubert v Merrell Dow Pharmaceuticals, Inc. (1993) the US Supreme Court insisted that to be admissible expert opinion evidence must be 'reliable'. 2 For scientifically-based evidence the Court stipulated that the '[p]roposed testimony must be supported by appropriate validation'. 3 These expectations have since been incorporated into the revised Federal Rules of Evidence and many state counterparts. Not only should expert opinion assist the decision-maker but the Federal Rules now require 'sufficient facts or data', the use of 'reliable principles and methods', and the reliable application of 'the principles and methods to the facts of the case'. 4 In Daubert, the Supreme Court provided a list of factors (the Daubert criteria) that might be used by trial judges to assist with their gatekeeping responsibilities. These criteria direct attention to: whether the procedure is testable and has been tested; whether the procedure has been published and peer reviewed; the rate of error; the existence of standards; and the extent to which the procedure had attained general acceptance among the relevant specialist communities.
US concern with reliability has been influential on other jurisdictions, particularly common law or adversarial legal systems. The Canadian Supreme Court, most conspicuously, endorsed the concern with reliability, the trial judge's gatekeeping responsibility, and the expectation that issues of reliability should not be abandoned to the trial and the impression of decision-makers. 5 In R v J-LJ the Court indicated that the 'admissibility of expert evidence should be scrutinised at the time it is proffered, and not allowed too easy an entry on the basis that all of the frailties could go at the end of the day to weight rather than admissibility.' The Court was anxious that the 'search for truth' in the courtroom should not include 'expert evidence which may "distort the fact-finding process."' 6 New Zealand is another jurisdiction where the Daubert criteria have received appellate endorsement (explicitly in the Privy Council) in relation to the admissibility of expert opinion evidence. 7 Other jurisdictions have embraced reliability or indicia of reliability less directly. In England and Wales, recent amendments to rules of procedure extend judicial attention beyond the expected impartiality of the expert witness to questions of validity, reliability and limitations with the evidence. The Criminal Procedure Rules and Criminal Practice Direction closely resemble the Daubert-inspired admissibility standard proposed by the Law Commission of England and Wales following its review Expert Evidence in Criminal Proceedings in 2011. 8 Australian courts also place emphasis on expert impartiality as well as the expectation that expert reports provide sufficient information for trial judges to determine whether opinion evidence satisfies admissibility standards based around the need for 'specialised knowledge'.
Australian procedural rules, specifically Codes of Conduct for Expert Witnesses, require expert reports to identify limitations, uncertainties, relevant literatures, areas where additional research or testing is required, and even non-trivial controversies. 9 Procedural rules, practice directions and codes of conduct are not admissibility rules per se, and non-compliance is likely to be considered as an issue for weight. 10 Other legal systems, including those that were not historically adversarial, may not be formally constrained by admissibility rules or the need for validity and reliability. However, there is a general trend toward reliability that extends beyond the common law world. Nevertheless, to the extent that legal institutions purport to operate in the post-Enlightenment tradition of evidence and proof, all decision-makers (whether lawyers, judges or jurors) must be placed in a position where they are capable of rationally evaluating any expert evidence admitted.
In practice, admissibility and procedural rules, along with calls for judicial gatekeeping, have not led to the exclusion of very much expert opinion evidence. The reluctance to exclude is, somewhat counter-intuitively, most conspicuous in relation to opinion evidence adduced by the state in criminal prosecutions. Most legal systems, including those with explicit reliability standards such as the US and Canada, continue to invest trial safeguards with the ability to identify and convey issues with expert opinions and the credibility of experts. 11 Daubert exemplifies the prevailing commitment in most adversarial systems.
Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible [expert] evidence. 12

Non-adversarial systems tend to be even less restrictive toward the admission of relevant evidence, although there has been an historical preference for court-appointed experts rather than those adduced by parties. In these traditions reliance is placed on the experience of the decision-maker (usually a legally-trained experienced judicial officer) as well as provision for legal representation and scope for witness questioning.
Most of these rules and expectations apply to criminal and civil proceedings. Indeed, the Daubert 'revolution' emerged out of civil proceedings in US federal courts. All systems of civil justice maintain rules regulating the production of expert reports, discovery, as well as pre-trial processes and expert testimony (including new procedures such as concurrent evidence). Most of the issues that arise in criminal proceedings have analogies or parallel the way expert opinion evidence is used in civil proceedings. While judges, especially judges hearing cases without a jury, might be willing to admit and consider the weight of evidence that does not strictly comply with admissibility or procedural rules, when endeavouring to evaluate the evidence decision-makers in every type of proceeding should be placed in a position to understand and evaluate (contested) expert evidence.
Unfortunately, opposing parties (and especially the defence in criminal proceedings) are not necessarily in a position to identify and convey issues with technical forms of (expert) evidence. Pressures on budgets, particularly the resources available to criminal defendants, have tended to accentuate problems. Dangers may be acute where, as with GNSS evidence, the underlying systems appear robust or self-evident and issues and limitations may be unfamiliar to investigators (e.g. police) and other non-expert users and audiences. It is with the knowledge that traditional trial safeguards have not worked well, along with the need to place decision-makers in a position to evaluate all of the evidence, that we place emphasis on the importance of identifying and explaining potential issues in the sections that follow.
Secondly, in conjunction with the legal move toward 'reliability', during the last decade a number of independent scientific and technical organisations have intervened to provide advice about legal engagement with forensic science evidence. These interventions, following formal reviews (often in the wake of mistakes, such as the mis-identification of Brandon Mayfield, and wrongful convictions, such as those emerging out of the DNA-based Innocence Projects), produced unprecedented criticism of traditional practices in many areas of forensic science and medicine. They placed emphasis on the need not only to validate procedures, but also to ensure that applied practices are consistent with validation protocols and standards, and that results are expressed in ways that incorporate limitations, uncertainty and the risk of error (where known).
The most important of these reviews were undertaken by the U.S. National Academy of Sciences (NAS), published in 2009, and the President's Council of Advisors on Science and Technology (PCAST), published in 2016. The NAS report insisted that:

Two very important questions should underlie the law's admission of and reliance upon forensic evidence in criminal trials: (1) the extent to which a particular forensic discipline is founded on a reliable scientific methodology that gives it the capacity to accurately analyze evidence and report findings and (2) the extent to which practitioners in a particular forensic discipline rely on human interpretation that could be tainted by error, the threat of bias, or the absence of sound operational procedures and robust performance standards. 13

The NAS report found that many procedures had not been formally evaluated, standards were often vague, few attempts had been made to measure error and uncertainty, and that some types of expression in routine use were 'scientifically implausible'. 14 Both the NAS and PCAST placed unprecedented emphasis on validation (i.e. 'reliable scientific methodology'), the determination and disclosure of error and limitations, developing empirically-based standards, and appropriate forms of expression, as well as the need to address human factors (especially the threat of cognitive bias) where evidence incorporates human interpretation.
These authoritative scientific organisations also directed attention to the need to study and manage risks posed by cognitive bias. Forensic scientists had traditionally ignored dangers raised by exposure to domain-irrelevant information and suggestive procedures. Steeped in research from cognitive science and rigorous methodological designs, particularly from biomedical research (where double-blind clinical trials are routine), independent scientists recommended that forensic scientists determine what information is required for specific analyses and blind the analyst to gratuitous (or domain-irrelevant) information. Strategies such as sequential unmasking and diachronic documentation were presented as alternatives where complete blinding was impractical. 15

Reviewing progress in the years following the NAS report (between 2009 and 2016), PCAST found that many procedures in routine use were either not foundationally valid or not valid in the way they were routinely applied in casework. Moreover, many results were not reported in empirically-based terms that drew attention to the risk of error, human factors and uncertainties. Relatively few reports provided clear explanations of what was done and the reasoning processes involved.
Finally, attentive scientists were critical of the performance of lawyers, judges and legal institutions. They explained that courts could not be expected, or relied upon, to address 'reliability issues' with expert evidence.
For a variety of reasons-including the rules governing the admissibility of forensic evidence, the applicable standards governing appellate review of trial court decisions, the limitations of the adversary process, and the common lack of scientific expertise among judges and lawyers who must try to comprehend and evaluate forensic evidence-the legal system is ill-equipped to correct the problems of the forensic science community. In short, judicial review, by itself, is not the answer. 16

PCAST went further, recommending that the US federal government should not adduce expert evidence in criminal proceedings unless the underlying procedure was both foundationally valid and valid in the way it was applied and the results reported.
To the extent that GNSS data is entering legal proceedings as evidence in its own right, or to ground expert interpretations, in the wake of interventions by the NAS and other peak scientific organisations, it seems essential that proponents attend to these authoritative recommendations and advice. Unlike many traditional forensic sciences developed in decades before the reliability revolution and the NAS report (e.g. latent fingerprint, handwriting and bitemark comparison), those working with GNSS are relatively well positioned to answer questions about validity and reliability. Moreover, engineers are, at least in principle, capable of managing cognitive issues. It would seem incumbent upon those seeking to adduce or present GNSS-based evidence to ensure that its strengths and potential frailties are disclosed (at least in reports), and that decision-makers are not misled by the evidence or the expectation that non-trivial frailties will be identified by the defence (or opposing parties).
Thirdly, recourse to GNSS raises real risks that non-experts and others will be inclined to rely on naïve or impressionistic approaches to the evidence without insight into the range of potential vulnerabilities and limitations. There is also a danger that proponents, whether experts or others (such as investigators), will advance exaggerated or asymmetrical interpretations in the naive belief of accuracy, or the expectation that limitations will be identified by legal opponents and understood by decision-makers. In line with our commitment to presenting expert evidence in ways that embody its known value (and this includes limitations), and sensitive to the frailties of the legal reliability revolution and trial safeguards in practice, what follows is a discussion of factors that may individually or in combination threaten GNSS data and conclusions generated from them. These are the sorts of issues that should be considered, even if only to be dismissed, in cases where GNSS evidence is contested or perhaps should be questioned.
We accept that GNSS evidence is likely to satisfy most admissibility rules and procedural requirements. Nevertheless, those producing and proffering opinions should be endeavouring to provide decision-makers with the means of evaluating their evidence, especially derivative opinions. Those producing GNSS evidence should explain their reasoning processes and pro-actively identify limitations, uncertainties and present interpretations in terms that are epistemologically appropriate.

ISSUES FOR USERS
This section draws attention to a number of issues that individually or in combination may influence the data and/or interpretations of data generated through GNSS. While not every issue will arise in every case, these sorts of issues should be systematically considered by those producing and relying upon GNSS evidence. Figure 1 provides an indication of some of the areas of vulnerability in GNSS systems.

Figure 1: Overview of a GNSS data reporting system, with potential problem areas highlighted in red.

A. System accuracy and integrity
When logged data from a GNSS receiver is used in court as evidence, the positioning system in its entirety is effectively being asked a question, while an expert commenting only on the data obtained from a receiver is asked a different one. For instance, in a case where a person in possession of, or in proximity to, a receiver needs to be located at a particular time, the positioning system may be asked "where was that person?" whereas the expert may be asked "how much trust can be put in the position recorded by the receiver at that time?" The question of trust runs right to the core of describing the performance of the positioning system, which is typically expressed, for most applications, in terms of metrics describing the accuracy, reliability and integrity of the position computed or derived from the signal measurements between the receiver and the satellites. The accuracy question can itself be broken down into separate questions, including trust in the positioning system.
Typically, the receivers under discussion are not designed to provide immediately the information required to answer many of the questions posed by investigators and lawyers, and reliance on proxies, where available, has become the default mechanism for composing a response by an expert witness. There are, however, a number of approaches that can be applied before and/or after logging of the data to answer one or more of these questions. These are:
1. Receiver-based techniques
2. Integrity
3. Encryption
4. External monitors
5. Authentication by corroboration
The purpose of this paper is to consider how each question might be addressed in order for a court to have confidence in the logged GNSS data presented as evidence.

Q1 Was the GNSS System operating properly?
After the fact, it is possible to check if there were significant problems with a GNSS by referring to the advisory notices published for that system. For GPS, these are called Notice Advisories to NAVSTAR Users (NANUs) and times and dates can be searched at the US Navigation Center (www.navcen.uscg.gov/?pageName=selectNanuByNumber). For Galileo, they are called Notice Advisories to Galileo Users (NAGUs), (see www.gsc-europa.eu/system-status/user-notifications). Other systems provide similar services.
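The advisory check described above can be sketched in code. This is a minimal illustration only: it assumes the analyst has already reduced the free-text NANU bulletins to simple (PRN, start, end, type) records, and the record values below are invented.

```python
from datetime import datetime

# Hypothetical, simplified NANU records: (PRN, outage start, outage end, type).
# Real NANUs are free-text bulletins; these values are invented for illustration.
nanus = [
    (18, datetime(2020, 3, 1, 10, 0), datetime(2020, 3, 1, 18, 0), "FCSTMX"),
    (7,  datetime(2020, 3, 5, 2, 0),  datetime(2020, 3, 5, 4, 30), "UNUSUFN"),
]

def satellites_flagged(records, window_start, window_end):
    """Return PRNs with an advisory overlapping the evidential time window."""
    return sorted({prn for prn, start, end, _ in records
                   if start <= window_end and end >= window_start})

# Did any advisory overlap the period covered by the logged evidence?
flagged = satellites_flagged(nanus,
                             datetime(2020, 3, 1, 12, 0),
                             datetime(2020, 3, 1, 13, 0))
print(flagged)  # [18]
```

Any satellite returned by such a check would warrant closer scrutiny in the expert's report.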
Included in the navigation data are flags indicating individual satellite health, which the receiver can monitor. These health bits are not updated very rapidly, so a satellite may begin to behave erratically before the health bit warns users. At any given time, the status of individual GPS satellites can be checked at the US Navigation Center (www.navcen.uscg.gov/?Do=constellationStatus).
This possibility of a health bit not yet being set when it is needed was the motivation behind the development of Space-Based Augmentation Systems (SBAS) to provide integrity for aviation users: a ground-based network of monitors would swiftly detect a satellite failure and report that failure to the receiver via satellite, on the same L1 carrier frequency as GPS, and the receiver could then eliminate that satellite from its position calculations. Techniques for performing this task in the receiver, known as Receiver Autonomous Integrity Monitoring (RAIM), were also developed, and can be used to detect other problems, as we will see.
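The RAIM idea can be illustrated with a simple residual-based consistency test. This is a sketch, not any particular receiver's implementation: it assumes a linearized geometry matrix, an assumed pseudorange noise level, and a hard-coded chi-square threshold for six satellites (two degrees of freedom); the geometry and measurement values are invented.

```python
import numpy as np

def raim_residual_test(G, rho_resid, sigma=5.0, threshold=9.21):
    """Residual-based RAIM consistency check (sketch).

    G          : (n, 4) geometry matrix (unit line-of-sight vectors + clock column)
    rho_resid  : (n,) pseudorange residuals about the linearization point (m)
    sigma      : assumed pseudorange noise standard deviation (m)
    threshold  : chi-square threshold for n - 4 degrees of freedom
                 (9.21 is the 99th percentile for 2 dof, i.e. 6 satellites)
    Returns (test statistic, True if the measurement set looks consistent).
    """
    x_hat, *_ = np.linalg.lstsq(G, rho_resid, rcond=None)  # LS position/clock fit
    v = rho_resid - G @ x_hat                              # post-fit residuals
    T = float(v @ v) / sigma**2
    return T, T < threshold

# Six satellites with reasonable geometry (invented unit LOS vectors).
G = np.array([
    [ 0.0,  0.0, 1.0,    1.0],
    [ 0.8,  0.0, 0.6,    1.0],
    [-0.8,  0.0, 0.6,    1.0],
    [ 0.0,  0.8, 0.6,    1.0],
    [ 0.0, -0.8, 0.6,    1.0],
    [ 0.5,  0.5, 0.7071, 1.0],
])
rng = np.random.default_rng(1)
truth = np.array([2.0, -3.0, 1.0, 0.5])
clean = G @ truth + rng.normal(0.0, 1.0, 6)   # healthy measurements
T_ok, consistent = raim_residual_test(G, clean)

faulty = clean.copy()
faulty[2] += 100.0                            # inject a 100 m fault on one satellite
T_bad, still_consistent = raim_residual_test(G, faulty)
print(consistent, still_consistent)  # True False
```

A failed test tells the receiver that at least one measurement is inconsistent; with more satellites, the faulty one can often be identified and excluded.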

Q2 Was the ionosphere/troposphere behaving itself? Was position affected?
Most jurisdictions have networks of stationary GNSS receivers that are used for geodetic and infrastructural purposes. In Australia, for instance, Geoscience Australia operates the Australian Regional GPS Network (ARGN). By consulting data from these networks, ionospheric (and to some extent tropospheric) disturbances to individual satellite measurements (pseudoranges) can be observed if present.
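Where dual-frequency pseudoranges are available, the first-order ionospheric delay can also be estimated directly, because the delay scales with the inverse square of carrier frequency. A minimal sketch for GPS L1/L2 (the pseudorange values are invented, and inter-frequency biases and measurement noise are ignored):

```python
# GPS carrier frequencies (Hz); the ratio f1/f2 is exactly 77/60.
F_L1 = 1575.42e6
F_L2 = 1227.60e6
GAMMA = (F_L1 / F_L2) ** 2  # ~1.6469

def iono_delay_l1(p1, p2):
    """First-order ionospheric delay on L1 (metres) from dual-frequency
    pseudoranges, ignoring inter-frequency biases and measurement noise."""
    return (p2 - p1) / (GAMMA - 1.0)

# Invented example: the L2 pseudorange exceeds the L1 pseudorange by 3.2 m.
delay = iono_delay_l1(22_000_000.0, 22_000_003.2)
print(round(delay, 2))  # 4.95
```

Comparing such estimates against reference-network data gives the expert an independent check on whether the ionosphere was disturbed at the time of the logged fix.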
The effect of an individual pseudorange error on the computed position is discussed under Q6, so the supplementary "was positioning affected?" question is omitted from Q3 and Q4.

Q3 Was the receiver affected by multipath?
In the absence of any information coming from the receiver, the expert can talk in general terms about whether the receiver was in an environment where multipath is likely. In open farmland, for instance, multipath is highly unlikely; in a high-rise urban environment, multipath (and blockage) is highly likely, with likely consequent degradation of position. At any given instant, however, it is not possible to speculate as to whether a given measurement will suffer multipath error, or whether that error would be positive (delay) or negative (advance), because in-phase and anti-phase short-delay multipath produce those two cases respectively. Long-delay multipath can cause severe delays in the case where the multipath power exceeds the power in the line-of-sight signal, so that the receiver effectively measures the multipath-affected path as the range.
A GNSS receiver has a number of methods by which multipath can be detected and the affected pseudoranges excluded, some of which have been developed by the authors (e.g. detection [3], exclusion [4]).
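One widely used multipath observable is the code-minus-carrier (CMC) combination: differencing pseudorange and carrier-phase range cancels geometry and clocks, and the mean of the difference absorbs the (constant) carrier ambiguity, leaving a residual dominated by code multipath and noise. A simplified sketch, ignoring ionospheric divergence and using an invented 5 m scatter threshold and invented measurement series:

```python
import math
from statistics import pstdev

def cmc_multipath_flag(code_m, carrier_m, threshold_m=5.0):
    """Flag likely code multipath from the code-minus-carrier residual scatter.

    code_m, carrier_m: pseudorange and carrier-phase range series in metres.
    pstdev measures scatter about the mean; the mean absorbs the constant
    carrier ambiguity, so only multipath and noise variation remains.
    """
    cmc = [p - c for p, c in zip(code_m, carrier_m)]
    return pstdev(cmc) > threshold_m

# Invented series: a smoothly changing range, with a quiet code track
# and a multipath-affected code track.
n = 50
carrier = [20_000_000.0 + 0.5 * i for i in range(n)]
quiet_code = [c + 1000.0 + 0.3 * math.sin(i) for i, c in enumerate(carrier)]
bad_code = [c + 1000.0 + 10.0 * math.sin(0.5 * i) for i, c in enumerate(carrier)]
print(cmc_multipath_flag(quiet_code, carrier), cmc_multipath_flag(bad_code, carrier))
```

This requires carrier-phase data to have been logged alongside pseudoranges, which is not the case for many consumer receivers.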

Q4 Were any signals attenuated?
This sort of data is often recorded by a receiver. For instance, the relatively common NMEA "GSV" sentence includes signal-to-noise ratio (SNR) data for the satellites used in a fix. These data are not automatically logged, but they can identify individual satellites that may be attenuated. It is relatively easy (i.e. there is very little computational penalty) for a receiver to keep track of the signal levels and report them.
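Extracting SNR values from a logged GSV sentence is straightforward. A minimal parser sketch follows; the example sentence is a typical GPS GSV message, and a real log should be parsed with a full NMEA library including checksum verification:

```python
def parse_gsv_snr(sentence):
    """Extract {PRN: SNR} from a single NMEA GSV sentence.

    GSV fields: type, total messages, message number, satellites in view,
    then up to four groups of (PRN, elevation, azimuth, SNR). The SNR field
    may be empty when a satellite is in view but not being tracked.
    """
    body = sentence.split('*')[0]          # strip the checksum
    fields = body.split(',')
    snr = {}
    for i in range(4, len(fields) - 3, 4):
        prn, _, _, s = fields[i:i + 4]
        if prn and s:
            snr[int(prn)] = int(s)
    return snr

example = "$GPGSV,3,1,11,03,03,111,00,04,15,270,39,06,01,010,00,13,06,292,49*74"
print(parse_gsv_snr(example))  # {3: 0, 4: 39, 6: 0, 13: 49}
```

Here an SNR of 00 for PRNs 3 and 6 would itself be a point of interest: those satellites were in view but effectively not received.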

Q5 Was the receiver jammed or spoofed?
Although these two attacks can look similar from the outside, their effects on the receiver are quite different. Whereas a jammer, if deliberate, aims to disable the receiver, a spoofer aims to fool it into recording a false position. Unintentional jamming is also possible.
One symptom of a jamming event is decreased SNR, so the same data used for Q4 can be used to identify it. Other detection methods include monitoring signal levels at different points in the receiver (GNSS signals sit below the noise floor, so these levels usually stay very constant). In the jamming case, all satellites will be affected (this may also occur in the attenuation case if the sky is blocked in most directions by the same heavy attenuator: trees, concrete, etc.).
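A crude jamming indicator along these lines can be sketched as follows: flag epochs where the mean C/N0 across all tracked satellites drops sharply below a running baseline. The 10 dB threshold and the example values are assumptions for illustration, not standards.

```python
def jamming_suspect(cn0_by_epoch, drop_db=10.0):
    """Flag epochs whose mean C/N0 (dB-Hz) falls sharply below a running baseline.

    A broadband jammer degrades every tracking channel at once, unlike
    multipath or blockage of a single satellite. The 10 dB drop threshold
    is an illustrative assumption.
    """
    flags = []
    baseline = None
    for epoch in cn0_by_epoch:
        m = sum(epoch) / len(epoch)
        if baseline is None:
            baseline = m
        jammed = baseline - m > drop_db
        flags.append(jammed)
        if not jammed:
            baseline = 0.9 * baseline + 0.1 * m  # track slow, benign variation
    return flags

# Invented C/N0 logs: five quiet epochs, then three with all channels suppressed.
quiet = [[45, 44, 46, 43]] * 5
jammed = [[30, 29, 31, 28]] * 3
print(jamming_suspect(quiet + jammed))  # five False, then three True
```

A heavy attenuator over the whole sky would trigger the same flag, so context (Q4) is still needed to distinguish the two cases.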
Detecting spoofing is quite different as the spoofing signal looks "real" to the receiver. In some circumstances, multipath metrics can be used to detect spoofing (e.g. [5]), but there are many methods including looking at where the signals come from, monitoring the incoming signal power, watching for jumps in measurements and/or position, and signal authentication. Generally these can be implemented within the receiver.
In general, the detection of jamming or spoofing is not reported by receivers.
If the expert witness has only the final positional data to work with, a jammed receiver will start to behave erratically, with large random errors, whereas a spoofed receiver will be "erratic" in a different sense: the receiver output looks good, but it reports a platform moving in a way that may not be possible, such as a boat travelling over land [6].
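The plausibility check just described (a platform moving in a physically impossible way) can be automated as a simple speed test on consecutive fixes. A sketch, with an assumed 60 m/s platform speed ceiling and invented coordinates:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (spherical Earth, R = 6371 km)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def implausible_jumps(track, max_speed_mps=60.0):
    """Indices of fixes implying a speed beyond the platform's capability.

    track: list of (unix time, latitude, longitude). The 60 m/s ceiling is an
    assumed platform limit; a spoofed receiver often 'teleports' between fixes.
    """
    bad = []
    for i in range(1, len(track)):
        t0, la0, lo0 = track[i - 1]
        t1, la1, lo1 = track[i]
        dt = t1 - t0
        if dt > 0 and haversine_m(la0, lo0, la1, lo1) / dt > max_speed_mps:
            bad.append(i)
    return bad

# Invented track: steady ~11 m/s southward, then a ~5.5 km jump in one second.
track = [
    (0, -33.0000, 151.0),
    (1, -33.0001, 151.0),
    (2, -33.0002, 151.0),
    (3, -33.0500, 151.0),
]
print(implausible_jumps(track))  # [3]
```

Checks of this kind can also be run against terrain or road data (e.g. a boat over land), which is a corroboration rather than a receiver-internal test.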

Q6 Was there good Geometry?
This is relatively straightforward to determine. Calculation of Dilution of Precision (DOP) can be performed by the receiver, or by an external observer, as long as the satellites used in the position solution are known. Using the almanac (orbit models), the location of each satellite can be determined and DOP calculated. DOP represents a factor by which the pseudorange error is multiplied to give an estimate of position error. It can be used in advance, to predict the accuracy, or retrospectively.
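The DOP computation described above can be sketched directly from satellite azimuths and elevations. This illustration builds the geometry matrix of unit line-of-sight vectors in local east-north-up coordinates plus a receiver-clock column, then reads the DOP values off the diagonal of the inverse of the normal matrix; the two satellite configurations are invented to contrast good and poor geometry:

```python
import math
import numpy as np

def dops(az_el_deg):
    """GDOP/PDOP/HDOP/VDOP/TDOP from satellite azimuths/elevations (degrees)."""
    rows = []
    for az, el in az_el_deg:
        a, e = math.radians(az), math.radians(el)
        rows.append([math.cos(e) * math.sin(a),   # east component
                     math.cos(e) * math.cos(a),   # north component
                     math.sin(e),                 # up component
                     1.0])                        # receiver clock
    G = np.array(rows)
    d = np.diag(np.linalg.inv(G.T @ G))           # covariance shape factors
    return {"GDOP": math.sqrt(d.sum()),
            "PDOP": math.sqrt(d[0] + d[1] + d[2]),
            "HDOP": math.sqrt(d[0] + d[1]),
            "VDOP": math.sqrt(d[2]),
            "TDOP": math.sqrt(d[3])}

# Invented configurations: well-spread satellites versus a tight cluster.
good = dops([(0, 80), (30, 20), (150, 20), (270, 20), (90, 40), (210, 40)])
poor = dops([(10, 70), (20, 72), (30, 68), (40, 75), (25, 71), (15, 69)])
print(good["PDOP"] < poor["PDOP"])  # True: clustered satellites inflate DOP
```

Multiplying HDOP by an assumed pseudorange error (say 5 m) gives the rough horizontal accuracy estimate the paper describes, which the expert can compare against the claimed precision of a logged fix.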
Knowledge of the receiver environment can also be used by the expert to predict the effect of DOP. For instance, in an urban canyon, blockage of low satellites can be expected, leading to an expectation of higher error.