Journal of Forensic Research
ISSN: 2157-7145

Reason and Proof in Forensic Evidence

Kola Abimbola*

School of Law and Department of Chemistry, University of Leicester, Leicester, LE1 7RH, United Kingdom

*Corresponding Author:
Kola Abimbola
Department of Law and Forensic Science
The University of Leicester, University Road
Leicester, LE1 7RH, United Kingdom
E-mail: [email protected]

Received Date: November 26, 2012; Accepted Date: December 29, 2012; Published Date: December 31, 2012

Citation: Abimbola K (2013) Reason and Proof in Forensic Evidence. J Forensic Res S11:006. doi: 10.4172/2157-7145.S11-006

Copyright: © 2013 Abimbola K. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

This paper examines the process by which forensic investigators generate, discover and configure evidence in pre-trial legal processes. Contrary to the received view, in which forensic evidence is regarded as an archetype of objectivity, I argue that the validity, adequacy and persuasiveness of forensic proof depend upon elements of Reason that forensic investigators acquire by way of belief and individual interpretation of experience. Using the examples of fingerprint identification and the analysis of evidence by police officers during the early stages of crime investigations, I argue that forensic evidence is to a large extent subjective.

Keywords

Forensic evidence; Reason; Stories; Generalization; Subjectivity

Introduction: Evidence, Reason and Proof

Over the last century, and especially during the last fifty years, the study of evidence in law has shifted from an overt focus on what John Henry Wigmore called “the trial rules of admissibility” [1] to theorizing about the nature, properties and uses of evidence and inference in legal contexts, which Wigmore called “the logic of proof.” The significance of this transformation cannot be over-emphasized. Admissibility is about jurisdiction-specific rules: bad character evidence, similar fact evidence, presumption of competence, and presumption of innocence are good examples of such rules. These rules of admissibility are “arbitrary” in the sense that they are subject to change over time; they differ from jurisdiction to jurisdiction; and they “imprison the human mind” by placing unnatural exclusionary fetters on the freedom to draw conclusions from certain types of evidence. The logic of proof, however, is the study of “objective” principles that are “valid” (or at least “correct”) across all jurisdictions. These principles, if we are successful in correctly identifying them, are immutable in the sense that they are applicable across time, jurisdictions, and legal cultures.

In 1986, Richard Lempert described this revolutionary shift from trial rules of admissibility to the principles of logical proof as follows: “Evidence is being transformed from a field concerned with the articulation of rules to a field concerned with the process of proof ... and disciplines outside law, like mathematics, psychology and philosophy are being plumbed for the guidance they can give” [2]. The New Evidence Scholarship (as this new movement was called) was born, and evidence in law became interdisciplinary. William Twining, Terence Anderson, David Schum, Richard Eggleston, John Jackson, Peter Tillers, Ronald Allen, Michael Pardo, and a host of other scholars have produced interesting theoretical, philosophical, socio-legal, and context-rich works on the theoretical foundations of inference and proof in law.

Notwithstanding the significance of these works, this essay will argue that there is an important dimension to the study of forensic evidence that has for far too long been understudied; namely, the role of Reason in legal evidence. The word “reason”, of course, has numerous usages, and to an extent some recent works have touched on the nature and function of “reason” in forensic evidence [3,4]. However, there is one particular sense of Reason (a sense that has featured prominently in the philosophical literature [5-8], and one that can illuminate our understanding of forensic evidence) which has not made any important inroads into the extant forensic literature. This sense of the word, which is typically distinguished from other senses by writing it as Reason, designates the mental capacity, faculty or function by which the human mind grasps, configures, or connects its beliefs about truth and falsehood.

As David Schum and others have emphasized, we need to distinguish E (the occurrence or non-occurrence of an event) from E* (someone’s testimony that an event did or did not occur). Suppose, for example, that a fingerprint examiner, Ian Williams, testifies that Elliot’s fingerprints were recovered from a crime scene, and designate this testimony E*. Williams’ testimony to event E is not the same as event E itself: the mere fact that Williams testifies to E does not provide conclusive evidence of the occurrence of E. Perhaps Elliot’s fingerprints were not recovered at the crime scene at all and Williams, in collusion with the police, had simply planted the evidence. Or perhaps Williams is mistaken in his identification of the fingerprints. Simply put, E (the occurrence or non-occurrence of an event) is distinct and distinguishable from E* (someone’s claim that event E occurred).

Indeed, in the legal context it is inconsistent to equate E with E*, for in the legal setting there are always two sides to a trial. Typically, the prosecution wants the judge or jury to infer E from a series of evidence presented, but the defence will want the judge or jury to infer not-E, which henceforth will be written as Ec. [It is important to note that both E and E* (in bold characters) are compounds which could take either of two forms. E could be: E (the occurrence of the relevant event), or Ec (the non-occurrence of the event). Also, E* could be either E* (someone’s testimony that event E occurred) or E*c (someone’s testimony that not-E is true, i.e. the claim that event E did not occur). I will use the bold letters E and E* for the compounds, but non-bold characters E, Ec, E*, and E*c when I need to specify precisely what is being asserted of the event.]

Suppose E* is equated with E. If this equation holds, then whenever a witness testifies that E, we will have to take that testimony as conclusive evidence for the occurrence of E. If this jump from a witness’s testimony that E to the occurrence of E is legitimate, then whenever a prosecution witness asserts E, we would be justified in asserting that E occurred. But by the same token, we should be justified in asserting Ec whenever a defence witness asserts Ec. This would be contradictory, because E and Ec (in the same trial, and when asserted of the same piece of evidence) cannot both be conclusively true: to assert that (E&Ec) is true is a contradiction. Hence it is logically contradictory to equate E* with E.
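The structure of this reductio can be set out semi-formally. The following sketch is my own rendering of the argument in the paper’s E/E* notation; the arrow stands for the (assumed) entailment of the event by the testimony:

```latex
% Reductio of the assumption that testimony is conclusive of the event.
\begin{align*}
&(1)\quad E^{*} \rightarrow E      && \text{assumption for reductio: prosecution testimony entails } E\\
&(2)\quad E^{*c} \rightarrow E^{c} && \text{the same assumption applied to defence testimony}\\
&(3)\quad E^{*} \wedge E^{*c}      && \text{both testimonies are in fact given at trial}\\
&(4)\quad E \wedge E^{c}           && \text{from (1)--(3) by modus ponens}\\
&(5)\quad \bot                     && \text{(4) is the contradiction } (E \,\&\, E^{c}) \text{; reject (1) and (2)}
\end{align*}
```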

The thrust of the foregoing is this. Whenever we are urged that, due to certain considerations, we have legitimate grounds for inferring E, we are well advised to distinguish E* (the evidence proffered) from E itself. For instance, it is on the basis of Ian Williams’ testimony that the prosecution will be urging the jury to infer that Elliot’s fingerprints were in fact found at the scene of the incident. Williams could be mistaken or lying about the fingerprints. And indeed the jury itself has not seen these fingerprints. But even if the jury did visit the scene of the crime and did see fingerprints, all they would see are fingerprints alleged to be Elliot’s. It is impossible for them to see Mr Elliot in the process of leaving the fingerprints. Moreover, even if there was a video recording of someone who looked like Elliot at the crime scene (perhaps the burglar did not realise that the premises were under constant CCTV recording), this would still be inconclusive evidence that the fingerprints were Mr Elliot’s; after all, video pictures can be altered and doctored in various ways by an appropriate expert.

The foregoing has an important implication that should not be overlooked. If we cannot equate E with E*, then, in evidential reasoning, whenever we move from evidence proffered to a conclusion, there is at least one intermediate stage of reasoning involved in the process. This intermediate stage is supplied by the fact finder when s/he infers E from E*. For the fact finder must be able to make a chain of reasoning connection between the testimony to the event and the occurrence of the event itself if s/he is to conclude (on the basis of the testimony) that the event did in fact occur. In other words, in the legal process, we are always being urged to infer C (the conclusion) from E, in a context where E itself is an inference from E*. This is what we mean by cascaded, catenated, hierarchical or multistage reasoning.
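The multistage structure can be made concrete with a toy probability calculation. This is a minimal illustrative sketch, not a method advanced in this paper: it assumes that the conclusion C bears on the testimony E* only through the event E (a chain simplification), and every numerical value is invented purely for illustration.

```python
# Toy sketch of cascaded (two-stage) inference: E* -> E -> C.
# Assumes C depends on E* only through E (a Markov-chain simplification);
# all probabilities below are illustrative assumptions, not real data.

def cascade(p_e_given_estar: float,
            p_c_given_e: float,
            p_c_given_not_e: float) -> float:
    """P(C | E*): the force of the testimony on the conclusion,
    obtained by chaining the two inferential links."""
    p_not_e = 1.0 - p_e_given_estar
    return p_c_given_e * p_e_given_estar + p_c_given_not_e * p_not_e

# Williams' testimony is strong but not conclusive evidence for E,
# and E in turn is strong but not conclusive evidence for C.
print(cascade(p_e_given_estar=0.95, p_c_given_e=0.80, p_c_given_not_e=0.05))
# -> 0.7625: weaker than 0.80, because E is itself only inferred from E*.
```

The point of the sketch is simply that each extra stage in the cascade dilutes the force of the evidence, which is why the warrant for each link matters.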

But how do we perform the mental act of moving from one link in the chain to another before we arrive at the judgment E or C? What provides the warrant, support or justification for the inference from one stage of reasoning to another? What makes the mental process of drawing a conclusion C from certain items of information justified? A cascaded chain of reasoning connection must depend upon some X for its coherence; and this X cannot itself be an inference (i.e. an item on the chain of reasoning connection). For, if it were, we would have a new chain of reasoning that itself would require overall coherence, and some Y to warrant the perception of how each item on the chain links up with the next one, just as the initial chain of reasoning connection itself required an X. Inferences are connectors, and we require something that is not a connector to understand how and why facts should or should not connect. For this, we need Reasons.

Reason, in this special sense, is an arbiter of human judgment. For the formation of judgments about the external world is a human capacity that depends on our ability to grasp two things: (i) how each step in a chain of reasoning links up with the next step; and (ii) our perception of the coherence and unity of the whole chain itself. The grasping and understanding of these two functions cannot itself be inferential.

Differently put, it is Reason that arbitrates our judgments about the external world all the time. I see a gunwoman point her rifle at the bank teller and I judge that she is an armed robber. You see Jake standing over the body of Peter, rummaging through the corpse’s pockets with one hand and holding a blood-stained knife in the other; and you judge, without any second thoughts, that Peter was stabbed by Jake. A juror listens to the defense lawyer demolish eyewitness Abdul on the stand by showing that Abdul cannot distinguish between an Irish, a Welsh, a Russian, an American, or an English accent; and the juror concludes that Abdul couldn’t have heard what he claimed he heard. One of the centerpieces of Kant’s legacy to the intellectual world is his claim that human cognitive judgment relies on categories like those of “cause and effect” that arbitrate, judge and order all our sensory impressions. It is only when our claims, beliefs, and other cognitive states conform to these formal pre-conditions of knowledge (i.e., categories) that we can avoid confusion and acquire knowledge and truth about the external world. And herein lies the significance of Reason to forensic evidence: grasping the connections between inferences, and understanding the meaningful totality of chains of inferences, are both processes of human cognition that cannot be based on the factual appearances which present themselves to the senses.

Suppose, for instance, you begin to doubt your initial belief that Jake stabbed Peter. To ascertain whether this belief is true or false, you will need to inquire about how this belief of yours “connects-up” with your other beliefs and other judgments, and you may later discover that Jake was a tramp who lived on the streets and that he has a fetish for dead bodies. Or, perhaps, you discover that Abdul was a refugee who managed to escape the deadly prison camps of Congo and that he becomes incoherent, scared and confused in front of authority figures, especially legal personnel. The positivistic focus on descriptions of facts (and of how the facts connect up) needs to be augmented with an understanding of how background assumptions increase our knowledge of the nature of reality.

Much of current scholarship has focused on: the nature of legal evidence (e.g. the credibility, weight, and sources of evidence); the role of inference in transferring truth from evidence to conclusion in legal proof (e.g. Pascalian gradations of the force of evidence; inferential networks; posterior and prior probability measures); and the nature of proof in law itself (probability diagrams; abductive reasoning; Bayes’ theorem, etc.). Very little attention has been paid to the objects, categories, or structures that unify the different elements of evidence, inference and proof in the human thinking process itself, namely Reason. Yet, without Reason, none of these aspects of understanding would be possible.

Reason in Forensic Evidence

Following Wigmore [1] and Bradley [9], I take inference to be “the process of thinking about a piece of evidence, not the result.” Inference signifies the thought process by which humans extract and transfer truth from evidence to conclusion. It does not describe the piece of evidence that is the basis for thinking, nor does it describe the end result arrived at; rather, it is the mental act of drawing conclusions from the information. Proof, on the other hand, is concerned with establishing the fact(s)-in-issue at a legal forum charged with this specific task. And the logic of proof is concerned with evaluating the rational adequacy of the arguments advanced in support of decisions and conclusions drawn from these facts. Inference is central to this process; and proof is about how we establish the correctness of legal judgments. In contrast to the standard approach to proof in law, in which inference is central, the subject of this essay is the underlying “property” of cognitive judgment that is “fixed,” “constant,” and “invariable” when the human mind thinks, which I will call Reason. I will consider two types of Reasons, namely stories and generalizations.

Stories

Paul Ricoeur defines a story as “… a narrative of particular events arranged in a time sequence and forming a meaningful totality” [10]. All four elements of this definition (narrative, particular events, time sequence, and meaningful totality) must be present before we can claim to have a story. Thus: “The Joneses were burgled then Elliot was arrested” is not a story; it is merely a chronological statement. But: “The Joneses were burgled and Elliot was arrested because he matched the description of the burglar” is a story, because the events in the chronology are meaningfully connected.

Stories, thus defined, abound throughout the forensic process. Consider again the case in which Elliot was arrested for burglary because he matched the description given to the police by Mrs. Jones. In interrogating Elliot, his story (if he decides to answer questions put to him by the police) will be assessed and evaluated against the background of Mrs. Jones’ narrative of events before a decision to charge him is made. These two stories (i.e., Mrs Jones’ and Elliot’s) will provide the interpretative matrix on which the police will base their decisions.

Police officers must have their own stories before they decide to charge a suspect. Judges usually incorporate stories as an integral part of their judgments. Stories are also crucial to the opening and closing statements of attorneys during trials, and jurors also make use of stories in their evaluation of fact and evidence.

Stories perform an indispensable role in the human configuration of evidence because reasoning from fact to proof requires a conceptual organization of events into chronological narratives. Without chronological arrangements of events forming a meaningful totality, humans would be unable to comprehend connections between various items of evidence. Stories, therefore, function not as inferences, but rather as inferential warrants enabling fact investigators to link different items of evidence into a meaningful totality.

Generalizations

There are at least three types of generalizations employed in forensic reasoning: scientific generalizations, general knowledge generalizations, and case-specific generalizations.

Scientific generalizations are those generalizations that are established by scientific knowledge. These would include statements such as ‘smoking causes cancer’; ‘if the force exerted on a particle of mass m is f, then that particle’s acceleration is f/m’; ‘a fingerprint identification is valid if there are at least 16 points of comparison between the print recovered from the crime scene and the suspect’s prints (as long as there is no point of disagreement).’

General knowledge generalizations are information-based claims accepted as true or reliable. They are often founded upon cogent and coherent evidence. Consider, for instance, the following generalizations: “Palm trees, rain, and high humidity are common in Miami, Florida; transactions in securities traded on the New York Stock Exchange are accurately summarized in the Wall Street Journal; most pubs in England are affiliated in some manner with a brewery” [11]. Even though we may not be able to state off-hand the justification or authority for these generalizations, they are all founded upon cogent and well-founded evidence of some sort.

General knowledge generalizations may also be founded upon introspection, indoctrination, moral education, religious beliefs, etc. When they are, they may not be based upon evidence at all. Thus many may accept the general knowledge generalization that anyone seen running away from the scene of a stabbing with the bloody knife in his hands is likely to be guilty of murder, but I doubt whether there is any hard statistical evidence in support of this generalization. It is therefore more likely than not that this generalization is based upon introspection and general beliefs about the behavior of humans.

Case-specific generalizations are those general assumptions that are operative in a particular legal case. For instance: “All through their marriage, Peter dominated Margaret.” “The cleaners and kitchen staff at Tacoma University were predominantly African American.” “Macrosoft regularly discriminated against women in its employment practices.” Although these generalizations may not be explicitly stated in the legal cases in which they are operative, they are the sorts of generalizations that lawyers and investigators may rely on in discrimination cases. Some of these generalizations may be supported by evidence present in the case in question, or they may be the product of the pre-established beliefs of the fact-investigator.

It should be noted that the classification of generalizations into three types is primarily for the purposes of analysis. In real life, it might be difficult to pigeonhole a generalization into one and only one group. Consider, for instance, the role of generalizations in the identification of firearms. According to Bonfanti & Kinder [12]:

Firearms identification has relied on the hypothesis that there is a unique signature left by a firearm on the elements of the fired round, i.e. the striation marks on the bullet or [a number of] marks on the cartridges. The hypothesis assumes a unique combination of striation lines or other impressions so that the probability of finding two identical sets of marks is practically zero. These traces originate either from the close contact of the bullet or the cartridge case with one or more parts of the firearm, or from the dynamic processes during the firing. During this contact, an imprint is made of the imperfections of the firearm, resulting from either its manufacture or its continual use.

Bonfanti & Kinder point out that, although striation marks on gun barrels are caused by the process of manufacture, firearms identification is predicated upon the assumption that even guns from the same manufacturing batch (i.e. consecutively-manufactured firearms) leave different striation marks on bullets and cartridges. Bonfanti & Kinder evaluated over fifty different studies in which this generalization was put to the test; and in all of them, the researchers claimed that the generalization was confirmed. Conclusions such as: “The traces left by each barrel are individual”; “each barrel has a distinctive and separate individuality”; “the striation marks on the bullets allow the identification of the weapons which shot them,” etc. were reached by each of these empirical tests. On the basis of these conclusions, the generalization that ‘a unique “signature” is always left by a firearm on the bullet or cartridge case’ is regarded as justified. Hence, in identifying bullets, firearms investigators assume that there is a unique one-to-one mapping between guns and the bullets (or cartridges) fired from each gun. This generalization is clearly a scientific generalization: as the Bonfanti & Kinder review indicates, the claim has been subjected to test after test by researchers in numerous experiments. But the claim also functions as a general knowledge generalization, for firearms investigators, police officers, and everyone interested in ballistics will accept the generalization as true and reliable, thereby making use of the information in their inferential tasks without caring much about how the scientific generalization was established. Until there is a need to question a conclusion arrived at on the basis of a scientific generalization of this sort, it will simply be an essential part of the culture of the fact investigators who rely on it.
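A toy calculation shows the quantitative reading such a generalization licenses. Purely for illustration (the figures are my own assumptions, not data from Bonfanti & Kinder): if two different barrels could coincide at each of k independent mark positions with probability p, then

```latex
P(\text{all } k \text{ marks coincide by chance}) \;=\; p^{k};
\qquad \text{e.g. } p = 0.5,\; k = 30 \;\Rightarrow\; (0.5)^{30} \approx 9.3 \times 10^{-10}.
```

The “practically zero” probability thus rests on the uniqueness and independence assumptions built into the generalization, not on an exhaustive examination of firearms.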

Strictly speaking, all generalizations are ‘theoretical’ statements in the sense that they are constructs or postulates that are regarded as true or false by those who make use of them. Further evidence, experiments, or explanations might show a generalization to be incorrect. Thus the lawyer who advances the case-specific generalization that an employer discriminates against women because (i) a clearly less qualified male applicant was offered the job, and (ii) there is no female employee in a workforce of 100, could be mistaken. For it could well be that the lawyer’s client (who also happens to be the defendant’s very first female applicant) got a particularly bad reference from her referee. My point is that the thesis of the inter-dependence of theory and fact holds true for generalizations: although generalizations are often built up on the basis of facts, they are theoretical constructs in the sense that they often go beyond the data.

Generalizations are important to legal reasoning because they provide backing for our arguments in the sense that the move from one item of evidence to another in cascaded inferences requires generalizations. The backing provided by generalizations can be classified into three broad types:

Generalizations and the formation of hypotheses

I will use an example from one of Sherlock Holmes’ investigations to illustrate the role of generalizations in the formation of hypotheses.

A prized horse, Silver Blaze, the favorite for the Wessex Cup, had been stolen from its stable in the middle of the night, and the trainer, who was also the stable master, was found dead on the heath. The trainer had suffered a rather gruesome death.

His head had been shattered by a savage blow from some heavy weapon, and he was wounded on the thigh, where there was a long, clean cut, inflicted evidently by some sharp instrument. It was clear … [to the police], that Straker [the trainer] had defended himself vigorously against his assailants, for in his right hand he held a small knife, which was clotted with blood up to the handle [13].

Although the police had a few suspects in mind, their investigation had not produced any significant leads until Sherlock Holmes began his own investigation. After Holmes had made some preliminary investigations by questioning the stable lads, Inspector Gregory noticed that Holmes’ “attention had been keenly aroused.” So he asked Holmes:

“Is there any point to which you wish to draw my attention?”

“To the curious incident of the dog in the night-time” (replied Holmes).

“The dog did nothing in the night-time.”

“That was the curious incident,” remarked Sherlock Holmes [13].

While questioning the stable lads, Holmes had discovered that a trained watchdog had been in the stable guarding the horse on the night it disappeared, and that the stable lads had been asleep in the stable loft. But curiously, the dog did not bark at all while the horse was being taken out of the stable. Relying intuitively upon some general knowledge generalizations about the behavior of guard dogs, Holmes formulated the hypothesis that the murdered stable master must have been involved in the disappearance of the horse. As Holmes himself put it:

… I grasped the significance of the silence of the dog, for one true inference invariably suggests others. … a dog was kept in the stables, and yet, though someone had been in and had fetched out a horse, he had not barked enough to arouse the two lads in the loft. Obviously the midnight visitor was someone whom the dog knew well [13].

Having relied upon the generalization about the behavior of dogs to arrive at the hypothesis that the stable master was involved in the crime, Holmes began to seek further evidence to support his hypothesis. Holmes discovered that the trainer was heavily in debt and that he had placed a bet against Silver Blaze in the cup. Further investigation also led Holmes to the conclusion that the horse had killed the trainer while he was trying “to make a slight nick upon the tendons of …[the] horse’s ham, and to do it subcutaneously, so as to leave absolutely no trace. A horse so treated would develop a slight lameness, which would be put down to a strain in exercise or a touch of rheumatism, but never foul play” [13].

To perform this delicate nick on the horse, the trainer “had got behind the horse and had struck a light; but the creature, frightened at the sudden glare, and with the strange instinct of animal feeling that some mischief was intended, had lashed out, and the steel shoe had struck Straker (the trainer) on the forehead… and so he fell, his knife gashed his thigh” [13].

Holmes relied upon a series of general knowledge generalizations in solving this case. First, it was on the basis of his generalization about the behavior of guard dogs that he began investigating the trainer as a suspect. Holmes further relied upon another generalization about the behavior of humans to explain the trainer’s motive for taking the horse out of the stable. Holmes also had to rely upon a generalization about the behavior of animals to explain why the horse lashed out with its hoof.

Holmes’ conclusion that it was the trainer who took the horse from its stable was eventually corroborated by evidential facts. First, there was the small surgical knife that had been found in the dead man’s hand. Although the police had assumed that the trainer used the knife in self-defence, Holmes’ hypothesis was the better one, because the knife would have been an ineffective weapon; moreover, Holmes’ theory explained why the trainer had such a knife on him to start with. Second, prior to the disappearance of the horse, three sheep from the paddock on the farm had gone lame. Holmes’ hypothesis explains the lameness of the sheep: the trainer must have been practising the tendon-nicking technique on the sheep. Third, Holmes’ hypothesis accounted for the disappearance of the horse: the horse had not been stolen after all. Holmes’ explanation was that, after being frightened by the stable master, the horse had bolted and headed for a nearby stable on the other side of the moor.

Generalizations as gap-fillers

Generalizations also function as gap-fillers because they are sometimes relied on when concrete evidence is lacking. For example, in a letter to The Honolulu Advertiser on 5 December 1974, Vincente Romero, the Consul General of the Philippine Consulate General, advanced the following argument:

As an academic, Professor Benedict J. Kerkvliet has given himself away as biased and unscientific … it is pathetic to see Professor Kerkvliet, a non-Filipino, deploring political and social conditions in a foreign country like the Philippines when his own country calls for social and moral regeneration.

In this argument, the Consul General relied upon an unstated generalization about foreigners in drawing his conclusion that Professor Kerkvliet is “biased and unscientific.” He did not offer any evidence whatsoever in support of his claims about the Professor’s prejudice. The argument simply relied on the generalization that foreigners are unable to view issues from the perspective of an insider who understands the intricacies of the local issues.

Generalizations as glues: The example of fingerprint identification

Although the use of fingerprinting as evidence can be traced back to the reign of Hammurabi (1792-1750 BC) in Babylon, historical evidence suggests that the current technique of fingerprint identification is quite similar to those used by the Assyrians and the Chinese (from about 300 BC onward) in authenticating legal documents. The usage of fingerprints as evidential fact was not accepted in England until July 1901, and it was not until 1902 that there was the first criminal conviction on the basis of fingerprint evidence: that of Harry Jackson, for burglary. Various theoretical hurdles had to be overcome before the theory of fingerprinting became acceptable within the legal context in England and Wales. Scientists and methodologists like William Herschel, Henry Faulds and Edward Henry were crucial to the early development of fingerprinting. Henry Faulds, for instance, established the point that fingerprints do not change with age, and Edward Henry devised a method for the cataloguing and easy retrieval of prints on record.

The uniqueness of fingerprints is now accepted as one of the most important ‘evidential facts’ in the identification of suspects and individuals. But as widely accepted as it now is, fingerprinting is actually founded upon theoretical assumptions that transcend the data. Not all humans have been individually fingerprinted in an effort to cross-check whether two or more individuals could share identical fingerprints; indeed, probably less than ten per cent of humans have ever been fingerprinted. Hence the acceptance of fingerprinting within the legal process is, strictly speaking, based upon theoretical generalizations about humans. The hard evidence we have about the uniqueness of fingerprints covers only an insignificant percentage of the world’s population. Nonetheless, the success of fingerprinting within the legal process relies upon generalizations about the uniformity of nature.

One could identify at least three of these uniformity-of-nature assumptions. The first is the general presumption that every individual has a unique print pattern that remains unchanged over time. Without this assumption it would be impossible to generalize from a small sample to the whole group of humans, past, present and future.

The second uniformity generalization is based upon Edmond Locard’s principle that “every contact leaves a trace.” This generalization is the foundation of all forensic science. Forensic science is carried out on the belief that whenever any individual is at a scene, she will leave some material of some sort (fingerprints, hair samples, shoe prints, skin tissue, gunshot residue, dandruff, etc.) which is recoverable by one scientific method or another.

The theoretical nature of evidential assumptions about the uniqueness of fingerprints is further exhibited by the very nature of the process of matching prints recovered from crime scenes with samples taken off suspects. Fingerprint patterns are divided into three general types, viz., arches, loops, and whorls; these describe the three most common print patterns found on human fingers. (All three types are further sub-divided into various sorts. For instance, arches are either ‘plain’ or ‘tented’; whorls may be ‘elliptical’, ‘composite’, ‘twin’, ‘lateral pocket’ or ‘accidental’.) This tripartite classification of all human print patterns is the third uniformity assumption we can identify.

In all three types of fingerprints (and their sub-divisions), recognizable ridge characteristics are designated as points. For instance, in loops and whorls, the spots on fingerprints where a ridge ‘divides’, ‘ends’, ‘stands alone’, or forms a ‘lake’ are points (Figure 1) [14]. Characteristics designated as points are what experts look for when they identify a recovered print as corresponding to the sample taken from a suspect. In England and Wales, a rule was adopted in 1953 to the effect that, in fingerprint identifications, there must be at least 16 points of comparison (and no point of difference) between the recovered print and the sample print taken off a suspect. In some jurisdictions there are no set rules on the number of points required.


Figure 1: Characteristics of Fingerprints.

The epistemological significance of “points” in fingerprint identification should not be overlooked, for establishing that a recovered print corresponds to that of a suspect is anything but an indubitable fact. Even when a recovered print is crystal clear and unsmudged, what an expert in a non-UK jurisdiction regards as an authentic identification may differ from what is accepted in the United Kingdom. And indeed, prior to 1953, there was no uniform standard of identification throughout the UK. In August 1924, Scotland Yard changed from the 12-point standard of clear point characteristics to the 16-point standard. But the “new” 16-point rule of identification was not binding on other police forces throughout the UK: some continued to use a 12-point rule, while others adopted a non-numerical standard in which the judgment of the expert prevailed irrespective of the number of clear matches between the recovered print and the sample print. It was not until 1953 that a national 16-point standard was adopted in the UK.

Hence an identification which would have been accepted as authentic before 1953 might not be accepted as authentic now. This particular change is not based on the fact that a pre-1953 print was misidentified; rather, it is based upon changes in (theoretical) assumptions about the number of points that recovered and sample prints must share before they can safely be regarded as the same. Any two sets of prints, even when they are known to come from different individuals, could have a number of points in common. What, then, is the safe number of points required for identification? 12, 16, or some higher number? Or should we leave it to the judgment of experts, in which case a smaller number of strategically placed points can be accepted?
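How much the verdict of “identity” depends on the chosen standard is easy to exhibit in code. The following is a deliberately simplified sketch of a numerical point standard; the function, names and figures are my own illustrations, not a real fingerprint-matching algorithm:

```python
# Sketch: the same pair of prints yields different "identifications"
# under different (theoretical) point standards. Hypothetical example,
# not a real fingerprint-matching algorithm.

def identified(points_in_common: int, points_of_difference: int,
               required_points: int) -> bool:
    """Numerical standard: a match requires at least `required_points`
    points of comparison and no point of disagreement."""
    return points_of_difference == 0 and points_in_common >= required_points

# Suppose an examiner finds 14 clear points in common and none in conflict.
common, different = 14, 0

print(identified(common, different, required_points=12))  # True under a 12-point rule
print(identified(common, different, required_points=16))  # False under the 1953 UK 16-point standard
# Under a non-numerical standard, the verdict rests on the examiner's
# trained judgment rather than on any fixed threshold.
```

The prints themselves do not change between the two calls; only the theoretical assumption about what counts as an identification does.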

But the problems do not end here. On 3 April 2000, the United Kingdom adopted a no-point (non-numerical) system. In its announcement of this change in 1998, the National Fingerprint Evidence Project Board made the following statement:

“The change in the manner in which fingerprint identifications will be established and evidence presented from the 3rd April 2000, will not affect the integrity or accuracy of the evidence. What it will do is to enable Fingerprint Experts to give their opinion of the results of comparisons without being fettered by an arbitrary numerical threshold. All fingerprint experts know through training and their own experience that the positivity of identity is established by far less than ‘16 points.’ Additionally, fingerprint experts will be able to use other information resident in a fingerprint which has not historically been used or needed because of the ultra-high numerical threshold. Nevertheless, until the proposed date for change (3 April 2000), the present ‘16 point’ standard will remain in place” [15].

The changes in the manner of fingerprint identification (from 12 points to 16 points, and then to a non-numerical system of identification) succinctly illustrate the point that all evidence, including fingerprint evidence, which is usually regarded as unique and hard-and-fast, requires interpretation and theory. The point, then, is that to fully understand the nature of evidential inference, we need to evaluate, appraise and examine the procedures by which fact investigators generate beliefs, facts, and evidence.

More specifically, we need to be concerned with questions about why, how, and in what ways fact investigators come to hold the beliefs, conclusions and ideas they hold; and these will largely depend on the generalizations they uphold, which enable them to grasp their chains of reasoning connections. These are all epistemological issues about the discovery and generation of evidence, the sorts of issues that traditional treatises on the Law of Evidence do not deal with. But practitioners such as forensic scientists, practicing lawyers, and police officers all recognize the importance of issues about the discovery and generation of facts. All these issues arise during the pre-trial stages of the criminal process. Hence it is imperative for any comprehensive theory of inference that purports to take evidential inferential practices seriously to deal with them.

A general theory of evidence should tackle the full range of epistemological questions that confront legal agents. It should obviously deal with traditional epistemological questions such as: what information can be presented in court; through what means; and how does a court decide whether that information proves that an event happened in a particular way? However, if we are to fully understand the nature of reasoning from fact to proof, we also need to understand the process by which mental states of belief are generated in fact investigators. A theory which claims to study the process of rational proof also needs to understand the process by which proof is produced, sustained and generated by fact investigators. Hence it should also deal with questions such as: How do fact investigators arrive at their beliefs and conclusions? How should they arrive at such beliefs and conclusions? Are the current methods for arriving at beliefs and conclusions adequate? The model canvassed in this paper develops a framework for tackling these psychologistic questions.

Since we are interested in providing a model of reasoning that exhibits the real-life processes of the judicial system, it is prudent not to define legal evidence independently of a proper understanding of the workings of legal processes. Fingerprints, for instance, are not simply “declarations of matters of fact”; nor are they “statements or allegations to be proved in court.” If they were, “dishonest appropriation of property belonging to another,” “malice aforethought,” or some other legal construct would be evidence as well.

The Subjectivity of Evidence: An Example from Police Investigation

The police investigation of the death of Stephen Lawrence provides a good illustration of the role of stories and generalizations in the logic of proof. Stephen Lawrence was murdered by a gang of white youths on 22 April 1993, but because the police initially failed to classify the homicide as a “racist crime,” important facts, evidence and arguments were overlooked.

Right from the inception of the police investigation, there were two rival stories of the case: the story of the police, that the homicide was the result of a failed drug deal; and the story of Duwayne Brooks (a friend of Stephen Lawrence who was with him when the attack occurred), that the homicide was the result of a racist attack.

According to Brooks’ narrative, he and Lawrence were on their way to a bus stop to catch a bus home at around 10:30 pm. As they were approaching the bus stop, Lawrence went ahead of his friend to see whether a bus was coming. Brooks then called out to his friend to ask whether the bus was approaching, and someone from a group of five or six white youths who had been on the other side of the road replied to Brooks with the question: “what, what, nigger?” The whole group then crossed the road, surrounded Stephen, and stabbed him twice, in the chest and arm; he died a few minutes after he was stabbed. Duwayne Brooks ran across the road and away from the scene. The three eyewitnesses to the crime (who had also been waiting for the bus) corroborated Brooks’ story.

The police, however, did not accept the narrative of Brooks and the eyewitnesses. In fact, in their initial investigation, the police assumed that the death was the result of a failed drug deal, and their prime suspect was Duwayne Brooks. Despite the fact that various persons from the general public volunteered information that the “Krays gang” was responsible for the killing, and despite the fact that this information specified that one of the initiation rituals for this gang was the stabbing of blacks and other minorities, the police did not take this information seriously until a month after the incident, when they decided that there was nothing in Lawrence’s and Brooks’ backgrounds to suggest that they were drug dealers.

Stephen Lawrence was killed on 22 April. On 23 April, a letter stating that this gang was responsible for the killing was left at a phone box close to the police station; the letter also disclosed the names of the gang members. On that same day, various people made statements to the police about the attacks. The names of the five suspects featured prominently in these statements, but most of the informants wished to remain anonymous because the whole neighborhood was terrified of the gang.

But perhaps the most important information came from “James Grant” (a pseudonym given to protect his identity). James Grant went to the police station on two occasions: on 23 April (the day after the murder) and on 24 April. The information supplied by James Grant was logged at the police station as “Message 40”:

“A [white skinhead] male attended ‘RM [i.e., Plumstead Police Station] and stated that the persons responsible for the murder on the black youth, are Jamie and Neil Acourt of 102 Bournbrook Road SE3 together with David Norris and 2 other males identity unknown. That the, Acourt Brothers call themselves ‘The Krays.’ In fact you can only join their gang if you stab someone. They carry knives and weapons most days. Also, David Norris stabbed a Stacey Benefield a month ago in order to prove himself. … He then went on to say that a young Pakistani boy was murdered last year in Well Hall, that Peter Thompson who is serving life was part of the Acourts gang. That in fact one of the Acourts killed this lad. They also stabbed a young lad at Woolwich town centre called ‘Lee.’ He had a bag placed over his head and was stabbed in his legs and arms in order to torture him.” [16].

To understand why the initial investigation pursued this line of inquiry, we need to place the investigation in the wider context of police culture. Studies such as Reiner’s [17,18] and Holdaway’s [19] have shown that the police routinely rely on generalizations about black youths as criminals. Reiner, for instance, claims that:

Cain’s and Lambert’s studies of city forces in the early and late 1960s show a clear pattern of rank-and-file police prejudice, perceiving blacks as especially prone to violence or crime, and generally incomprehensible, suspicious and hard to handle. … My own interviews in Bristol in 1973-4 found that hostile and suspicious views of blacks were frequently offered quite spontaneously in the context of interviews concerning police work in general. … One uniform constable summed up the pattern: ‘the police are trying to appear unbiased in regard to race relations. But if you asked them you’d find 90 per cent of the forces are against coloured immigrants. …’[18].

Stereotypical generalizations which seem to have informed the police’s investigation of the murder would include:

All/most/many black male youths are involved in criminal activities.

All/most/many black male youths are unreliable witnesses.

All/most/many black male youths are involved in drug-related crimes.

In the case of Brooks, these three generalizations appear to have been converted into case-specific generalizations that guided the investigation. Despite the facts that Brooks had no criminal record, that he was not known to the police as a criminal, and that there was no evidence to doubt his version of events, the police nonetheless conducted their initial main investigation along the lines of a failed drug deal. The police’s initial inquiry is therefore consistent with an assumption of the sort of stereotypical generalizations Reiner and others describe in their research on police culture.

Moreover, as Holdaway also emphasized, other “colour-blind” generalizations and assumptions were implicit in the police’s investigation of the crime:

The officers were ‘colour blind,’ denying the relevance of racial status of the victims, the racial motivation of the assailants and, therefore, the need for a particular approach to the investigation of the Lawrence murder. The failure of the police officers dealing with the Lawrence case to recognise and accept ‘race’ as a central feature of their investigation is in my view central to the deficiencies in policing identified by Kent Police [19].

The generalization identified by Holdaway also functioned as an action-guiding principle governing the police’s interpretation of the assailants’ motive. Since the police did not accept Brooks’ story of the events, they operated on the assumption that they already understood the assailants’ motive: a dispute in a failed drug deal. This in turn affected the sorts of questions they regarded as germane to solving the crime, for the questions they put to witnesses were all directed at discovering evidence that could implicate Brooks and Lawrence in criminal activities.

Heuristics, Methodology and Algorithm

In what precise manner do the elements of the model described above guide the configuration, analysis and evaluation of legal arguments? To answer this, we need to turn briefly to scientific methodology: the study of the rules and standards for the appraisal of theories. As with most “-ologies”, there are so many different understandings of methodology that it is essential to state precisely what I take the term to mean. One way of clarifying the concept is to distinguish between two different senses of the term, viz., a narrow (i.e. formal, algorithmic) sense of methodology and a broad (heuristic) sense. In its narrow sense, methodology is made up of (more or less) formal principles that provide an algorithm of rational choice. These principles are those that enable traditional philosophers to claim that one scientific theory is, in view of the available empirical evidence, better than its rivals.

These principles (which are mainly principles of deductive logic and probability) are also said invariably to govern theory choice throughout the history of science. That is, for those who maintain that such formal principles of logic operate in science, it is the same set of formal principles that has been in operation throughout the whole history of science: past, present and future. It is precisely because these principles are invariable that epistemologists are able to deliver the judgment that one scientific theory is objectively better than its rivals. The photon theory of light is better than the wave theory of light for exactly the same sort of reason that the wave theory of light was better than the corpuscular theory of light.

Of course, philosophers have hotly disagreed about how correctly and exactly to characterize these formal principles. Nonetheless, once an epistemologist has succeeded in identifying the correct (or true) principles for the logical appraisal of theories, these principles are valid for all times: past, present and future. For example, philosophers like Sir Karl Popper and his followers fervently believed that they had hit upon one such principle, which they called falsificationism. (However, most contemporary philosophers dispute the adequacy of falsificationism as a scientific method.)

But philosophers such as Thomas Kuhn and Larry Laudan reject the idea of formal, invariable methodological principles. For these philosophers, there are no invariant principles of theory appraisal, because a scientist’s ‘criteria’ of choice are intimately connected to her belief system. Thomas Kuhn, for instance, used the term paradigm to refer to the “strong network of commitments – conceptual, theoretical, instrumental, and methodological” [20] assumed by scientists. These commitments “provide scientists not only with a map but also with some of the directions essential for map-making. In learning a paradigm the scientist acquires theory, methods, and standards together, usually in an inextricable mixture” [20].

Whenever these assumptions are relied upon in scientific research (according to proponents of the broad approach to methodology), they perform a dual role. On the one hand, they function as substantive claims which make specific assertions about the nature of the world (e.g. light is a wave-like disturbance in a medium; phlogiston is emitted into air during combustion; events in nature are deterministic). On the other hand, these assumptions also perform heuristic, action-guiding roles in the sense that: (i) they lay down certain requirements about what sorts of explanations, conjectures and theories are admissible within a domain of inquiry (e.g. any new theory of light must explain the wave-like properties of light if it is to be accepted); and (ii) they also specify the kinds of modifications that are acceptable within their domains of inquiry (e.g. for as long as the principle of determinism is accepted, any explanation in fluid mechanics, say, must not rely on indeterministic assumptions).

In short, in the broad conception of scientific methodology, theoretical, metaphysical, and factual assumptions also function in a natural way as positive and negative heuristic principles that guide the further development of science.

The view outlined above is not that Reasons provide an algorithm of choice (that would be the narrow, formal conception). Stories and generalizations do not operate as basic principles and standards for the “correct” appraisal of legal arguments. Rather, they are tools for methodological appraisal in the broad sense: the adequacy of legal arguments is constrained and guided by Reasons.

Conclusion

I have maintained that the validity or adequacy of legal arguments cannot be fully understood if we ignore the Reasons implicit in the investigator’s mind when she reasons from fact and evidence to conclusions. Forensic proof is always conducted on the basis of Reasons like stories and generalizations. These background assumptions, which I have referred to as “Reasons”, are indispensable to forensic evidence. However, the total stock of Reasons available to specific forensic fact-finders varies on the basis of knowledge, qualification, experience, and cultural factors.

Whenever an investigator or fact-finder relies on a set of particular assumptions, these assumptions perform a dual role. On the one hand, they function as substantive claims that make specific assertions about the nature of the world. (In the case of stories, the assumptions involve a temporal ordering of events; in the case of generalizations, we have claims that are taken as applicable to the population of a specified group.) On the other hand, these assumptions also perform heuristic or methodological functions within the legal system: they place certain requirements on the sorts of explanations, hypotheses, facts and arguments that are acceptable as adequate within the legal system. In short, Reasons function as positive and negative heuristic principles that guide the performance of inferential tasks within the legal process. Forensic reasoning is thereby judgmental in the sense that it is, to a large extent, based on subjective personal knowledge, assumptions, impressions, feelings and opinions, just as much as it is based on external objective facts.

References
