The Risks of Cutting Corners in Forensics via Jonathan Desverney @ USA News


With the demise of the internationally renowned public-sector UK Forensic Science Service in 2012 came the emergence and rapid growth of a new competitive marketplace.

Private-sector companies, working within a hastily drawn-up framework for forensic science provision, were invited into rounds of competitive tendering driven by the police service.

These were based on the notion that the Forensic Science Service had been inefficient, delivering forensic science analysis in an expensive and untimely manner.

However, high-quality forensic science provision has always been costly, and the British police service wanted to operate in a new culture of cost reduction and value for money. They wanted full control of their spending, and that is understandable.

But with the rise of competitive tendering, the provision of forensic science was commoditized. Specific pieces of work and tests in each forensic discipline were itemized and bid for by the companies.

Police forces guaranteed the companies specific volumes of testing in order to get the best prices, and began to dictate to the companies which tests they required against the ‘price list’ once potential forensic evidence had been collected from crime scenes.

The police service also decided to undertake certain basic scientific tasks itself. By bringing these in-house, forces could save further money and reduce the burden on their budgets.

This new and cheaper approach has been in place for nine years and has drawn comment from critics and supporters alike. In 2018 one major player in the new UK forensic market, Key Forensic Services Ltd, collapsed. It had won a significant share of the available forensic science work but could not sustain the service.

Many working in forensic science warned that the quality of expert analysis and interpretation would suffer: scientists would no longer be able to refer their findings to colleagues across overlapping disciplines, losing the holistic approach that draws the best evidence from the forensic samples presented.

This would inevitably mean lost opportunities to contextualize evidential findings for use in the justice process.

In addition, the fragmentation of the industry has seen many expert scientists set up their own niche services and struggle to find regular work. Some left the industry altogether.

There has never been a properly constituted academic analysis of what these changes have meant to UK forensic science provision and what the impact has been.

Until now.

In a comprehensive and thorough six-year research programme, Dr Karen Richmond from the University of Copenhagen undertook a long and objective period of fieldwork and analysis.

Interviews were conducted not only with forensic scientists, but also with allied institutional agents including senior professional members of the judiciary of England and Wales, Her Majesty’s Inspectorate of Constabulary, the Office of the Forensic Science Regulator, the Crown Prosecution Service, the Royal Society, the UK Accreditation Service, the Metropolitan Police Service, and the Chartered Society of Forensic Sciences.

Her findings are both startling and important. They point to a thoroughly dysfunctional marketplace that has failed to harmonize the array of tests and reports into a homogeneous service offered by each provider to all police forces.

Instead, the exact opposite has occurred, with very different requirements being demanded of providers by each separate police force so that scientists are “reinventing the wheel” for each customer.

Furthermore, the scientific strategy for the analysis and reporting of forensic samples is set by police staff with little or no scientific training. They may have undertaken crime scene investigation training, but that does not provide an objective understanding of what might or might not work in each case.

This can result in lost opportunities, as the scientists are often unable to question police requirements and cannot make their own investigative assessments of which science would best be applied.

There has been a market push toward, and perhaps an overreliance on, DNA testing as the “go-to” science, to the detriment of other scientific processes such as the search for, collection, and analysis of fibers.

Indeed there are a host of critical findings that reflect negatively on the way the market has developed. Dr Richmond says in her report:

The data demonstrates not only how government agencies failed to adapt to the introduction of competitive tendering, but also how the market which developed in their wake began to influence, distort and reconfigure the very processes of forensic strategy-setting and analysis.

She goes on to say:

The results offer a compelling insight into the ways in which these agents have adapted to changing relations, shifting priorities, and the imposition of market logics within a sector unaccustomed to the obtrusions of economic efficiency and external regulation.

After nine years there are continuing tensions and frustrations, keenly felt by scientists and by the companies providing forensic science services. These findings should not now be set aside as just another academic study.

Dr Richmond’s work has shown that, in hindsight, the decision to restructure forensic science provision for the UK criminal justice system was flawed in its implementation.

Perhaps it should never have left the public sector in the first place, where, within a government agency, cost considerations would have remained secondary to the need to provide comprehensive criminal justice outcomes.

The US Perspective

In a recent column for The Crime Report, “Why We Need a Federal Forensic Science Agency,” I argued that forensic science provision in the U.S. cannot continue to support unvalidated and often junk science in its courts to the detriment of a fair and just criminal justice system. This remains the case.

However, the UK seems to have also got things wrong: not in the quality of the work done by the forensic providers, which is regulated and accredited to international standards, but in the way the science has been dumbed down by the police.

The emphasis on treating science as a sequence of commoditized testing processes has led to the inability of scientists to properly engage their expertise in support of criminal justice in the way they did when forensic science was delivered as a public sector service.

If the U.S. is eventually to embrace a nationally mandated federal forensic science system, then there are clear lessons to be learned from Dr Richmond’s research.

The application of forensic science to the justice process should be led by independent experts, working in a quality-controlled environment, to provide the best evidence for the courts. A system that allows the police to control the work of scientists, without recourse to the expert opinion of those scientists before the work is carried out, is not the way to go.

Indeed, the UK police approach of seeking quick, cheap results from forensic science may one day mean that the best opportunities to secure a conviction in a high-profile case go out of the window, because other potential evidence is overlooked or never considered on grounds of cost.

This couldn’t happen in the U.S., could it?

Gareth Bryon is a former Detective Chief Superintendent who worked as a senior officer in the South Wales Police and the British Transport Police, where he led major crime investigation and forensic science services for over 30 years.


$7.5M Award paid for BIASED evidence analysis via FOX6Now.com

MILWAUKEE — Robert Lee Stinson spent 23 years in prison for a crime he did not commit. Now, he is finally getting justice for a conviction based on flawed evidence. His long-awaited day in court came amid a national effort to put forensic science on trial.

 

For decades, television shows have conditioned people to believe that investigators can pinpoint a criminal suspect with a shoe print, a tire mark, or a single strand of hair, and that they can do it with absolute certainty. However, the advent of DNA technology has proven that other forensic disciplines, once thought to be bulletproof, are susceptible to error. Those errors have put hundreds, if not thousands, of innocent people in prison.

The conviction and exoneration of Robert Lee Stinson

When Robert Lee Stinson walked out of New Lisbon Correctional Center at the age of 44, his smile revealed a full set of teeth.

“It’s been a long time. Twenty-three years. I was accused of something I didn’t do,” Stinson said when he was released in 2009.

More than two decades earlier, one of those teeth was missing, and that’s all it took to convict him of murder.

“That was essentially the case. The whole case against Mr. Stinson,” said Keith Findley, co-founder of the Center for Integrity in Forensic Sciences.

In fall 1984, the body of 63-year-old Ione Cychosz was discovered in the backyard of a home near 7th and Center. She’d been raped and beaten to death. There were bite marks all over her skin.

“Whoever left these bite marks had some irregular dentition,” Findley explained.

Milwaukee police brought in a dental expert from Marquette University to examine the marks. Dr. L. Thomas Johnson helped police develop a sketch, which showed the killer would likely have a cracked or missing upper right tooth.

“It’s a difficult job,” Dr. Johnson said during a 2007 interview with FOX6 about forensic odontology.

Stinson lived just steps from the crime scene, and had a missing upper right tooth.

“The detectives closed this case after seeing Mr. Stinson,” said Heather Lewis Donnell, Stinson’s attorney since 2009.

The jury never saw the sketch, which showed a different tooth missing than the one in Stinson’s mouth, but they did hear Dr. Johnson say that the bite marks “had to have come” from Stinson. There was no margin for error. A second expert agreed.

“So they were saying, ‘It has to be him,'” Lewis Donnell explained.

She said the level of certainty the dental experts relayed to the jury in 1985 was never supported by the science.

“That they had the ability, their science had the ability to say, ‘It was this person, and only this person,'” Lewis Donnell said.

“It’s really kind of preposterous,” Findley said.

Twenty-three years would pass before Findley and the Wisconsin Innocence Project would prove the doctors were wrong.

“Did you ever think this would come?” a reporter asked Stinson after his 2009 release.

“No, I didn’t. No, I didn’t, but with the help of the Innocence Project — came through,” Stinson responded.

DNA technology would eventually identify the real killer as Moses Price, but Findley said the bite mark analysis that put Stinson away instead was flawed from the start, and more recent research proves it.

“It’s essentially junk,” Findley said.

Questioning bite mark analysis

For more than 50 years, Dr. Johnson was a pioneer in the field of forensic odontology. He led a team of dentists that identified victims of the 1985 Midwest Airlines crash, and he helped police identify the remains of victims dismembered by serial killer Jeffrey Dahmer.

However, Findley said using bite marks to solve crimes is an entirely different process.

“Matching human remains is not the problem. Matching bite marks to a particular individual is a huge problem,” Findley said.

Matching actual teeth to actual dental records is precise, but a growing body of research finds that bite marks left on the skin are unreliable, because skin is a terrible medium for retaining bite mark indentations.

“Because skin is malleable,” Lewis Donnell explained.

“It stretches. It bloats. You bruise in funny patterns,” Findley explained further. “And that’s where the science has completely fallen apart.”

Study after study now questions the validity of bite mark analysis, with one expert calling it “the poster child for bad forensic science.” A 2009 report by The National Academy of Sciences went further, citing “serious problems” across the entire “forensic science system,” from fingerprints to firearms, and footwear to hair comparison.

Changing the face of forensic science

It was that government report and another that followed in 2016 that ultimately prompted Findley to join some of the nation’s leading criminal defense experts in launching The Center for Integrity in Forensic Sciences. The first symposium was held in June at Northwestern University.

“This is one of our inaugural events,” Findley said during the symposium. “We can’t wait for the federal government to fix this.”

“We want to ensure that the science we’re producing is reliable and defendable,” said Jennifer Naugle, deputy administrator of the Wisconsin State Crime Lab.

Naugle said she’s on board with improving the science behind forensic science.

“The only thing we’re trying to do is seek the truth through science. That’s it. That’s really all it is,” Naugle said.

She said a 2016 report by the Obama administration unfairly lumped more reliable techniques used every day, like fingerprint and firearms analysis, together with things like hair and bite mark analysis, which have been largely discredited.

“That’s not something we would ever do at the Wisconsin State Crime Lab,” Naugle said.

“We’re not suggesting that all of the forensic disciplines are useless. They’re not, but what we are suggesting is that they need to be improved,” Findley said.

Dr. Johnson retired in 2013, but the following year, he published his final study on bite mark analysis. It concluded it is sometimes possible to narrow the source of a human bite mark to about 5% of the population. In other words, nowhere near a precise individual match. The FOX6 Investigators contacted Dr. Johnson by telephone, but he is 93 years old and unable to hear well. His wife declined an interview on his behalf.

Now that Dr. Johnson is retired, there is only one board-certified forensic odontologist in Wisconsin — Dr. Donald Simley in Madison. He declined an interview for this story because Dr. Johnson is a close personal friend and mentor. Dr. Simley has not testified in a bite mark case since 2003. While he believes there is still value in this type of evidence, he said police are better off swabbing a bite mark for DNA than trying to match a suspect’s teeth.

Across the country, the Innocence Project has exonerated more than 160 people who were convicted with flawed forensic evidence, including 10 because of bite marks.

“This evidence is dreadful,” said Jennifer Mnookin of the UCLA School of Law during the symposium.

Yet, bite mark evidence is still admissible in most states, including Wisconsin, where, ironically, Stinson’s case still serves as the legal precedent.

“Even though Stinson has now been conclusively exonerated, and the bite mark evidence in his case has been shown to be false,” Findley said.

Robert Lee Stinson seeks justice in federal court

Ten years after Stinson’s release, his federal civil rights case against the dentists and the City of Milwaukee finally went to trial.

“There was a lot of powerful and moving testimony,” Lewis Donnell said.

Just before the case went to the jury, the parties settled out of court. The City of Milwaukee will pay Stinson $7.5 million. Stinson’s attorney said the remaining terms of the settlement — including any amount other defendants have agreed to pay — will remain confidential.

“We’re just really grateful that this is how it ended, and that Mr. Stinson got some measure of justice after all he’s been through,” said Lewis Donnell.

Thirty-four years later, Stinson can finally move on, but the injustice he endured is sure to leave a mark.


via ‘It’s essentially junk:’ $7.5M bite mark settlement underscores national call for better forensic evidence — FOX6Now.com


Forensic Failures Described via Law in Focus @ CSIDDS

Faulty Forensics: Explained

By Jessica Brand

(West Midlands Police / Flickr [CC])

In our Explainer series, Fair Punishment Project lawyers help unpack some of the most complicated issues in the criminal justice system. We break down the problems behind the headlines — like bail, civil asset forfeiture, or the Brady doctrine — so that everyone can understand them. Wherever possible, we try to utilize the stories of those affected by the criminal justice system to show how these laws and principles should work, and how they often fail. We will update our Explainers quarterly to keep them current.

In 1992, three homemade bombs exploded in seemingly random locations around Colorado. When police later learned that sometime after the bombs went off, Jimmy Genrich had requested a copy of The Anarchist Cookbook from a bookstore, he became their top suspect. In a search of his house, they found no gunpowder or bomb-making materials, just some common household tools — pliers and wire cutters. They then sent those tools to their lab to see if they made markings or toolmarks similar to those found on the bombs.

At trial, forensic examiner John O’Neil matched the tools to all three bombs and, incredibly, to an earlier bomb from 1989 that analysts believed the same person had made — a bomb Genrich could not have made because he had an ironclad alibi. No research existed showing that tools such as wire cutters or pliers could leave unique markings, nor did studies show that examiners such as O’Neil could accurately match markings left by a known tool to those found in crime scene evidence. And yet O’Neil told the jury it was no problem, and that the marks “matched … to the exclusion of any other tool” in the world. Based on little other evidence, the jury convicted Genrich.

Twenty-five years later, the Innocence Project is challenging Genrich’s conviction and the scientific basis of this type of toolmark testimony, calling it “indefensible.” [Meehan Crist and Tim Requarth / The Nation]

There are literally hundreds of cases like this, where faulty forensic testimony has led to a wrongful conviction. And yet as scientists have questioned the reliability and validity of “pattern-matching” evidence — such as fingerprints, bite marks, and hair — prosecutors are digging in their heels and continuing to rely on it. In this explainer, we explore the state of pattern-matching evidence in criminal trials.

What is pattern-matching evidence?

In a pattern-matching, or “feature-comparison,” field of study, an examiner evaluates characteristics visible on evidence found at the crime scene — e.g., a fingerprint, a marking on a fired bullet (“toolmark”), handwriting on a note — and compares those features to a sample collected from a suspect. If the characteristics, or patterns, look the same, the examiner declares a match. [Jennifer Friedman & Jessica Brand / Santa Clara Law Review]

Typical pattern-matching fields include the analysis of latent fingerprints, microscopic hair, shoe prints and footwear, bite marks, firearms, and handwriting. [“A Path Forward” / National Academy of Sciences] Examiners in almost every pattern-matching field follow a method of analysis called “ACE-V” (Analyze a sample, Compare, Evaluate — Verify). [Jamie Walvisch / Phys.org]

Here are two common types of pattern-matching evidence:

Fingerprints: Fingerprint analysts try to match a print found at the crime scene (a “latent” print) to a suspect’s print. They look at features on the latent print — the way ridges start, stop, and flow, for example — and note those they believe are “significant.” Analysts then compare those features to ones identified on the suspect print and determine whether there is sufficient similarity between the two. (Notably, some analysts will deviate from this method and look at the latent print alongside the suspect’s print before deciding which characteristics are important.) [President’s Council of Advisors on Science and Technology]

Firearms: Firearm examiners try to determine whether shell casings or bullets found at a crime scene were fired from a particular gun. They examine the collected bullets through a microscope, mark down characteristics, and compare these to characteristics on bullets test-fired from a known gun. If there is sufficient similarity, they declare a match. [“A Path Forward” / National Academy of Sciences]

What’s wrong with pattern-matching evidence?

There are a number of reasons pattern-matching evidence is deeply flawed, experts have found. Here are just a few:

These conclusions are based on widely held, but unproven, assumptions.

The idea that handwriting, fingerprints, shoeprints, hair, and even markings left by a particular gun are unique is fundamental to forensic science. The finding of a conclusive match, between two fingerprints for example, is known as “individualization.” [Kelly Servick / Science Mag]

However, despite this common assumption, examiners actually have no credible evidence or proof that hair, bullet markings, or things like partial fingerprints are unique — in any of these pattern matching fields.

In February 2018, The Nation conducted a comprehensive study of forensic pattern-matching analysis (referenced earlier in this explainer, in relation to Jimmy Genrich). The study revealed “a startling lack of scientific support for forensic pattern-matching techniques.” Disturbingly, the authors also described “a legal system that failed to separate nonsense from science in capital cases; and consensus among prosecutors all the way up to the attorney general that scientifically dubious forensic techniques should not only be protected, but expanded.” [Meehan Crist and Tim Requarth / The Nation]

Similarly, no studies show that one person’s bite mark is unique and therefore different from everyone else’s bite mark in the world. [Radley Balko / Washington Post] No studies show that all markings left on bullets by guns are unique. [Stephen Cooper / HuffPost] And no studies show that one person’s fingerprints — unless perhaps a completely perfect, fully rolled print — are completely different than everyone else’s fingerprints. It’s just assumed. [Sarah Knapton / The Telegraph]

Examiners often don’t actually know whether certain features they rely upon to declare a “match” are unique or even rare.

On any given Air Jordan sneaker, there are a certain number of shared characteristics: a swoosh mark, a tread put into the soles. That may also be true of handwriting. Many of us were taught to write cursive by tracing over letters, after all, so it stands to reason that some of us may write in similar ways. But examiners do not know how rare certain features are, like a high arch in a cursive “r” or crossing one’s sevens. They therefore can’t tell you how important, or discriminating, it is when they see shared characteristics between handwriting samples. The same may be true of characteristics on fingerprints, marks left by teeth, and the like. [Jonathan Jones / Frontline]

There are no objective standards to guide how examiners reach their conclusions.

How many characteristics must be shared before an examiner can definitively declare “a match”? It is entirely up to the discretion of the individual examiner, based on what the examiner usually chalks up to “training and experience.” Think Goldilocks: once she determines the number that is “just right,” she can pick. “In some ways, the process is no more complicated than a child’s picture-matching game,” wrote the authors of one recent article. [Liliana Segura & Jordan Smith / The Intercept] This is true for every pattern-matching field — it’s almost entirely subjective. [“A Path Forward” / National Academy of Sciences]

Unsurprisingly, this can lead to inconsistent and incompatible conclusions.

In Davenport, Iowa, police searching a murder crime scene found a fingerprint on a blood-soaked cigarette box. That print formed the evidence against 29-year-old Chad Enderle. At trial, prosecutors pointed to seven points of similarity between the crime scene print and Enderle’s print to declare a match. But was that enough? Several experts hired by the newspaper covering the case said they could not draw any conclusions about whether the print matched Enderle. But the defense lawyer didn’t call an expert, and the jury convicted Enderle. [Susan Du, Stephanie Haines, Gideon Resnick & Tori Simkovic / The Quad-City Times]

Why faulty forensics persist

Despite countless errors like these, experts continue to use these flawed methods and prosecutors still rely on their results. Here’s why:

Experts are often overconfident in their abilities to declare a match.

These fields have not established an “error rate” — an estimate of how often examiners erroneously declare a “match,” or how often they find something inconclusive or a non-match when the items are from the same source. Even if your hair or fingerprints are “unique,” if experts can’t accurately declare a match, that matters. [Brandon L. Garrett / The Baffler]

Analysts nonetheless give very confident-sounding conclusions — and juries often believe them wholesale. “To a reasonable degree of scientific certainty” — that’s what analysts usually say when they declare a match, and it sounds good. But it actually has no real meaning. As John Oliver explained on his HBO show: “It’s one of those terms like basic or trill that has no commonly understood definition.” [John Oliver / Last Week Tonight] Yet, in trial after trial, jurors find these questionable conclusions extremely persuasive. [Radley Balko / Washington Post]

Why did jurors wrongfully convict Santae Tribble of murdering a Washington, D.C., taxi driver, despite his rock-solid alibi supported by witness testimony? “The main evidence was the hair in the stocking cap,” a juror told reporters. “That’s what the jury based everything on.” [Henry Gass / Christian Science Monitor]

But it was someone else’s hair. Twenty-eight years later, after Tribble had served his entire sentence, DNA evidence excluded him as the source of the hair. Incredibly, DNA analysis established that one of the crime scene hairs, initially identified by an examiner as a human hair, belonged to a dog. [Spencer S. Hsu / Washington Post]

Labs are not independent — and that can lead to biased decision-making.

Crime labs are often embedded in police departments, with the head of the lab reporting to the head of the police department. [“A Path Forward” / National Academy of Sciences] In some places, prosecutors write lab workers’ performance reviews. [Radley Balko / HuffPost] This gives lab workers an incentive to produce results favorable to the government. Research has also shown that lab technicians can be influenced by details of the case and what they expect to find, a phenomenon known as “cognitive bias.” [Sue Russell / Pacific Standard]

Lab workers may also have a financial motive. According to a 2013 study, many crime labs across the country received money for each conviction they helped obtain. At the time, statutes in Florida and North Carolina provided remuneration only “upon conviction”; Alabama, Arizona, California, Missouri, Wisconsin, Tennessee, New Mexico, Kentucky, New Jersey, and Virginia had similar fee-based systems. [Jordan Michael Smith / Business Insider]

In North Carolina, a state-run crime lab produced a training manual that instructed analysts to consider defendants and their attorneys as enemies and warned of “defense whores” — experts hired by defense attorneys. [Radley Balko / Reason]

Courts are complicit

Despite these flaws, judges regularly allow prosecutors to admit forensic evidence. In place of hearings, many take “judicial notice” of a field’s reliability, accepting as fact that the field is accurate without requiring the government to prove it. As Radley Balko of the Washington Post writes: “Judges continue to allow practitioners of these other fields to testify even after the scientific community has discredited them, and even after DNA testing has exonerated people who were convicted, because practitioners from those fields told jurors that the defendant and only the defendant could have committed the crime.” [Radley Balko / Washington Post]

In Blair County, Pennsylvania, in 2017, Judge Jolene G. Kopriva ruled that prosecutors could present bite mark testimony in a murder trial. Kopriva didn’t even hold an evidentiary hearing to examine whether it’s a reliable science, notwithstanding the mounting criticism of the field. Why? Because courts have always admitted it. [Kay Stephens / Altoona Mirror]

Getting it wrong

Not surprisingly, flawed evidence leads to flawed outcomes. According to the Innocence Project, faulty forensic testimony has contributed to 46 percent of all wrongful convictions in cases with subsequent DNA exonerations. [Innocence Project] Similarly, UVA Law Professor Brandon Garrett examined legal documents and trial transcripts for the first 250 DNA exonerees, and discovered that more than half had cases tainted by “invalid, unreliable, concealed, or erroneous forensic evidence.” [Beth Schwartzapfel / Newsweek]

Hair analysis

In 2015, the FBI admitted that its own examiners presented flawed microscopic hair comparison testimony in over 95 percent of cases over a two-decade span. Thirty-three people had received the death penalty in those cases, and nine were executed. [Pema Levy / Mother Jones] Kirk Odom, for example, was wrongfully imprisoned for 22 years because of hair evidence. Convicted of a 1981 rape and robbery, he served his entire term in prison before DNA evidence exonerated him in 2012. [Spencer S. Hsu / Washington Post]

In 1985, in Springfield, Massachusetts, testimony from a hair matching “expert” put George Perrot in prison — where he stayed for 30 years — for a rape he did not commit. The 78-year-old victim said Perrot was not the assailant, because, unlike the rapist, he had a beard. Nonetheless, the prosecution moved forward on the basis of a single hair found at the scene that the examiner claimed could only match Perrot. Three decades later, a court reversed the conviction after finding no scientific basis for a claim that a specific person is the only possible source of a hair. Prosecutors have dropped the charges. [Danny McDonald / Boston Globe]

In 1982, police in Nampa, Idaho, charged Charles Fain with the rape and murder of a 9-year-old girl. The government claimed Fain’s hair matched hair discovered at the crime scene. A jury convicted him and sentenced him to death. DNA testing later exonerated him, and, in 2001, after he’d spent two decades in prison, a judge overturned his conviction. [Raymond Bonner / New York Times]

Bite mark analysis

In 1999, 26 members of the American Board of Forensic Odontology participated in an informal proficiency test regarding their work on bite marks. They were given seven sets of dental molds and asked to match them to four bite marks from real cases. They reached erroneous results 63 percent of the time. [60 Minutes] One bite mark study has shown that forensic dentists can’t even determine whether a bite mark was caused by human teeth. [Pema Levy / Mother Jones]

That didn’t keep bite mark “expert” Michael West from testifying in trial after trial. In 1994, West testified that the bite mark pattern found on an 84-year-old victim’s body matched Eddie Lee Howard’s teeth. Based largely on West’s testimony, the jury convicted Howard and sentenced him to death. Experts have since called bite mark testimony “scientifically unreliable.” And sure enough, 14 years later, DNA testing on the knife believed to be the murder weapon excluded Howard as a contributor. Yet the state continues to argue that Howard’s conviction should be upheld on the basis of West’s testimony. [Radley Balko / Washington Post]

West, who in 1994 was suspended from the American Board of Forensic Odontology and basically forced to resign in 2006, is at least partially responsible for several other wrongful convictions as well. [Radley Balko / Washington Post]

West himself has even discredited his own testimony, now stating that he “no longer believe[s] in bite mark analysis. I don’t think it should be used in court.” [Innocence Project]

Fingerprint analysis

The FBI has found that fingerprint examiners could have an error rate for false matches as high as 1 in 306 cases, with another study indicating examiners get it wrong as often as 1 in every 18 cases. [Jordan Smith / The Intercept] A third study, of 169 fingerprint examiners, found a 7.5 percent false negative rate (where examiners erroneously found prints came from two different people) and a 0.1 percent false positive rate. [Kelly Servick / Science Mag]
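Because these studies report their results in different forms ("1 in N" odds versus percentages), the figures can be hard to compare at a glance. The short Python sketch below simply converts between the two representations using the numbers quoted above; it is illustrative arithmetic only and introduces no data beyond the article's own figures.

```python
# Illustrative only: convert the error rates quoted above between
# "1 in N" odds and percentages so the studies can be compared.
# All figures come from the article; none are new data.

rates = {
    "FBI study, false positive (upper bound)": 1 / 306,  # "as high as 1 in 306"
    "Second study, false positive":            1 / 18,   # "1 in every 18 cases"
    "169-examiner study, false negative":      0.075,    # 7.5 percent
    "169-examiner study, false positive":      0.001,    # 0.1 percent
}

for label, rate in rates.items():
    # Print each rate both as a percentage and as approximate odds.
    print(f"{label}: {rate:.3%} (about 1 in {round(1 / rate)})")
```

Run as-is, this shows, for example, that "1 in 306" is roughly a 0.33 percent false positive rate, while the third study's 7.5 percent false negative rate works out to roughly 1 in 13 cases.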

In 2004, police accused American attorney Brandon Mayfield of the notorious Madrid train bombing after experts claimed his fingerprint matched one found on a bag of detonators. Eventually, four experts agreed with this finding. Police arrested him and detained him for two weeks until the police realized their mistake and were forced to release him. [Steve Pokin / Springfield News-Leader]

In Boston, Stephan Cowans was convicted, in part on fingerprint evidence, in the 1997 shooting of a police officer. But seven years later, DNA evidence exonerated him and an examiner stated that the match was faulty. [Innocence Project]

A 2012 review of the St. Paul, Minnesota, crime lab found that over 40 percent of fingerprint cases had “seriously deficient work.” And “[d]ue to the complete lack of annotation of actions taken during the original examination process, it is difficult to determine the examination processes, including what work was attempted or accomplished.” [Madeleine Baran / MPR News]

Firearm analysis

According to one study, firearm examiners may have a false positive rate as high as 2.2 percent, meaning analysts may erroneously declare a match as frequently as 1 in 46 times. This is a far cry from the “near perfect” accuracy that examiners often claim. [President’s Council of Advisors on Science and Technology]

In 1993, a jury convicted Patrick Pursley of murder on the basis of firearms testimony. The experts declared that casings and bullets found on the scene matched a gun linked to Pursley “to the exclusion of all other firearms.” Years later, an expert for the state agreed that the examiner should never have made such a definitive statement. Instead, he should have stated that Pursley’s gun “couldn’t be eliminated.” In addition, the defense’s experts found that Pursley’s gun was not the source of the crime scene evidence. Digital imaging supported the defense. [Waiting for Justice / Northwestern Law Bluhm Legal Clinic] In 2017, a court granted Pursley a new trial. [Georgette Braun / Rockford Register Star]

Rethinking faulty forensics

Scientists from across the country are calling for the justice system to rethink its willingness to admit pattern-matching evidence.

In 2009, the National Research Council of the National Academy of Sciences released a groundbreaking report concluding that forensic science methods “typically lack mandatory and enforceable standards, founded on rigorous research and testing, certification requirements, and accreditation programs.” [Peter Neufeld / New York Times]

In 2016, the President’s Council of Advisors on Science and Technology (PCAST), a group of pre-eminent scientists, issued a scathing report on pattern-matching evidence. The report concluded that most of these fields lacked “scientific validity” — i.e., research showing examiners could accurately and reliably do their jobs. [Jordan Smith / The Intercept] Until the fields conducted better research proving their accuracy, the Council stated, such evidence had no place in the American courtroom. The report also found that, for bite mark analysis, the error rate was so high that resources should not be wasted trying to show it can be used accurately. [Radley Balko / Washington Post]

After the PCAST report came out, then-Attorney General Loretta Lynch, citing no studies, stated emphatically that “when used properly, forensic science evidence helps juries identify the guilty and clear the innocent.” [Jordan Smith / The Intercept] “We appreciate [PCAST’s] contribution to the field of scientific inquiry,” Lynch said, “[but] the department will not be adopting the recommendations related to the admissibility of forensic science evidence.” [Radley Balko / Washington Post]

The National District Attorneys Association (NDAA) called the PCAST report “scientifically irresponsible.” [Jessica Pishko / The Nation] “Adopting any of their recommendations would have a devastating effect on the ability of law enforcement, prosecutors and the defense bar to fully investigate their cases, exclude innocent suspects, implicate the guilty, and achieve true justice at trial,” the association noted. [Rebecca McCray / Take Part]

The NDAA also wrote that PCAST “clearly and obviously disregard[ed] large bodies of scientific evidence … and rel[ied], at times, on unreliable and discredited research.” But when PCAST sent out a subsequent request for additional studies, neither the NDAA nor the Department of Justice identified any. [PCAST Addendum]

This problem is getting worse under the current administration. Attorney General Jeff Sessions has disbanded the National Commission on Forensic Science, formed to improve both the study and use of forensic science, and which had issued over 40 consensus recommendation documents to improve forensic science. [Suzanne Bell / Slate] He then developed a DOJ Task Force on Crime Reduction and Public Safety, tasked with “support[ing] law enforcement” and “restor[ing] public safety.” [Pema Levy / Mother Jones]

But there are also new attempts to rein in the use of disproven forensic methods. In Texas, the Forensic Science Commission has called for a ban on bite mark evidence. “I think pretty much everybody agrees that there is no scientific basis for a statistical probability associated with a bite mark,” said Dr. Henry Kessler, chair of the subcommittee on bite mark analysis. [Meagan Flynn / Houston Press]

A bill before the Virginia General Assembly, now carried over until 2019, would provide individuals convicted on now-discredited forensic science a legal avenue to contest their convictions. The bill is modeled after similar legislation enacted in Texas and California. The Virginia Commonwealth’s Attorneys Association opposes the legislation, arguing: “It allows all sorts of opportunities to ‘game’ the system.” [Frank Green / Richmond Times-Dispatch]

Meanwhile, at least one judge has recognized the danger of forensic expert testimony. In a 2016 concurrence, Judge Catherine Easterly of the D.C. Court of Appeals lambasted expert testimony about toolmark matching: “As matters currently stand, a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic: it reflects nothing more than the individual’s foundationless faith in what he believes to be true. This is not evidence on which we can in good conscience rely, particularly in criminal cases … [T]he District of Columbia courts must bar the admission of these certainty statements, whether or not the government has a policy that prohibits their elicitation. We cannot be complicit in their use.” [Spencer S. Hsu / Washington Post]

Do you wonder how witchcraft and satanic child-eating coven stories survive in this era of lies, misdemeanors, and wrongful convictions? This article pushes back against the spiel coming out of the US WH and DOJ (and some DAs) about forensic reliability. https://injusticetoday.com/faulty-forensics-explained-fe4d41157452

via #Forensics: The usual forensic failures described : DAs don’t give much of a damn. — FORENSICS and LAW in FOCUS @ CSIDDS | News and Trends