It was declared as ‘dishwashing liquid.’ It was really 4,020 litres of an MDMA precursor drug —

The Canada Border Services Agency (CBSA) seized over 4,000 litres of “dishwashing liquid” in Vancouver last year. Only, it wasn’t dishwashing liquid. It was MDP-2-P, a precursor used in the production of ecstasy and MDMA. The seizure happened when border services officers with the CBSA processed a container from…



The science behind forensic toxicology

(AP Photo/Julie Jacobson)
WRITTEN BY: Katherine Ellen Foley

When we get our blood tested for cholesterol, it doesn’t take long to get the results. And if someone turns up at the hospital with what looks like a drug overdose, doctors can perform a quick test to verify their suspicions before treatment.
But unlike on popular crime series such as CSI, where investigators whip up test results in the span of a quick montage, most forensic toxicology reports take anywhere from a few weeks to a few months. That can be an excruciating wait after a mysterious death or an unsolved crime. Why does it take so long?
Quartz spoke with Robert Middleberg, a toxicologist from NMS Labs in Willow Grove, Pennsylvania, to find out.
Unlike other medical tests, where technicians isolate a specific compound like cholesterol, Middleberg says that you don’t always know what you’re looking for with forensic toxicology. “If you have a young person who is found dead in bed and there’s no history of drug abuse, you’re looking for the proverbial needle in a haystack,” he tells Quartz.
Testing times

After a body is found and an autopsy is performed by a pathologist, a separate lab will look for any environmental or pharmaceutical toxins that could be the killers. Without any clear clues, Middleberg says they will start testing for about 400 different substances. “We never know what we’re going to get,” he notes. It takes creative intuition to guide the cycle: run tests, interpret the results, and let those results inform further testing.
Once an initial analysis returns a match for a particular substance, toxicologists must gather more specifics for the official report. Bodies that have already started decaying produce some toxins naturally, like ethanol (another name for the alcohol we drink) and cyanide, so toxicologists may have to perform additional tests to determine whether these played an active role in the cause of death.
All of this is further complicated by the fact that samples often arrive in less than ideal conditions. “If somebody is pulled out of the water after being missing for two or three weeks, these samples are very, very bad,” Middleberg says.
Unlike testing in an emergency room to confirm an overdose, pathology focuses on specifics. “For [medical toxicologists], sometimes it doesn’t really matter exactly what’s there,” Middleberg says. “In our world, the pathologists want to know exactly what it is and how much.”
Not every test is a complicated affair—despite all of the unknowns, Middleberg says that most labs try to have a turnaround time of 3-5 days for ruling things out and 7-10 days for identifying the specific factors leading to death.
Looking for clues

Like detectives, toxicologists look for clues to narrow down which tests are necessary. Knowing a subject’s history with drug or alcohol use obviously helps. There are also several somewhat macabre rules of thumb that tip toxicologists off to seek substances they wouldn’t normally test for:
Bright red blood can signal carbon monoxide poisoning.
A green brain can signal exposure to hydrogen sulfide.
Chocolate-brown blood can signal elevated methemoglobin (methemoglobinemia).
Hair falling out can signal chronic arsenic or thallium poisoning.
Blue skin can signal gadolinium poisoning.
Cocaine and methamphetamine use can change the shape of the heart.

Reasons why bite mark evidence should not be admissible in court.

Bite mark evidence has a high margin of error.

Bite mark evidence is deeply flawed and should only be used in combination with solid corroborating evidence at trial.

In other words, it’s not an exact science.

A bite mark matching advocacy group just conducted a study that discredits bite mark evidence

By Radley Balko April 8
In February, I posted a four-part series on the forensic specialty of bite mark analysis. The series looked at the history of the field; how it came to be accepted by the courts as scientific evidence despite the lack of any real scientific research to support its basic assumptions; the innocent people who have been convicted based on bite mark analysis; and how the bite mark matchers, advocacy groups like the American Board of Forensic Odontology and their supporters have waged aggressive, sometimes highly personal campaigns to undermine the credibility of people who have raised concerns about all of this.

The series ran during the annual American Academy of Forensic Sciences convention in Orlando, Florida. That conference included a presentation by Adam Freeman, who sits on the executive board of the ABFO, and Iain Pretty, who is not a member of the ABFO, has been critical of bite mark analysis and chairs the AAFS committee on forensic odontology.* Freeman and Pretty were to present the results of a study they had designed with David Senn, another ABFO member and a proponent of bite mark analysis.**

Senn in fact was the main witness for New York County Assistant District Attorney Melissa Mourges during a 2013 evidentiary hearing on the scientific validity of bite mark analysis in State v. Dean. That hearing was the first to assess the science behind bite mark matching since the field came under fire in a landmark 2009 report by the National Academy of Sciences. Ultimately, Senn and Mourges prevailed. Judge Maxwell Wiley ruled that the evidence could be admitted at Clarence Dean’s trial. In fact, to date, every court to rule on the admissibility of bite mark analysis has allowed it to be used as evidence. This, despite an ever-increasing number of wrongful convictions and arrests, a lack of scientific research to support the field, and a new body of research suggesting that its core assumptions are false.

The study

All of this makes the presentation by Pretty and Freeman particularly interesting. In response to mounting criticism, last year the ABFO released a “decision tree” for bite mark specialists to follow when performing their analysis. The “tree” is basically a flow chart. It begins by asking whether there is sufficient evidence to determine whether a suspicious mark is a human bite. It then asks whether it is in fact a bite, then what distinguishing characteristics are noticeable in the bite, and so on.
But the problem with bite mark analysis was never the lack of a flow chart. The problem is that there has never been any real scientific research to support its two main underlying premises — that human dentition is unique, and that human skin is capable of registering and recording that uniqueness in a useful way. And the research that has been done strongly suggests those two premises are not true. The flow chart was just adding a series of procedures to a method of analysis that is entirely subjective, and that lacks basic scientific quantifiers like probability and margin for error.

Yet the ABFO wanted to show that its flow chart worked. So last year, the organization put together an exam to prove its effectiveness. Pretty and Freeman, with consultation from Senn and others within the organization, gave 39 ABFO-certified bite mark analysts photos of 100 bite marks, then asked them to answer three preliminary questions, all based on the decision tree chart. The average analyst who participated in the study had 20 years’ experience as a forensic odontologist. Here are the three questions they were asked:

Is there sufficient evidence in the presented materials to render an opinion on whether the patterned injury is a human bite mark?
Is it a human bite mark, not a human bite mark, or suggestive of a human bite mark?
Does the bite mark have distinct, identifiable arches and individual tooth marks?
That last question asks whether, once the analyst has determined that the mark is a human bite, the mark contains enough distinguishing features to be of value as evidence.

Interestingly, the intent of this study was to measure consensus, not whether the analysts were actually correct in their conclusions. Consensus is important, particularly in a field that relies so much on pattern matching and subjective analysis instead of quantifiable data. Consensus also shows predictability, another important characteristic when assessing whether a field is legitimately based in science. There will of course occasionally be cases in which the evidence is ambiguous, but if a cross section of experts from a particular field consistently fails to reach consensus conclusions after looking at the same pieces of evidence, you have to start asking whether the field is much more than guesswork.

But it’s also notable that there was no effort here to determine the rightness or wrongness of the answers. For example, if 10 out of 10 analysts agree that a mark on human skin is a human bite, that would suggest that the decision tree succeeded at fostering consensus. If only 7 out of 10 agree, that’s more troubling. But it would be even more troubling if the seven in the majority were also wrong.
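The consensus measure at issue is just the share of analysts giving the most common answer for a given mark. A minimal sketch of that arithmetic, using made-up responses rather than anything from the actual study:

```python
def consensus_rate(responses):
    """Fraction of analysts who gave the modal (most common) answer."""
    return max(responses.count(a) for a in set(responses)) / len(responses)

# Hypothetical panels of 10 analysts classifying one mark
unanimous = ["human bite"] * 10
split = ["human bite"] * 7 + ["not a bite"] * 3

print(consensus_rate(unanimous))  # 1.0
print(consensus_rate(split))      # 0.7
```

Note that, as the paragraph above points out, a high consensus rate says nothing about whether the modal answer is right; 10 analysts can agree unanimously and all be wrong.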

The study didn’t measure for accuracy in part because the photos were taken from actual cases, so for many of them, whether or not the bite is actually human has never been definitively determined. But as I pointed out in my original series, it’s also in keeping with the field’s tendency to be more concerned with methodology than veracity. The ABFO conducts its certification exams in a similar manner. The candidates are evaluated only on their method of analysis, not on whether or not they’re actually correct in matching a bite mark to the correct dental mold.
This reflects an ugly reality about the pattern-matching fields of forensics: Because they’re so subjective, it isn’t difficult for attorneys on either side of a case to find an expert who will testify to the conclusion they’re looking for. In these fields, then, the most important attribute in a witness is not that they be accurate, but that they sound accurate — that they be more convincing to a jury than the expert on the other side. Juries don’t like wishy-washy witnesses. They like witnesses who seem sure of themselves, who speak with authority. But in forensic specialties as subjective as pattern matching, certainty is a red flag. Most of the time, an honest witness should hedge, speak in probabilities, and avoid definitive conclusions. But this means that the least honest experts can often be the most persuasive, and there’s a clear incentive for prosecutors and defense attorneys to seek them out.

Finally, note that this study also did not ask the examinees to actually match a mark to the teeth of an individual human being the way this sort of evidence would be presented in court. (A previous competency test administered by bite mark critic Michael Bowers in 1999 found a 60 percent error rate among the analyst test takers.) It only asked the three preliminary questions above.

So in sum, this study only measured the ability of ABFO-certified experts to come to a consensus, and only on the most basic, preliminary questions about a piece of evidence.

The results

Even within these limited parameters, and even when designed and administered by the field’s biggest advocates, this study shows that bite mark analysis fails.

The first question — again, whether the test provided sufficient evidence to determine whether or not the photographed mark was a human bite — is the most basic question a bite mark specialist should answer before performing an analysis. Yet the 39 analysts came to unanimous agreement on just 4 of the 100 case studies. In only 20 of the 100 was there agreement of 90 percent or more on this question. By the time the analysts finished question two — whether the photographed mark is indeed a human bite — there remained only 16 of 100 cases in which 90 percent or more of the analysts were still in agreement. And there were only 38 cases in which at least 75 percent were still in agreement. (These figures come from my own examination of the raw data, as well as processing of the data done by the Innocence Project.)

By the time the analysts finished question three, they were significantly fractured on nearly all the cases. Of the initial 100, there remained just 8 case studies in which at least 90 percent of the analysts were still in agreement.
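The figures in the two paragraphs above are simple threshold counts over per-case agreement: for each case, compute the share of analysts behind the most common answer, then count how many cases clear the 100, 90, and 75 percent bars. A sketch of that bookkeeping, on invented response lists rather than the study’s raw data:

```python
def agreement(responses):
    """Share of analysts giving the most common answer for one case."""
    return max(responses.count(a) for a in set(responses)) / len(responses)

# Hypothetical per-case answers from 39 analysts (not the study's data)
cases = [
    ["bite"] * 39,                        # unanimous
    ["bite"] * 36 + ["suggestive"] * 3,   # ~92% agreement
    ["bite"] * 30 + ["not a bite"] * 9,   # ~77% agreement
    ["bite"] * 20 + ["suggestive"] * 19,  # ~51% agreement
]

print(sum(agreement(c) == 1.0 for c in cases))   # unanimous cases: 1
print(sum(agreement(c) >= 0.90 for c in cases))  # cases at 90%+: 2
print(sum(agreement(c) >= 0.75 for c in cases))  # cases at 75%+: 3
```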

“These results are really disturbing,” says Paul Giannelli, a law professor at Case Western Reserve University who specializes in scientific evidence. Giannelli also serves on the National Commission on Forensic Science, started by President Obama to address and remedy the shortcomings in forensic evidence outlined in that 2009 NAS report. “But they aren’t all that surprising. There have been a number of cases over the years in which one bite mark analyst testified that a mark was a human mark, while another testified it was something entirely different, for example a bug bite, or an indentation from a belt buckle.”

Peter Bush, who with his wife Mary heads up the University at Buffalo research team that has cast doubt on the integrity of bite mark analysis (and who has been attacked by the community of bite mark analysts and their supporters for that research), agrees: “When there have been exonerations of people convicted with bite mark evidence, the forensic odontologists have said that the problem is with the analysts — that they’re rogue or incompetent experts who didn’t do the analysis properly. This is just another piece of evidence that it’s both of these things. It’s the improper analysis, but it’s also the very nature of the evidence itself.”

To put these results in perspective, it might help to ask what might have happened if a similar exam had been given to specialists from a more science-based field of forensics, such as DNA analysis.

“It would be difficult to set up a DNA test that was exactly the same, but if you could, you’d see overwhelming agreement,” Giannelli says. “I’d expect it to be unanimous. And on the questions where it wasn’t unanimous, you’d be able to go back and find the source of the problem — whether it was tainted evidence, or some glitch in the exam. With bite mark analysis, you can’t really even go back, because it’s just a subjective disagreement over what the analysts are seeing.”

Chris Fabricant, the director of strategic litigation for the Innocence Project who is challenging bite mark evidence in several cases across the country, points to a similar study of fingerprint analysts published in 2011 that found 99 percent agreement. “Contrast that to some of the questions in this study, in which the level of agreement among the analysts was only slightly better than randomness,” Fabricant says.
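To see what “only slightly better than randomness” means here: with three answer choices and a panel of 39 analysts, purely random answering still produces a modal answer shared by roughly 40 percent of the panel, so agreement figures near that level carry essentially no information. A quick simulation of that chance baseline (a hypothetical illustration, not part of either study):

```python
import random

random.seed(42)
CHOICES = ["bite", "not a bite", "suggestive of a bite"]

def modal_share(responses):
    """Share of the panel behind the single most common answer."""
    return max(responses.count(c) for c in set(responses)) / len(responses)

# 39 analysts answering one three-way question at random, many times over
shares = [modal_share([random.choice(CHOICES) for _ in range(39)])
          for _ in range(10_000)]
mean = sum(shares) / len(shares)
print(round(mean, 2))  # chance-level "agreement" lands around 0.4
```

Against that baseline, 99 percent agreement among fingerprint analysts is a meaningful result; agreement hovering near 40 percent is indistinguishable from guessing.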

The reaction

The bite mark community reacted with shock, disappointment, and ultimately an effort to suppress the results of the study. According to reliable sources within the ABFO, David Senn initially wanted to cancel the panel at the AAFS conference in which Freeman and Pretty were to present the results. These sources say Senn was astonished at the results, and told other members of the ABFO that he was “reeling” from them. He also apologized to the organization for his role in the study.

In the end, the organization did proceed with the presentation of the results, but then played down their significance. Newly elected ABFO president Gary Berman briefly mentioned the study in his quarterly message to the organization’s members.

In order to improve the study of bitemarks the ABFO developed a decision tree to assist practitioners in the proper selection and pathways of analysis in bitemark analysis. The ABFO has conducted preliminary research, presented in Orlando, designed to evaluate the first step of a revised decision tree; statistical analysis of the study showed inconsistent overall agreement among the individuals who participated in the project. The ABFO in reaffirming its commitment to ensure accuracy in bitemark analysis is revising the decision tree to ensure reliable results by forensic dentists and will be conducting additional studies this year.

While it’s commendable that the ABFO is attempting to create guidelines that will “ensure reliable results,” it’s far more troubling that the current guidelines don’t, that the unreliable results those guidelines produce have for years been used and continue to be used in court, and that rather than running to courtrooms across the country to halt the convictions, imprisonments and pending executions based on the results, the organization continues to fight for its members’ ability to testify using the very analysis it now concedes is flawed.

In an email in response to my query, Berman blamed the poor design of the study for the results. “Post analyses of the results indicate that the design of the survey and the design of Step 1 of the decision tree may be flawed, and that an ABFO guideline term may be the root cause,” Berman wrote. “The troublesome term, ‘suggestive of a human bitemark’, is one of the currently recommended terms for confidence that a pattern is or is not a bitemark.”

Berman writes that some of the test-takers may have answered the first question in the affirmative (that there was sufficient evidence to show that the mark was a human bite), but then changed their mind as they answered the other questions. He writes, “they were loathe to go back and change the answer to the negative . . . Instead they selected the hedged, and available third choice, ‘suggestive of a human bitemark.’”

Berman’s explanation raises another common criticism made by skeptics of bite mark evidence, although perhaps he raised it inadvertently: Because so much of their value as expert witnesses relies on their credibility, there’s a strong disincentive to change their minds about their conclusions once they’ve made them, even when new evidence suggests they should. If an analyst is loath to admit a mistake in an anonymous proficiency study, it doesn’t bode well for his ability to admit to a mistake after putting his name and reputation behind court testimony, or in an affidavit leading to an arrest.

Indeed, bite mark analysts have concocted some fantastic theories of culpability even after a suspect convicted based on their testimony was found not to be a match to the semen taken from a victim who was raped, or even to the saliva taken from the bite mark itself. On more than one occasion, for example, a bite mark analyst has confronted a DNA mismatch on semen taken from a rape victim by arguing that someone else must have raped the victim while the suspect implicated by their testimony held the victim down and bit her.

But even more concerning than the results of the study itself is the fact that the ABFO has since decided to hold off on publishing those results until the organization can tweak the design of the study and conduct it again, a process that’s expected to take at least a year.

“If this were truly a science-based organization, I would not only expect them to be extremely troubled by the results of this study, I would expect them to want to publish the results,” says Paul Giannelli. “And sooner rather than later, so that they could be considered in any pending criminal cases in which bite mark evidence is a factor.”

The ABFO did release the raw data from the study in spreadsheet form to a few people, but won’t release the presentation given at the AAFS meeting, nor will it publish the data in a journal or another publicly accessible format, at least until the completion of the second study. “We are in the process of modifying the decision tree, the language, and then we will be running the study again,” Adam Freeman wrote in response to an email query. “The results of both studies will then be sent to the [Journal of Forensic Sciences] for publication. The release of the presentation at this point would be premature.”

Critics like Fabricant are skeptical. “If the results had been more to their liking, I can’t imagine that they’d be objecting over the language in their own study, then taking another year or so to rerun the study to get more favorable results before releasing the data. In the meantime, people are suffering in prison. Some are fighting a death sentence.”

One of the pending criminal cases is the one mentioned at the start of this post: that of Clarence Dean, which is expected to go to trial sometime this year. As noted above, that case included an important evidentiary hearing in which a New York judge ruled that bite mark evidence is admissible and scientifically valid. Many other judges have made that ruling in the past, but this was the first such ruling since the publication of the NAS report in 2009. The prosecutor in Dean’s case is Melissa Mourges, a fierce advocate for bite mark matching who, as I explained in the series in February, has not only advocated for bite mark analysis as a field, but has waged nasty, often highly personal attacks on those who have raised concerns about its legitimacy.

Mourges included a reproduction of the ABFO’s “decision tree” in her brief for the bite mark hearing in the Dean case. She cited the tree as another example of the bite mark community’s dedication to accuracy:

An important Guideline revision was added in February 2013 when the ABFO voted to include a bitemark flow chart or decision tree, included below. Properly used, the decision tree will guide forensic odontologists’ investigatory paths leading to proper conclusions based on the quality of the bitemark and the teeth of the suspected biters. This new guideline offers specific recommendations for forming degrees of linkage conclusions based on the quality of both injury features and suspected biter dentitions.

Mourges attended the presentation by Pretty and Freeman at the AAFS conference in February. I reached out to the Manhattan DA’s office where Mourges works to ask for her official reaction to the study. She didn’t respond, but the office did issue a statement from Chief Assistant District Attorney Karen Friedman Agnifilo:

This study reinforces the importance of basing decisions on the best possible evidence available. The use of forensic odontology, properly performed, has been and continues to be a valuable tool to aid in the identification of assailants and can also be used to help place victims, many of whom are children, out of harm’s way. Equally important, forensic odontology is used to exclude and exonerate suspects. Each time an injury is recognized as a bitemark and swabbed, investigators gain both DNA evidence and potential bitemark identification. Forensic odontology differs from DNA evidence in that it may not be dispositive, but it is probative. Undeniably, bitemarks have significant evidentiary value, which is why this type of evidence is admissible in all 50 states.

Agnifilo’s statement conflates a lot of issues, and I examined several of the points she makes in the February series. But briefly, few would object to swabbing potential bite marks for DNA. Rather, critics of bite mark evidence fault the attempt to match marks on human skin to human teeth. The fact that bite mark evidence is admissible in all 50 states is convincing only if you believe the courts have done an adequate job of keeping bad science out of criminal cases. Part two of the February series argues that they haven’t. Agnifilo’s point about the quality of the evidence is a good one. But it remains true that even with the most pristine bite mark evidence, there’s no scientific research to support the contention that the marks we make with our teeth are individually unique, or to what extent they’re unique, or that, even if they were, human skin is capable of preserving that uniqueness in a way that allows it to be analyzed.

The Manhattan DA’s office’s insistence on standing behind bite mark evidence is interesting in and of itself. Current Manhattan DA Cyrus Vance, Jr., was elected in 2009 on a platform of “community justice,” and won endorsements from criminal justice reform advocates — including, interestingly, Peter Neufeld and Barry Scheck, co-founders of the same Innocence Project that is now feuding with Mourges in court. On its website, Vance’s office stresses the importance of fairness and sound evidence in preventing wrongful convictions:

The Manhattan District Attorney’s Office spares no effort in seeking justice in every case that comes before it. Through the years and around the country, innocent men and women have been convicted of crimes they did not commit. This not only robs an innocent person of his or her freedom, it leaves a criminal on the street, free to commit more crimes.

To protect New Yorkers and ensure justice, District Attorney Vance created the Conviction Integrity Program in March 2010. The Program is comprehensive in scope, and is unique in purpose: not only does it address claims of actual innocence, it also seeks to prevent wrongful convictions from occurring . . .

The Conviction Integrity Policy Advisory Panel is comprised of leading criminal justice experts, including legal scholars and former prosecutors, who advise the Office on national best practices and evolving issues in the area of wrongful convictions.

The work of the Conviction Integrity Program, combined with the Office’s commitment to using the most advanced scientific and investigative tools available, has made the cases brought by the Office stronger for victims and more fair for defendants.

But meanwhile, at least two of Vance’s top lieutenants continue to defend a field of forensics that has contributed to at least 24 wrongful convictions and arrests around the country, despite numerous studies showing it lacks any basis in science, including one organized by the field’s leading advocacy organization.

Finally, I noted in my original series that last fall, the National Institute of Standards and Technology announced the members of the forensic odontology subcommittee that will study the scientific validity of bite mark matching. The committee is one of several that will study various fields of forensics as part of the federal government’s push toward reform in light of the 2009 NAS report. Incredibly, 10 of the 16 members are either practicing bite mark analysts, or are open advocates of the practice, including the chairman, Robert Barsley. It’s a development one critic of bite mark matching likened to starting a committee to investigate the scientific validity of astrology, then stacking it with astrologers.

Pretty and Freeman’s study is a major development in the field of bite mark analysis. It’s one you’d think would attract the attention of the committee charged with investigating whether bite mark analysis is suitable for court. The committee held its first meeting on February 16. The results of the ABFO study were by then well known to the members affiliated with ABFO. According to the webcast and public notes from the meeting, chairman Barsley did include the ABFO “decision tree” in his presentation. He also incorrectly compared the uniqueness of bite marks to fingerprints, and noted that while he couldn’t point to a citation of a study showing that human dentition is unique, “there are studies that lead us to believe this is true.” (In fact, the only peer-reviewed, scientifically rigorous study of the uniqueness of human dentition has been conducted by Peter and Mary Bush’s team, and they’ve found no basis for that assertion.) Curiously missing from Barsley’s presentation was any discussion of the ABFO study showing that the decision tree failed to produce a consensus among even the ABFO’s most experienced analysts.

As the ABFO hems and haws on this study and takes another year to redesign it, ostensibly to achieve more favorable results, bite mark evidence continues to be used in criminal cases, and existing bite mark cases continue to move forward. Over the last several months there have been new filings in the death penalty cases of Eddie Lee Howard in Mississippi, and Jimmie Duncan in Louisiana. At least 15 people convicted with bite mark evidence are currently awaiting execution.

Meanwhile, just last week a sheriff in northern Indiana announced that he’ll be assembling a “forensic dentistry team” within his department. From the Chicago Tribune:

Sheriff David Reynolds recently swore in three local dentists as part of the department’s forensic dentistry team . . .

The dentists will do everything from matching bite marks with suspects or victims, to using dental records to identify victim’s remains, Reynolds said . . .

Over the years, Reynolds has used forensic dentists a number of ways.

“We used them for rape cases, investigating bite marks,” he said, as well as for remains . . .

“There were other cases where people were bitten and we were able to take (dental) models and pictures and match them up to bite marks on the victims.”

So even as we await the results of the ABFO’s do-over on its own study to assess the validity of this field, not only do those convicted due to bite mark analysis remain in prison, law enforcement groups are still using it to win convictions. It’s almost as if those 24 exonerations never happened.

(*Forensic odontology, or forensic dentistry, includes the controversial field of bite mark matching, but also the more accepted practice of using dental records to identify human remains.)

(**Senn did not respond to my request for comment. In an email, Pretty acknowledged the study, the results, and that the ABFO will be conducting another study to be published next year. But because the study was administered by the ABFO, using ABFO case studies, he wrote that “it would be wrong of me to make any comments on the work beyond those that were made at the AAFS.”)
Radley Balko blogs about criminal justice, the drug war and civil liberties for The Washington Post. He is the author of the book “Rise of the Warrior Cop: The Militarization of America’s Police Forces.”
© 1996-2015 The Washington Post