RELATIVE POVERTY, not Poverty, causes crime.

Psychology professor Jordan Peterson explains the clearly documented science showing that it is relative poverty, not poverty itself, that causes crime (as captured by the Gini coefficient). He goes on to explain the role of the male dominance hierarchy in the context of relative poverty and crime.

Dr. Peterson’s new book is available for pre-order:
12 Rules for Life: An Antidote to Chaos: http://amzn.to/2yvJf9L
If you want to support Dr. Peterson, here is his Patreon:
https://www.patreon.com/jordanbpeterson
Check out Jordan Peterson’s Self Authoring Program, a powerful tool to sort yourself out:
http://bit.ly/selfAuth (Official affiliate link for Bite-sized Philosophy)

 

Essentials Of Educational Protocols – Do You Really Need Them?

ISO 13485
ISO 13485, “Medical devices — Quality management systems — Requirements for regulatory purposes”, is an International Organization for Standardization (ISO) standard, first published in 1996, that specifies the requirements for a comprehensive quality management system for the design and manufacture of medical devices.

This standard supersedes earlier documents such as EN 46001 and EN 46002, the previously published ISO 13485, and ISO 13488.

The essentials of validation planning, protocol writing, and change management will be explained.

via ESSENTIALS OF VALIDATION – Do You Really Need It? — Compliance4all

How to Evaluate a Statistic and avoid Bias / False Presumptions via Mathematical Software

A counting statistic is simply a numerical count of some item, such as “one million missing children”, “three million homeless”, or “3.5 million STEM jobs by 2025.” Counting statistics are frequently deployed in public policy debates, the marketing of goods and services, and other contexts. Particularly when paired with an emotionally engaging story, counting statistics can be powerful and persuasive. However, counting statistics can also be highly misleading or even completely false. This article discusses how to evaluate counting statistics and includes a detailed list of steps to follow when evaluating a counting statistic.

Checklist for Counting Statistics

  1. Find the original primary source of the statistic. Ideally you should determine the organization or individual who produced the statistic. If the source is an organization you should find out who specifically produced the statistic within the organization. If possible find out the name and role of each member involved in the production of the statistic. Ideally you should have a full citation to the original source that could be used in a high quality scholarly peer-reviewed publication.
  2. What is the background, agenda, and possible biases of the individual or organization that produced the statistic? What are their sources of funding? What is their track record, both in general and in the specific field of the statistic? Many statistics are produced by “think tanks” with various ideological and financial biases and commitments.
  3. How is the item being counted defined? This is very important. Many questionable statistics use a broad, often vague definition of the item, paired with personal stories of an extreme or shocking nature, to persuade. For example, the widely quoted “one million missing children” in the United States, used in the 1980s — and even today — was rounded up from an official FBI figure of about seven hundred thousand missing children, the vast majority of whom returned home safely within a short time, and was paired with rare cases of horrific stranger abductions and murders such as the 1981 murder of six-year-old Adam Walsh.
  4. If the statistic is paired with specific examples or personal stories, how representative are these examples and stories of the aggregate data used in the statistic? As with the missing children statistics of the 1980s, it is common for broad definitions giving large numbers to be paired with rare, extreme examples.
  5. How was the statistic measured and/or computed? At one extreme, some statistics are wild guesses by interested parties. In the early stages of the recognition of a social problem, there may be no solid reliable measurements; activists are prone to providing an educated guess. The statistic may be the product of an opinion survey. Some statistics are based on detailed, high quality measurements.
  6. What is the appropriate scale against which to evaluate the counting statistic? For example, the United States Census Bureau estimates the total population of the United States as of July 1, 2018 at 328 million. The US Bureau of Labor Statistics estimates about 156 million people were employed full time in May 2019. Thus “3.5 million STEM jobs” represents slightly more than one percent of the United States population and slightly more than two percent of full-time employees (a short calculation sketch follows this list).
  7. Are there independent estimates of the same or a reasonably similar statistic? If yes, what are they? Are the independent estimates consistent? If not, why not? If there are no independent estimates, why not? Why is there only one source? For example, estimates of unemployment based on the Bureau of Labor Statistics Current Population Survey (the source of the headline unemployment number reported in the news) and the Bureau’s payroll survey have a history of inconsistency.
  8. Is the statistic consistent with other data and statistics that are expected to be related? If not, why doesn’t the expected relationship hold? For example, we expect low unemployment to be associated with rising wages. This is not always the case, raising questions about the reliability of the official unemployment rate from the Current Population Survey.
  9. Is the statistic consistent with your personal experience or that of your social circle? If not, why not? For example, I have seen high unemployment rates among my social circle at times when the official unemployment rate was quite low.
  10. Does the statistic feel right? Sometimes, even though the statistic survives detailed scrutiny — following the above steps — it still doesn’t seem right. There is considerable controversy over the reliability of intuition and “feelings.” Nonetheless, many people believe a strong intuition often proves more accurate than a contradictory “rational analysis.” Often if you meditate on an intuition or feeling, more concrete reasons for the intuition will surface.
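To make step 6 concrete, here is a minimal Python sketch of the scale check. The population and employment figures are the ones quoted in step 6; any other baseline you plug in is your own assumption.

```python
# Express a counting statistic as a percentage of relevant baselines (step 6).
# Figures are the ones quoted above: 3.5 million STEM jobs, ~328 million US
# residents (July 1, 2018), ~156 million full-time workers (May 2019).

def share_of(count, baseline):
    """Return count as a percentage of baseline."""
    return 100.0 * count / baseline

stem_jobs = 3.5e6
us_population = 328e6
full_time_workers = 156e6

print(f"Share of total population:  {share_of(stem_jobs, us_population):.1f}%")
print(f"Share of full-time workers: {share_of(stem_jobs, full_time_workers):.1f}%")
```

Running this prints roughly 1.1% and 2.2%, matching the “slightly more than one percent” and “slightly more than two percent” figures above.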

(C) 2019 by John F. McGowan, Ph.D.

About Me

John F. McGowan, Ph.D. solves problems using mathematics and mathematical software, including developing gesture recognition for touch devices, video compression, and speech recognition technologies. He has extensive experience developing software in C, C++, MATLAB, Python, Visual Basic and many other programming languages. He has been a Visiting Scholar at HP Labs developing computer vision algorithms and software for mobile devices. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech).

 

 


via How to Evaluate a Counting Statistic — Mathematical Software

Factors Affecting The Intensity Of Poisoning via Forensic’s blog

By @forensicfield

Introduction

The outcome of poisoning depends on many factors.

A number of factors can affect the intensity of poisoning; they are explained further below:

  • Dose
  • Time of intake
  • Way of taking
  • Environmental factors, etc.

Dose

The amount of the poison determines its effect on the body: the smaller the dose, the lighter the effect; the larger the dose, the more severe the effect.

Resistance

With continuous use of some drugs, such as opiates, tobacco, and alcohol, a person develops a resistance towards them.

Incompatible Combination of Drugs

Ingestion of some incompatible combinations of medicines may be fatal, such as Prozac and tramadol, thyroid medication and proton pump inhibitors, or nonsteroidal anti-inflammatory drugs and antihypertensives.

Hypersensitivity

Some people show an abnormal response (idiosyncrasy) to a drug like morphine, quinine, or aspirin due to inherent personal hypersensitivity.

Allergy

Some people are allergic (acquired hypersensitivity) to certain drugs like penicillin, sulpha drugs, etc.

Incompatible Combinations

Ingestion of certain medications together, such as anti-ulcer gels with aspirin, may lead to fatal effects.

Tolerance

People develop a marked tolerance to opium, alcohol, strychnine, tobacco, arsenic, and some other narcotic drugs through repeated and continued use.

Synergism

Some drugs that are tolerable on their own can be lethal when taken together, such as alcohol and benzodiazepines, heroin and cocaine, benzodiazepines and opioids, or alcohol and opioids.

Slow Poisons

Continuous ingestion of small amounts of poisons like arsenic, strychnine, and lead allows them to accumulate in the body and may cause death.

Conditions of The Body

  • Conditions of the body, i.e. age, health, etc., also affect the action of a poison.
  • Generally, elderly people, weaker people, and children are more severely affected by a low dose of poison than young and healthy people.

Cumulative Action

The repeated small doses of cumulative poisons like arsenic, lead, mercury, strychnine, digitalis etc. may cause death or chronic poisoning by cumulative action.

Shock

Sometimes a large dose of a poison acts differently from a small dose; for example, a large dose of arsenic may cause death by shock while a small dose results in diarrhoea.

Forms of Poison

  • Gases/Vapours Poisons
  • Liquid Poisons
  • Powder Poisons
  • Chemical Combination
  • Mechanical Combination

Gases / Vapours Poisons

These types of poisons are absorbed immediately and act quickly.

Liquid Poisons

These act faster than solids.

Powder Poisons

Finely powdered poisons act faster than coarsely powdered ones.

Chemical Combination

The action of some substances is altered when they are combined chemically, such as acids and alkalis, or strychnine and tannic acid.

Mechanical Combination

The action of a poison is altered when it is combined mechanically with inert substances; for example, an alkaloid taken with charcoal does not act.

Methods Of Administration

A poison acts most rapidly when inhaled in gaseous form or when injected intravenously.

It acts somewhat more slowly when injected intramuscularly or subcutaneously.

A poison acts slowest when swallowed or applied to the skin.



via Factors Affecting The Intensity Of Poisoning — Forensic’s blog

Variables to consider when Determining Post Mortem Blood Alcohol Levels via True Crime Rocket Science / #tcrs

Immediately following the release of the autopsy reports on November 19th, I contacted Thomas Mollett, a forensic investigator, fellow true crime author and friend, and asked him his opinion on Shan’ann’s blood alcohol levels. They were found to be three times the legal limit for driving. How likely was it, I asked, that these apparently high levels were from “normal” decomposition?

SUPPLEMENTAL

Autopsy reports show Shanann Watts, daughters were asphyxiated – TimesCall


Pathology is an extremely complex science, and many factors play into the biological processes that occur after death.


The three basic pillars one uses to calculate whether the BAC is “normal” or not are related to:

  1. the time the body is exposed to the elements [here time of death is a factor, unknown in this case, but with a relatively short window either way]
  2. the ambient conditions of the body [temperature, humidity etc.]
  3. circumstantial evidence is also a vital tool to gauge alcohol content, including eye witnesses, Shan’ann’s drinking habits, and her appearance in the Ring camera footage when she arrived home [described but not released thus far]

During our first communication I miscommunicated to Mollett that Shan’ann’s corpse was recovered after only 48 hours, which I guessed wasn’t enough time to reflect the high alcohol levels found. This was an initial error on my part; it took closer to 70 hours for Shan’ann’s corpse to be discovered and exhumed.

Based on this initial miscommunication, Mollett also believed the BAC level was likely higher than a natural rate [which as I say, was also what I suspected].

I asked Mollett to investigate the BAC levels and I’m grateful to him for doing so in detail. Obviously part of his thorough investigation corrected the original 48 hour error.

Below is Mollett’s unabridged report on the BAC levels.

[Screenshots of Mollett’s full report on the BAC levels]

9 COMMENTS

Helen: I think Chris tied Shanann to the bed after she fell asleep, put a pillow case over her mouth to prevent her from screaming, made sure she watched through the monitor how he smothered Bella and Celeste, and then came back to the bedroom to strangle her.

BAMS13 (to Helen): You’re going to get in trouble from Nick now… lol.

nickvdl (to BAMS13): Bams, can I let you take it from here? I can’t always be the one cracking the whip 😉

BAMS13 (to nickvdl): Haha! Always happy to try and exert my low ranking power anytime. You’d think those virtual whip cracks can be heard loud and clear though. 😉

Syzia: Helen took it to the next level here.

Marie (to Syzia): Oh yes Syzia, I agree.

Karen: Well, that report certainly cleared up so many things. Now we know. The body certainly is a fascinating animal in death as much as life. I do know that when Officer Coonrod was in the kitchen he didn’t have a peek in the sink to see if there were breakfast dishes in there to find out if the kids had eaten, so we couldn’t see if there was a wine glass. Nor did I see any at all throughout his whole walk through the house. Thorough report.

Sylvester: “Important moments at Watts’ well site” is really stunning. I hope everyone can blow it up on a computer monitor rather than a cell phone. You really get the sense of vastness of that site – miles and miles in every direction of land dotted with wildflowers. The tank battery site even seems dwarfed in proportion to the land. As the drone makes its lazy pass from the air you then see the sheet, hugging the scrub. Look a little closer and you see the black garbage bags. It was rather stupid of him to discard the sheet on top of the land after it had fulfilled its purpose to conceal and drag. Same with the garbage bags. Maybe he thought in the vastness of the land those items, like his family, would simply vanish.

Karen (to Sylvester): Sylvester, do you know if they sent the drone out before Chris said anything or after? For the life of me, I can’t remember. Thank you kindly.



via Thomas Mollett’s Forensic Report on Shan’ann Watts’ Post Mortem Blood Alcohol Level — True Crime Rocket Science / #tcrs

Global Standardization of Forensics will Decrease the Bias Factor of Evidence Collection Procedures and Court Rulings

Interviews – 2018

Angus Marshall, Digital Forensic Scientist

via Angus Marshall
Angus, tell us a bit about yourself. What is your role, and how long have you been working in digital forensics?

Where to begin? I have a lot of different roles these days, but by day I’m a Lecturer in Cybersecurity – currently at the University of York, and also run my own digital forensic consultancy business. I drifted into the forensic world almost by accident back in 2001 when a server I managed was hacked. I presented a paper on the investigation of that incident at a forensic science conference and a few weeks later found myself asked to help investigate a missing person case that turned out to be a murder. There’s been a steady stream of casework ever since.

I’m registered as an expert adviser and most of my recent casework seems to deal with difficult to explain or analyse material. Alongside that, I’ve spent a lot of time (some might say too much) working on standards during my time on the Forensic Science Regulator’s working group on digital evidence and as a member of BSI’s IST/033 information security group and the UK’s digital evidence rep. on ISO/IEC JTC1 SC27 WG4, where I led the work to develop ISO/IEC 27041 and 27042, and contributed to the other investigative and eDiscovery standards.

You’ve recently published some research into verification and validation in digital forensics. What was the goal of the study?

It grew out of a proposition in ISO/IEC 27041 that tool verification (i.e. evidence that a tool conforms to its specification) can be used to support method validation (i.e. showing that a particular method can be made to work in a lab). The idea of the 27041 proposal is that if tool vendors can provide evidence from their own development processes and testing, the tool users shouldn’t need to repeat that. We wanted to explore the reality of that by looking at accredited lab processes and real tools. In practice, we found that it currently won’t work because the requirement definitions for the methods don’t seem to exist and the tool vendors either can’t or won’t disclose data about their internal quality assurance.

The effect of it is that it looks like there may be a gap in the accreditation process. Rather than having a selection of methods that are known to work correctly (as we see in calibration houses, metallurgical and chemical labs etc. – where the ISO 17025 standard originated) which can be chosen to meet a specific customer requirement, we have methods which satisfy much fuzzier customer requirements which are almost always non-technical in nature because the customers are CJS practitioners who simply don’t express things in a technical way.

We’re not saying that anyone is necessarily doing anything wrong, by the way, just that we think they’ll struggle to provide evidence that they’re doing the right things in the right way.

Where do we stand with standardisation in the UK at the moment?

Standardization is a tricky word. It can mean that we all do things the same way, but I think you’re asking about progress towards compliance with the regulations. In that respect, it looks like we’re on the way. It’s slower than the regulator would like. However, our research at York suggests that even the accreditations awarded so far may not be quite as good as they could be. They probably satisfy the letter of the regulator’s documents, but not the spirit of the underlying standard. The technical correctness evidence is missing.

ISO 17025 has faced a lot of controversy since it has been rolled out as the standard for digital forensics in the UK. Could you briefly outline the main reasons why?

Most of the controversy is around cost and complexity. With accreditation costing upwards of £10k for even a small lab, it makes big holes in budgets. For the private sector, where turnover for a small lab can be under £100k per annum, that’s a huge issue. The cost has to be passed on. Then there’s the time and disruption involved in producing the necessary documents, and then maintaining them and providing evidence that they’re being followed for each and every examination.

A lot of that criticism is justified, but adoption of any standard also creates an opportunity to take a step back and review what’s going on in the lab. It’s a chance to find a better way to do things and improve confidence in what you’re doing.

In your opinion, what is the biggest stumbling block either for ISO 17025 specifically, or for standardizing digital forensics in general?

Two things – as our research suggests, the lack of requirements makes the whole verification and validation process harder, and there’s the confusion about exactly what validation means. In ISO terms, it’s proof that you can make a process work for you and your customers. People still seem to think it’s about proving that tools are correct. Even a broken tool can be used in a valid process, if the process accounts for the errors the tool makes.
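As a loose illustration of that last point — a tool with known errors sitting inside a valid method — here is a minimal Python sketch of method validation in the ISO sense: run the tool as a black box against a reference set with known ground truth, and accept the method only if its measured error stays within the error budget the method documents. Everything in it (run_tool, the reference cases, the 5% budget) is hypothetical and not taken from ISO/IEC 27041 or Marshall’s paper.

```python
# Minimal sketch: validation shows a process works for you, not that the tool is
# flawless. A tool with known errors can still sit inside a valid method if the
# method accounts for those errors. All names and numbers here are hypothetical.

def run_tool(case):
    """Stand-in for the forensic tool under test; returns the artefacts it reports."""
    return case["tool_output"]

reference_set = [
    {"name": "test-image-01", "tool_output": {"a.doc", "b.jpg"}, "ground_truth": {"a.doc", "b.jpg"}},
    {"name": "test-image-02", "tool_output": {"c.pdf"},          "ground_truth": {"c.pdf", "d.pdf"}},
]

ERROR_BUDGET = 0.05  # maximum miss rate the documented method tolerates

missed = sum(len(case["ground_truth"] - run_tool(case)) for case in reference_set)
total = sum(len(case["ground_truth"]) for case in reference_set)
miss_rate = missed / total

print(f"Measured miss rate: {miss_rate:.2%}")
if miss_rate <= ERROR_BUDGET:
    print("Method validated: errors are within the documented budget.")
else:
    print("Method not validated: revise the method or document compensating checks.")
```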

I guess I’ve had the benefit of seeing how standards are produced and learning how to use the ISO online browsing platform to find the definitions that apply. Standards writers are a lot like Humpty Dumpty. When we use a word it means exactly what we choose it to mean.

Is there a way to properly standardise tools and methods in digital forensics?

It’s not just a UK problem – it’s global. There’s an opportunity for the industry to review the situation, now, and create its own set of standard requirements for methods. If these are used correctly, we can tell the tool makers what we need from them and enable proper objective testing to show that the tools are doing what we need them to. They’ll also allow us to devise proper tests for methods to show that they really are valid, and to learn where the boundaries of those methods are.

Your study also looked at some existing projects in the area: can you tell us about some of these? Do any of them present a potential solution?

NIST and SWGDE both have projects in this space, but specifically looking at tool testing. The guidance and methods look sound, but they have some limitations. Firstly, because they’re only testing tools, they don’t address some of the wider non-technical requirements that we need to satisfy in methods (things like legal considerations, specific local operational constraints etc.).

Secondly, the NIST project in particular lacks a bit of transparency about how they’re establishing requirements and choosing which functions to test. If the industry worked together we could provide some more guidance to help them deal with the most common or highest priority functions.

Both projects, however, could serve as a good foundation for further work and I’d love to see them participating in a community project around requirements definition, test development and sharing of validation information.

Is there anything else you’d like to share about the results?

We need to get away from thinking solely in terms of customer requirements and method scope. These concepts work in other disciplines because there’s a solid base of fundamental science behind the methods. Digital forensics relies on reverse-engineering and trying to understand the mind of a developer in order to work out how to extract and interpret data. That means we have a potentially higher burden of proof for any method we develop. We also need to remember that we deal with a rate of change caused by human ingenuity and marketing, instead of evolution.

Things move pretty fast in DF; if we don’t stop and look at what we’re doing once in a while, we’ll miss something important.

Read Angus Marshall’s paper on requirements in digital forensics method definition here.

The hottest topic in digital forensics at the moment, standardisation is on the tip of everyone’s tongues. Following various think pieces on the subject and a plethora of meetings at conferences, I spoke to Angus Marshall about his latest paper and what he thinks the future holds for this area of the industry. You can […]

via Angus Marshall talks about standardisation — scar

Autopsy of a Dill Pickle-Introductory Lab for Anatomy or Forensics!

A Pickle Autopsy? YES!

If you teach Anatomy & Physiology, you know the struggle of the first unit…. it’s HUGE!! … and jam-packed with things that are absolutely essential for students to know in order to be successful in the course.  I usually struggle with finding activities to review the body cavities and directional terms.  This year, someone suggested using the pickle autopsy and I’m so glad I did!

The lab I used was published in The Forensic Teacher and would be appropriate for either discipline (I teach both this year).  Here is the link to the lab I used http://www.theforensicteacher.com/Labs_files/picklelabsheets.pdf  A clever fellow teacher friend came up with the storyline that there was a gang war between the Claussens and the Vlasics in the fridge that resulted in no survivors. I loved it so I also used that storyline to frame my lab.

Set Up– The Basics

Now that I had my lab picked out and my story to tell, I had to figure out the logistics of how to get everything set up.

First, the pickles….


I found the big jars of dills at Walmart for $5.97 each. The smaller pickles I got because I wanted some of my “victims” to be pregnant (or they could also be small children pickles lol).  I had a hard time estimating how many pickles were in the big jars, but these 2 had a total of 33 pickles– more than enough for my classes. The picture below shows them separated by “male” and “female” victims (my “male” pickles are the ones with the stems lol).

Here are all the supplies I used for the lab: [photo of the supplies]

How to make them look like victims….

I glued wiggly eyes onto thumbtacks for their eyes (so I can reuse them).

I also used pellets that go in pellet guns for bullet wounds (I smashed them a little with the hammer first and dipped them into gel food coloring before I stuck them in the “victims”).

I made their heads from an olive stuck on a toothpick – some I even squished so their “brains” fell out a little lol.  I also gave all of them a “spine” (a toothpick on the dorsal side just under the skin).  I also broke several of the toothpicks so this “injury” might be discovered and included in the story of their “victim”.

All the “victims” had a bead implanted in the vicinity of their heart.  If the bead was red, they had a normal heart.  If it was black or dark purple, it represented a heart attack.  I found that if you make a slit on the side of the pickle (choose a wrinkle), it will often be completely unnoticeable and students will wonder how in the world you got those beads in there!  I also slipped in a small green bead in the neck region of a few of the “victims” and told my students I heard that some of the gang members involved in the war were caught raiding the grapes from the fridge and several choked on them when their leader caught them.

I also told them that the gang members were not healthy and many had various diseases and disorders because they didn’t take care of themselves.  Many had white beads implanted in various areas.  These beads represented a tumor in the particular area.  Knotted pieces of rubber bands in the abdominal region represented parasites.  Many had broken toothpick “limbs”.  I also had several who were pregnant.

This is the sheet of “Helpful Hints” I gave my students with their lab: [photo of the hints sheet]

A Snapshot of My “Victims”

I separated my “victims” into 4 general types based on their cause of death:

  1. Trauma or internal bleeding (Stabbed or gunshot, injected with red food coloring)
  2. Poisoning/ Drug Overdose (I soaked them in baking soda but didn’t get a very good result)
  3. Heart Attack (black bead instead of red bead in chest)
  4. Drowning (blue food coloring injected in chest area)

 

My “victims” had multiple things that could have resulted in their deaths, but having 4 major things just helped me keep it organized. I also put them in separate dishes while I plotted their demise 🙂

I also kept them separate in labeled gallon ziplock bags to transport them to school.

The Lab Set Up

I set my lab up as a mini crime scene.  I had some fake vampire blood from my forensics class that I also added to help set the scene.  I also added in some extra plastic swords and pellets around the “victims”.  (I let my students pick their own “victim” from the scene).

Group Jobs

Students were in a lab group of 3 per “victim”.  In my lab, every student in the group has a specific job and job description.  It just helps my lab groups run more smoothly and tends to decrease the possibility that one student does the lion’s share of work.  These are the jobs I gave my groups for this lab: [photo of the group job list]

My Take on the Pickle Autopsy Lab

Would I use it again? Absolutely!  My students became very proficient at actually using the directional terminology and identifying the body cavities that we talked about in class.  I heard many meaningful conversations within the groups… “That’s a break in his arm that’s intermediate between the shoulder and the elbow” “I think this sword went through the abdominal cavity and not the thoracic cavity”…. This was so much better than hearing them try to memorize a diagram or a chart of the directional terms!

They loved getting into our “gang warfare” story.  I had them fill out a Coroner’s Report detailing the abnormalities they found both in, and on their “victim”, as well as the location of these abnormalities.  Then, they had to determine the cause of death for their victim, supporting their opinion with specific details from their autopsy.  At all times within their report, they had to incorporate correct anatomical terminology.  Finally, they had to create a narrative of what happened to their “victim” based on the findings from their autopsy.  Several groups shared with the class.  It was lots of fun!

 

 


via Autopsy of a Dill Pickle- A Great Introductory Lab for Anatomy or Forensics! — Edgy Instruction

Forensic psychology with an emphasis on prison-based rehabilitation is the focus of the Corrective Services 7th Annual Psychology Conference on 29-30 August. Keynote speaker Professor Jim Ogloff AM from @swinburne will discuss ways to reduce violence & serious sexual offending. pic.twitter.com/UmYrol3Yrj

via Site Title

Forensic Failures Described via Law in Focus @ CSIDDS |

Faulty Forensics: Explained

By Jessica Brand


In our Explainer series, Fair Punishment Project lawyers help unpack some of the most complicated issues in the criminal justice system. We break down the problems behind the headlines — like bail, civil asset forfeiture, or the Brady doctrine — so that everyone can understand them. Wherever possible, we try to utilize the stories of those affected by the criminal justice system to show how these laws and principles should work, and how they often fail. We will update our Explainers quarterly to keep them current.

In 1992, three homemade bombs exploded in seemingly random locations around Colorado. When police later learned that sometime after the bombs went off, Jimmy Genrich had requested a copy of The Anarchist Cookbook from a bookstore, he became their top suspect. In a search of his house, they found no gunpowder or bomb-making materials, just some common household tools — pliers and wire cutters. They then sent those tools to their lab to see if they made markings or toolmarks similar to those found on the bombs.

At trial, forensic examiner John O’Neil matched the tools to all three bombs and, incredibly, to an earlier bomb from 1989 that analysts believed the same person had made — a bomb Genrich could not have made because he had an ironclad alibi. No research existed showing that tools such as wire cutters or pliers could leave unique markings, nor did studies show that examiners such as O’Neil could accurately match markings left by a known tool to those found in crime scene evidence. And yet O’Neil told the jury it was no problem, and that the marks “matched … to the exclusion of any other tool” in the world. Based on little other evidence, the jury convicted Genrich.

Twenty-five years later, the Innocence Project is challenging Genrich’s conviction and the scientific basis of this type of toolmark testimony, calling it “indefensible.” [Meehan Crist and Tim Requarth / The Nation]

There are literally hundreds of cases like this, where faulty forensic testimony has led to a wrongful conviction. And yet as scientists have questioned the reliability and validity of “pattern-matching” evidence — such as fingerprints, bite marks, and hair — prosecutors are digging in their heels and continuing to rely on it. In this explainer, we explore the state of pattern-matching evidence in criminal trials.

What is pattern-matching evidence?

In a pattern-matching, or “feature-comparison,” field of study, an examiner evaluates characteristics visible on evidence found at the crime scene — e.g., a fingerprint, a marking on a fired bullet (“toolmark”), handwriting on a note — and compares those features to a sample collected from a suspect. If the characteristics, or patterns, look the same, the examiner declares a match. [Jennifer Friedman & Jessica Brand / Santa Clara Law Review]

Typical pattern-matching fields include the analysis of latent fingerprints, microscopic hair, shoe prints and footwear, bite marks, firearms, and handwriting. [“A Path Forward” / National Academy of Sciences] Examiners in almost every pattern-matching field follow a method of analysis called “ACE-V” (Analyze a sample, Compare, Evaluate — Verify). [Jamie Walvisch / Phys.org]

Here are two common types of pattern-matching evidence:

Fingerprints: Fingerprint analysts try to match a print found at the crime scene (a “latent” print) to a suspect’s print. They look at features on the latent print — the way ridges start, stop, and flow, for example — and note those they believe are “significant.” Analysts then compare those features to ones identified on the suspect print and determine whether there is sufficient similarity between the two. (Notably, some analysts will deviate from this method and look at the latent print alongside the suspect’s print before deciding which characteristics are important.) [President’s Council of Advisors on Science and Technology]

Firearms: Firearm examiners try to determine if shell casings or bullets found at a crime scene were fired from a particular gun. They examine the collected bullets through a microscope, mark down characteristics, and compare these to characteristics on bullets test-fired from a known gun. If there is sufficient similarity, they declare a match. [“A Path Forward” / National Academy of Sciences]

What’s wrong with pattern-matching evidence?

There are a number of reasons pattern-matching evidence is deeply flawed, experts have found. Here are just a few:

These conclusions are based on widely held, but unproven, assumptions.

The idea that handwriting, fingerprints, shoeprints, hair, or even markings left by a particular gun, are unique is fundamental to forensic science. The finding of a conclusive match, between two fingerprints for example, is known as “individualization.” [Kelly Servick / Science Mag]

However, despite this common assumption, examiners actually have no credible evidence or proof that hair, bullet markings, or things like partial fingerprints are unique — in any of these pattern matching fields.

In February 2018, The Nation conducted a comprehensive study of forensic pattern-matching analysis (referenced earlier in this explainer, in relation to Jimmy Genrich). The study revealed “a startling lack of scientific support for forensic pattern-matching techniques.” Disturbingly, the authors also described “a legal system that failed to separate nonsense from science in capital cases; and consensus among prosecutors all the way up to the attorney general that scientifically dubious forensic techniques should not only be protected, but expanded.” [Meehan Crist and Tim Requarth / The Nation]

Similarly, no studies show that one person’s bite mark is unique and therefore different from everyone else’s bite mark in the world. [Radley Balko / Washington Post] No studies show that all markings left on bullets by guns are unique. [Stephen Cooper / HuffPost] And no studies show that one person’s fingerprints — unless perhaps a completely perfect, fully rolled print — are completely different than everyone else’s fingerprints. It’s just assumed. [Sarah Knapton / The Telegraph]

Examiners often don’t actually know whether certain features they rely upon to declare a “match” are unique or even rare.

On any given Air Jordan sneaker, there are a certain number of shared characteristics: a swoosh mark, a tread put into the soles. That may also be true of handwriting. Many of us were taught to write cursive by tracing over letters, after all, so it stands to reason that some of us may write in similar ways. But examiners do not know how rare certain features are, like a high arch in a cursive “r” or crossing one’s sevens. They therefore can’t tell you how important, or discriminating, it is when they see shared characteristics between handwriting samples. The same may be true of characteristics on fingerprints, marks left by teeth, and the like. [Jonathan Jones / Frontline]

There are no objective standards to guide how examiners reach their conclusions.

How many characteristics must be shared before an examiner can definitively declare “a match”? It is entirely up to the discretion of the individual examiner, based on what the examiner usually chalks up to “training and experience.” Think Goldilocks. Once she determines the number that is “just right,” she can pick. “In some ways, the process is no more complicated than a child’s picture-matching game,” wrote the authors of one recent article. [Liliana Segura & Jordan Smith / The Intercept] This is true for every pattern-matching field — it’s almost entirely subjective. [“A Path Forward” / National Academy of Sciences]

Unsurprisingly, this can lead to inconsistent and incompatible conclusions.

In Davenport, Iowa, police searching a murder crime scene found a fingerprint on a blood-soaked cigarette box. That print formed the evidence against 29-year-old Chad Enderle. At trial, prosecutors pointed to seven points of similarity between the crime scene print and Enderle’s print to declare a match. But was that enough? Several experts hired by the newspaper to cover the case said they could not draw any conclusions about whether it matched Enderle. But the defense lawyer didn’t call an expert and the jury convicted Enderle. [Susan Du, Stephanie Haines, Gideon Resnick & Tori Simkovic / The Quad-City Times]

Why faulty forensics persist

Despite countless errors like these, experts continue to use these flawed methods and prosecutors still rely on their results. Here’s why:

Experts are often overconfident in their abilities to declare a match.

These fields have not established an “error rate” — an estimate of how often examiners erroneously declare a “match,” or how often they find something inconclusive or a non-match when the items are from the same source. Even if your hair or fingerprints are “unique,” if experts can’t accurately declare a match, that matters. [Brandon L. Garrett / The Baffler]

Analysts nonetheless give very confident-sounding conclusions — and juries often believe them wholesale. “To a reasonable degree of scientific certainty” — that’s what analysts usually say when they declare a match, and it sounds good. But it actually has no real meaning. As John Oliver explained on his HBO show: “It’s one of those terms like basic or trill that has no commonly understood definition.” [John Oliver / Last Week Tonight] Yet, in trial after trial, jurors find these questionable conclusions extremely persuasive. [Radley Balko / Washington Post]

Why did jurors wrongfully convict Santae Tribble of murdering a Washington, D.C., taxi driver, despite his rock-solid alibi supported by witness testimony? “The main evidence was the hair in the stocking cap,” a juror told reporters. “That’s what the jury based everything on.” [Henry Gass / Christian Science Monitor]

But it was someone else’s hair. Twenty-eight years later, after Tribble had served his entire sentence, DNA evidence excluded him as the source of the hair. Incredibly, DNA analysis established that one of the crime scene hairs, initially identified by an examiner as a human hair, belonged to a dog. [Spencer S. Hsu / Washington Post]

Labs are not independent — and that can lead to biased decision-making.

Crime labs are often embedded in police departments, with the head of the lab reporting to the head of the police department. [“A Path Forward” / National Academy of Sciences] In some places, prosecutors write lab workers’ performance reviews. [Radley Balko / HuffPost] This gives lab workers an incentive to produce results favorable to the government. Research has also shown that lab technicians can be influenced by details of the case and what they expect to find, a phenomenon known as “cognitive bias.” [Sue Russell / Pacific Standard]

Lab workers may also have a financial motive. According to a 2013 study, many crime labs across the country received money for each conviction they helped obtain. At the time, statutes in Florida and North Carolina provided remuneration only “upon conviction”; Alabama, Arizona, California, Missouri, Wisconsin, Tennessee, New Mexico, Kentucky, New Jersey, and Virginia had similar fee-based systems. [Jordan Michael Smith / Business Insider]

In North Carolina, a state-run crime lab produced a training manual that instructed analysts to consider defendants and their attorneys as enemies and warned of “defense whores” — experts hired by defense attorneys. [Radley Balko / Reason]

Courts are complicit

Despite its flaws, judges regularly allow prosecutors to admit forensic evidence. In place of hearings, many take “judicial notice” of the field’s reliability, accepting as fact that the field is accurate without requiring the government to prove it. As Radley Balko from the Washington Post writes: “Judges continue to allow practitioners of these other fields to testify even after the scientific community has discredited them, and even after DNA testing has exonerated people who were convicted, because practitioners from those fields told jurors that the defendant and only the defendant could have committed the crime.” [Radley Balko / Washington Post]

In Blair County, Pennsylvania, in 2017, Judge Jolene G. Kopriva ruled that prosecutors could present bite mark testimony in a murder trial. Kopriva didn’t even hold an evidentiary hearing to examine whether it’s a reliable science, notwithstanding the mounting criticism of the field. Why? Because courts have always admitted it. [Kay Stephens / Altoona Mirror]

Getting it wrong

Not surprisingly, flawed evidence leads to flawed outcomes. According to the Innocence Project, faulty forensic testimony has contributed to 46 percent of all wrongful convictions in cases with subsequent DNA exonerations. [Innocence Project] Similarly, UVA Law Professor Brandon Garrett examined legal documents and trial transcripts for the first 250 DNA exonerees, and discovered that more than half had cases tainted by “invalid, unreliable, concealed, or erroneous forensic evidence.” [Beth Schwartzapfel / Newsweek]

Hair analysis

In 2015, the FBI admitted that its own examiners presented flawed microscopic hair comparison testimony in over 95 percent of cases over a two-decade span. Thirty-three people had received the death penalty in those cases, and nine were executed. [Pema Levy / Mother Jones] Kirk Odom, for example, was wrongfully imprisoned for 22 years because of hair evidence. Convicted of a 1981 rape and robbery, he served his entire term in prison before DNA evidence exonerated him in 2012. [Spencer S. Hsu / Washington Post]

In 1985, in Springfield, Massachusetts, testimony from a hair matching “expert” put George Perrot in prison — where he stayed for 30 years — for a rape he did not commit. The 78-year-old victim said Perrot was not the assailant, because, unlike the rapist, he had a beard. Nonetheless, the prosecution moved forward on the basis of a single hair found at the scene that the examiner claimed could only match Perrot. Three decades later, a court reversed the conviction after finding no scientific basis for a claim that a specific person is the only possible source of a hair. Prosecutors have dropped the charges. [Danny McDonald / Boston Globe]

In 1982, police in Nampa, Idaho, charged Charles Fain with the rape and murder of a 9-year-old girl. The government claimed Fain’s hair matched hair discovered at the crime scene. A jury convicted him and sentenced him to death. DNA testing later exonerated him, and, in 2001, after he’d spent two decades in prison, a judge overturned his conviction. [Raymond Bonner / New York Times]

Bite mark analysis

In 1999, 26 members of the American Board of Forensic Odontology participated in an informal proficiency test regarding their work on bite marks. They were given seven sets of dental molds and asked to match them to four bite marks from real cases. They reached erroneous results 63 percent of the time. [60 Minutes] One bite mark study has shown that forensic dentists can’t even determine if a bite mark is caused by human teeth. [Pema Levy / Mother Jones]

That didn’t keep bite mark “expert” Michael West from testifying in trial after trial. In 1994, West testified that the bite mark pattern found on an 84-year-old victim’s body matched Eddie Lee Howard’s teeth. Based largely on West’s testimony, the jury convicted Howard and sentenced him to death. Experts have since called bite mark testimony “scientifically unreliable.” And sure enough, 14 years later, DNA testing on the knife believed to be the murder weapon excluded Howard as a contributor. Yet the state continues to argue that Howard’s conviction should be upheld on the basis of West’s testimony. [Radley Balko / Washington Post]

West, who in 1994 was suspended from the American Board of Forensic Odontology and basically forced to resign in 2006, is at least partially responsible for several other wrongful convictions as well. [Radley Balko / Washington Post]

West himself has even discredited his own testimony, now stating that he “no longer believe[s] in bite mark analysis. I don’t think it should be used in court.” [Innocence Project]

Fingerprint analysis

The FBI has found that fingerprint examiners could have an error rate, or false match call, as high as 1 in 306 cases, with another study indicating examiners get it wrong as often as 1 in every 18 cases. [Jordan Smith / The Intercept] A third study of 169 fingerprint examiners found a 7.5 percent false negative rate (where examiners erroneously found prints came from two different people), and a 0.1 percent false positive rate. [Kelly Servick / Science Mag]

In 2004, police accused American attorney Brandon Mayfield of the notorious Madrid train bombing after experts claimed his fingerprint matched one found on a bag of detonators. Eventually, four experts agreed with this finding. Police arrested him and detained him for two weeks until the police realized their mistake and were forced to release him. [Steve Pokin / Springfield News-Leader]

In Boston, Stephan Cowans was convicted, in part on fingerprint evidence, in the 1997 shooting of a police officer. But seven years later, DNA evidence exonerated him and an examiner stated that the match was faulty. [Innocence Project]

A 2012 review of the St. Paul, Minnesota, crime lab found that over 40 percent of fingerprint cases had “seriously deficient work.” And “[d]ue to the complete lack of annotation of actions taken during the original examination process, it is difficult to determine the examination processes, including what work was attempted or accomplished.” [Madeleine Baran / MPR News]

Firearm analysis

According to one study, firearm examiners may have a false positive rate as high as 2.2 percent, meaning analysts may erroneously declare a match as frequently as 1 in 46 times. This is a far cry from the “near perfect” accuracy that examiners often claim. [President’s Council of Advisors on Science and Technology]
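As a quick sanity check on figures like these, a few lines of Python (a hedged sketch; the 10,000-comparison caseload is purely an illustrative assumption, not a number from any of the studies cited) convert the quoted “1 in N” error rates into percentages and into an expected number of false matches:

```python
# Convert the "1 in N" false positive rates cited above into percentages and
# into expected false matches for an assumed caseload. The 10,000-comparison
# caseload is purely illustrative.

rates = {
    "firearms (about 1 in 46)":          1 / 46,
    "fingerprints (1 in 306, FBI)":      1 / 306,
    "fingerprints (1 in 18, 2nd study)": 1 / 18,
}

caseload = 10_000  # hypothetical number of comparisons

for label, rate in rates.items():
    print(f"{label}: {rate:.2%} -> about {rate * caseload:.0f} "
          f"false matches per {caseload:,} comparisons")
```

Even a seemingly small false positive rate can translate into tens or hundreds of erroneous “matches” across a large caseload, which is why the absence of established error rates matters so much.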

In 1993, a jury convicted Patrick Pursley of murder on the basis of firearms testimony. The experts declared that casings and bullets found on the scene matched a gun linked to Pursley “to the exclusion of all other firearms.” Years later, an expert for the state agreed that the examiner should never have made such a definitive statement. Instead, he should have stated that Pursley’s gun “couldn’t be eliminated.” In addition, the defense’s experts found that Pursley’s gun was not the source of the crime scene evidence. Digital imaging supported the defense. [Waiting for Justice / Northwestern Law Bluhm Legal Clinic] In 2017, a court granted Pursley a new trial. [Georgette Braun / Rockford Register Star]

Rethinking faulty forensics

Scientists from across the country are calling for the justice system to rethink its willingness to admit pattern-matching evidence.

In 2009, the National Research Council of the National Academy of Science released a groundbreaking report concluding that forensic science methods “typically lack mandatory and enforceable standards, founded on rigorous research and testing, certification requirements, and accreditation programs.” [Peter Neufeld / New York Times]

In 2016, the President’s Council of Advisors on Science and Technology (PCAST), a group of pre-eminent scientists, issued a scathing report on pattern-matching evidence. The report concluded that most of the field lacked “scientific validity” — i.e., research showing examiners could accurately and reliably do their jobs. [Jordan Smith / The Intercept] Until the field conducted better research proving its accuracy, the Council stated that forensic science had no place in the American courtroom. The study found that, regarding bite mark analysis, the error rate was so high that resources shouldn’t be wasted to attempt to show it can be used accurately. [Radley Balko / Washington Post]

After the PCAST report came out, then-Attorney General Loretta Lynch, citing no studies, stated emphatically that “when used properly, forensic science evidence helps juries identify the guilty and clear the innocent.” [Jordan Smith / The Intercept] “We appreciate [PCAST’s] contribution to the field of scientific inquiry,” Lynch said, “[but] the department will not be adopting the recommendations related to the admissibility of forensic science evidence.” [Radley Balko / Washington Post]

The National District Attorneys Association (NDAA) called the PCAST report “scientifically irresponsible.” [Jessica Pishko / The Nation] “Adopting any of their recommendations would have a devastating effect on the ability of law enforcement, prosecutors and the defense bar to fully investigate their cases, exclude innocent suspects, implicate the guilty, and achieve true justice at trial,” the association noted. [Rebecca McCray / Take Part]

The NDAA also wrote that PCAST “clearly and obviously disregard[ed] large bodies of scientific evidence … and rel[ied], at times, on unreliable and discredited research.” But when PCAST sent out a subsequent request for additional studies, neither the NDAA nor the Department of Justice identified any. [PCAST Addendum]

This problem is getting worse under the current administration. Attorney General Jeff Sessions has disbanded the National Commission on Forensic Science, formed to improve both the study and use of forensic science, and which had issued over 40 consensus recommendation documents to improve forensic science. [Suzanne Bell / Slate] He then developed a DOJ Task Force on Crime Reduction and Public Safety, tasked with “support[ing] law enforcement” and “restor[ing] public safety.” [Pema Levy / Mother Jones]

But there are also new attempts to rein in the use of disproven forensic methods. In Texas, the Forensic Science Commission has called for a ban on bite marks. “I think pretty much everybody agrees that there is no scientific basis for a statistical probability associated with a bite mark,” said Dr. Henry Kessler, chair of the subcommittee on bite mark analysis. [Meagan Flynn / Houston Press]

A bill before the Virginia General Assembly, now carried over until 2019, would provide individuals convicted on now-discredited forensic science a legal avenue to contest their convictions. The bill is modeled after similar legislation enacted in Texas and California. The Virginia Commonwealth’s Attorneys Association opposes the legislation, arguing: “It allows all sorts of opportunities to ‘game’ the system.” [Frank Green / Richmond Times-Dispatch]

Meanwhile, at least one judge has recognized the danger of forensic expert testimony. In a 2016 concurrence, Judge Catherine Easterly of the D.C. Court of Appeals lambasted expert testimony about toolmark matching: “As matters currently stand, a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic: it reflects nothing more than the individual’s foundationless faith in what he believes to be true. This is not evidence on which we can in good conscience rely, particularly in criminal cases … [T]he District of Columbia courts must bar the admission of these certainty statements, whether or not the government has a policy that prohibits their elicitation. We cannot be complicit in their use.” [Spencer S. Hsu / Washington Post]

Do you wonder how witchcraft and satanic child-eating coven stories survive in this era of lies, misdemeanors, and wrongful convictions? This article pushes back against the spiel coming out of the US White House and DOJ (and some DAs) about forensic reliability. https://injusticetoday.com/faulty-forensics-explained-fe4d41157452

via #Forensics: The usual forensic failures described : DAs don’t give much of a damn. — FORENSICS and LAW in FOCUS @ CSIDDS | News and Trends