Repost of Day 58/67: Five Month GED, Graphing via Slope-Intercept Form, and Forensic Science Continuous Learning: Project Do Better

We use rate of change every day, for transporting ourselves and our needful things, for instance, perhaps without even recognizing it, but what else …

Repost of Day 58/67: Five Month GED, Graphing via Slope-Intercept Form, and Forensic Science Continuous Learning: Project Do Better

How To Begin — Adventures in Forensics and Cybersecurity

How to try to get into forensics and cybersecurity

I have been trying to blog about my adventures for a long time. I did not know how to begin. A colleague suggested I start with the steps, or the missteps I should say, that have guided my career in digital forensics. I can tell you that about 10 years ago I did not know anything about computers, about investigating devices, or about determining whether a system is compromised by malware. Everything I have learned and experienced has been on the job and through mentors who probably did not know they were mentoring me.

Step 1: Make someone so angry that they move you from one organization to the next (I truthfully did not know what I did)

This step will probably not be the same in your adventure, but it led to the beginning of mine. When I was moved, I ended up in an organization actively doing digital forensics using the Unix/Linux operating system. Talk about being in over my head: I came in with no computing background other than using the internet and browsing YouTube at the time. So imagine getting thrown into an operating system that is not commonly used in homes or outside of computing career fields. It was a huge challenge, but I decided to jump right in.

Step 2: Jump right in

This was a big challenge: learning an entirely new system and how to make it do the things I needed to accomplish. If you have never seen how technical people use a Linux operating system, think of that blinking green cursor on a black screen that started typing and told Neo (The Matrix) to follow the white rabbit, or, for the newer generation, the brief scenes in Mr. Robot where code or text scrolls across a black screen, seemingly without any indication of what it means or does.

Within the next six months, I was a Linux beast. As you can imagine, the common thread in learning everything was repetition. The fact that the operating system was in my face day in and day out was ultimately the key to my learning. That did not mean I needed no further training over time, but by overcoming this first challenge I was introduced to terminology, functionality, and a common language for discussing my needs with technical mentors. It also helped me identify the terms I needed to Google in order to find free training resources across the web.

Step 3: Do not half-ass it

The career field of digital forensics and cybersecurity is always changing, and everyone is in a constant mode of learning and training in order to keep up with the times or get ahead of them. A good friend of mine and I have always debated this, and the three areas of this field that cover and interlace everything are:

  • Operating System
  • Computer Science
  • Networking

If a person is extremely knowledgeable in any two of these, they will probably be ahead of most individuals in these career fields. In my experience, the average spread of knowledge across these domains is that most people know one of them well and have only half-ass knowledge of one other.

The knowledge needed to be successful requires an understanding of operating systems and how they work; of networks and the communications between machines and the humans using those networks; and, because everything runs on software or written code, the ability to read a variety of coding languages and understand what the code does. The computer science, or code-reading, domain seems to be the least common in these career fields, since people who can do it can find themselves in better financial positions as developers.

My strongest domains are operating systems and computer science, and I am currently learning the networking domain. I am constantly learning and always running into new challenges, but I have always been able to overcome them and advance in my career based on these three knowledge bases.

How To Begin — Adventures in Forensics and Cybersecurity

Statistical Probabilities Associated with Different Types of Deaths

Visualizing The Odds Of Dying From Various Accidents

Fatal accidents account for a significant number of deaths in the U.S. every year. For example, nearly 43,000 Americans died in traffic accidents in 2021.

However, as Visual Capitalist's Marcus Lu explains below, without the right context it can be difficult to properly interpret these figures.

To help you understand your chances, we’ve compiled data from the National Safety Council, and visualized the lifetime odds of dying from various accidents.

Data and Methodology

The lifetime odds presented in this graphic were estimated by dividing the one-year odds of dying by the life expectancy of a person born in 2020 (77 years).

Additionally, these numbers are based on data from the U.S., and likely differ in other countries.

For comparison's sake, the odds of winning the Powerball jackpot are 1 in 292,000,000. In other words, you are roughly 4,000 times more likely to die from a lightning strike over your lifetime than to win the Powerball lottery.
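To make the methodology above concrete, here is a minimal Python sketch of the calculation: it converts one-year odds into lifetime odds by dividing by the 77-year life expectancy, and shows how the Powerball multiple is derived. The specific one-year figures used are illustrative assumptions, not numbers taken from the National Safety Council data.

```python
# Minimal sketch of the lifetime-odds calculation described above.
# The one-year odds used here are illustrative placeholders, not NSC figures.

LIFE_EXPECTANCY_YEARS = 77  # life expectancy of a person born in 2020, per the article

def lifetime_odds(one_year_odds: float) -> float:
    """Convert '1 in N' one-year odds into approximate '1 in N/77' lifetime odds."""
    return one_year_odds / LIFE_EXPECTANCY_YEARS

# Hypothetical example: one-year odds of 1 in 7,800 for a fatal motor vehicle accident
# work out to lifetime odds of roughly 1 in 101, the figure cited below.
print(f"Lifetime odds: 1 in {lifetime_odds(7_800):.0f}")

# The Powerball comparison works the same way: jackpot odds of 1 in 292,000,000 divided by
# lifetime lightning-strike odds on the order of 1 in 73,000 gives a multiple of about 4,000.
print(f"Multiple: {292_000_000 / 73_000:.0f}x")
```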

Continue reading below for further context on some of these accidents.

Motor Vehicle Accidents

Motor vehicle accidents are a leading cause of accidental deaths in the U.S., with lifetime odds of 1 in 101. This is quite a common cause of death, especially when compared to something like bee stings (1 in 57,825).

Unfortunately, a major cause of vehicle deaths is impaired driving. The CDC reports that 32 Americans are killed every day in crashes involving alcohol, which equates to one death every 45 minutes.

For further context, consider this: 30% of all traffic-related deaths in 2020 involved alcohol-impaired drivers.

Drowning

The odds of drowning in a swimming pool (1 in 5,782) are significantly higher than those of drowning in natural water (1 in 10,386). According to the CDC, there are about 4,000 fatal drownings every year, which works out to roughly 11 deaths per day.

Drowning also happens to be a leading cause of death for children. It is the leading cause for kids aged 1-4 and the second-highest cause for kids aged 5-14.

A rather surprising fact about drowning is that 80% of fatalities are male. This has been attributed to higher rates of alcohol use and risk-taking behaviors.

Accidental Firearm Discharge

Lastly, let's look at accidental firearm deaths, which have lifetime odds of 1 in 7,998. That's higher than the odds of drowning in natural water, as well as of dying in an airplane accident.

This shouldn’t come as a major surprise, since the U.S. has the highest rates of gun ownership in the world. More importantly, these odds highlight the importance of properly securing one’s firearms, as well as learning safe handling practices.

As a percentage of total gun-related deaths (45,222 in 2020), accidental shootings represent a tiny 1%. The two leading causes are suicide (54%) and homicide (43%).

Interested in learning more about death? Revisit one of our most popular posts of all time: Visualizing the History of Pandemics.

Tyler Durden
Sat, 01/28/2023 – 18:00


What to expect when you reach the border of life and death. A Medical Study. — JcgregSolutions

“We characterize the testimonies that people had and were able to identify that there is a unique recalled experience of death that is different to other experiences that people may have in the hospital or elsewhere,” Dr. Parnia said, “and that these are not hallucinations, they are not illusions, they are not delusions, they are […]

What to expect when you reach the border of life and death. A Medical Study. — JcgregSolutions

Forensic Science and its branches – AL MICRO LAW

Forensic Science is a branch of science that combines scientific investigation and law. It takes its name from the Latin word "forensis," meaning "of the forum," combined with "science," and it is used to help solve crimes and analyze evidence. This is a core branch of science involving a great deal of precision in both science and law. Scientific methods have been used to solve cases since ancient times, when trials were held publicly and carried a strong judicial connotation. The advancement of science and technology has allowed the forensic field to flourish.

Forensic science experts perform tasks such as examination of the body (the autopsy), document identification, evidence examination, searching the crime scene, collecting fingerprints, and analyzing small samples of blood, saliva, or other fluids for determination and identification purposes. In jurisprudence, forensics involves the application of knowledge and technology from several scientific fields. Biology, pharmacy, chemistry, medicine, and so on are examples, as each of them applies to today's more complex legal proceedings, in which experts from these fields help to prove offenses. Forensic science is the application of medical and paramedical expertise to assist the administration of justice in solving legal matters in the court of law. Forensic findings can be used in a court of law as evidence and thus can be useful in resolving a legal matter or dispute.

Forensic Science has various branches, such as forensic biology, forensic physics, computational forensics, digital forensics, forensic accounting, forensic anthropology, forensic archaeology, forensic astronomy, forensic ballistics, forensic botany, forensic chemistry, forensic dactyloscopy, forensic document examination, forensic DNA analysis, forensic entomology, forensic geology, forensic linguistics, forensic meteorology, forensic odontology, forensic pathology, forensic podiatry, forensic toxicology, forensic psychology, forensic economics, criminology, and wildlife forensics.

  • Forensic biology – Forensic biology is the use of biological scientific principles and processes, generally in a legal setting. Forensic biologists examine plant, cellular, and tissue samples, as well as physiological fluids, in the course of a legal inquiry.
  • Forensic physics – Forensic physics is the use of physics for civil or criminal law objectives. Forensic physics has typically entailed the determination of density (soil and glass investigation), the refractive index of materials, and birefringence for fibre analysis. Ballistics is a sub-discipline of forensic physics.
  • Computational forensics – Computational science is used to investigate and solve problems in several sectors of forensic research.
  • Digital forensics – It specialises in retrieving data from electronic and digital media.
  • Forensic accounting – Accounting for forensic purposes investigates and evaluates facts pertaining to accounting.
  • Forensic anthropology – Forensic anthropology is the use of anthropology and osteology to establish information about a human body in an advanced stage of decomposition.
  • Forensic archaeology – Archaeology for forensic purposes is the branch in which archaeological approaches are applied to legal investigations.
  • Forensic astronomy – Astronomy for forensic purposes is the use of celestial constellations to address legal questions; it is quite uncommon and is most commonly utilised to solve historical issues.
  • Forensic ballistics – Forensic ballistics is the examination of any evidence pertaining to weapons (bullets, bullet marks, shell casings, gunpowder residue, etc.)
  • Forensic botany – Plant leaves, seeds, pollen, and other plant life found on the crime scene, victim, or accused can give solid proof of the accused’s presence.
  • Forensic chemistry – Forensic chemistry focuses on the investigation of illegal narcotics, gunshot residue, and other chemical compounds.
  • Forensic dactyloscopy – Dactyloscopy for forensic purposes relates to the collection, preservation, and analysis of fingerprint evidence.
  • Forensic document examination – Forensic document examination investigates, researches, and determines the facts of documents in dispute in court.
  • Forensic DNA analysis – This branch of forensic science focuses on the collecting and analysis of DNA evidence for use in court.
  • Forensic entomology – It investigates insects discovered at the scene of a crime or on the body of a victim, and it is especially useful in pinpointing the time and place of the victim’s death.
  • Forensic geology – Geology for forensic purposes entails the use of geological variables such as soil and minerals to obtain evidence for a crime.
  • Forensic linguistics – It is the study of the language used in judicial procedures. Emergency calls, voice identification, ransom demands, suicide notes, and so on are all examples.
  • Forensic meteorology –  It includes using meteorological variables to ascertain details about a crime. It is most frequently applied in instances involving insurance companies and homicides.
  • Forensic odontology –  It refers to the investigation of dental evidence.
  • Forensic pathology – This branch of forensic science is concerned with the examination of a body and identifying factors such as the cause of death.
  • Forensic podiatry – Forensic podiatry refers to the investigation of footprint evidence.
  • Forensic toxicology – A forensic toxicologist investigates toxic compounds found on or in a body, such as narcotics, e-liquid, and poisons.
  • Forensic Psychology – Forensic Psychology and Forensic Psychiatry are two branches of forensic medicine. These are concerned with the legal implications of human activity.
  • Forensic economics – This is the investigation and analysis of economic damage evidence, which includes present-day estimations of lost earnings and benefits, the lost value of a firm, lost business profits, lost value of home services, replacement labour expenses, and future medical care expenditures. 
  • Criminology – In criminal investigations, this involves the use of several disciplines to answer issues about the study and comparison of biological evidence, trace evidence, impression evidence (such as fingerprints, shoeprints, and tyre tracks), restricted drugs, and guns.
  • Wildlife forensics – This involves the investigation of crime situations involving animals, such as endangered species or animals that have been unlawfully killed or poached.

When it comes to life and death situations, objective proof is critical. In the past, significant evidence in criminal prosecutions might have come from witnesses or other subjective sources, but forensic science now provides objective evidence. That is, forensic evidence, which is based on the scientific method, is considered more dependable than even eyewitness testimony. In a legal system that holds that the accused is innocent until proven guilty, forensic scientists' evidence is now routinely used by both the defence and the prosecution in many court cases. While forensic toxicologists, for example, may work most closely with law enforcement or the courts after a crime has been committed, forensic psychologists (also known as profilers) might step in even before a suspect has been identified to help prevent future crimes.

Forensic Science is an emerging branch of science that is a combination of scientific investigations and law. It is formed from two Latin words- “forensis” and “science” which help in solving a crime scene and analyzing the evidence. This is a core branch of science involving a lot of precision of science and law. Using […]

A brief about Forensic Science and its branches — AL MICRO LAW

FORENSICS IN THE COURTROOM via Manisha Nandan

IT IS A CAPITAL MISTAKE TO THEORIZE BEFORE YOU HAVE ALL EVIDENCE. IT BIASES THE JUDGMENT – Sherlock Holmes

FORENSICS IN THE COURTROOM — Manisha Nandan

[ CRIME NEVER DIES – PART 3 ]

When someone is charged with a crime, the prosecution and defence typically call in witnesses to testify about the guilt or innocence of the person who has been accused. One of the most important players in all this testimony often isn’t a person at all: it’s the forensic evidence.

This evidence is obtained through scientific methods such as ballistics, blood tests, and DNA tests, and is then used in court proceedings. Forensic evidence often helps to establish the guilt or innocence of possible suspects.

Its analysis is therefore very important, since it is used in the investigation and prosecution of both civil and criminal matters. Moreover, forensic evidence can be used to link crimes that are thought to be related to one another. For example, DNA evidence can link one offender to several different crimes or crime scenes, and this linking of crimes helps the police to narrow the range of possible suspects and to establish patterns of crime that help identify and prosecute suspects.

CASES REQUIRING FORENSIC EVIDENCE
Forensic evidence is useful in helping solve the most violent and brutal of cases, as well as completely nonviolent cases related to crimes such as fraud and hacking.

If a decomposing body is found in the woods somewhere, forensic scientists can use DNA, dental records, and other evidence to identify the person, determine the cause of death, and sometimes determine if the body contains material from another person who may have been present at the time of death.

Investigators often look for forensic evidence in cases where sexual assault is suspected. In some cases, DNA evidence can prove or disprove allegations of rape or child molestation.

Forensics are also useful in drug cases. Scientists can test unidentified substances that were found on an individual to confirm whether or not they are cocaine, heroin, marijuana, or other controlled substances. Investigators use forensic toxicology to determine whether a driver was impaired at the time they were involved in an accident.

The field of forensics isn’t only limited to evidence obtained from people’s bodies. Ballistics (otherwise known as weapons testing) can tell investigators a lot about cases where gunfire was involved. Did a bullet come from a particular gun? Where was the shooter standing? How many shots did they fire? Ballistics can help answer all of these questions. Another area of forensic evidence lies within the circuits of our phones and computers. Those who commit cyber crimes leave behind traces of their activities in databases and documents scattered throughout the digital world. Forensic computer specialists know how to sort through the information to discover the truth.

However, the question of admissibility of evidence is whether the evidence is relevant to a fact in issue in the case. Admissibility is always decided by the judge, and all relevant evidence is potentially admissible, subject to common law and statutory rules on exclusion. Relevant evidence is evidence of facts in issue and evidence of sufficient relevance to prove or disprove a fact in issue.

As per Section 45 of the Indian Evidence Act, 1872, when the Court has to form an opinion upon a point of foreign law, or of science or art, or as to the identity of handwriting or finger impressions, the opinions upon that point of persons specially skilled in such foreign law, science or art, or in questions as to identity of handwriting or finger impressions, are relevant facts. Such persons are called experts. Further, as per Section 46 of the Indian Evidence Act, 1872, facts not otherwise relevant are relevant if they support or are inconsistent with the opinions of experts, when such opinions are relevant. Though there is no specific DNA legislation enacted in India, Sections 53 and 54 of the Criminal Procedure Code, 1973 provide for DNA tests impliedly, and they are extensively used in determining complex criminal problems.

Section 53 deals with the examination of the accused by a medical practitioner at the request of a police officer if there are reasonable grounds to believe that an examination of his person will afford evidence as to the commission of the offence. Section 54 of the Criminal Procedure Code, 1973 further provides for the examination of the arrested person by a registered medical practitioner at the request of the arrested person.

The Law Commission of India, in its 37th report, stated that to facilitate effective investigation, provision has been made authorizing an examination of the arrested person by a medical practitioner if, from the nature of the alleged offence or the circumstances under which it is alleged to have been committed, there are reasonable grounds for believing that an examination of the person will afford evidence. Section 27(1) of the Prevention of Terrorism Act, 2002 says that when an investigating officer requests the court of the CJM or the court of the CMM in writing to obtain samples of handwriting, fingerprints, footprints, photographs, blood, saliva, semen, hair, or voice of any accused person, or of anyone reasonably suspected to be involved in the commission of an offence under this Act, it shall be lawful for the court of the CJM or the court of the CMM to direct that such samples be given by the accused person to the police officer, either through a medical practitioner or otherwise, as the case may be.

Section 65B of the Indian Evidence Act says that electronic records need to be certified by a person occupying a responsible official position in order to be admissible as evidence in any court proceedings.
So, even as the capabilities of forensic science have expanded and evolved over the years, the field has faced a number of significant challenges.

One main weakness is its susceptibility to cognitive bias. Today, despite remaining a powerful element within the justice system and playing a key role in establishing and reconstructing events, forensic science, much like any scientific domain, faces weaknesses and limitations.

These issues can arise throughout an investigation, from when the forensic evidence is first collected at the scene of the crime until the evidence is presented in court.

There is an utmost need for forensic science, and the need for the application of science in criminal investigation has arisen from the following factors:
1. Social Changes:
Society is undergoing drastic social changes at a very rapid pace. India has changed from a colonial subject race to a democratic republic. Sizeable industrial complexes have sprung up. Transport facilities have been revolutionized. There is a growing shift from a rural society to an urban one. These changes have made the old techniques of criminal investigation obsolete. In the British days, the police were so feared that once they had laid hands upon an individual, he would 'confess' to any crime, even one he may not have known about. That fear is vanishing now. The 'third degree' techniques used in those days do not find favour with the new generation of police officers and judges.

2. Hiding facilities:
Quick means of transport and the high density of population in cities have facilitated the commission of crimes. A criminal can hide in a corner of a city or move thousands of miles away in a few hours. He thus often escapes apprehension and prosecution.

3. Technical knowledge:
The technical knowledge of the average person has increased tremendously in recent years. Crime techniques are becoming more refined. The investigating officer therefore needs modern methods to combat the modern criminal.

4. Wide field: The criminal's field of activity is widening at a terrific rate. Formerly, criminals were usually local; now a national or international criminal is a common phenomenon. Smuggling, drug trafficking, financial frauds, and forgeries offer fertile and ever-expanding fields.

5. Better Evidence: Physical evidence evaluated by an expert is objective. If a fingerprint is found at the scene of a crime, it can belong to only one person. If this person happens to be the suspect, he must account for its presence at the scene. Likewise, if a bullet is recovered from a dead body, it can be attributed to only one firearm. If this firearm happens to belong to the accused, he must account for its involvement in the crime. Such evidence is always verifiable.

In reality, those rare few cases with good forensic evidence are the ones that make it to court.—Pat Brown

@MANISHANANDAN

[ CRIME NEVER DIES – PART 3 ] IT IS A CAPITAL MISTAKE TO THEORIZE BEFORE YOU HAVE ALL EVIDENCE. IT BIASES THE JUDGMENT – Sherlock Holmes When someone is charged with a crime, the prosecution and defence typically call in witnesses to testify about the guilt or innocence of the person who has been […]

FORENSICS IN THE COURTROOM — Manisha Nandan

Focusing On The Total Cost Of PharmD E-Discovery Review

Prescription drug E-Discovery matching requirements for protocols and procedures via http://www.HaystackID.com:

The Dynamics of Agency Investigations:
A Second Request Update 

An Industry Education Webcast from HaystackID
+ Date: Wednesday, October 14, 2020
+ Time: 12:00 PM ET (11:00 AM CT/9:00 AM PT)
+ One Step Registration (Click Here)

HSR Act-driven Second Request responses require an uncommon balance of understanding, expertise, and experience to successfully deliver certified compliant responses. Recent FTC and DOJ updates, coupled with the increasing velocity of requests, make current expertise more crucial than ever in this unique discovery area.

In this presentation, expert investigation, eDiscovery, and M&A panelists will present updates and considerations for managing Second Request responses to include tactics, techniques, and lessons learned from fourteen recent responses. 

Webcast Highlights

+ Defining Second Requests: The Requirement and Task
+ Context for Consideration: The Prevalence of Requests Over Time
+ A Different Type of Discovery: Characteristics of a Second Request
+ Recent DOJ and FTC Updates: From the Practical to the Tactical
+ Managing Second Requests: A Provider’s Perspective

Presenting Experts

+ Michael Sarlo, EnCE, CBE, CCLO, RCA, CCPA – Michael is a Partner and Senior EVP of eDiscovery and Digital Forensics for HaystackID.

+ Mike Quartararo – Mike currently serves as the President of ACEDS, which provides training and certification in eDiscovery and related disciplines to law firms, corporate legal departments, and the broader legal community.

+ John Wilson, ACE, AME, CBE – As CISO and President of Forensics at HaystackID, John is a certified forensic examiner, licensed private investigator, and IT veteran with more than two decades of experience.

+ Anya Korolyov – A Senior Consultant with HaystackID, Anya has 12 years of experience in eDiscovery with extensive expertise with Second Requests as an attorney and senior consultant.

+ Seth Curt Schechtman – As Senior Managing Director of Review Services for HaystackID, Seth has extensive legal review experience, including class actions, MDLs, and Second Requests.

+ Young Yu – As Director of Client Service with HaystackID, Young is the primary strategic and operational advisor to clients in matters relating to eDiscovery.
 Register Now for this Educational Webcast

https://haystackid.com/webcast-transcript-hatch-waxman-matters-and-ediscovery-turbo-charging-pharma-collections-and-reviews/

Editor’s Note: On September 16, 2020, HaystackID shared an educational webcast designed to inform and update legal and data discovery professionals on the complexities of eDiscovery support in pharmaceutical industry matters through the lens of the Hatch-Waxman Act. While the full recorded presentation is available for on-demand viewing via the HaystackID website, provided below is a transcript of the presentation as well as a PDF version of the accompanying slides for your review and use.

Hatch-Waxman Matters and eDiscovery: Turbo-Charging Pharma Collections and Reviews

Navigating Hatch-Waxman legislation can be complex and challenging from legal, regulatory, and eDiscovery perspectives. The stakes are high for both brand name and generic pharmaceutical manufacturers as timing and ability to act swiftly in application submissions and responses many times mean the difference between market success or undesired outcomes.

In this presentation, expert eDiscovery technologists and authorities will share information, insight, and proven best practices for planning and supporting time-sensitive pharmaceutical collections and reviews so Hatch-Waxman requirements are your ally and not your adversary on the road to legal and business success.

Webcast Highlights

+ NDA and ANDA Processes Through the Lens of Hatch-Waxman
+ ECTD Filing Format Overview For FDA (NDA/ANDA Submissions)
+ Information Governance and Collections Under Hatch-Waxman
+ Dealing with Proprietary Data Types and Document Management Systems at Life Sciences Companies
+ Streamlining the Understanding of Specific Medical Abbreviations and Terminology
+ Best Practices and Proprietary Technology for Document Review in Pharmaceutical Litigation

Presenting Experts

Michael Sarlo, EnCE, CBE, CCLO, RCA, CCPA – Michael is a Partner and Sr. EVP of eDiscovery and Digital Forensics for HaystackID.

John Wilson, ACE, AME, CBE – As CISO and President of Forensics at HaystackID, John is a certified forensic examiner, licensed private investigator, and infotech veteran with more than two decades of experience.

Albert Barsocchini, Esq. – As Director of Strategic Consulting for NightOwl Global, Albert brings more than 25 years of legal and technology experience in discovery, digital investigations, and compliance.

Vazantha Meyers, Esq. – As VP of Managed Review for HaystackID, Vazantha has extensive experience in advising and helping customers achieve their legal document review objectives.


Presentation Transcript

Introduction

Hello, and I hope you’re having a great week. My name is Rob Robinson. On behalf of the entire team at HaystackID, I’d like to thank you for attending today’s webcast titled Hatch-Waxman Matters and eDiscovery, Turbo-Charging Pharma Collections and Reviews. Today’s webcast is part of HaystackID’s monthly series of educational presentations conducted on the BrightTALK, and designed to ensure listeners are proactively prepared to achieve their computer forensics, eDiscovery, and legal review objectives during investigations and litigation, and our expert presenters for today’s webcast include four of the industry’s foremost subject matter experts and authorities on eDiscovery, all with extensive experience in pharmaceutical matters. 

Our first presenter that I'd like to introduce you to is Michael Sarlo. Michael is a Partner and Senior Executive Vice President of eDiscovery and Digital Forensics for HaystackID. In this role, Michael facilitates all operations related to eDiscovery, digital forensics, and litigation strategy, both in the US and abroad, for HaystackID.

Our second presenter is digital forensics and cybersecurity expert John Wilson. As Chief Information Security Officer and President of Forensics at HaystackID, John is a certified forensic examiner, licensed private investigator, and information technology veteran with more than two decades of experience working with the US government and in both public and private companies.

Our next presenting expert, Vazantha Meyers, serves as Vice President of Discovery for HaystackID. Vazantha has extensive experience in advising and helping customers achieve their legal document review objectives. She's recognized as an expert in all aspects of traditional and technology-assisted review. Additionally, Vazantha graduated from Purdue University and obtained her JD from Valparaiso University School of Law.

Our final presenting expert is Albert Barsocchini. As Director of Strategic Consulting for NightOwl Global, newly merged with HaystackID, Albert brings more than 25 years of legal and technology experience in discovery, digital investigations, and compliance to his work supporting clients in all things eDiscovery. 

Today’s presentation will be recorded and provided for future viewing and a copy of the presentation materials are available for all attendees, and in fact, you can access those materials directly beneath the presentation viewing window on your screen by selecting the Attachments tab on the far left of the toolbar beneath the viewing window, and also a recorded version of this presentation will be available directly from the HaystackID and BrightTALK network websites upon completion of today’s presentation, and a full transcript will be available via the HaystackID blog. At this time, with no further ado, I’d like to turn the microphone over to our expert presenters, led by Mike Sarlo, for their comments and considerations on the Hatch-Waxman Matters and eDiscovery presentation. Mike? 

Michael Sarlo

Thanks for the introduction, Rob, and thank you all for joining our monthly webinar series. We’re going to be covering a broad array of topics around pharmaceutical litigation in general, the types of data types, in particular around Electronic Common Technical Documents (eCTDs), which we’ll learn more about. We’re going to start out with really looking at Hatch-Waxman as a whole and new drug application and ANDA processes related to Hatch-Waxman. We’re going to get into those eCTDs and why those are important for pharmaceutical-related matters on a global scale. I’m going to start to talk about more information governance and strategies around really building a data map, which is also more of a data map that is a fact map. These matters have very long timelines when you start to look at really just the overall lifecycle of an original patent of a new drug going through a regulatory process, and then actually hitting market and then having that patent expire. We’ll learn more about that, then we’re going to get into some of the nitty-gritties of really how we enhance document reviews at HaystackID for pharmaceutical matters and scientific matters in general, and then finish off with some best practices and just a brief overview of our proprietary testing mechanism and placement platform ReviewRight. 

So, without further ado, I’m going to kick it off to Albert. 

Albert Barsocchini

Thank you very much, Michael. So, I’m going to start off with a 30,000-foot level view of Hatch-Waxman, and I always like to start off with a caveat any time I’m talking about pharma related matters. Pharma is a very complex process, complex laws, and very nuanced, and especially Hatch-Waxman. So, my goal today is really just to give you the basic things you need to know about Hatch-Waxman, and it’s very interesting. In fact, in 1984, generic drugs accounted for 19% of retail prescriptions, and in 2018, they accounted for 90% and that’s because of Hatch-Waxman. In a recent report, the President’s cancer panel found that the US generic drug market saved the US healthcare system an estimated $253 billion overall in 2018, including $10 billion in savings for cancer drugs. So, Hatch-Waxman really has been very important to the generic drug market and to us, in public, for being able to get drugs at an affordable price. 

So, how did Hatch-Waxman start? And it started with a case called Roche v. Bolar. So, Roche made a drug, it was a sleeping pill, Dalmane, I don’t know if anybody’s taken it, I haven’t. Anyway, it was very popular, it made them literally millions and billions of dollars, and so what, and normally they have a certain patent term, and what a generic drug company likes to do is to make a bioequivalent of that, and to do that, they want to try to be timed, so at the termination of a patent, the generic drugs can start marketing their product. So, in this case, Bolar started the research and development before the Roche patent expired, and because of that, they were per se infringing on the Roche patent, and so a lawsuit pursued and Bolar lost. 

Now, a couple of terms that I think are important, and I’m going to throw them out now just because there are so many nuanced pharma terms. One is branded biologic, and biosimilar generic, and then there’s branded synthetic, and bioequivalent generic. Now, branded drugs are either synthetic, meaning they’re made from a chemical process or biological, meaning they’re made from a living source. We’re going to be talking today about synthetics and what is important is that synthetic branded drugs can be exactly replicated into more affordable generic versions, bioequivalents, but because biologics involve large complex molecules, because they’re talking about living sources, that’s where biosimilar comes in. So, today, we’re going to just focus on the bioequivalents, on synthetic drugs, and just as another point, there was a… in signing the law by President Obama, I think it was around 2010, the Biosimilar Act became law, which is another law very similar to the Hatch-Waxman. So, anyway, because of the Roche case, we came out in 1983 with the Hatch-Waxman Act, and the reason they wanted this was because what was happening is since a generic company could not start to research and development until after a patent expired, this in essence gave the new drug application additional years of patent, and which means millions of more dollars, and so Congress came in, and they thought this wasn’t fair, and so they decided that they were going to allow generic companies to start the research and development process before the patent expired, and this prevented that from happening in terms of giving the original patent holder more years on the patent, and also allowed generics to get on the market quicker and get to the public at cheaper prices, and that’s just trying to strike a balance, and as you can see, between the pharmaceutical formulations, the original patents, and the new generic versions, and so it’s a delicate balance, but they seem to have achieved it because of the fact that generics are now so prevalent in the market. 

And one thing about this act, generic drug companies are not required to conduct their own independent clinical trials to prove safety and efficacy but can instead rely on research of the pioneer pharmaceutical companies, and they can start development before the original patent expires. So, that’s already a headstart because they don’t have to produce their own data, they can rely on the data of the original patent holder, and that allowed this exploration in the patent process for generic drugs. 

So, one of the important areas that is part of this whole act is the so-called “Orange Book”. So, before you can have an abbreviated new drug application, called ANDA, for approving that generic drug, you must first have a new drug application or an NDA. Now the NDA is a pioneering brand name drugs company seeking to manufacture a new drug, and they must prepare, file, and have approved its drug by the FDA. Additionally, as part of this new drug application process, the pioneering drug company submits the information on the new drug safety and efficacy [obtained] from the trials. Now, the NDA applicant must also identify all patents that could reasonably be asserted, if a person not licensed by the owner engaged in the manufacture, use, or sell the drug, and the patents covering approved drugs, or use thereof, are published in what’s called the “Orange Book”. So, a generic company will be going to this “Orange Book”, which is like a pharma bible, to see what patents are in effect, and this helps them target certain patents they want to create a generic version of, so it’s a very important starting point and this process can start while the original patent hasn’t even gone to market. 

And so, you can see things start to heat up pretty early, and one of the things that we notice in this whole process is that when a patent is filed, the clock ticks on the patent, and so it may be another six years before that patent goes to market, and so because of that, there is a… it can be very unfair, and so there’s a lot of extensions that occur for the patent holder. 

Now, what happens in this particular situation with an ANDA is that we’re going to have a Paragraph IV certification, and briefly, in making a Paragraph IV certification, the generic drugmaker says the patent is at least one of the following. It’s either invalid, not infringed, or unenforceable, and that’s really the Reader’s Digest version on their Paragraph IV certification after the story gets much more complicated and adversarial, and that’s why I always give the warning that this is a very complex dance that’s occurring with Hatch-Waxman, but ANDA really is a very, I would say, important piece of this whole puzzle, and once the ANDA information is put together, it’s filed by what’s called the Electronic Common Technical Document, eCTD, and it’s a standard format for submitting application amendments, supplements, and reports and we’re going to talk about this a little later on in the presentation. Very similar to electronic court filings, but there’s a lot more to it, but it is something that is part of the process when you start the whole process. 

Now the patent owner, their patents and a pharma patent is good for about 20 years after the drug’s invention, and the Hatch-Waxman Act gave patent extensions to name-brand drug companies to account for delays in the approval process, and that is taken into the fact that, as pointed out earlier, that when the patent is filed, research is still in development, and it may be another six years, so realizing that, they decided to extend the 20-year patent and so it can be extended for another five years, and there are also other extensions that can occur during this time. So, with that, the patent owner is also concerned about these generic drug companies and so they’re always looking over their shoulder and looking for where there may be threats to their patent, and so once a patent owner files an action for infringement, in other words, we have the ANDA, we have the certification, it’s published, and then the patent owner has a certain amount of time, within 45 days of receiving notice of the Paragraph IV certification, to file their infringement action. At that point, there’s a 30-month period that protects the patent owner from the harm that could otherwise ensue from the FDA granting marketing approval to the potentially infringing product. 

But that’s really the start of where the race begins, and it’s very important to realize that during this race, what’s going to happen is that there could be other types of generic drug applicants that want to get in on it and they want to get in on it for a very specific reason because if their certification is granted, they get a 180-day exclusivity, which means that they could go to market for their generic product, and in countries like Europe and other countries, this can be worth hundreds of millions of dollars, this exclusivity. So, you’re going to have this 45-day period where the original patent holder will file their response to it, and then everything gets locked down for 30 months, and then there’s a lot of information that has to be exchanged from all the data during the research process, and all these certifications, and so it’s a very compressed time period. 
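As a rough illustration of this clock, the short Python sketch below computes the two key dates from a hypothetical Paragraph IV notice date: the 45-day window for the patent owner to file suit and the 30-month stay on FDA final approval. The dates are made up, and real deadlines turn on the statute and the specifics of the case.

```python
# Rough sketch of the Hatch-Waxman clock described above; dates are hypothetical.

from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped to the 28th for simplicity)."""
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, min(d.day, 28))

notice_received = date(2020, 10, 14)                   # hypothetical Paragraph IV notice date
suit_deadline = notice_received + timedelta(days=45)   # window to file an infringement action
stay_expires = add_months(notice_received, 30)         # 30-month stay on FDA final approval

print(suit_deadline, stay_expires)                     # 2020-11-28 2023-04-14
```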

And what Michael is going to show in these next couple of slides is that the compressed time period means you have to have your ducks in a row: you have to have robust collection planning and legal review teams using the latest technology, and you have to digest patent information full of terms that can be very difficult to assimilate for anybody who is not familiar with patent litigation. HaystackID has been through a lot of this, so we have a good, solid basis and understanding of this whole process, and a very interesting process that we specifically designed for Hatch-Waxman.

So, without further ado, I’m going to hand this over to Michael, and he’s going to go through it just to show you some of that compressed timelines and then get into the whole electronic filing process. Michael? 

Michael Sarlo

Thanks for that. Appreciate it. Thank you, Albert. That was a great overview. So, as Albert mentioned, really the timeline and lifecycle of a new drug is incredibly long. Really, the drug discovery itself, finding a compound that may have some clinical efficacy, that can take anywhere from three to six years, and at that same time you’re doing testing and you’re preparing to then file an IND, which is an investigational new drug application, so a lengthy process from an administrative standpoint, and really, as we get toward litigation, the lifecycle of litigation oftentimes begins at year zero, and if an IND is approved, you’ll get into Phase I, II, and III clinical studies. At that point, assuming you’re meeting your target metrics for the IND and the study’s end goals, you can choose to submit an NDA, and that review of an NDA can take quite some time, years often, and at the end of that process, the FDA might come back and say, well, we actually want some more information and wants you to go do this or do that, which is usually pretty devastating for organizations. It really can add on years of timeframe, and if they do accept it, then you’re at a point where it’s approved and you can start to go to market and the marketing process is highly regulated, and there are specific verticals you could market, and actually, marketing would be attended to oftentimes an NDA. 

So, right here alone, we have several different data points that might all be relevant for a Hatch-Waxman matter. On the flip side, a generic manufacturer has a much shorter timeframe, and they’re much less invested from a time standpoint. Typically speaking, they’re looking at a couple of years to develop something, to do some testing, they file an NDA, and then there’s this marketing period where they get 18 to 36 months before the marketplace becomes so crowded just due to so many generics, and at that point, usually, they move on or there’s this big stockpile, and all this is important because as we start to talk about these different applications and abbreviations, it’s important to understand the mechanisms, since most people here are on this presentation for eDiscovery purposes, of how this data is organized, and really, it started out with what’s called the Common Technical Document format, which is really a set of specifications for an application dossier for the registration of medicines designed to be used across Europe, Japan, and the United States. This was the paper format version. So, really, there are many other countries who also would adhere to the modern eCTD Common Technical Document, and really what’s the goal here, is that you can choose to streamline the regulatory approval process for any application so that the application itself can adhere to many different regulatory requirements, and these cost a lot of money, millions of dollars to put these together, millions of dollars to assemble these. You’re talking tens of thousands of pages, and these have a long lifecycle, and on January 1, 2008, actually, there was more of a scanning format for submitting an eCTD to the FDA, and at that point, they actually mandated a certain format, which became the eCTD format for these submissions. 

These are broken up into five different modules, and we’ll get into that, but the prevalence and rise of the eCTD format really began in 2008, and as you can see in the above graphic, on the right here, they became highly prevalent around 2017/2018. That’s really all there is, and that’s because as of 2017, NDAs, the FDA required that they would all be in eCTD format. The same thing for ANDAs, and then also, BLAs, and then INDs in 2018 – that actually got a little bit pushed, but we don’t need to get into that here. What’s important is that all subsequent submissions to these applications, including any amendments, supplements, reports, they need to be in digital format. This is important because a common strategy when you’re trying to… I’m a large pharmaceutical company, I’m trying to get all the value I possibly can out of my invention, this drug, we’ve spent probably millions, hundreds of millions of dollars on going to market, and something that could be making us billions of dollars, is oftentimes to really go through these, more of these NDA like processes for off label uses, for new populations that were outside the original study groups that the drug was approved for, and this is where it becomes incredibly complex, and there’s this concept of exclusivity around new novel treatments relating to use of a previous compound, and this is one of the major components of that of the Hatch-Waxman dance, how big pharma really has found many different mechanisms to extend these patents beyond their term life. 

It’s also important to note that master files, Trial Master Files, these are all of your trial data, human clinical trials, all that stuff actually would get appended to these files, and just in general you think about how fast we’re approving vaccines for coronavirus, you can see why there’s concern, that our system isn’t doing due diligence when you realize that these lifecycles of any normal drug is oftentimes 15 years. Trial Master Files, we commonly handle them the same way as an eCTD package, but there is actually a new format that more international standards are trying to move to, which is the electronic Trial Master File and having more set defined specifications regarding what the structure of that looks like is something that’s going on. 

What an eCTD is, is a collection of files. So, when we think eDiscovery, we often… we do production, let’s say now, in today’s world, it’s usually a Concordance load file, and you get an Opticon and DAT file. The eCTD file, you have to think about it very much in the same way. There’s an XML transform file, think about that more like your DATs, your load files. This is going to basically have all of the metadata. It’s going to contain all the structure of the application. It’s going to have more metadata about folders. It’s also going to track when additions and changes for when documents were removed from any eCTD and this is very important. So, there’s a whole industry that services creating these. It’s very much like where someone in a niche industry and eDiscovery, everything related to drug development from a technology standpoint has very similar functions that almost cross-correlate to eDiscovery. You have your folks who are supporting the scientists as they build out these applications, and one thing is these platforms are calibrated, and they’re calibrated by a third party. It’s very important that timing and timestamps as far as when something was touched, when it was looked at, and when it was deleted, so that metadata can be incredibly important. Outside the context of Hatch-Waxman, thinking about maybe a shareholder lawsuit against some executives at a pharmaceutical company who might have been accused of having access to a failed trial prior to the general public, you see these accusations quite a bit in small pharma companies, and they dump some shares and there’s an investigation, and you can see now why this type of information of who accessed what, when, and when something was added or removed might be important. 

The same thing goes for trial data itself. It’s highly audited, who accessed it, when. That type of data is really highly confidential, even to the company that is conducting the trial. It’s usually a third party that’s handling that, and so all this history is in there, and we have metadata about each module, and you’ll see here on the right-hand side, we have a structure here. 

It looks pretty basic. There are folders, there are files. There are also more stylesheet files, schema files that are similar to XML that will more control the formatting and should be thought of as extended metadata. Likewise, we’re also going to see files and folders, PDFs, Word docs, scientific data, big databases like Tableau, things like that. So, as you start getting into all of the extra stuff that goes with an application, these can become massive, and this is usually something that spans both paper sources and digital sources, so it’s really important to basically work on these to parse them appropriately for eDiscovery purposes. 
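To make the structure described above a bit more tangible, here is a simplified Python sketch that walks one eCTD sequence folder, reads the index.xml backbone, and lists each referenced document along with its lifecycle operation (new, replace, append, delete). Element and attribute names follow the common eCTD backbone layout; the example path is hypothetical, and this is an illustration rather than a production parser.

```python
# Simplified sketch of summarizing an eCTD sequence from its XML backbone (index.xml).
# Treat this as an illustration of the structure described above, not a production parser.

import xml.etree.ElementTree as ET
from pathlib import Path

XLINK = "{http://www.w3.org/1999/xlink}"  # namespace used for the href attribute on <leaf> elements

def summarize_ectd(sequence_dir: str) -> None:
    """Print every document referenced by the backbone, with its module and lifecycle operation."""
    backbone = Path(sequence_dir) / "index.xml"
    tree = ET.parse(backbone)

    for leaf in tree.iter():
        if not leaf.tag.endswith("leaf"):
            continue
        href = leaf.get(f"{XLINK}href", "")           # relative path to the PDF/Word/data file
        operation = leaf.get("operation", "new")      # new / replace / append / delete
        title = (leaf.findtext("title") or "").strip()
        module = href.split("/")[0] if href else "?"  # e.g., m1 ... m5
        print(f"[{module}] {operation:<8} {title or href}")

# Hypothetical usage: summarize_ectd("/collections/client_x/0001")
```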

It should be something you have a lookout for if you ever see these modules, these little “Ms” in a folder structure that you get from your client; you should stop and say, wait a minute, this looks like it has some structure, what is this, and you’ll see it’s an eCTD, and oftentimes, because of their interlinked nature between what can be a paper file that was just scanned and thrown in a folder, and/or a digital file, and then all of these additions and adds, and these are also something that these filings go back and forth between the regulators and the organization that’s putting through an application. So, they might submit something, they say, okay, we want to see more of this or that, or we want more information here. They add it to the existing eCTD. So, in that way, you can also get a separate revision history that oftentimes wraps around the discourse between the regulator and the drug company. HaystackID deals with these often and is first to market in eDiscovery to have a solution to view, parse, review, and produce eCTDs or files from eCTDs right out of Relativity, and we’d be happy to do a demo for anybody. Just shoot us an email and it’s highly useful and has been really impactful in several large cases for us where we dealt with a lot of NDAs or INDs. 

One more thing to note here: it's important to realize that many different organizations may be a part of this process.

So, now, here’s a screenshot as well for you. You see a little Relativity tree over here where we break out and parse everything. We also give you full metadata, both for your eDiscovery files, your PDFs, your Word docs, all of that, that may not be contained in the eCTD. So, this is important to note too. You can’t just load this as a load file and then not actually process the data. The data needs to be processed and it needs to be linked at the same time. And here in this application, a really unique feature is your ability to sort, filter, and search based on revisions and changes. So, if we have a case, we’re just interested in the final eCTD that resulted in an approval, we can get right to that, maybe cutting out 50% of the application. If we have a case where we’re interested about the actual approval process and the application process, then we can start to look at that and look at anything that was deleted, anything that was changed – a highly useful tool. 

Right, I’m going to kick it off to my colleague, John Wilson. I probably will jump in and cut him off a few times as well, because that’s what I do, then we’re going to talk more about information governance for these matters that have an incredibly long lifecycle, like legal hold and just preparing to respond to a Paragraph IV notice as more of a large pharmaceutical organization. 

John Wilson

Thanks, Mike. So, as Mike just said, there is a significant timeline involved with these projects, and the other side of the coin is you have a short time fuse for actually responding to requests and doing the appropriate activities. So, those two things are fighting each other because you’ve got this long history of information that you’ve got to deal with, and so, as soon as you receive the Paragraph IV acknowledgment letter, you should definitely have triggered your legal hold process. There are very short timeframes for receiving and acknowledging that letter, as well as the opposing sides have typically 45 days to take action and then decide if they’re going to sue or get involved. 

So, again, short timeframes, a lot of data, and data that spans a lot of different systems because you’re talking about a lot of historical information. The pharmaceutical companies need to be prepared to challenge all their generic manufacturers ahead of the patent expirations, if that is their prerogative, because if you wait until the Paragraph IV is filed, you’re going to have a hard time getting it all together in that short order. The INDs, the NDAs, the timelines – again, you have 20 years on the patent, and the timelines of when the original work was done and when the IND and the NDA were filed can go back over 15 years, and you’ve got to deal with paper documents, lab notebooks, and digital documents across a lot of different spectrums. A lot of the information may not even be documents. A lot of it may be logging data from your clinical trials that’s in a database system, and lab notebooks that are actual physical notebooks – they’re very fragile and you can have hundreds and hundreds of them. So, how do you identify them and find them? Where are they located? How do you get them all brought into your legal hold? There are a lot of challenges around that. 

So, be prepared. Preparedness is certainly the key here. Also, because you’re talking about a lot of disparate data types, how do you parse all of that properly into a review so that you can actually find the information you need and action your review? You’ve got to take a lot of preparation, you’ve got to plan out and create a data map. There are typically a lot of historical data systems involved here, so you’ve got to really understand your fact timeline in relation to your data map. So, lab notebooks – how were they kept 15-20 years ago, how are they kept today? Clinical trials – how is that data stored? Is it in a database? Is it in log sheets, or is it in a ticker tape that’s been clipped and pasted into the lab notebooks? Understanding all of those different aspects is why the timeline becomes really important. You’ve got to be able to tie that whole timeline back to all the different data sources at the relevant timeframes. 
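As a simple illustration of the fact-timeline and data-map idea, the sketch below (illustrative only; the field names and dates are hypothetical) records each source’s system type, custodian, and date coverage, so you can ask which systems overlap a given milestone window.

```python
# data_map_sketch.py - a toy fact-timeline/data-map structure, not a prescribed format
from dataclasses import dataclass
from datetime import date

@dataclass
class DataSource:
    name: str            # e.g., "Paper lab notebooks (R&D vault)"
    system_type: str     # paper, LIMS, DMS, email archive, backup tape, etc.
    custodian: str
    start: date          # earliest date the source covers
    end: date            # latest date the source covers
    on_legal_hold: bool = False

def sources_for_window(sources: list[DataSource], window_start: date, window_end: date) -> list[DataSource]:
    """Return sources whose coverage overlaps the fact-timeline window."""
    return [s for s in sources if s.start <= window_end and s.end >= window_start]

if __name__ == "__main__":
    data_map = [
        DataSource("Paper lab notebooks", "paper", "R&D archivist", date(2002, 1, 1), date(2008, 12, 31)),
        DataSource("Clinical trial database", "LIMS", "CRO vendor", date(2006, 1, 1), date(2012, 12, 31)),
        DataSource("Regulatory DMS", "DMS", "Regulatory affairs", date(2005, 1, 1), date(2020, 12, 31)),
    ]
    # Hypothetical IND-filing window: which sources need to be on hold and collected?
    for src in sources_for_window(data_map, date(2006, 6, 1), date(2007, 6, 1)):
        print(src.name, "->", src.custodian)
```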

So, always assume you’re going to have a mix of paper and digital when you’re dealing with these requests, because so much of the data is old and the timelines go far back. It’s really important that you identify who your key players in the drug development are and the key milestones within the timeline, because those are your benchmark points through the process: when did you go to clinical trials? When did you file your IND? When did you file your NDA? All of those key milestones are going to be really important because you may have a lot of key people that you have to deal with who may no longer be around – these things happened 15 or 20 years ago – so you need to understand who those individuals are, who the inventors are, what files they may have, how you’re going to track those, and how you’re going to get those produced for your requests. 

Also, in a lot of these matters, a smaller pharmaceutical company may have gone out and used five, six, or 10 other companies that were supporting distribution or packaging – all sorts of different aspects relative to that pharmaceutical – so how are you going to get the information from those companies? What if they don’t exist anymore? Do you have retention of your own information around it? There are a lot of moving parts. Really, that fact timeline and data map become critical to making sure that you’ve addressed all of that. 

Then, with the lab notebooks: not only are they, a lot of times, paper, they can be very fragile. You have a lot of information. Sometimes it’s old logs off thermal printers that have been cut out and pasted into the lab notebooks. Sometimes those lab notebooks are on rice paper and very thin and fragile, so you have to understand how those are all going to be handled, that they have to be handled with care, how you’re going to get them, and how you’re going to get them all scanned. It can be very challenging to actually scan a lot of that content. 

Michael Sarlo

Let me actually say one thing too: some organizations will not let those lab notebooks out of their sight. They’re considered the absolute crown jewels, like [hyperbaric states], and big pharmaceutical companies keep a strong line and close track on this stuff, so they are managing it. So, if you’re a third party – a law firm, a vendor – you may be under some heavy constraints as it relates to getting access to those lab notebooks, scanning them, or even taking photos. As John said, usually they’re very old. Then you actually have to track down, in some cases, people who kept their own notes, and given how long these matters go on, some of those people may no longer be living. 

Just something to keep in mind there. Go ahead, John. 

John Wilson

Then the last part is document management systems. Pharmaceutical and health sciences companies have used document management systems for a long time. A lot of those document management systems are very dated. Some of them have been updated, but you may have to span five different document management systems, because the information may be across all of them, and you need to understand how each specific system functions, how you’re going to get the data, and how you’re going to correlate the data and load it into a review. They’re very typically non-standard data repositories – very frequently, they are specialized systems that house all of that data. 

So, really just driving home the last point: collection planning becomes very critical to support these investigations, and you can wind up with all sorts of data types. A lot of them don’t get thought about until it’s too late to properly address them, like voicemail and faxes and things of that nature, or items that are in other document management or document control systems within the organization that are more data-driven, where it becomes much harder to find your relevant sources in a typical review-type format. 

Also, backup tapes, do you have to go into archives? Do you have to get into backup tapes for some of the data, because that may be the only place some of it’s stored, or offsite storage facilities like an Iron Mountain or places of that nature where you’ve got to go into a warehouse with 8,000 boxes and find the six boxes for this particular product. How are you going to get those documents? How are you going to get them scanned? How are you going to get them identified when you’ve got a 45-day window and you’ve got 8,000 boxes that you need six of? All of those things have to go into the larger-scale collection plan and data map to help support these investigations. 

Really, the last comment is, keeping in mind, a lot of these investigations are global. You have a company that was doing R&D here in the US and they might have been doing manufacturing in India or Norway or Germany, a lot of different places. They may have been doing clinical trials somewhere else, so you’ve got to take into consideration all these global locations and global access points for all of this data. 

From there, I will turn it back over to Mike and the rest of the team. 

Michael Sarlo

Thanks, John. Really, the name of the game here is don’t get caught unawares. Just have a strong sense of where data is and what relates to drugs that might be expiring. HaystackID, with our information governance offering, does a lot of work in this domain to help organizations organize all of their fringe data and really build out a data retrieval plan when we start to get into historical documents, like the [inaudible] long timelines that we’re preparing for. 

I’m going to kick it off to Vazantha Meyers, Vee for short, who is going to talk about all of the document review magic that we bring to every [support opportunity].

Vazantha Meyers

Thank you, Mike. So, let me set the stage before I go into the next few slides. Mike and Albert and John have described the process, and all of that information – from the timeline, to the terms that are being used, to the goals being accomplished, the data sources, the milestones, and the key players – has to be conveyed to a team so that they can then take that data and categorize it. 

So, all of what they’ve talked about has to be taught to the team, and usually that’s done through protocols and training sessions – a protocol that the reviewers can reference in order to make decisions on the documents. The other thing that we’re asking reviewers to do is understand the data. What documents are they looking at, and what’s in the document? 

So, one of the things that we understand about these particular Hatch-Waxman reviews, and pharmaceutical reviews in general, is that they contain a lot of medical terms and abbreviations that are difficult for anyone outside the industry. A lot of the drugs have long names, the protocols have long names, the projects have long names, and in order to communicate efficiently about those drugs, processes, and protocols, internally and externally, medical terms and abbreviations are used across the board. 

One of the things that is important for a reviewer to do, in addition to understanding the process – the goal of the process, the timeline, and the key players – is to understand the terms in the documents. They cannot make a coding decision if they don’t understand the words that are coming out of the mouth, to quote a movie phrase. So, they have to understand the words on the paper, and we want to make sure that is being taught to the reviewers. We also want to make sure that we’re accounting for the timeframe and that we can do this teaching, so we want to streamline that process. 

One of the ways that we can do that is through a few of the things I’m going to talk about on this next slide. So, one of the things that we do is make sure the team reviews the protocol, the bible of the review: this is how the drug was developed, here are the timelines, the key players, the milestones, all of the information you know about the particular process in which the drug was developed. We also want to share background information with them, and that background information will be the terminology, the key phrases, the abbreviations, the project code names, etc. that we know about. A lot of times, that is shared information that comes from the client or from counsel, and it’s given to the reviewer. The other thing that we can do is take that shared resource – meaning the background information that’s available to the review team – and create a library. That library is everything we’ve talked about in terms of terms, abbreviations, protocol names, project names, code names, etc., and we make it available not just on the particular project, but across several reviews for that same client. So, it’s a library of terms that the reviewers have access to for every project they work on for pharmaceutical clients, including these Hatch-Waxman reviews that have very truncated timelines. 
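A rough sketch of that living term library, purely for illustration: a shared glossary of terms and definitions is loaded, and anything in a document that looks like an unrecognized abbreviation is surfaced so it can be fed back into the library. The file layout and the all-caps heuristic are assumptions, not a description of the actual shared resource.

```python
# term_library_sketch.py - illustrative only: flag candidate abbreviations that are not yet
# in the shared term library so they can be fed back to whoever maintains it.
import csv
import re

def load_library(csv_path: str) -> dict[str, str]:
    """Assumes a simple CSV with 'term' and 'definition' columns (hypothetical layout)."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return {row["term"].upper(): row["definition"] for row in csv.DictReader(f)}

def unknown_abbreviations(text: str, library: dict[str, str]) -> set[str]:
    candidates = set(re.findall(r"\b[A-Z]{2,6}\b", text))   # crude: runs of 2-6 capital letters
    return {c for c in candidates if c not in library}

if __name__ == "__main__":
    # Toy in-memory library; in practice this might come from load_library("shared_terms.csv")
    library = {"ANDA": "Abbreviated New Drug Application", "API": "Active Pharmaceutical Ingredient"}
    doc_text = "The ANDA references the API stability data and the DMF for the excipient."
    print("Feed back to the library:", sorted(unknown_abbreviations(doc_text, library)))  # ['DMF']
```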

The other thing that we do, in terms of making sure we’re taking advantage of the best knowledge, is create client teams. The same way that we have taken shared resources and created a library that can go across reviews for pharmaceutical clients, we take review managers, key reviewers, and first-level reviewers who have worked with a client and put them on projects for that same client, so that they can take the knowledge they gained on the first few projects and carry it through the projects that follow. They keep building their information and sharing that information, which means team members go across projects, sometimes even with new counsel. That’s a way of sharing information – sort of a library of review teams, for lack of a better way of phrasing it. 

The other thing that is available is public sources. There are public sources out there that have information about medical terms and abbreviations that are common in the industry. I will also encourage folks, if they’re using that [inaudible], the one thing that we found – and this is true for every single thing that’s listed on this slide – is that these are living resources, meaning you have background information, you have these libraries, and you have this vested team, but they are always learning new information as they’re going through the documents, and they need to feed that information back into the resources. If I have background information that has protocol names or medical terms or abbreviations and I go through the documents and learn a few more, I want to make sure I’m giving that information back to whoever created that shared resource, so they can update it. The same with the library: if I’m updating the shared resource, I want to make sure I’m updating the library. And the client teams – we’re going to talk about this a little bit later – are always learning more information, and they need to share that amongst themselves and take it into the next review. The same with public resources: if you find that something in those public resources is lacking, please inform them and help build that resource, because it benefits all of us. 

The other thing that happens in a review – and I know you guys are familiar with this in terms of the day-to-day [inaudible] and communication with the review – is that reviewers have a lot of questions, or they’re finding information as they go through the documents. We’ve talked about giving that back to those resources, but we also want to make sure that the reviewers are able to ask about that information in real-time. So, we use a chat room – a secure chat room – that allows the reviewers to ask questions of their whole team in real-time, meaning: I have this information, I think this might be an acronym that will affect all of what we’re reviewing, can I get some clarification, can I inform you all of this in real-time? Everyone sees it – the QC reviewers, the project managers, and the team leads can opine on it, escalate those questions, and get information back to the team in real-time. It’s really important, especially for fast-moving reviews, that reviewers are able to ask questions and get answers in real-time, or give information and validate their understanding in real-time, and the chat room allows us to do that. 

And so, having said that, all of the information that’s pertinent and needs to go into the library, into these other shared resources, or even into the public resources, needs to be documented, and it needs to be [inaudible] issue logs – documentation of anything that we think is impactful to the review: all of the terminology, the medical terms, the validations, the understandings, and the clarifications that impact how reviewers categorize documents. We capture that information in the issue log particular to that review, and then we share it and update our resources – these living resources I talked about – after the fact. 

So, before I get into the next few slides – I’ve talked about these client teams, and one of the things that’s important for all reviews, but particularly reviews that require this level of background information, is that we select the team appropriately. So, I’m going to talk a little bit about the selection of teams generally, and then specifically for these particular types of review. 

So, one of the things that we have at HaystackID is our proprietary ReviewRight software, which gives us the ability to gather a ton of information about reviewers and then match the reviewer to the project that is best suited for them, or at least match the project to the reviewers that are best suited for it. We do this through a qualification process, an identification process, a framing process, and then a ratings and certification process. 

In terms of qualification, we test the reviewers: we give them a 15-part test that goes across the review – issue coding, [prevalence review] – and what we’re looking for is to see who the best reviewers are, who is going to sit up in this top right quadrant in terms of speed, accuracy, and recall, who are the best reviewers technically. That doesn’t tell us if they’re better for this particular project, but it does tell us who has the best skills as a reviewer. So, that’s the first assessment that we make on a reviewer. 
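ReviewRight itself is proprietary, so the following is only a generic sketch of the kind of speed-and-accuracy scoring behind a top-right-quadrant view: each reviewer’s calls on a scored test set are compared against an answer key, and throughput is tracked alongside accuracy. The field names are hypothetical.

```python
# reviewer_scoring_sketch.py - generic scoring sketch, not the ReviewRight implementation
def score_reviewer(calls: dict[str, str], answer_key: dict[str, str], hours: float) -> dict:
    """calls and answer_key map document IDs to coding calls (e.g., 'responsive')."""
    scored = [doc for doc in calls if doc in answer_key]
    correct = sum(1 for doc in scored if calls[doc] == answer_key[doc])
    accuracy = correct / len(scored) if scored else 0.0
    speed = len(calls) / hours if hours else 0.0            # documents per hour
    return {"accuracy": round(accuracy, 3), "docs_per_hour": round(speed, 1)}

if __name__ == "__main__":
    key = {"DOC1": "responsive", "DOC2": "non-responsive", "DOC3": "responsive"}
    reviewer = {"DOC1": "responsive", "DOC2": "responsive", "DOC3": "responsive"}
    print(score_reviewer(reviewer, key, hours=0.5))  # {'accuracy': 0.667, 'docs_per_hour': 6.0}
```

Plotting accuracy against documents per hour is what produces the quadrant view described above, with the strongest candidates sitting in the high-accuracy, high-throughput corner.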

The second thing that we’re doing is looking at their background qualifications, so we ask them questions about what reviews they’ve worked on, how many reviews they’ve worked on, what foreign language skills they have – fluent, reading, native, etc. – and what practice areas they’ve worked in. Also, what tools they’ve worked on, and in particular, what their scientific and educational background is. What have they worked on outside of the legal field? We collect all of that information during the onboarding process. We want to be sure that we are selecting reviewers who are suitable for these Hatch-Waxman reviews. This list that I have here – you can see it on the slide – is what we’re looking for in reviewers, and it’s in ranking order. 

First, if we’re selecting reviewers for this particular type of review, we want to see: do you have experience on Hatch-Waxman reviews? Do you have experience with this particular pharmaceutical client – have you worked on projects with them before, and are you familiar with their data and the terms and terminology they use in their data and communications? Do you have experience in this industry? Maybe you haven’t worked with this client specifically, but have you worked with other pharmaceutical clients similar to the one that we’re staffing for? Do you have patent experience – do you understand the process, the timeline, and the terminology used? Then lastly, do you have a science or chemistry background? 

A lot of times, reviewers will have all or some of these, but this is, for me, the [inaudible], and this is what we’re looking for. We collect that information during the onboarding process so that we can match the reviewer to the project at hand when we’re staffing, which is particularly important because, like we talked about earlier, these reviews are very specific in terms of the terminology, the abbreviations, and the processes being assessed. We want to make sure that reviewers can look at a document and understand what they’re looking at. 

And then, I’m not going to go through this slide in depth, but we do a background check. Security is also very key, and we have some security information about our environment, but since we’re talking about reviewers: we do a general background check, we verify their license, and we do a conflict of interest screening. We check whether or not they have a conflict of interest based on the employment information they’ve given us, and we also ask the reviewer to attest that they don’t have a conflict based on the parties of the particular project that we’re working on, and that’s for every project that we work on. 

So, the other goal… and I have five minutes, so I’m going to go pretty fast so that I won’t hold you guys up. The overall goal for a managed review project is to get through the documents in a timely manner; efficiently, meaning you’re not going to cost the client any unnecessary money; accurately, so you won’t make mistakes; and defensibly, so that you’re doing it according to prescribed standards. 

One of the things that we do is optimize the workflow: we want to reduce the review count and then we want to optimize the workflow. Reducing the review count is interesting when it comes to Hatch-Waxman reviews because these are targeted pools – we’re looking at rich data sets. There’s not a whole lot to cull [inaudible], and typically – and this is true for a lot of pharmaceutical projects – they have a higher responsive rate. These are targeted pools; we understand what drug we’re looking at, this isn’t a data dump. And so, we have a higher review rate and a lower cull range, and we want to go through the process and make sure that we’re optimizing the workflow. 

So, how do you do that? This is typical for a lot of reviews: you want to make sure that you’re analyzing your search terms and testing them, and that can be done pre-linear review or pre-analytical review, whichever one you’re using, and then there’s the decision on whether to use analytical review or linear review. 

Now, I’ve found that with pharmaceutical clients, it’s a mixed set of data, and that data works well with certain workflows. For instance, spreadsheets and image files don’t really work that well with TAR – 2.0 or 1.0, continuous active learning or predictive coding – but the other documents, like emails and regular Word documents, do work well with TAR. What we’ve done for other clients is split the data set: the data that works well with TAR goes through that process, and then we pull the data that doesn’t work well with TAR and put it through more of a linear process. The idea is that we’re optimizing the workflow for the data that we have, as opposed to making one decision for the overall project – we’re being adaptive, and that’s what you’re going to have to do with the data that we’re getting. We use custom de-duping, and we make sure that we are culling out non-responsive documents as we identify them, either by similar documents or filenames, or because we know we have a newsletter coming in and we want to make sure we cull that out, even though it wasn’t culled out at the search term level. We want to make sure we’re doing single-instance review of search term hits and using propagation. Particularly with redaction – most folks who have been involved with managed review know that redaction can slow down the review and increase costs – we want to make sure that we’re using the methodologies available to reduce that cost and clean up the review, and propagation happens to be one of them, as well as negotiating the use of example redaction documents. 
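As a hedged illustration of that split workflow (the file-type lists and record shape are assumptions, not a product feature), the sketch below routes text-rich formats to a TAR/CAL pool and spreadsheets and image formats to a linear review pool.

```python
# workflow_split_sketch.py - illustrative routing of documents into TAR vs. linear pools
TAR_FRIENDLY = {".msg", ".eml", ".doc", ".docx", ".pdf", ".txt", ".pptx"}
LINEAR_ONLY = {".xls", ".xlsx", ".csv", ".tif", ".tiff", ".jpg", ".png", ".dwg"}

def route_documents(docs: list[dict]) -> tuple[list[dict], list[dict]]:
    """docs are simple records like {"id": "D1", "ext": ".xlsx"} (illustrative shape)."""
    tar_pool, linear_pool = [], []
    for doc in docs:
        ext = doc.get("ext", "").lower()
        if ext in TAR_FRIENDLY and ext not in LINEAR_ONLY:
            tar_pool.append(doc)      # emails, Word docs, PDFs: suitable for CAL/TAR
        else:
            linear_pool.append(doc)   # spreadsheets, images, unknowns: linear review
    return tar_pool, linear_pool

if __name__ == "__main__":
    sample = [{"id": "D1", "ext": ".eml"}, {"id": "D2", "ext": ".xlsx"}, {"id": "D3", "ext": ".tif"}]
    tar_pool, linear_pool = route_documents(sample)
    print(len(tar_pool), "to TAR,", len(linear_pool), "to linear review")  # 1 to TAR, 2 to linear review
```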

Then there’s quality control, which is key for every review that you’re working on. So, I’m going to go through this, again, pretty quickly. We have a gauge analysis, and this is similar to what we’ve talked about in terms of testing reviewers as they come into our system – we test them as they come onto the review. This allows us to give the reviewers the same set of documents across the board: we have 10 reviewers, and all 10 reviewers are looking at the same 50 documents. Outside counsel is looking at the same 50 documents, as is someone in-house who’s managing the review or has been a part of the QC process – they can look at those same documents too. Everyone is coding those documents at the same time, and what that allows us to do is test understanding and instruction. 

We give the documents back [inaudible] for the reviewer, and we get information about how well they do in terms of coding the documents and how well we do in terms of instructing them on how to code the documents. The solution to any low score is retraining, rewriting the protocol, replacing reviewers, etc. We want to know that information upfront because it sets us off at the right pace – everyone is in the same place with the review – and the way that circles back to these particular reviews is that we’re on a tight timeline and we want to catch any issues upfront. It might take a day to do this gauge analysis, but it saves you so much time and additional QC down the road, because you’re making sure everyone is on the same page and that all of the instructions that need to be given to the team are given to the team, so it’s a really good [inaudible] going forward. 
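Here is a simplified sketch of the gauge-analysis arithmetic (not the actual QC tooling): every reviewer codes the same documents, each reviewer is scored against the agreed reference coding, and documents that most reviewers miss are flagged as likely protocol or training gaps.

```python
# gauge_analysis_sketch.py - simplified gauge analysis over a shared document set
from collections import defaultdict

def gauge_scores(coding: dict[str, dict[str, str]], reference: dict[str, str]) -> dict[str, float]:
    """coding maps reviewer -> {doc_id: call}; reference maps doc_id -> agreed call."""
    scores = {}
    for reviewer, calls in coding.items():
        graded = [doc for doc in reference if doc in calls]
        agree = sum(1 for doc in graded if calls[doc] == reference[doc])
        scores[reviewer] = agree / len(graded) if graded else 0.0
    return scores

def docs_needing_retraining(coding: dict[str, dict[str, str]], reference: dict[str, str], cutoff: float = 0.5) -> list[str]:
    """Documents that more than `cutoff` of reviewers got wrong usually point at unclear protocol language."""
    misses = defaultdict(int)
    for calls in coding.values():
        for doc, agreed in reference.items():
            if calls.get(doc) and calls[doc] != agreed:
                misses[doc] += 1
    return [doc for doc, n in misses.items() if n / len(coding) > cutoff]

if __name__ == "__main__":
    reference = {"DOC1": "responsive", "DOC2": "non-responsive"}
    coding = {
        "Reviewer A": {"DOC1": "responsive", "DOC2": "non-responsive"},
        "Reviewer B": {"DOC1": "non-responsive", "DOC2": "non-responsive"},
    }
    print(gauge_scores(coding, reference))                          # {'Reviewer A': 1.0, 'Reviewer B': 0.5}
    print(docs_needing_retraining(coding, reference, cutoff=0.4))   # ['DOC1']
```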

We do traditional sampling and targeted QC. Sampling is looking at a percentage of what the reviewers have coded and looking for mistakes, and targeted QC would be [inaudible] in the data set and cleaning them up, and that should be a typical part of most reviews. 

The other quality control tool that we use is event handlers. Event handlers prevent reviewers from making obvious mistakes. For instance, if I know I have a responsive document, and every responsive document has to have a privilege coding, issue coding, or confidentiality coding, the event handler will trigger if the reviewer tries to save that document without making the necessary coding calls. If it has to have a responsive coding, the event handler will not let the reviewer save that document until they make the privilege call, the confidentiality call, or the issue call. Event handlers eliminate mistakes that we would otherwise have to find later. However, for all of the mistakes that we can’t control that way, cleaning up on the back end is really important, so we want to make sure that we’re doing clean-ups and [inaudible] and conformity and consistency searches. One of the tools we talked about already: if you know about a mistake that you found through sampling, or someone has told you about a mistake that you’re aware of, you want to make sure that you’re going through and finding those mistakes as [inaudible] the data set so that the mistake doesn’t persist. We also want to make sure that the documents are coded consistently – redactions and, very importantly, privilege coding – and you can check that in several ways. You can do hash searches, you can look for near-dupes, and you can look for similar text and similar filenames to clean up those documents. 
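The platform-specific piece of an event handler (in Relativity, for instance, a pre-save event handler) is not shown here; the sketch below is plain Python expressing the rule itself, with hypothetical field names: a responsive document cannot be saved until privilege and confidentiality calls are made.

```python
# coding_validation_sketch.py - the validation rule an event handler enforces, in plain Python
REQUIRED_WHEN_RESPONSIVE = ("privilege", "confidentiality")

def validate_coding(coding: dict[str, str]) -> list[str]:
    """Return blocking errors; an empty list means the save can proceed."""
    errors = []
    if coding.get("responsiveness") == "responsive":
        for field in REQUIRED_WHEN_RESPONSIVE:
            if not coding.get(field):
                errors.append(f"'{field}' must be coded before saving a responsive document")
    return errors

if __name__ == "__main__":
    print(validate_coding({"responsiveness": "responsive", "privilege": "not privileged"}))
    # -> ["'confidentiality' must be coded before saving a responsive document"]
```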

This has to be proactive and continuous – proactive in that you’re heading off the mistakes that the event handlers can catch and making sure everyone is on the same page in terms of the coding, and continuous in that you’re constantly looking for mistakes and [inaudible] to process. It has to happen in real-time, because we just don’t have time to clean it up after the review is over. It’s really important on all reviews, and it’s particularly important [inaudible] that we process that, because we just don’t have the time to go back and fix it later. It’s a truncated timeline. 

With that, I apologize for breezing through these slides, but if you have any questions, please let us know. I will turn this back over to Mike Sarlo. 

Michael Sarlo

Thanks for that, Vee, really appreciate it, and I know all of our clients do as well. We have a question here, and thank you all for joining. Here we go: “What is the best way to collect and especially produce the regulatory data? I assume this means the eCTD files, the NDAs, and those things. This has caused some issues in the past with respect to pages and pages of blank sheets when producing these types of documents.” 

First, that would be to understand whether there’s an active eCTD management system behind the organization’s firewall or whether they’ve used a cloud solution, if it is a newer matter where maybe the whole thing is digital. At that point, you would want to handle it just like any kind of unknown repository. We would test and triage it and get a repeatable outcome as we export data out of it, and audit it to make sure it’s the way that we think it should be. 

If these are just historical files that are sitting on a CD somewhere, that can be a process where we can scan for blank pages and things like that using some custom scripts based on pixel content or file size and look for those, but I would say that, typically speaking, you’re going to want to go back to whoever gave you the data and understand where it came from and how it was gathered, or bring in an expert company like HaystackID to work with you. 
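As a hedged example of the kind of custom script described above (thresholds, paths, and file types are illustrative assumptions, not a standard workflow), the sketch below flags likely blank pages using file size and the fraction of non-white pixels.

```python
# blank_page_flagger.py - rough sketch for flagging likely-blank scanned pages by
# file size and pixel content; requires Pillow (pip install Pillow)
import os
from PIL import Image

MIN_FILE_BYTES = 4_000        # very small image files are often empty pages (assumption)
WHITE_THRESHOLD = 245         # grayscale values at or above this count as "white"
MAX_INK_FRACTION = 0.005      # pages with < 0.5% non-white pixels are flagged

def is_probably_blank(image_path: str) -> bool:
    if os.path.getsize(image_path) < MIN_FILE_BYTES:
        return True
    with Image.open(image_path) as img:              # note: reads only the first frame of a TIFF
        histogram = img.convert("L").histogram()     # 256 buckets of grayscale pixel counts
        total = sum(histogram)
        ink = sum(histogram[:WHITE_THRESHOLD])       # pixels darker than the "white" cutoff
        return (ink / total) < MAX_INK_FRACTION

def flag_blank_pages(folder: str) -> list[str]:
    if not os.path.isdir(folder):
        return []
    flagged = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith((".tif", ".tiff", ".png", ".jpg")):
            path = os.path.join(folder, name)
            if is_probably_blank(path):
                flagged.append(path)
    return flagged

if __name__ == "__main__":
    for page in flag_blank_pages("./scanned_pages"):     # hypothetical folder of page images
        print("Check for blank page:", page)
```

Anything flagged this way would still get a human look before being withheld from a production, since low-ink pages can also be signature pages or faint thermal-printer output.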

It doesn’t cost a lot to do this right, but there can be so many systems involved and so many points of handoff, so to speak, from an eCTD becoming relevant to a matter, to somebody making a call internally to somebody else, and yada-yada-yada – it’s important to really audit that process so that you know that you have everything.

Then, as far as producing it, it can be uploaded through our tool in Relativity, where it can be acted on, tested, converted, and put out like any regular production document. I’ve seen organizations try to produce the entire file; we’ve had them come to us with these types of issues. 

So, once we get the eCTD, handling the production is really easy. 

Any other questions? 

Great, well, thank you all for joining us today. We look forward to having you guys every month. We see a lot of the same names and faces, so we really appreciate the support. I will hand it back to Rob Robinson to close out. Any questions that pop up, please feel free to email us. You have access to these slides. We also post these on our learning section on our website. 

Go ahead, Rob. Thank you, guys. 

Closing

Thank you so much, Mike, we appreciate it. Thank you, John, Vee, and Albert for the excellent information and insight. We also want to thank each of you who took time out of your schedule to attend today. We know how valuable that time is, and we don’t take for granted you sharing it with us, so we appreciate that. 

Additionally, we hope you have an opportunity to attend our next monthly webcast, scheduled for Wednesday, 14 October at 12 p.m. Eastern Time, on the topic of the Dynamics of Antitrust Investigations. That presentation, which will again be led by Michael, will include some recent updates on FTC and DOJ practices and procedures regarding Second Requests, so please take the opportunity to attend. You can find a detailed description on our website and also register there. 

Again, thank you for attending. Have a great rest of the day and this formally concludes today’s webcast. 


CLICK HERE TO DOWNLOAD THE PRESENTATION SLIDES


Learn More About HaystackID Electronic Common Technical Document (eCTD) Support


CLICK HERE FOR THE ON-DEMAND PRESENTATION (BrightTalk)


It looks pretty basic. There are folders, there are files. There are also more stylesheet files, schema files that are similar to XML that will more control the formatting and should be thought of as extended metadata. Likewise, we’re also going to see files and folders, PDFs, Word docs, scientific data, big databases like Tableau, things like that. So, as you start getting into all of the extra stuff that goes with an application, these can become massive, and this is usually something that spans both paper sources and digital sources, so it’s really important to basically work on these to parse them appropriately for eDiscovery purposes. 

It should be something you have a lookout for if you ever see these modules, these little “Ms” in a folder structure that you get from your client; you should stop and say, wait a minute, this looks like it has some structure, what is this, and you’ll see it’s an eCTD, and oftentimes, because of their interlinked nature between what can be a paper file that was just scanned and thrown in a folder, and/or a digital file, and then all of these additions and adds, and these are also something that these filings go back and forth between the regulators and the organization that’s putting through an application. So, they might submit something, they say, okay, we want to see more of this or that, or we want more information here. They add it to the existing eCTD. So, in that way, you can also get a separate revision history that oftentimes wraps around the discourse between the regulator and the drug company. HaystackID deals with these often and is first to market in eDiscovery to have a solution to view, parse, review, and produce eCTDs or files from eCTDs right out of Relativity, and we’d be happy to do a demo for anybody. Just shoot us an email and it’s highly useful and has been really impactful in several large cases for us where we dealt with a lot of NDAs or INDs. 

We’ll say one more thing here: it’s important to realize that many different organizations may be a part of this process.

So, here’s a screenshot for you. You see a little Relativity tree over here where we break out and parse everything. We also give you full metadata, both for your eDiscovery files, your PDFs, your Word docs, all of that, which may not be contained in the eCTD itself. This is important to note: you can’t just load the backbone as a load file and then not actually process the data. The data needs to be processed, and it needs to be linked at the same time. A really unique feature in this application is the ability to sort, filter, and search based on revisions and changes. So, if we have a case where we’re only interested in the final eCTD that resulted in an approval, we can get right to that, maybe cutting out 50% of the application. If we have a case where we’re interested in the approval and application process itself, then we can start to look at anything that was deleted or changed. It’s a highly useful tool.

Right, I’m going to kick it off to my colleague, John Wilson. I’ll probably jump in and cut him off a few times as well, because that’s what I do. Then we’re going to talk more about information governance for these matters with an incredibly long lifecycle, things like legal hold and preparing, as a larger pharmaceutical organization, to respond to a Paragraph IV notice.

John Wilson

Thanks, Mike. So, as Mike just said, there is a significant timeline involved with these projects, and the other side of the coin is that you have a short fuse for actually responding to requests and doing the appropriate activities. Those two things fight each other, because you’ve got this long history of information to deal with, so as soon as you receive the Paragraph IV acknowledgment letter, you should have triggered your legal hold process. There are very short timeframes for receiving and acknowledging that letter, and the opposing side typically has 45 days to take action and decide whether they’re going to sue or get involved.

So, again, short timeframes, a lot of data, and data that spans a lot of different systems, because you’re talking about a lot of historical information. Pharmaceutical companies need to be prepared to challenge their generic manufacturers ahead of the patent expirations, if that is their prerogative, because if you wait until the filing happens, you’re going to have a hard time getting it all together in that short order. The INDs, the NDAs, the timelines: you have 20 years on the patent, and the original work, when the IND and the NDA were filed, can be more than 15 years in the past. You’ve got to deal with paper documents, lab notebooks, and digital documents across a lot of different spectrums. A lot of the information may not even be documents; it may be logging data from your clinical trials sitting in a database system, or lab notebooks that are actual physical notebooks, very fragile, and you can have hundreds and hundreds of them. So, how do you identify them and find them? Where are they located? How do you get them all brought into your legal hold? There are a lot of challenges around that.

So, be prepared. Preparedness is certainly the key here. Also, because you’re talking about a lot of disparate data types, how do you parse all of that properly into a review so that you can actually find the information you need and action your review? It takes a lot of preparation: you’ve got to plan out and create a data map. There are typically a lot of historical data systems involved, so you’ve got to really understand your fact timeline in relation to your data map. Lab notebooks: how were they kept 15-20 years ago, and how are they kept today? Clinical trials: how is that data stored? Is it in a database, in log sheets, or on ticker tape that’s been clipped and pasted into the lab notebooks? Understanding all of those different aspects is why the timeline becomes really important. You’ve got to be able to tie that whole timeline back to all the different data sources at the relevant timeframes.

So, always assume you’re going to have a mix of paper and digital when you’re dealing with these requests, because so much of the data is older and the timelines go far back. It’s really important to identify who your key players in the drug development are and the key milestones within the timeline, because those are your benchmark points through the process: when did you go to clinical trials? When did you file your IND? When did you file your NDA? All of those key milestones are going to be really important, because many of the key people you have to deal with may no longer be around; these things happened 15 or 20 years ago. So, understand who those individuals are, who the inventors are, what files they may have, how you’re going to track those down, and how you’re going to get them produced for your requests.

Also, in a lot of these matters, a smaller pharmaceutical company may have gone out and used five, six, or ten other companies to support distribution, packaging, and all sorts of other aspects of that pharmaceutical. So, how are you going to get the information from those companies? What if they don’t exist anymore? Do you have retention of your own information around it? There are a lot of moving parts. That fact timeline and data map become really critical to making sure you’ve addressed all of that.

Then there are the lab notebooks. Not only are they often paper, they can be very fragile, and they hold a lot of information. Sometimes it’s old logs off thermal printers that have been cut out and pasted into the notebooks. Sometimes the notebooks are on rice paper, very thin and fragile. So, understand how those are all going to be handled, that they have to be handled with care, how you’re going to get them, and how you’re going to get them all scanned. A lot of that content can be very challenging to actually scan.

Michael Sarlo

Let me actually add one thing: some organizations will not let those lab notebooks out of their sight. They’re considered the absolute crown jewels, like [hyperbaric states], and big pharmaceutical companies keep a strong line and track on this material. So, if you’re a third party, a law firm, or a vendor, you may be under some heavy constraints as it relates to getting access to those lab notebooks, scanning them, or even taking photos. As John said, they’re usually very old. Then there’s actually tracking down, in some cases, people who kept their own notes, and given how long these matters go on, some of those people may have passed away.

Just something to keep in mind there. Go ahead, John. 

John Wilson

Then the last part is document management systems. Pharmaceutical and health sciences companies have used document management systems for a long time. A lot of those systems are very dated. Some have been updated, but you may have to span five different document management systems, because the information may be spread across all of them. You need to understand how each specific system functions, how you’re going to get the data, and how you’re going to correlate that data and load it into a review. They are very frequently not typical data repositories; they’re specialized systems that house all of that data.

So, really driving home the last point: collection planning becomes very critical to support these investigations, and you can wind up with all sorts of data types. A lot of them don’t get thought about until it’s too late to properly address them, like voicemail, faxes, and things of that nature, or items that sit in other document management or document control systems within the organization that are more data-driven, where it becomes much harder to find the relevant sources in a typical review format.

Also, backup tapes: do you have to go into archives? Do you have to get into backup tapes for some of the data, because that may be the only place some of it is stored? Or offsite storage facilities, an Iron Mountain or somewhere like that, where you’ve got to go into a warehouse with 8,000 boxes and find the six boxes for this particular product? How are you going to get those documents? How are you going to get them scanned? How are you going to get them identified when you’ve got a 45-day window and you’ve got 8,000 boxes that you need six of? All of those things have to go into the larger-scale collection plan and data map to help support these investigations.

Really, the last comment is, keeping in mind, a lot of these investigations are global. You have a company that was doing R&D here in the US and they might have been doing manufacturing in India or Norway or Germany, a lot of different places. They may have been doing clinical trials somewhere else, so you’ve got to take into consideration all these global locations and global access points for all of this data. 

From there, I will turn it back over to Mike and the rest of the team. 

Michael Sarlo

Thanks, John. Really, the name of the game here is don’t get caught unawares. Have a strong sense of where data is and what relates to drugs that might be expiring. HaystackID, through our information governance offering, does a lot of work in this domain to help organizations organize all of their fringe data and really build out a data retrieval plan for when we start to need historical documents, like [inaudible] long timelines that we’re preparing for.

I’m going to kick it off to Vazantha Meyers, Vee for short, who is going to talk about all of the document review magic that we bring to every [support opportunity].

Vazantha Meyers

Thank you, Mike. So, let me set the stage before I go into the next few slides. Mike and Albert and John have described the process, and all of that information, from the timeline to the terms being used, to the goals being accomplished, the data sources, the milestones, and the key players, has to be conveyed to a team so that they can then take that data and categorize it.

So, all of what they’ve talked about has to be taught to the team, and usually that’s done through protocols and training sessions, with a written protocol that the reviewers can reference in order to make decisions on each document. The other thing we’re asking reviewers to do is understand the data: what documents are they looking at, and what’s in each document?

So, one of the things we understand about these particular Hatch-Waxman reviews, and pharmaceutical reviews in general, is that they contain a lot of medical terms and abbreviations that can be difficult for anyone outside the industry. A lot of the drugs have long names, the protocols have long names, the projects have long names, and in order to communicate efficiently about those drugs, processes, and protocols, internally and externally, medical terms and abbreviations are used across the board.

One of the things that is important for a reviewer, in addition to understanding the goal of the process, the timeline, and the key players, is understanding the terms in the documents. They cannot make a coding decision if they don’t understand the words that are coming out of the mouth, to borrow a movie phrase. So, they have to understand the words on the paper, we want to make sure that is being taught to the reviewers, and we also want to be accountable to the timeframe while we do that teaching. So, we want to streamline that process.

One of the ways we can do that is through a few of the things I’m going to talk about on this next slide. First, we make sure reviewers study the protocol, the bible of the review: how the drug was developed, the timelines, the key players, the milestones, everything we know about the process by which the drug was developed. We also share background information with them, and that background information is the terminology, the key phrases, the abbreviations, the project code names, and so on that we know about. A lot of times, that is shared information that comes from the client or counsel and is given to the reviewers. The other thing we can do is take that shared resource, the background information available to the review team, and create a library. That library holds everything we’ve talked about, terms, abbreviations, protocol names, project names, code names, and we make it available not just on the particular project but across several reviews for that same client. So, it becomes a library of terms that reviewers have access to for every project they work on for pharmaceutical clients, including these Hatch-Waxman reviews with very truncated timelines.
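
As a rough illustration of the kind of shared term library described here, the sketch below expands abbreviations from a simple lookup. The function names and layout are hypothetical; the sample entries are standard pharmaceutical abbreviations, and a real library would live in whatever shared resource the review team maintains.

```python
# Minimal sketch of a shared "review library" of terms, abbreviations, and code
# names. The JSON layout and file name are hypothetical; in practice this lives
# wherever the review team keeps its shared resources.
import json

def load_library(path: str) -> dict:
    """Load a term library from a JSON file of {abbreviation: meaning} pairs."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def expand(term: str, library: dict) -> str:
    """Return the plain-language meaning of an abbreviation or code name, if known."""
    return library.get(term.upper(), term)

if __name__ == "__main__":
    # Example entries a team might accumulate across projects for the same client.
    library = {
        "IND": "Investigational New Drug application",
        "NDA": "New Drug Application",
        "ANDA": "Abbreviated New Drug Application",
        "API": "Active Pharmaceutical Ingredient",
    }
    print(expand("anda", library))
```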

The other thing we do to make sure we’re taking advantage of the best knowledge is create client teams. The same way we’ve taken shared resources and created a library that carries across reviews for pharmaceutical clients, we take review managers, key reviewers, and first-level reviewers who have worked with the client and put them on projects for that same client. That way, they take the knowledge gained on the first few projects through to the last project they work on; they keep building their information and sharing it, which means team members carry across projects, sometimes even with new counsel. That’s another way of sharing information, a library of review teams, for lack of a better way of phrasing it.

The other resource available is public sources. There are public sources out there with information about medical terms and abbreviations that are common in the industry, and I encourage folks to use them [inaudible]. The one thing that we found, and this is true for every single thing listed on this slide, is that these are living resources. You have background information, you have these libraries, and you have this invested team, but they are always learning new information as they go through the documents, and that information needs to be fed back into the resources. If I have background information with protocol names, medical terms, or abbreviations, and I learn a few more as I go through the documents, I want to make sure I’m giving that information back to whoever created the shared resource so they can update it. The same with the library: if I’m updating the shared resource, I want to make sure I’m updating the library. The client teams, and we’ll talk about this a little later, are always learning more information; they need to share it amongst themselves and carry it into the next review. The same with public resources: if you find that something is lacking in a public resource, please inform its maintainers and help build that resource, because it benefits all of us.

The other thing that happens in a review, and I know you’re familiar with this in terms of the day-to-day [inaudible] and communication with the review team, is that reviewers have a lot of questions, or they find information as they go through the documents. We’ve talked about feeding that back into the shared resources, but we also want reviewers to be able to ask about that information in real time. So, we use a secure chat room that allows reviewers to ask questions of the whole team in real time: I found this, I think it might be an acronym that affects what we’re reviewing, can I get some clarification, can I share this with everyone? Everyone sees it; the QC reviewers, the project managers, and the team leads can weigh in, escalate questions, and get answers back to the team in real time. It’s really important, especially for fast-moving reviews, that reviewers can ask questions and get answers, or share information and validate their understanding, in real time, and the chat room allows us to do that.

Having said that, all of the pertinent information, whatever needs to go into the library, the other shared resources, or even the public resources, needs to be documented, and it needs to be [inaudible] issue logs: documentation of anything we think is impactful to the review. All of the terminology, the medical terms, the validations, the understandings, and the clarifications that affect how reviewers categorize documents get captured in the issue log for that review, and then we share that information and update those living resources I talked about after the fact.

So, before I get into the next few slides, I’ve talked about these client teams. One of the things that’s important for every review, but particularly for reviews that depend on this kind of background knowledge, is selecting the team appropriately. So, I’m going to talk a little bit about team selection generally, and then specifically for these particular types of reviews.

One of the things we have at HaystackID is our proprietary ReviewRight software, which gives us the ability to gather a ton of information about reviewers and then match each reviewer to the project best suited for them, or at least match the project to the reviewers best suited for it. We do this through a qualification process, an identification process, a framing process, and then a ratings and certification process.

In terms of qualification, we test the reviewers. We give them a 15-part test that goes across the review, issue coding, [prevalence review], and what we’re looking for is who the best reviewers are technically: who is going to sit in that top-right quadrant in terms of speed, accuracy, and recall. That doesn’t tell us whether they’re better for this particular project, but it does tell us who has the strongest core review skills. So, that’s the first assessment we make of a reviewer.
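
A minimal sketch of that “top-right quadrant” idea is shown below: reviewers are kept only if they clear both speed and quality thresholds. The field names, thresholds, and scores are hypothetical and are not ReviewRight’s actual scoring logic.

```python
# Illustrative sketch only: ranking reviewers by speed and accuracy/recall and
# keeping the "top right quadrant". Field names, thresholds, and scores are
# hypothetical, not ReviewRight's actual scoring.
from dataclasses import dataclass

@dataclass
class ReviewerScore:
    name: str
    docs_per_hour: float   # speed
    accuracy: float        # fraction of test documents coded correctly
    recall: float          # fraction of responsive test documents found

def top_right_quadrant(scores, min_speed=45.0, min_accuracy=0.85, min_recall=0.85):
    """Keep reviewers who clear both the speed and the quality thresholds."""
    return [s for s in scores
            if s.docs_per_hour >= min_speed
            and s.accuracy >= min_accuracy
            and s.recall >= min_recall]

if __name__ == "__main__":
    candidates = [
        ReviewerScore("Reviewer A", 60, 0.92, 0.90),
        ReviewerScore("Reviewer B", 70, 0.78, 0.70),
        ReviewerScore("Reviewer C", 40, 0.95, 0.93),
    ]
    for s in top_right_quadrant(candidates):
        print(s.name)
```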

The second thing we look at is background qualifications. We ask what reviews they’ve worked on and how many, what foreign language skills they have, whether fluent, reading, or native, and what practice areas they’ve worked in. We also ask what tools they’ve worked with and, in particular, what their scientific and educational background is: what have they worked on outside the legal field? We collect all of that information during the onboarding process, because we want to be sure we’re selecting reviewers who are suitable for these Hatch-Waxman reviews. The list you can see on the slide is what we look for, in ranked order.

First, if we’re selecting reviewers for this particular type of review: do you have experience on Hatch-Waxman reviews? Do you have experience with this particular pharmaceutical client; have you worked on projects with them before, and are you familiar with their data and the terms and terminology they use in their data and communications? Do you have experience in this industry; maybe you haven’t worked with this client specifically, but have you worked with other pharmaceutical clients similar to the one we’re staffing for? Do you have patent experience; do you understand the process, the timeline, and the terminology used? Then lastly, do you have a science or chemistry background?

A lot of times, reviewers will have all or some of these, but this is, for me, the [inaudible]; this is what we’re looking for, and we collect that information during onboarding so we can match the reviewer to the project at hand when we’re staffing. That’s particularly important because, as we talked about earlier, these reviews are very specific in terms of the terminology, the abbreviations, and the processes being assessed. We want to make sure reviewers can look at a document and understand what they’re looking at.

I’m not going to go through this slide in depth, but we do background checks, and security is also key; there is some information here about our environment as well. Since we’re talking about reviewers: we do a general background check, we verify licenses, and we run a conflict-of-interest screening. We check whether they have a conflict based on the employment information they’ve given us, and we also ask each reviewer to attest that they don’t have a conflict based on the parties to the particular project, and we do that for every project we work on.

So, the other goal, and I have five minutes, so I’m going to go pretty fast so I won’t hold you up: the overall goal for a managed review project is to get through the documents in a timely manner; efficiently, meaning you’re not going to cost the client any unnecessary money; accurately, so you don’t make mistakes; and defensibly, so that you’re doing it according to prescribed standards.

One of the things we do is optimize the workflow, and we also want to reduce the review count. Reducing the review count is interesting when it comes to Hatch-Waxman reviews because these are targeted pools; we’re looking at rich data sets, and there’s not a whole lot to cull [inaudible]. Typically, and this is true for a lot of pharmaceutical projects, they have a higher responsiveness rate. These are targeted pools, we understand what drug we’re looking at, and this isn’t a data dump. So, with a higher review rate and a lower cull range, you want to go through the process and make sure you’re optimizing your workflow.

So, how do you do that? As with a lot of reviews, you want to make sure you’re analyzing your search terms and testing them, which can be done before linear review or before analytical review, whichever one you’re using. Then there’s the decision of whether to use analytical review or linear review.

Now, I’ve found that with pharmaceutical clients it’s a mixed set of data, and that data works well with certain workflows. For instance, spreadsheets and image files don’t work that well with TAR, whether 2.0 or 1.0, that is, continuous active learning or predictive coding, but other documents, like emails and regular Word documents, do work well with TAR. What we’ve done for other clients is split the data set: the data that works well with TAR goes through that process, and we pull the data that doesn’t work well with TAR and put it through more of a linear process. The idea is to optimize the workflow for the data we have, as opposed to making one decision for the overall project; we’re being adaptive, and that’s what you have to do with this kind of data. We use custom de-duping, and we make sure we’re culling out non-responsive documents as we identify them, whether by similar documents or filenames, or because we know a newsletter is coming in and we want to cull that out even though it wasn’t caught at the search-term level. We make sure we’re doing single-instance review of search term hits, and we use propagation. Particularly with redaction, most folks who have been involved with managed review know that redaction can slow down the review and increase costs, so we want to use the methodologies available to reduce that cost and clean up the review; propagation happens to be one of them, along with negotiating the use of example redaction documents.
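
A minimal sketch of that split is shown below: text-rich document types are routed to a TAR/CAL pool, while spreadsheets and image-only formats are routed to a linear review pool. The extension lists are illustrative, not a complete or recommended policy.

```python
# Minimal sketch of the split described above: text-rich documents go to a
# TAR/CAL workflow, while spreadsheets and image-only files go to a linear
# review pool. The extension lists are illustrative only.
from pathlib import Path

TAR_FRIENDLY = {".msg", ".eml", ".doc", ".docx", ".txt", ".pdf"}
LINEAR_ONLY  = {".xls", ".xlsx", ".csv", ".tif", ".tiff", ".jpg", ".png"}

def split_for_workflow(paths):
    tar_pool, linear_pool = [], []
    for p in map(Path, paths):
        ext = p.suffix.lower()
        if ext in LINEAR_ONLY:
            linear_pool.append(p)        # poor TAR candidates: little extractable text
        elif ext in TAR_FRIENDLY:
            tar_pool.append(p)           # good TAR candidates: rich text content
        else:
            linear_pool.append(p)        # unknown types default to eyes-on review
    return tar_pool, linear_pool

if __name__ == "__main__":
    tar_docs, linear_docs = split_for_workflow(
        ["0001.msg", "0002.xlsx", "0003.docx", "0004.tif"]
    )
    print(len(tar_docs), "documents to TAR;", len(linear_docs), "to linear review")
```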

Then there’s quality control, which is key for every review you work on. I’m going to go through this pretty quickly. We have a gauge analysis, and this is similar to what we talked about in terms of testing reviewers as they come into our system; we also test them as they come onto a review. This allows us to give the same set of documents to everyone: if we have 10 reviewers, all 10 are looking at the same 50 documents, outside counsel is looking at the same 50 documents, and anyone in-house who is managing the review or is part of the QC process can look at them too. Everyone codes those documents at the same time, and that allows us to test both understanding and instruction.

We give the documents back [inaudible] for the reviewer, and we get information about how well they did in coding the documents and how well we did in instructing them on how to code. The solution to any low score is retraining, rewriting the protocol, replacing reviewers, and so on. We want that information upfront because it sets us off at the right pace and puts everyone in the same place with the review. The way this circles back to these particular reviews is that we’re on a short timeline and we want to catch any issues upfront. It might cost a day to do this gauge analysis, but it saves so much time and additional QC down the road, because you’re making sure everyone is on the same page and that all of the instructions that should be given to the team have been given to the team, so it’s a really good [inaudible] going forward.
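
As a simple illustration of a gauge analysis, the sketch below scores each reviewer’s calls on the shared document set against an agreed answer key. The document IDs, coding values, and the 90% threshold are hypothetical.

```python
# Illustrative sketch of a gauge analysis: every reviewer codes the same set of
# documents, and each reviewer's calls are compared against an agreed answer key
# (e.g., outside counsel's coding). Names and the 90% threshold are hypothetical.
def gauge_scores(answer_key: dict, reviewer_calls: dict) -> dict:
    """Return each reviewer's percent agreement with the answer key."""
    results = {}
    for reviewer, calls in reviewer_calls.items():
        matches = sum(1 for doc_id, truth in answer_key.items()
                      if calls.get(doc_id) == truth)
        results[reviewer] = matches / len(answer_key)
    return results

if __name__ == "__main__":
    key = {"DOC-001": "Responsive", "DOC-002": "Not Responsive", "DOC-003": "Responsive"}
    calls = {
        "Reviewer A": {"DOC-001": "Responsive", "DOC-002": "Not Responsive", "DOC-003": "Responsive"},
        "Reviewer B": {"DOC-001": "Responsive", "DOC-002": "Responsive", "DOC-003": "Not Responsive"},
    }
    for reviewer, score in gauge_scores(key, calls).items():
        flag = "retrain/clarify" if score < 0.90 else "ok"
        print(f"{reviewer}: {score:.0%} agreement ({flag})")
```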

We also do traditional sampling and targeted QC. Sampling is looking at a percentage of what the reviewers have coded and checking for mistakes; targeted QC is [inaudible] in the data set and cleaning them up. That should be a typical part of most reviews.

Another quality control tool we use is event handlers. Event handlers prevent reviewers from making obvious mistakes. For instance, if I know a responsive document has to have a privilege coding, an issue coding, or a confidentiality coding, the event handler triggers if the reviewer tries to save that document without making the necessary calls; it will not let the reviewer save the document until they make the privilege call, the confidentiality call, or the issue call. Event handlers eliminate mistakes that we would otherwise have to find later. For everything we can’t control that way, cleaning up at the back end is really important, so we make sure we’re doing clean-ups and [inaudible] and conformity and consistency searches. If you know about a mistake, whether you found it through sampling or someone told you about it, you want to go through and find every instance of it [inaudible] the data set so that the mistake no longer exists. We also want to make sure documents are coded consistently, and redactions and privilege coding are especially important. You can check that in several ways: you can do hash searches, you can look for near-duplicates, and you can look for similar text and similar filenames to clean up those documents.
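
Relativity event handlers are custom components built against Relativity’s own APIs; the sketch below only illustrates the rule being described, a pre-save check that blocks a responsive document from being saved without its companion calls, using hypothetical field names.

```python
# Illustration only of the rule described above: a "pre-save" check that blocks
# a responsive document from being saved without the companion calls. Real
# Relativity event handlers are custom .NET components; this is just the logic,
# with hypothetical field names.
REQUIRED_WHEN_RESPONSIVE = ("privilege", "confidentiality")

def validate_coding(coding: dict) -> list:
    """Return a list of blocking errors; an empty list means the save may proceed."""
    errors = []
    if coding.get("responsiveness") == "Responsive":
        for field in REQUIRED_WHEN_RESPONSIVE:
            if not coding.get(field):
                errors.append(f"'{field}' must be coded before saving a responsive document")
    return errors

if __name__ == "__main__":
    problems = validate_coding({"responsiveness": "Responsive", "privilege": None})
    print(problems or "Save allowed")
```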

This has to be proactive and continuous: proactive in that you’re heading off mistakes with event handlers and making sure everyone is on the same page in terms of coding, and continuous in that you keep looking for mistakes and [inaudible] the process. It has to happen in real time, because we just don’t have time to clean it up after the review is over. That’s important on every review, and it’s particularly important [inaudible] because, on a truncated timeline, we don’t have time to go back and fix it later.

With that, I apologize for breezing through these slides, but if you have any questions, please let us know. I will turn this back over to Mike Sarlo. 

Michael Sarlo

Thanks for that, Vee, really appreciate it, and I know all of our clients do as well. We have a question here, and thank you all for joining. Here we go: “What is the best way to collect and especially produce the regulatory data? I assume this means the eCTD files, the NDAs, and those things. This has caused some issues in the past with respect to pages and pages of blank sheets when producing these types of documents.”

First, you’d want to understand whether there’s an active eCTD management system behind the organization’s firewall or whether they’ve used a cloud solution, if it’s a newer matter where maybe the whole thing is digital. At that point, you would want to handle it just like any kind of unknown repository: we would test and triage it and get a repeatable outcome as we export data out of it, and audit it to make sure it is the way we think it should be.

If these are just historical files sitting on a CD somewhere, that can be a process where we scan for blank pages and similar issues using some custom scripts based on pixel content or file size. But I would say that, typically speaking, you’re going to want to go back to whoever gave you the data and understand where it came from and how it was gathered, or bring in an expert company like HaystackID to work with you.
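
As a rough sketch of the pixel-content approach mentioned above, the snippet below flags a page image as likely blank when almost none of its pixels are darker than a near-white threshold. It assumes pages have already been rendered to image files, uses the Pillow library, and the thresholds and file names are illustrative and would need tuning against real productions.

```python
# Minimal sketch of a blank-page check based on pixel content, assuming each page
# has already been rendered to an image file (TIFF/PNG). Thresholds are
# illustrative and would need tuning against real productions. Requires Pillow.
from PIL import Image

def looks_blank(image_path: str, white_level: int = 245, max_dark_fraction: float = 0.002) -> bool:
    """Flag a page as likely blank if almost no pixels are darker than white_level."""
    with Image.open(image_path) as img:
        gray = img.convert("L")          # grayscale copy of the page
        pixels = list(gray.getdata())
    dark = sum(1 for p in pixels if p < white_level)
    return (dark / len(pixels)) <= max_dark_fraction

if __name__ == "__main__":
    for page in ["page_0001.tif", "page_0002.tif"]:   # hypothetical file names
        print(page, "-> likely blank" if looks_blank(page) else "-> has content")
```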

It doesn’t cost a lot to do this right, but there can be so many systems involved and so many points of handoff, so to speak, from an eCTD becoming relevant to a matter, to somebody making a call internally to somebody else, and so on, that it’s important to really audit that process so that you know you have everything.

Then, as far as producing it, it can be uploaded through our tool into Relativity, where it can be acted on, tested, converted, and pushed out like any regular production document. I’ve seen organizations try to produce the entire file; we’ve had them come to us with these types of issues.

So, once we get the eCTD, handling the production is really easy. 

Any other questions? 

Great, well, thank you all for joining us today. We look forward to having you guys every month. We see a lot of the same names and faces, so we really appreciate the support. I will hand it back to Rob Robinson to close out. Any questions that pop up, please feel free to email us. You have access to these slides. We also post these on our learning section on our website. 

Go ahead, Rob. Thank you, guys. 

Closing

Thank you so much, Mike, we appreciate it. Thank you, John, Vee, and Albert for the excellent information and insight. We also want to thank each of you who took time out of your schedule to attend today. We know how valuable that time is, and we don’t take for granted you sharing it with us, so we appreciate that. 

Additionally, we hope you have an opportunity to attend our next monthly webcast, scheduled for Wednesday, 14 October at 12 p.m. Eastern Time, on the topic of the Dynamics of Antitrust Investigations. That presentation, again led by Michael, will include some recent updates on FTC and DOJ practices and procedures regarding Second Requests, so please take the opportunity to attend. You can find a detailed description on our website and register there.

Again, thank you for attending. Have a great rest of the day and this formally concludes today’s webcast. 


CLICK HERE TO DOWNLOAD THE PRESENTATION SLIDES


Learn More About HaystackID Electronic Common Technical Document (eCTD) Support


CLICK HERE FOR THE ON-DEMAND PRESENTATION (BrightTalk)

Marketing Team

https://haystackid.com/webcast-transcript-hatch-waxman-matters-and-ediscovery-turbo-charging-pharma-collections-and-reviews/

UK COVID-19 Strategy: “The key is to Remove Restrictions from those segments of the population that are at low risk of Death from Infection” via Iowa EDU

By Nic Lewis. The current approach: A study by the COVID-19 Response Team from Imperial College (Ferguson et al. 2020[i]) appears to be largely responsible for driving UK government policy actions. The lockdown imposed in the UK appears, unsurprisingly, to have slowed the growth of COVID-19 infections, and may well soon lead to total active […]

via A sensible COVID-19 exit strategy for the UK — Iowa Climate Science Education

Global Standardization of Forensics will Decrease the Bias Factor of Evidence Collection Procedures and Court Rulings

Interviews – 2018

Angus Marshall, Digital Forensic Scientist

via Angus Marshall
Angus, tell us a bit about yourself. What is your role, and how long have you been working in digital forensics?

Where to begin? I have a lot of different roles these days, but by day I’m a Lecturer in Cybersecurity – currently at the University of York, and also run my own digital forensic consultancy business. I drifted into the forensic world almost by accident back in 2001 when a server I managed was hacked. I presented a paper on the investigation of that incident at a forensic science conference and a few weeks later found myself asked to help investigate a missing person case that turned out to be a murder. There’s been a steady stream of casework ever since.

I’m registered as an expert adviser and most of my recent casework seems to deal with difficult to explain or analyse material. Alongside that, I’ve spent a lot of time (some might say too much) working on standards during my time on the Forensic Science Regulator’s working group on digital evidence and as a member of BSI’s IST/033 information security group and the UK’s digital evidence rep. on ISO/IEC JTC1 SC27 WG4, where I led the work to develop ISO/IEC 27041 and 27042, and contributed to the other investigative and eDiscovery standards.

You’ve recently published some research into verification and validation in digital forensics. What was the goal of the study?

It grew out of a proposition in ISO/IEC 27041 that tool verification (i.e. evidence that a tool conforms to its specification) can be used to support method validation (i.e. showing that a particular method can be made to work in a lab). The idea of the 27041 proposal is that if tool vendors can provide evidence from their own development processes and testing, the tool users shouldn’t need to repeat that. We wanted to explore the reality of that by looking at accredited lab processes and real tools. In practice, we found that it currently won’t work because the requirement definitions for the methods don’t seem to exist and the tool vendors either can’t or won’t disclose data about their internal quality assurance.

The effect of it is that it looks like there may be a gap in the accreditation process. Rather than having a selection of methods that are known to work correctly (as we see in calibration houses, metallurgical and chemical labs etc. – where the ISO 17025 standard originated) which can be chosen to meet a specific customer requirement, we have methods which satisfy much fuzzier customer requirements which are almost always non-technical in nature because the customers are CJS practitioners who simply don’t express things in a technical way.

We’re not saying that anyone is necessarily doing anything wrong, by the way, just that we think they’ll struggle to provide evidence that they’re doing the right things in the right way.

Where do we stand with standardisation in the UK at the moment?

Standardization is a tricky word. It can mean that we all do things the same way, but I think you’re asking about progress towards compliance with the regulations. In that respect, it looks like we’re on the way. It’s slower than the regulator would like. However, our research at York suggests that even the accreditations awarded so far may not be quite as good as they could be. They probably satisfy the letter of the regulator’s documents, but not the spirit of the underlying standard. The technical correctness evidence is missing.

ISO 17025 has faced a lot of controversy since it has been rolled out as the standard for digital forensics in the UK. Could you briefly outline the main reasons why?

Most of the controversy is around cost and complexity. With accreditation costing upwards of £10k for even a small lab, it makes big holes in budgets. For the private sector, where turnover for a small lab can be under £100k per annum, that’s a huge issue. The cost has to be passed on. Then there’s the time and disruption involved in producing the necessary documents, and then maintaining them and providing evidence that they’re being followed for each and every examination.

A lot of that criticism is justified, but adoption of any standard also creates an opportunity to take a step back and review what’s going on in the lab. It’s a chance to find a better way to do things and improve confidence in what you’re doing.

In your opinion, what is the biggest stumbling block either for ISO 17025 specifically, or for standardizing digital forensics in general?

Two things – as our research suggests, the lack of requirements makes the whole verification and validation process harder, and there’s the confusion about exactly what validation means. In ISO terms, it’s proof that you can make a process work for you and your customers. People still seem to think it’s about proving that tools are correct. Even a broken tool can be used in a valid process, if the process accounts for the errors the tool makes.

I guess I’ve had the benefit of seeing how standards are produced and learning how to use the ISO online browsing platform to find the definitions that apply. Standards writers are a lot like Humpty Dumpty. When we use a word it means exactly what we choose it to mean.

Is there a way to properly standardise tools and methods in digital forensics?

It’s not just a UK problem – it’s global. There’s an opportunity for the industry to review the situation, now, and create its own set of standard requirements for methods. If these are used correctly, we can tell the tool makers what we need from them and enable proper objective testing to show that the tools are doing what we need them to. They’ll also allow us to devise proper tests for methods to show that they really are valid, and to learn where the boundaries of those methods are.

Your study also looked at some existing projects in the area: can you tell us about some of these? Do any of them present a potential solution?

NIST and SWGDE both have projects in this space, but specifically looking at tool testing. The guidance and methods look sound, but they have some limitations. Firstly, because they’re only testing tools, they don’t address some of the wider non-technical requirements that we need to satisfy in methods (things like legal considerations, specific local operational constraints etc.).

Secondly, the NIST project in particular lacks a bit of transparency about how they’re establishing requirements and choosing which functions to test. If the industry worked together we could provide some more guidance to help them deal with the most common or highest priority functions.

Both projects, however, could serve as a good foundation for further work and I’d love to see them participating in a community project around requirements definition, test development and sharing of validation information.

Is there anything else you’d like to share about the results?

We need to get away from thinking solely in terms of customer requirements and method scope. These concepts work in other disciplines because there’s a solid base of fundamental science behind the methods. Digital forensics relies on reverse-engineering and trying to understand the mind of a developer in order to work out how to extract and interpret data. That means we have a potentially higher burden of proof for any method we develop. We also need to remember that we deal with a rate of change caused by human ingenuity and marketing, instead of evolution.

Things move pretty fast in DF; if we don’t stop and look at what we’re doing once in a while, we’ll miss something important.

Read Angus Marshall’s paper on requirements in digital forensics method definition here.

The hottest topic in digital forensics at the moment, standardisation is on the tip of everyone’s tongues. Following various think pieces on the subject and a plethora of meetings at conferences, I spoke to Angus Marshall about his latest paper and what he thinks the future holds for this area of the industry. You can […]

via Angus Marshall talks about standardisation — scar

Forensic Failures Described via Law in Focus @ CSIDDS

Faulty Forensics: Explained

By Jessica Brand

(West Midlands Police / Flickr [CC])

In our Explainer series, Fair Punishment Project lawyers help unpack some of the most complicated issues in the criminal justice system. We break down the problems behind the headlines — like bail, civil asset forfeiture, or the Brady doctrine — so that everyone can understand them. Wherever possible, we try to utilize the stories of those affected by the criminal justice system to show how these laws and principles should work, and how they often fail. We will update our Explainers quarterly to keep them current.

In 1992, three homemade bombs exploded in seemingly random locations around Colorado. When police later learned that sometime after the bombs went off, Jimmy Genrich had requested a copy of The Anarchist Cookbook from a bookstore, he became their top suspect. In a search of his house, they found no gunpowder or bomb-making materials, just some common household tools — pliers and wire cutters. They then sent those tools to their lab to see if they made markings or toolmarks similar to those found on the bombs.

At trial, forensic examiner John O’Neil matched the tools to all three bombs and, incredibly, to an earlier bomb from 1989 that analysts believed the same person had made — a bomb Genrich could not have made because he had an ironclad alibi. No research existed showing that tools such as wire cutters or pliers could leave unique markings, nor did studies show that examiners such as O’Neil could accurately match markings left by a known tool to those found in crime scene evidence. And yet O’Neil told the jury it was no problem, and that the marks “matched … to the exclusion of any other tool” in the world. Based on little other evidence, the jury convicted Genrich.

Twenty-five years later, the Innocence Project is challenging Genrich’s conviction and the scientific basis of this type of toolmark testimony, calling it “indefensible.” [Meehan Crist and Tim Requarth / The Nation]

There are literally hundreds of cases like this, where faulty forensic testimony has led to a wrongful conviction. And yet as scientists have questioned the reliability and validity of “pattern-matching” evidence — such as fingerprints, bite marks, and hair — prosecutors are digging in their heels and continuing to rely on it. In this explainer, we explore the state of pattern-matching evidence in criminal trials.

What is pattern-matching evidence?

In a pattern-matching, or “feature-comparison,” field of study, an examiner evaluates characteristics visible on evidence found at the crime scene — e.g., a fingerprint, a marking on a fired bullet (“toolmark”), handwriting on a note — and compares those features to a sample collected from a suspect. If the characteristics, or patterns, look the same, the examiner declares a match. [Jennifer Friedman & Jessica Brand / Santa Clara Law Review]

Typical pattern-matching fields include the analysis of latent fingerprints, microscopic hair, shoe prints and footwear, bite marks, firearms, and handwriting. [“A Path Forward” / National Academy of Sciences] Examiners in almost every pattern-matching field follow a method of analysis called “ACE-V” (Analyze a sample, Compare, Evaluate — Verify). [Jamie Walvisch / Phys.org]

Here are two common types of pattern-matching evidence:

Fingerprints: Fingerprint analysts try to match a print found at the crime scene (a “latent” print) to a suspect’s print. They look at features on the latent print — the way ridges start, stop, and flow, for example — and note those they believe are “significant.” Analysts then compare those features to ones identified on the suspect print and determine whether there is sufficient similarity between the two. (Notably, some analysts will deviate from this method and look at the latent print alongside the suspect’s print before deciding which characteristics are important.) [President’s Council of Advisors on Science and Technology]

Firearms: Firearm examiners try to determine if shell casings or bullets found at a crime scene were fired from a particular gun. They examine the collected bullets through a microscope, mark down characteristics, and compare these to characteristics on bullets test-fired from a known gun. If there is sufficient similarity, they declare a match. [“A Path Forward” / National Academy of Sciences]

What’s wrong with pattern-matching evidence?

There are a number of reasons pattern-matching evidence is deeply flawed, experts have found. Here are just a few:

These conclusions are based on widely held, but unproven, assumptions.

The idea that handwriting, fingerprints, shoeprints, hair, or even markings left by a particular gun, are unique is fundamental to forensic science. The finding of a conclusive match, between two fingerprints for example, is known as “individualization.” [Kelly Servick / Science Mag]

However, despite this common assumption, examiners actually have no credible evidence or proof that hair, bullet markings, or things like partial fingerprints are unique — in any of these pattern matching fields.

In February 2018, The Nation conducted a comprehensive study of forensic pattern-matching analysis (referenced earlier in this explainer, in relation to Jimmy Genrich). The study revealed “a startling lack of scientific support for forensic pattern-matching techniques.” Disturbingly, the authors also described “a legal system that failed to separate nonsense from science in capital cases; and consensus among prosecutors all the way up to the attorney general that scientifically dubious forensic techniques should not only be protected, but expanded.” [Meehan Crist and Tim Requarth / The Nation]

Similarly, no studies show that one person’s bite mark is unique and therefore different from everyone else’s bite mark in the world. [Radley Balko / Washington Post] No studies show that all markings left on bullets by guns are unique. [Stephen Cooper / HuffPost] And no studies show that one person’s fingerprints — unless perhaps a completely perfect, fully rolled print — are completely different than everyone else’s fingerprints. It’s just assumed. [Sarah Knapton / The Telegraph]

Examiners often don’t actually know whether certain features they rely upon to declare a “match” are unique or even rare.

On any given Air Jordan sneaker, there are a certain number of shared characteristics: a swoosh mark, a tread put into the soles. That may also be true of handwriting. Many of us were taught to write cursive by tracing over letters, after all, so it stands to reason that some of us may write in similar ways. But examiners do not know how rare certain features are, like a high arch in a cursive “r” or crossing one’s sevens. They therefore can’t tell you how important, or discriminating, it is when they see shared characteristics between handwriting samples. The same may be true of characteristics on fingerprints, marks left by teeth, and the like. [Jonathan Jones / Frontline]

There are no objective standards to guide how examiners reach their conclusions.

How many characteristics must be shared before an examiner can definitively declare “a match”? It is entirely up to the discretion of the individual examiner, based on what the examiner usually chalks up to “training and experience.” Think Goldilocks. Once she determines the number that is “just right,” she can pick. “In some ways, the process is no more complicated than a child’s picture-matching game,” wrote the authors of one recent article. [Liliana Segura & Jordan Smith / The Intercept] This is true for every pattern-matching field — it’s almost entirely subjective. [“A Path Forward” / National Academy of Sciences]

Unsurprisingly, this can lead to inconsistent and incompatible conclusions.

In Davenport, Iowa, police searching a murder crime scene found a fingerprint on a blood-soaked cigarette box. That print formed the evidence against 29-year-old Chad Enderle. At trial, prosecutors pointed to seven points of similarity between the crime scene print and Enderle’s print to declare a match. But was that enough? Several experts hired by the newspaper to cover the case said they could not draw any conclusions about whether it matched Enderle. But the defense lawyer didn’t call an expert and the jury convicted Enderle. [Susan Du, Stephanie Haines, Gideon Resnick & Tori Simkovic / The Quad-City Times]

Why faulty forensics persist

Despite countless errors like these, experts continue to use these flawed methods and prosecutors still rely on their results. Here’s why:

Experts are often overconfident in their abilities to declare a match.

These fields have not established an “error rate” — an estimate of how often examiners erroneously declare a “match,” or how often they find something inconclusive or a non-match when the items are from the same source. Even if your hair or fingerprints are “unique,” if experts can’t accurately declare a match, that matters. [Brandon L. Garrett / The Baffler]

Analysts nonetheless give very confident-sounding conclusions — and juries often believe them wholesale. “To a reasonable degree of scientific certainty” — that’s what analysts usually say when they declare a match, and it sounds good. But it actually has no real meaning. As John Oliver explained on his HBO show: “It’s one of those terms like basic or trill that has no commonly understood definition.” [John Oliver / Last Week Tonight] Yet, in trial after trial, jurors find these questionable conclusions extremely persuasive. [Radley Balko / Washington Post]

Why did jurors wrongfully convict Santae Tribble of murdering a Washington, D.C., taxi driver, despite his rock-solid alibi supported by witness testimony? “The main evidence was the hair in the stocking cap,” a juror told reporters. “That’s what the jury based everything on.” [Henry Gass / Christian Science Monitor]

But it was someone else’s hair. Twenty-eight years later, after Tribble had served his entire sentence, DNA evidence excluded him as the source of the hair. Incredibly, DNA analysis established that one of the crime scene hairs, initially identified by an examiner as a human hair, belonged to a dog. [Spencer S. Hsu / Washington Post]

Labs are not independent — and that can lead to biased decision-making.

Crime labs are often embedded in police departments, with the head of the lab reporting to the head of the police department. [“A Path Forward” / National Academy of Sciences] In some places, prosecutors write lab workers’ performance reviews. [Radley Balko / HuffPost] This gives lab workers an incentive to produce results favorable to the government. Research has also shown that lab technicians can be influenced by details of the case and what they expect to find, a phenomenon known as “cognitive bias.” [Sue Russell / Pacific Standard]

Lab workers may also have a financial motive. According to a 2013 study, many crime labs across the country received money for each conviction they helped obtain. At the time, statutes in Florida and North Carolina provided remuneration only “upon conviction”; Alabama, Arizona, California, Missouri, Wisconsin, Tennessee, New Mexico, Kentucky, New Jersey, and Virginia had similar fee-based systems. [Jordan Michael Smith / Business Insider]

In North Carolina, a state-run crime lab produced a training manual that instructed analysts to consider defendants and their attorneys as enemies and warned of “defense whores” — experts hired by defense attorneys. [Radley Balko / Reason]

Courts are complicit

Despite its flaws, judges regularly allow prosecutors to admit forensic evidence. In place of hearings, many take “judicial notice” of the field’s reliability, accepting as fact that the field is accurate without requiring the government to prove it. As Radley Balko from the Washington Post writes: “Judges continue to allow practitioners of these other fields to testify even after the scientific community has discredited them, and even after DNA testing has exonerated people who were convicted, because practitioners from those fields told jurors that the defendant and only the defendant could have committed the crime.” [Radley Balko / Washington Post]

In Blair County, Pennsylvania, in 2017, Judge Jolene G. Kopriva ruled that prosecutors could present bite mark testimony in a murder trial. Kopriva didn’t even hold an evidentiary hearing to examine whether it’s a reliable science, notwithstanding the mounting criticism of the field. Why? Because courts have always admitted it. [Kay Stephens / Altoona Mirror]

Getting it wrong

Not surprisingly, flawed evidence leads to flawed outcomes. According to the Innocence Project, faulty forensic testimony has contributed to 46 percent of all wrongful convictions in cases with subsequent DNA exonerations. [Innocence Project] Similarly, UVA Law Professor Brandon Garrett examined legal documents and trial transcripts for the first 250 DNA exonerees, and discovered that more than half had cases tainted by “invalid, unreliable, concealed, or erroneous forensic evidence.” [Beth Schwartzapfel / Newsweek]

Hair analysis

In 2015, the FBI admitted that its own examiners presented flawed microscopic hair comparison testimony in over 95 percent of cases over a two-decade span. Thirty-three people had received the death penalty in those cases, and nine were executed. [Pema Levy / Mother Jones] Kirk Odom, for example, was wrongfully imprisoned for 22 years because of hair evidence. Convicted of a 1981 rape and robbery, he served his entire term in prison before DNA evidence exonerated him in 2012. [Spencer S. Hsu / Washington Post]

In 1985, in Springfield, Massachusetts, testimony from a hair matching “expert” put George Perrot in prison — where he stayed for 30 years — for a rape he did not commit. The 78-year-old victim said Perrot was not the assailant, because, unlike the rapist, he had a beard. Nonetheless, the prosecution moved forward on the basis of a single hair found at the scene that the examiner claimed could only match Perrot. Three decades later, a court reversed the conviction after finding no scientific basis for a claim that a specific person is the only possible source of a hair. Prosecutors have dropped the charges. [Danny McDonald / Boston Globe]

In 1982, police in Nampa, Idaho, charged Charles Fain with the rape and murder of a 9-year-old girl. The government claimed Fain’s hair matched hair discovered at the crime scene. A jury convicted him and sentenced him to death. DNA testing later exonerated him, and, in 2001, after he’d spent two decades in prison, a judge overturned his conviction. [Raymond Bonner / New York Times]

Bite mark analysis

In 1999, 26 members of the American Board of Forensic Odontology participated in an informal proficiency test regarding their work on bite marks. They were given seven sets of dental molds and asked to match them to four bite marks from real cases. They reached erroneous results 63 percent of the time. [60 Minutes] One bite mark study has shown that forensic dentists can’t even determine if a bite mark is caused by human teeth. [Pema Levy / Mother Jones]

That didn’t keep bite mark “expert” Michael West from testifying in trial after trial. In 1994, West testified that the bite mark pattern found on an 84-year-old victim’s body matched Eddie Lee Howard’s teeth. Based largely on West’s testimony, the jury convicted Howard and sentenced him to death. Experts have since called bite mark testimony “scientifically unreliable.” And sure enough, 14 years later, DNA testing on the knife believed to be the murder weapon excluded Howard as a contributor. Yet the state continues to argue that Howard’s conviction should be upheld on the basis of West’s testimony. [Radley Balko / Washington Post]

West, who was suspended from the American Board of Forensic Odontology in 1994 and essentially forced to resign in 2006, is at least partially responsible for several other wrongful convictions as well. [Radley Balko / Washington Post]

West himself has even discredited his own testimony, now stating that he “no longer believe[s] in bite mark analysis. I don’t think it should be used in court.” [Innocence Project]

Fingerprint analysis

The FBI has found that fingerprint examiners could have an error rate (a false match call) as high as 1 in 306 cases, with another study indicating examiners get it wrong as often as 1 in every 18 cases. [Jordan Smith / The Intercept] A third study of 169 fingerprint examiners found a 7.5 percent false negative rate (where examiners erroneously concluded that prints from the same source came from two different people) and a 0.1 percent false positive rate. [Kelly Servick / Science Mag]
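To put these mixed figures on one scale, here is a rough conversion between the per-case and percentage forms (my own arithmetic, not part of the cited studies):

\[
\tfrac{1}{306} \approx 0.33\%, \qquad \tfrac{1}{18} \approx 5.6\%, \qquad 7.5\% \approx \tfrac{1}{13}, \qquad 0.1\% = \tfrac{1}{1000}
\]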

In 2004, police accused American attorney Brandon Mayfield of involvement in the notorious Madrid train bombing after experts claimed his fingerprint matched one found on a bag of detonators. Eventually, four experts agreed with this finding. Police arrested him and detained him for two weeks until they realized their mistake and were forced to release him. [Steve Pokin / Springfield News-Leader]

In Boston, Stephan Cowans was convicted, in part on fingerprint evidence, in the 1997 shooting of a police officer. But seven years later, DNA evidence exonerated him and an examiner stated that the match was faulty. [Innocence Project]

A 2012 review of the St. Paul, Minnesota, crime lab found that over 40 percent of fingerprint cases had “seriously deficient work.” And “[d]ue to the complete lack of annotation of actions taken during the original examination process, it is difficult to determine the examination processes, including what work was attempted or accomplished.” [Madeleine Baran / MPR News]

Firearm analysis

According to one study, firearm examiners may have a false positive rate as high as 2.2 percent, meaning analysts may erroneously declare a match as frequently as 1 in 46 times. This is a far cry from the “near perfect” accuracy that examiners often claim. [President’s Council of Advisors on Science and Technology]
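As a quick check of that “1 in 46” conversion (my own arithmetic, not taken from the PCAST report):

\[
\frac{1}{0.022} \approx 45.5 \quad\Rightarrow\quad 2.2\% \approx \frac{1}{46}
\]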

In 1993, a jury convicted Patrick Pursley of murder on the basis of firearms testimony. The experts declared that casings and bullets found on the scene matched a gun linked to Pursley “to the exclusion of all other firearms.” Years later, an expert for the state agreed that the examiner should never have made such a definitive statement. Instead, he should have stated that Pursley’s gun “couldn’t be eliminated.” In addition, the defense’s experts found that Pursley’s gun was not the source of the crime scene evidence. Digital imaging supported the defense. [Waiting for Justice / Northwestern Law Bluhm Legal Clinic] In 2017, a court granted Pursley a new trial. [Georgette Braun / Rockford Register Star]

Rethinking faulty forensics

Scientists from across the country are calling for the justice system to rethink its willingness to admit pattern-matching evidence.

In 2009, the National Research Council of the National Academy of Science released a groundbreaking report concluding that forensic science methods “typically lack mandatory and enforceable standards, founded on rigorous research and testing, certification requirements, and accreditation programs.” [Peter Neufeld / New York Times]

In 2016, the President’s Council of Advisors on Science and Technology (PCAST), a group of pre-eminent scientists, issued a scathing report on pattern-matching evidence. The report concluded that most of the field lacked “scientific validity” — i.e., research showing examiners could accurately and reliably do their jobs. [Jordan Smith / The Intercept] The Council stated that until the field conducted better research proving its accuracy, such forensic evidence had no place in the American courtroom. The study found that, regarding bite mark analysis, the error rate was so high that resources shouldn’t be wasted attempting to show it can be used accurately. [Radley Balko / Washington Post]

After the PCAST report came out, then-Attorney General Loretta Lynch, citing no studies, stated emphatically that “when used properly, forensic science evidence helps juries identify the guilty and clear the innocent.” [Jordan Smith / The Intercept] “We appreciate [PCAST’s] contribution to the field of scientific inquiry,” Lynch said, “[but] the department will not be adopting the recommendations related to the admissibility of forensic science evidence.” [Radley Balko / Washington Post]

The National District Attorneys Association (NDAA) called the PCAST report “scientifically irresponsible.” [Jessica Pishko / The Nation] “Adopting any of their recommendations would have a devastating effect on the ability of law enforcement, prosecutors and the defense bar to fully investigate their cases, exclude innocent suspects, implicate the guilty, and achieve true justice at trial,” the association noted. [Rebecca McCray / Take Part]

The NDAA also wrote that PCAST “clearly and obviously disregard[ed] large bodies of scientific evidence … and rel[ied], at times, on unreliable and discredited research.” But when PCAST sent out a subsequent request for additional studies, neither the NDAA nor the Department of Justice identified any. [PCAST Addendum]

This problem is getting worse under the current administration. Attorney General Jeff Sessions disbanded the National Commission on Forensic Science, which was formed to improve both the study and use of forensic science and had issued over 40 consensus recommendation documents. [Suzanne Bell / Slate] He then created a DOJ Task Force on Crime Reduction and Public Safety, tasked with “support[ing] law enforcement” and “restor[ing] public safety.” [Pema Levy / Mother Jones]

But there are also new attempts to rein in the use of disproven forensic methods. In Texas, the Forensic Science Commission has called for a ban on bite mark evidence. “I think pretty much everybody agrees that there is no scientific basis for a statistical probability associated with a bite mark,” said Dr. Henry Kessler, chair of the subcommittee on bite mark analysis. [Meagan Flynn / Houston Press]

A bill before the Virginia General Assembly, now carried over until 2019, would give individuals convicted on the basis of now-discredited forensic science a legal avenue to contest their convictions. The bill is modeled after similar legislation enacted in Texas and California. The Virginia Commonwealth’s Attorneys Association opposes the legislation, arguing: “It allows all sorts of opportunities to ‘game’ the system.” [Frank Green / Richmond Times-Dispatch]

Meanwhile, at least one judge has recognized the danger of forensic expert testimony. In a 2016 concurrence, Judge Catherine Easterly of the D.C. Court of Appeals lambasted expert testimony about toolmark matching: “As matters currently stand, a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic: it reflects nothing more than the individual’s foundationless faith in what he believes to be true. This is not evidence on which we can in good conscience rely, particularly in criminal cases … [T]he District of Columbia courts must bar the admission of these certainty statements, whether or not the government has a policy that prohibits their elicitation. We cannot be complicit in their use.” [Spencer S. Hsu / Washington Post]

Do you wonder how stories of witchcraft and satanic child-eating covens survive in this era of lies, misdemeanors, and wrongful convictions? This article pushes back against the spiel about forensic reliability coming out of the US White House and DOJ (and some DAs). https://injusticetoday.com/faulty-forensics-explained-fe4d41157452

via #Forensics: The usual forensic failures described: DAs don’t give much of a damn. — FORENSICS and LAW in FOCUS @ CSIDDS | News and Trends

Counteranalysis via Freud

In psychology, the term catharsis was first employed by Sigmund Freud’s colleague Josef Breuer (1842–1925), who developed a “cathartic” treatment using hypnosis for persons suffering from extensive hysteria. While under hypnosis, Breuer’s patients were able to recall traumatic experiences, and through the process of expressing the original emotions that had been repressed and forgotten, they were relieved of their hysteric symptoms. Catharsis was also central to Freud’s concept of psychoanalysis, but he replaced hypnosis with free association.[16]

The term catharsis has also been adopted by modern psychotherapy, particularly Freudian psychoanalysis, to describe the act of expressing, or, more accurately, experiencing the deep emotions often associated with events in the individual’s past which had originally been repressed or ignored, and had never been adequately addressed or experienced.

There has been much debate about the use of catharsis in the reduction of anger. Some scholars believe that “blowing off steam” may reduce physiological stress in the short term, but this reduction may act as a reward mechanism, reinforcing the behavior and promoting future outbursts.[17][18][19][20] However, other studies have suggested that using violent media may decrease hostility under periods of stress.[21] Legal scholars have linked “catharsis” to “closure”[22] (an individual’s desire for a firm answer to a question and an aversion toward ambiguity) and to “satisfaction,” which can be applied to affective strategies as diverse as retribution, on one hand, and forgiveness on the other.[23] There is no “one size fits all” definition of “catharsis,”[24] which makes it difficult to define its use in therapeutic terms with any precision.

Source: COUNTERANALYSIS (counteranalysis.org)