CALIFORNIA — Charles Manson, the wild-eyed 1960s cult leader whose followers committed heinous murders that terrorized Los Angeles and shocked the nation, died Sunday of natural causes, according to the California Department of Corrections. He was 83.
The diminutive and charismatic Manson orchestrated a wave of violence in August 1969 that took the lives of seven people, spawned headlines worldwide and landed him and his “Manson Family” of followers in prison for most of the remainder of their lives.
Manson served nine life terms in California prisons and was denied parole 12 times. His notoriety, boosted by popular books and films, made him a cult figure to those fascinated by his dark apocalyptic visions.
“He was the dictatorial ruler of the (Manson) family, the king, the Maharaja. And the members of the family were slavishly obedient to him,” former prosecutor Vincent Bugliosi told CNN in 2015.
To the point that they would kill for him.
The brutal killings began on August 9, 1969, at the home of actress Sharon Tate and her husband, famed movie director Roman Polanski, who was out of the country at the time. The first set of victims were Tate, who was eight months pregnant; celebrity hairstylist Jay Sebring; coffee fortune heiress Abigail Folger; writer Wojciech Frykowski; and Steven Parent, a friend of the family’s caretaker.
The next evening, another set of murders took place. Supermarket executive Leno LaBianca and his wife, Rosemary, were killed at their home.
Although Manson ordered the killings, he didn’t participate.
Over the course of two nights, the killers took the lives of seven people, inflicting 169 stab wounds and seven .22-caliber gunshot wounds. Both crime scenes revealed horrifying details, and a few of those details linked the two scenes.
The word “pig” was written in the victims’ blood on the walls of one home and the front door of the other. Another phrase was apparently scrawled in blood: Helter Skelter (misspelled “Healter”). The reason for the disturbing writings, the prosecutor argued, was that Manson wanted to start a race war and had hoped the Black Panthers would be blamed for the killings.
On June 16, 1970, Manson and three of his followers — Susan Atkins, Patricia Krenwinkel and Leslie Van Houten — went on trial in Los Angeles.
All of those details came tumbling out in the trial that both mesmerized and horrified the nation. During the trial, Manson and his followers created a circus-like atmosphere in the court with singing, giggling, angry outbursts and even carving X’s in their foreheads.
The charges came after a major break in the case when Atkins, who was already in jail on another charge, bragged to a fellow inmate about the Tate murders. She said they did it “because we wanted to do a crime that would shock the world. …”
Manson was originally sentenced to death, but the death penalty was briefly abolished in California and his sentences were commuted to life in prison.
He was also convicted in connection with the 1969 killings of Gary Hinman, a musician, and stuntman Donald “Shorty” Shea.
Charles Manson was born Charles Maddox in Cincinnati in 1934 to an unmarried 16-year-old mother. He later took his then-stepfather William Manson’s last name.
At age 12, Charles Manson was sent to Gibault School for Boys in Terre Haute, Indiana, for stealing. Over the next 20 years, he was in and out of reform schools and prison for various crimes.
In a 1987 prison interview with CNN, he said, “I spent the best part of my life in boys’ schools, prisons, and reform school because I had nobody.”
After marrying twice and spending half his life in prison, 32-year-old Manson made his way to Berkeley, California, by way of San Francisco in 1967. He established himself as a guru during the Summer of Love and soon shared a home with 18 women.
By 1968, race riots, the Black Panther movement, and antiwar protests convinced Manson that Armageddon was coming. He called it Helter Skelter, after the Beatles song.
The so-called Manson Family made a dilapidated old movie set called Spahn’s Ranch near Los Angeles their home.
“I was mesmerized by his mind and the things he professed,” Manson Family member Leslie Van Houten once said.
At the ranch, Manson, who was 5-foot-2, hosted LSD-fueled orgies and gave sermons. His followers were in thrall to Manson, who told them he was Jesus Christ and the devil rolled into one.
“They worshipped Charlie like a god,” former Manson Family member Barbara Hoyt told CNN.
Music a part of his life
While in prison as a young man, Manson would listen to the radio. Inspired by the Beatles, he started writing songs and performing in prison shows.
Manson believed that the Beatles were speaking to him through the lyrics of the White Album, which was released in late 1968. The apocalyptic message, as Manson interpreted it: Blacks would “rise up” and overthrow the white establishment in a race war. Manson and his Family would be spared by hiding out in a “bottomless pit” near Death Valley until he could emerge to assume leadership of the post-revolutionary order.
After moving to California, Manson met Hinman, a music teacher who introduced him to Dennis Wilson of the Beach Boys.
Wilson took one of Manson’s songs, “Cease to Exist,” and turned it into the Beach Boys’ “Never Learn Not to Love.” Manson was furious when he didn’t get a songwriting credit.
Wilson had introduced Manson to record producer Terry Melcher, the son of actress Doris Day. After initially showing interest in Manson’s music, Melcher declined to work with him further.
Melcher later moved out of his house, which was then leased to Polanski and Tate.
Manson got people everywhere to pay attention to him.
With their brew of violence, music and anti-establishment youth counterculture, the 1969 murders and ensuing trials established Manson as a perverse cultural icon that endured until his death. Along the way, the mastermind transcended his victims, and the Tate-LaBianca murders became known as the Manson murders.
Laurie Levenson, a professor at Loyola Law School who follows high-profile cases, described Manson in 2009 as the worst of the worst, evil incarnate.
“If you’re going to be evil, you have to be off-the-charts evil, and Charlie Manson was off-the-charts evil,” Levenson told CNN.
Manson’s image can still be found on posters and T-shirts. In 1998, the animated television series “South Park” featured Manson in a Christmas special. There have been books, a play, an opera and television movies about Manson and his followers.
Definition: Principles of conduct, moral duty, and obligation that guide individuals in their decisions and actions.
Significance: As scientists, forensic scientists have a professional obligation to seek and to speak the truth about matters within their purview. As participants in a forensic process, they are subject to additional, sometimes conflicting, duties. This tension generates many ethical dilemmas.
Although witnesses in American courtrooms are called upon to tell the truth, the whole truth, and nothing but the truth, they may be enjoined from volunteering information. A witness’s individual sense of relevance must often bow to a court’s judgment. The legal system seeks truth, yet it sometimes defers to other values, such as fairness and confidentiality, and in general demands acceptance of formalized rules of procedure. In their capacity as experts, forensic scientists typically enjoy greater latitude than ordinary witnesses in expressing opinions and making judgments in the courtroom, but they too must operate within the often cumbersome and sometimes counterintuitive requirements of the “system” of “justice.”
Forensic scientists are measured against a standard of professional integrity, although the professionalization of the scientific study of crime is far from complete. Professions are substantially self-regulating, usually through agreed-upon standards and codes of ethics. This creates a need for professions to articulate appropriate expectations, and it places a responsibility on their members both to act correctly themselves and to provide appropriate correction for their errant colleagues. A case in point is William Tobin’s campaign against the chemical analysis of bullet lead, also known as comparative bullet-lead analysis (CBLA).
Tobin’s Exposure of CBLA
CBLA is a technique that the Federal Bureau of Investigation (FBI) used for four decades—the investigation of the assassination of President John F. Kennedy in 1963 was an early use—to make cases against defendants when traditional firearms analysis (that is, examination of barrel rifling on bullets) was not possible. By measuring the proportions of seven trace elements (antimony, arsenic, bismuth, cadmium, copper, silver, and tin) found in the lead of a bullet in evidence, forensic scientists sought to establish the probability that the bullet came from the same provenance as a bullet in the suspect’s possession. The belief that the comparison of the chemical composition of bullets could connect two bullets rested on unexamined assumptions about the similarities and differences of the source lead from which the bullets were cast. FBI experts testified in thousands of cases that the facts ascertainable through CBLA established likely identity and therefore pointed toward the probable guilt of the accused. Sometimes, as in the case of Michael Behm, who was convicted of murder in 1997, CBLA provided essentially the only evidence of guilt.
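The comparison at the heart of CBLA can be sketched in code. This is an illustrative reconstruction only: the seven trace elements are named in the text above, but the tolerance value, the matching rule, and every measurement below are invented for the example and do not reflect the FBI's actual statistical protocol.

```python
# Illustrative sketch of a CBLA-style comparison: decide whether two bullets'
# trace-element profiles are analytically indistinguishable. The element list
# comes from the text; the 10% relative tolerance is a made-up stand-in for a
# real measurement-uncertainty model.

TRACE_ELEMENTS = ["antimony", "arsenic", "bismuth", "cadmium",
                  "copper", "silver", "tin"]

def profiles_indistinguishable(bullet_a, bullet_b, rel_tol=0.10):
    """Return True if every trace-element level agrees within rel_tol.

    bullet_a, bullet_b: dicts mapping element name -> measured level (ppm).
    """
    for element in TRACE_ELEMENTS:
        a, b = bullet_a[element], bullet_b[element]
        # Guard against division by zero when an element is absent.
        scale = max(abs(a), abs(b), 1e-12)
        if abs(a - b) / scale > rel_tol:
            return False
    return True

# Two hypothetical measurements in ppm; all values are fabricated.
evidence = dict(antimony=750, arsenic=12, bismuth=18, cadmium=0.4,
                copper=95, silver=30, tin=210)
suspect  = dict(antimony=760, arsenic=11, bismuth=19, cadmium=0.4,
                copper=93, silver=31, tin=205)

print(profiles_indistinguishable(evidence, suspect))  # prints True
```

Note the inferential gap such a test cannot close: two bullets can be “indistinguishable” by this measure and still come from entirely different sources, which is precisely the flaw the later metallurgical research exposed.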
In the 1990s, FBI metallurgist William Tobin began to question the validity of the technique. He felt strongly enough about the issue to research the matter, after his retirement in 1998, with Lawrence Livermore National Laboratory metallurgist Erik Randich. They analyzed data from two lead smelters in Alabama and Minnesota and discovered that the FBI techniques could not distinguish batches of lead produced months apart. They also discovered that differences existed within single batches. Their research was published in Forensic Science International in July 2002.
Although he still defended the technique, the director of the FBI Laboratory requested that the National Research Council (NRC) of the National Academy of Sciences review CBLA. In February 2004, the NRC report, titled Forensic Analysis: Weighing Bullet Lead Evidence, confirmed that only extremely limited claims could be made about the relationship between bullets based on CBLA. Given the NRC findings, a New Jersey appeals court overturned Behm’s conviction in March 2005. The results of the NRC study have obvious implications for many other cases as well.
In an article titled “Forensic Significance of Bullet Lead Compositions,” which appeared in the Journal of Forensic Sciences in March 2005, FBI research chemists Robert D. Koons and JoAnn Buscaglia argued that “compositional comparison of bullet lead provides a reliable, highly significant point of evidentiary comparison of potential sources of crime-related bullets.” In September of that year, however, the FBI announced that it would no longer use CBLA. (In a curious subsequent development, Tobin and a member of the NRC committee, Clifford Spiegelman, suggested that a reanalysis of the bullet fragments from the Kennedy assassination might be in order.)
An article published in New Scientist in April 2002 quoted Tobin as saying of the interpretation of bullet data based on CBLA, “It offended me as a scientist.” In fact, Tobin has a long record as a critic of FBI procedures he regards as bad science and of testimonial practices he regards as unwarranted by the scientific data. To complain about testimony that unreasonably goes beyond what the data can support is to respond equally to the demands of science and the demands of ethics. It is a feature of commonsense justice that the punishment should fit the crime, and a basic requirement of that, in turn, is that the people who are punished should be guilty. Violating that requirement is both bad science and bad ethics.
Joyce Gilchrist’s Tainted Evidence
Is it enough that the accused be guilty of some crime, or does it have to be the one in question? If the accused is guilty of the crime in question, does it matter whether the evidence actually shows that? The belief that one can convict the guilty by tweaking the evidence a little, or shading one’s testimony a bit, is among the most common sources of unethical (and, often enough, criminal) behavior on the part of forensic scientists. The cautionary tale of former Oklahoma City Police Department forensic scientist Joyce Gilchrist probably falls into this category.
In May 2007, Curtis Edward McCarty, who was facing his third trial for a 1982 murder, was freed as the result of the improper handling and misrepresentation of hair evidence by Gilchrist, who apparently had tried to frame McCarty. The judge dismissed the charge despite her belief that McCarty was probably not completely innocent. This was merely the latest in a series of episodes involving Gilchrist.
Questions about the integrity of Gilchrist’s work began as early as January 1987, when a Kansas City colleague, John Wilson, complained about her to the Southwestern Association of Forensic Scientists, without result. In 1998, Robert Miller was exonerated after he had been convicted a decade earlier based in part on Gilchrist’s testimony regarding blood, semen, and hair evidence. In 1999, Gilchrist was criticized by a judge for having given false testimony (regarding semen evidence) in the 1992 rape/murder trial of Alfred Brian Mitchell. In the spring of 2000, Jeffrey Todd Pierce was ordered released after he had served a decade and a half for a rape he did not commit; he had been convicted based on Gilchrist’s testimony. In January 2001, Gilchrist came under scrutiny for the various judicial reprimands and professional critiques her work had received. In August 2001, doubts were raised about the guilt of Malcolm Rent Johnson, who had been executed for a 1981 rape and murder; Johnson, too, had been convicted based on Gilchrist’s testimony.
A month later, in September 2001, Gilchrist was finally fired, after years of reputedly shoddy forensics work, including both mishandling and misrepresentation of evidence, on many cases in addition to those noted above. The world of criminal justice contains innumerable isolated instances of perverse idealism, self-serving cynicism, and simple incompetence, but Gilchrist is one of the most striking cases of flagrant disregard for ethics in the forensics community. Was she genuinely convinced of the guilt of those against whom she testified? (She was certainly persuasive to juries.) Was she cynically distorting her testimony, and the evidence, to help prosecutors gain convictions, or was she just incompetent?
Ethics of Competence
One may well agree with forensics ethicist Peter D. Barnett’s remark that “there is a certain baseline level of competence that every criminalist is expected to understand, and there are certain procedures and protocols that are so fundamental to the practice of criminalistics that failure to follow them is evidence of gross incompetence or malfeasance, which is unethical.” As Barnett himself notes, however, “in the practice of forensic science, the disparate educational and experiential backgrounds of workers in the field make determination of a baseline level of competence relatively difficult.”
This is a problem throughout the American criminal justice system. In June 2007, all sergeants in the New Orleans Police Department were required to attend a four-day seminar to learn how to improve their (and their subordinates’) writing of police reports. This was part of an attempt to smooth out conflicts between the department and the New Orleans district attorney’s office, which claimed that part of its difficulty in prosecuting criminals stemmed from “incomplete or vague reports” by officers. More generally, criminalists frequently lament that frontline officers are not more skilled in observing, protecting, collecting, and preserving crime scene evidence.
One certainly can (in theory) impose reasonable expectations about competence and development in forensic science. However, that is not made easy by the variety of educational backgrounds and practical experience of the people who actually work in the field. In an unflattering assessment published in 2005, Jane Campbell Moriarty and Michael J. Saks bluntly asserted that “in the forensic sciences . . . 96 percent of practitioners hold bachelor’s degrees or less.” They went on to note:
Most forensic “scientists” have little understanding of scientific methodology, do not design or conduct research (and do not know how to), often have not read the serious scientific literature beginning to emerge in their fields. . . . Scientific findings relevant to a given forensic science often are ignored in the conduct of everyday casework.
Moreover, as with the difficulty in defining the qualifications for expert testimony, the fact that crime fighting is not a natural kind of expertise has an impact. Almost any expert might be relevant to a criminal case, depending on circumstances. Given the diverse forms of knowledge relevant to the application of science to crime solving, and to the providing of suitable expert testimony, it may be that the only truly unifying factor is the application of the so-called scientific method, broadly understood as intellectual integrity—the determined effort, as physicist Richard P. Feynman put it, not to fool oneself (or others).
What is impressive about the case of William Tobin is his determination to ensure that his colleagues (or former colleagues) not testify to more than the data warrant, both out of scientific integrity and out of fairness to those whose lives are affected by what scientists say. What is appalling about the case of Joyce Gilchrist is the stubbornness of her effort to resist correction by colleagues or even by the seemingly obvious limits of the evidence itself. Sometimes the individual needs to correct the group, by exposing a bogus or complacent consensus; sometimes the group needs to correct the individual, by identifying willful deception or self-centered fantasy. Unfortunately, no formula exists to guarantee the right result, and that is why ethics remains a constant challenge to conscientious souls.
Ethical dilemmas in forensics
- American Academy of Forensic Sciences (AAFS)
- American Society of Crime Laboratory Directors (ASCLD)
- Brain-wave scanners
- Criminal personality profiling
- DNA database controversies
- Ethics of DNA analysis
- Expert witnesses in trials
- Forensic journalism
- Innocence Project
- Interrogation in criminal investigations
- Training and licensing of forensic professionals
- Truth serum in interrogation
Last reviewed: October 2016
Barnett, Peter D. Ethics in Forensic Science: Professional Standards for the Practice of Criminalistics. Boca Raton: CRC, 2001. Print.
Inman, Keith, and Norah Rudin. Principles and Practice of Criminalistics: The Profession of Forensic Science. Boca Raton: CRC, 2001. Print.
Lucas, Douglas M. “The Ethical Responsibilities of the Forensic Scientist: Exploring the Limits.” Journal of Forensic Sciences 34 (1989): 719–29. Print.
Macklin, Ruth. “Ethics and Value Bias in the Forensic Sciences.” Journal of Forensic Sciences 42 (1997): 1203–206. Print.
Moriarty, Jane Campbell, and Michael J. Saks. “Forensic Science: Grand Goals, Tragic Flaws, and Judicial Gatekeeping.” Judges’ Journal 44.4 (2005): 16–33. Print.
Peterson, Joseph L., and John E. Murdock. “Forensic Science Ethics: Developing an Integrated System of Support and Enforcement.” Journal of Forensic Sciences 34 (1989): 749–62. Print.
Derived from: “Ethics.” Forensic Science. Salem Press. 2009.
Since the end of the 20th century, law enforcement has faced a rapid increase in computer-related crime. Digital forensics has become an important aspect not only of law enforcement investigations but also of counter-terrorism investigations, civil litigation, and the investigation of cyber-incidents. Because technology develops and evolves rapidly, these forensic investigations can become complex and intricate; a general framework for digital forensic professionals to follow during such investigations supports the successful retrieval of relevant digital evidence. A digital forensic framework or methodology should highlight all the phases a digital forensic investigation must pass through to ensure accurate and reliable results. One of the challenges digital forensic professionals have faced in recent years is the volume of data submitted for analysis. A few digital forensic methodologies have been developed that provide a framework for the entire process and also offer techniques to help practitioners reduce the amount of data analyzed. This paper proposes a methodology that focuses on fulfilling the forensic aspect of digital forensic investigations while also including techniques that can assist practitioners in addressing the data volume issue.
Focused Digital Forensic Methodology
Modern society has become deeply dependent on computers and technology to run all aspects of daily life. Technology has had a very positive impact on humanity, as a short visit to any hospital makes plain: computers have become tools used to treat patients and save lives. However, computers have the indisputable disadvantage of also serving as tools that facilitate criminal activity. For instance, the sexual exploitation of children can be carried out over the Internet, which allows criminals to remain anonymous while preying on innocent children. The number of digital crimes is increasing, keeping law enforcement agencies in a constant battle against criminals who use technology to commit crimes. As a result, digital forensics has become an important part of law enforcement investigations. Digital forensics is performed not only during law enforcement investigations but also in the course of civil matters.
The fact that information obtained from digital forensic investigations can be used as evidence in legal proceedings means the entire process must be performed according to legal standards. Perumal (2009) explained that the legal system requires digital forensic processes to be standardized and consistent. One issue Perumal highlighted is that digital evidence is very fragile: the use of improper methods can alter or destroy it. A large number of methodologies have been developed around the world, many of them designed to target a specific type of technology (Selamat, Yusof, and Sahib, 2008). Many others were developed to address requirements imposed by the legal system in particular jurisdictions. One methodology that bases its theory on neither technology nor the law is the Integrated Digital Investigation Process (IDIP) Model. Carrier and Spafford (2003) explained that the IDIP Model is based on the Locard Exchange Principle, which is used to retrieve evidence from physical crime scenes. The IDIP Model applies the idea that when software programs execute in an electronic environment, electronic artifacts are most likely created on the underlying device. Those artifacts can be retrieved and analyzed to obtain information about a particular incident or event.
This paper examines literature presenting different types of digital forensic methodologies. Some of these methodologies have shifted focus away from the forensic aspect of digital forensic investigations, addressing instead crime scenes and other processes unrelated to the digital forensic field. Much research has also focused on solutions to the challenges digital forensic practitioners face when conducting investigations. One of the main challenges addressed across the literature is the constant increase in the volume of data that practitioners acquire and examine. This paper proposes a digital forensic methodology that allows practitioners to overcome the data volume issue while avoiding the lack of focus found in many methodologies.
As mentioned above, a large number of digital forensic methodologies have been developed around the world. One of the first serious attempts to develop a standardized methodology for digital forensic investigations came in 1995. Pollitt (1995) used processes originally developed for handling physical evidence as the framework for a digital forensic methodology. His choice of the phases to be conducted during an investigation was inspired by the factors the legal system considers when evaluating any type of evidence: whether the seizure was properly conducted, whether the evidence was altered, and what methods were used to examine it. Proper performance of all these steps allows any type of evidence to be admitted by the court. Pollitt’s methodology consisted of four phases: acquisition, identification, evaluation, and admission as evidence. It focused mostly on the forensic aspect of the investigation and did not extend to processes that other researchers later included in their methodologies, such as preparation, planning, and the search of the physical crime scene.
Many digital forensic methodologies were developed after Pollitt’s. Some use different terminology and different sequencing for the phases a forensic practitioner performs during an investigation, but most agree on certain processes related to the forensic aspect of the work (Selamat et al., 2008). Selamat et al. studied 11 different digital forensic methodologies and concluded that all of them shared common phases: preservation, collection, examination, analysis, and reporting. In other words, these researchers agree on this core set of phases, and any newly developed methodology can be expected to include similar ones.
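The five shared phases identified by Selamat et al. can be pictured as an ordered pipeline, where each phase consumes what the previous one produced. The sketch below is illustrative only: the phase names come from the text, but the `Case` structure and the handler stubs are hypothetical placeholders, not part of any published methodology.

```python
# Sketch of the five phases Selamat et al. (2008) found common to the
# surveyed methodologies, modeled as an ordered pipeline. Handler bodies
# are placeholders; a real workflow would do the work named in the comments.

from dataclasses import dataclass, field

@dataclass
class Case:
    case_id: str
    log: list = field(default_factory=list)

def preservation(case): case.log.append("preservation")  # e.g., write-block the media
def collection(case):   case.log.append("collection")    # e.g., image the devices
def examination(case):  case.log.append("examination")   # e.g., extract artifacts
def analysis(case):     case.log.append("analysis")      # e.g., interpret the findings
def reporting(case):    case.log.append("reporting")     # e.g., write the report

# Ordering matters: each phase depends on the output of the one before it.
PHASES = [preservation, collection, examination, analysis, reporting]

def run_investigation(case):
    for phase in PHASES:
        phase(case)
    return case.log

print(run_investigation(Case("2024-001")))
# prints ['preservation', 'collection', 'examination', 'analysis', 'reporting']
```

Modeling the phases as an explicit ordered list makes the sequencing disagreements between methodologies, noted above, easy to express: a competing methodology is simply a different list.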
According to Ruibin, Yun, and Gaertner (2005), the proper completion of any digital forensic investigation depends directly on conducting processes similar to those highlighted by Selamat et al. (2008). Ruibin, Yun, and Gaertner also explained that proper techniques must be used to perform these phases. Such techniques ensure that the authenticity and reliability of the evidence meet the legal standards of the jurisdiction where the methodology is applied. Legal considerations are not foreign to digital forensic methodologies; many researchers have stressed the importance of performing digital forensic investigations under the proper legal authorizations. For instance, the Integrated Digital Investigation Process (IDIP) Model includes obtaining legal authorization to perform a digital forensic investigation as one of the sub-phases of its Deployment Phase (Baryamureeba and Tushabe, 2004).
The IDIP Model was revised by Baryamureeba and Tushabe (2004) after they noticed problems with its practicality and clarity. They named the revised version the Enhanced Integrated Digital Investigation Process (EIDIP) Model. In the EIDIP Model, the authors focused on two aspects: the sequence of the phases and clarity in processing multiple crime scenes. The EIDIP modifies the deployment phase of the IDIP Model to include investigation of the physical and digital scenes as the route to the confirmation phase. Baryamureeba and Tushabe also added a Traceback phase, which uses information obtained from the victim’s machine to trace back to the other machines used to initiate the incident.
Challenges Facing Current Methodologies
Technology has been developing along two axes: hardware and software. Mobile device hardware, for instance, has advanced to the point that these devices can perform complex operations efficiently, and the software that runs on mobile devices has likewise advanced to meet consumer demand. These enormous changes have affected digital forensic methodologies, especially those built around a particular set of technologies. Selamat et al. (2008) explained that many digital forensic methodologies have been developed to target certain devices or a specific type of technology. The main problem with this type of methodology is that rapid change in the underlying technology renders it obsolete.
Legal systems in different jurisdictions view digital forensics differently, and this is reflected in the methodologies used by practitioners in those jurisdictions. According to Perumal (2009), “As computer forensic is a new regulation in Malaysia, there is a little consistency and standardization in the court and industry sector. As a result, it is not yet recognized as a formal ‘Scientific’ discipline in Malaysia” (p. 40). The jurisdictions most affected are those that have not developed a full understanding of computer forensics and the processes used to preserve and analyze electronic evidence. A legal system’s lack of understanding also reduces its influence in shaping methodologies and making them acceptable by legal standards.
Another challenge related to digital forensic methodologies has been created by the researchers who develop them. Some researchers have named their methodologies, or their phases, with titles and terminology unrelated to digital forensics. For instance, Baryamureeba and Tushabe (2004) named the fourth stage of the EIDIP Model the Dynamite Phase, with Reconstruction and Communication as sub-phases. This lack of clarity in terminology hampers any effort to help groups outside the digital forensic arena better understand the processes used in the field. As explained above, some jurisdictions are still unsure about digital forensics and the evidence obtained from digital forensic investigations, so researchers would do well to use more modest, familiar terminology when describing digital forensic methodologies. Such simplicity would help the legal system better understand digital forensic processes.
The challenges that the digital forensic field currently faces can also be seen as challenges to digital forensic methodologies. They include the volume of data, encryption, anti-forensic techniques, and a lack of standards. These factors are perceived as challenging because they slow down the progression of digital forensic investigations. For instance, a fully encrypted computer system with an undisclosed encryption key prevents practitioners from gaining access to the data, which imposes a challenge on the entire digital forensic process. One challenge that has been addressed by many researchers is the volume of data examined by forensic practitioners during a digital forensic investigation. Examining a large amount of information slows down the entire forensic process and affects the flow of operations for forensic laboratories.
Volume of Data
The increase in the volume of data was caused by the growing capacity of electronic storage devices combined with the significant decrease in their prices. According to Quick and Choo (2014), digital forensic practitioners have seen a significant increase in the amount of data analyzed in every digital forensic examination. The authors explained that three factors have contributed to making the volume of data an issue for practitioners: the increase in the number of electronic devices in each investigation, the increase in the storage capacity of those devices, and the growing number of investigations that require digital forensic examinations.
The significant increase in the volume of data has placed a great burden on forensic laboratories. Digital forensic practitioners are forced to spend more time examining and analyzing larger data sets, which causes long delays in completing digital forensic investigations. Quick and Choo (2014) explained that forensic laboratories have backlogs of work caused by the amount of time spent on each examination. Some may argue that the increased amount of data can be an advantage, as consumers are not forced to delete any data. However, Quick and Choo (2014) highlighted the seriousness of the data volume issue and stated, “Serious implications relating to increasing backlogs include; reduced sentences for convicted defendants due to the length of time waiting for results of digital forensic analysis, suspects committing suicide whilst waiting for analysis, and suspects denied access to family and children whilst waiting for analysis” (p. 274).
Current Solutions to the Data Volume Issue
Ruibin et al. (2005) offered a way to reduce the amount of data for each case by determining the relevant data based on the preliminary investigation. The relevant data for a given case can be determined through two sources: the case information and the person investigating the matter. Combining the information from both sources helps build a case profile, which can then be used to determine what type of information would be relevant for each forensic examination. The solution proposed by Ruibin et al. (2005) did not focus on using technology to reduce the amount of data; however, other proposed solutions did suggest using technology to eliminate the data volume issue.
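The two-source profile described above can be sketched as a small data structure. The field names and the mapping from case type to relevant data types below are illustrative assumptions, not part of Ruibin et al.’s (2005) design:

```python
from dataclasses import dataclass, field

@dataclass
class CaseProfile:
    """A minimal case-profile sketch: case information plus investigator
    input together determine which data types are relevant to examine."""
    case_type: str                                           # e.g., "fraud"
    investigator_focus: list = field(default_factory=list)   # types flagged by the investigator

    # Hypothetical mapping from case type to typically relevant data types.
    DEFAULT_RELEVANCE = {
        "fraud": ["documents", "spreadsheets", "email"],
        "harassment": ["messages", "call logs"],
    }

    def relevant_data_types(self):
        # Investigator input refines, rather than replaces, the defaults
        # derived from the case information.
        base = self.DEFAULT_RELEVANCE.get(self.case_type, [])
        return sorted(set(base) | set(self.investigator_focus))
```

A profile like `CaseProfile("fraud", ["chat logs"])` would then tell the examiner to concentrate on documents, spreadsheets, email, and chat logs while ignoring other data.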
Neuner, Mulazzani, Schrittwieser, and Weippl (2015) proposed a technique that depends on technology to reduce the data volume. Their technique is based on the idea of file deduplication and file whitelisting, which allow digital forensic practitioners to reduce the number of files that must be reviewed, and in turn the amount of time needed to complete the entire process. The authors defined the whitelisting technique as a way to compare a known list of hash values against the evidence and exclude any matches, as the contents of those files have already been determined. This list of hash values can represent either safe files or known contraband. The method offered by Neuner et al. (2015) is completely automated and requires no human intervention. Both solutions explained above are uncontroversial, as they use standard methods to locate the evidence. However, that is not the case with all solutions proposed to solve the data volume issue.
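As a rough sketch of the whitelisting idea (not Neuner et al.’s actual implementation), files whose hashes appear on a list of already-classified files can be dropped from review. The hash set below is illustrative; in practice it would come from a reference data set maintained by the laboratory:

```python
import hashlib
from pathlib import Path

# Illustrative whitelist of SHA-256 values for files whose contents are
# already known (the entry shown is the hash of an empty file).
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large evidence files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def filter_known_files(paths):
    """Return only the files whose hashes are NOT on the whitelist,
    i.e., the files a practitioner still needs to review."""
    return [p for p in paths if sha256_of(p) not in KNOWN_HASHES]
```

The same structure works in reverse for a contraband list: keeping, rather than discarding, the matches.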
Quick and Choo (2014) explained that many researchers have addressed the issue of data volume through the use of the term sufficiency of examination. Those researchers did not focus on reducing the amount of data to be examined. Instead, they proposed limiting the search for electronic information to only the amount that would answer the main questions of the investigation. In other words, if a digital forensic practitioner is examining a computer system for evidence related to a child pornography investigation, the examination can be stopped once enough evidence has been found to charge the suspect. This means that not all of the computer system would be examined, and practitioners would be able to complete more examinations in a shorter period of time.
Some may argue that conducting a partial examination of the data could lead to missing valuable digital evidence. In the child pornography example above, a full exam might lead to discovering pictures and videos of minors who are being sexually exploited; those victims could potentially be identified and rescued, and a partial examination could prevent that rescue. Still, the assumption that a full exam could lead to identifying other victims does not stand when confronted with the facts presented by Quick and Choo (2014), who highlighted that delaying the results of a forensic exam has grievous negative effects. These effects can be seen in suspects who wait a long time for forensic practitioners to conclude their exams. The same goes for the victims, who are unable to get any closure during the investigation, as they all must await the conclusion of the exam.
The Proposed Methodology
Digital forensic practitioners continue to face a great deal of pressure from the challenges explained earlier. These challenges place obstacles in the way of examinations, in turn causing practitioners to spend an extensive amount of time on each investigation. However, it appears that researchers have not invested much effort into creating methodologies that would support practitioners in easing the pressure and eliminating the challenges. Some may raise a compelling argument that not all challenges can be resolved in the context of a methodology. For instance, data encryption is an issue that prevents practitioners from accessing the evidence and can be categorized as a technical challenge; researchers cannot address it in the context of a digital forensic methodology, as methodologies are supposed to address the higher-level processes that occur during digital forensic investigations. Even though this argument seems compelling, researchers can still create methodologies that are at least capable of reducing the workload pressure on practitioners.
Selamat et al. (2008) described the Extended Model presented by Ciardhuain (2004) as a complete framework that provides clear stages for digital forensic investigations. Ciardhuain (2004) suggested that multiple steps must be taken throughout a digital forensic investigation: awareness, authorization, planning, notification, search and identification of evidence, collection, transport, storage, examination, hypotheses, presentation, proof/defense, and dissemination. Selamat et al. (2008) also explained that the Extended Model includes all five phases that were agreed upon by many researchers. This means the remaining phases of the Extended Model are not related to digital forensics, and removing them would not affect the authenticity or reliability of the electronic evidence. However, some may argue that the Extended Model was not intended to be conducted only by digital forensic practitioners, as many of its phases can be performed by non-practitioners. For instance, in a criminal investigation, the authorization phase can be completed by law enforcement personnel to grant forensic practitioners lawful access to the evidence. This means that the Extended Model is a framework not only for practitioners but for all investigations that involve digital evidence.
This paper proposes a new methodology, the Focused Digital Forensic Methodology (FDFM), that is capable of eliminating the data volume issue and the lack of focus in current digital forensic methodologies. The FDFM is designed to be a reflection of the current workflow of law enforcement and civil investigations. It focuses on ensuring that digital forensic investigations are conducted properly without overloading practitioners with unrelated activities. Further, the FDFM proposes techniques to reduce the volume of data in order to cut the time required to complete examinations of electronic evidence, especially on larger data sets.
A quick review of the Extended Model leads to the conclusion that it focuses on evidence-related activities that are considered common knowledge in the law enforcement and digital forensic fields. For instance, the transportation and storage activities of the EIDIP are two parts of evidence handling that both law enforcement personnel and digital forensic practitioners are trained in. It would therefore add no value to emphasize the idea that digital forensic practitioners must transport and store the evidence after it is collected. For that reason, the FDFM excludes aspects related to the proper handling of evidence that can be considered common knowledge in the digital forensic field.
Phases of the FDFM
The FDFM is designed in a way similar to the IDIP and EIDIP, as it is broken down into multiple phases, which are further broken down into sub-phases.
Preparation Phases. The preparation phases apply in two situations: when creating a new digital forensic team and after completing a digital forensic investigation. These phases are mainly aimed at ensuring that digital forensic teams, whatever their mission might be, are capable of initiating and completing an investigation properly and without any problems. As in the EIDIP, the focus here is on ensuring that the forensic team is trained and equipped for its assigned mission.
Training. To execute a forensic methodology properly, the forensic team must receive training that equips all team members with the needed knowledge. This training focuses not only on how to collect and search evidence properly but also on how to use the tools needed during those processes. For instance, the team should understand the value of preserving data and should be able to utilize any software or hardware tools capable of preserving and acquiring data from different platforms.
Equipment. There are many tools that digital forensic professionals utilize during investigations that are capable of serving different purposes. One of the functions that those tools are able to accomplish is automating different processes during the investigation, and completing those jobs in a timely manner. Depending on the main mission of the forensic team, tools and equipment must be available to ensure a proper and fast completion of any digital investigation.
On-Scene Phases. These phases cover the processes that would be carried out when a digital forensic team is called in to assist with executing search warrants or preserving data for a civil litigation.
Physical Preservation. During this phase, digital forensic practitioners ensure that any item that may contain electronic information of value is protected. This protection ensures that no damage is inflicted on any electronic device that could cause the loss of the information within. For instance, if a digital forensic practitioner finds one of the items sought during the search of a residence, that item must be kept in a location under the control of the searching team. This means that no occupant of the residence would be able to damage the device, which they might attempt if they believed it contained incriminating evidence.
Electronic Isolation. The other part of preservation is ensuring that the information within the collected device is preserved by eliminating any electronic interference. An item like a mobile device would have to be isolated from the cellular network to ensure that no one is capable of damaging or changing the data remotely. The same applies to computers or other items with network connectivity, as suspects could connect remotely to those devices and attempt to eliminate any information that is potentially damaging to their case.
Electronic Preservation. The last part of preservation is creating a complete or partial duplicate of the targeted electronic information. The created copy ensures that the electronic evidence is preserved in a forensic image. If devices are not supposed to be removed from the scene, forensic images would have to be created on scene and then taken back to the laboratory for examination. A partial collection can also be conducted during this phase, which is especially advantageous for civil litigation purposes. In those cases, the collection of an entire computer system is not recommended, and practitioners would have to search for and collect only the relevant data.
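The duplicate-and-verify step can be illustrated with a simplified sketch. Real acquisitions rely on write blockers and dedicated imaging formats (e.g., raw or E01); this toy file-to-file version only shows the principle of hashing the source and the copy to confirm a faithful duplicate:

```python
import hashlib
import shutil
from pathlib import Path

def acquire_image(source: Path, image: Path) -> str:
    """Copy the source to an image file and verify the copy by hash.
    Returns the SHA-256 of the source, which would be recorded in the
    acquisition notes. Raises if the image does not match the source."""
    src_hash = hashlib.sha256(source.read_bytes()).hexdigest()
    shutil.copyfile(source, image)
    img_hash = hashlib.sha256(image.read_bytes()).hexdigest()
    if src_hash != img_hash:
        raise RuntimeError("acquisition hash mismatch; image is not a faithful copy")
    return src_hash
```

The recorded hash later allows anyone to verify that the image examined in the laboratory is still identical to what was collected on scene.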
Laboratory Phases. These phases are conducted when the team is at the laboratory and has all the evidence or forensic images collected from the scene. Prior to interacting with the evidence, an attempt is made to reduce the volume of data that an examiner would have to process and analyze. The reduction of data volume is crucial for cases that involve a large number of electronic devices and a large volume of electronic storage. As explained earlier, reducing the volume of data expedites the examination of the evidence and allows for a better workflow. Once the reduction of data has been completed, examiners can begin processing and analyzing data.
Building Case Profile. This phase is somewhat similar to the creation of a case profile presented by Ruibin et al. (2005). Those authors suggested using input from the investigation to identify the type of information that a digital forensic practitioner should focus on finding during the examination. For instance, if the case is related to pictures of evidentiary value, then a practitioner would focus on reviewing all the pictures found on a device, without the need to review any other type of electronic information. For the FDFM, building a case profile begins, as in Ruibin et al. (2005), with obtaining information from the investigating agency regarding their investigation. The obtained information must focus specifically on the relationship between the electronic evidence and the incident under investigation. In other words, the investigating agency would have to explain what role the devices played in the incident. For instance, a mobile device might be submitted to the digital forensic team as part of a rape investigation. The investigating agency would have to explain how the mobile device was related to the incident; in such a situation, the phone could have been used by the suspect to communicate with the victim or lure them to a certain location.
Once all the information regarding the investigation and the evidence has been obtained, the type of data relevant to the investigation can be determined. In the example above regarding the rape case, the suspect may have used text messages to communicate with the victim, which makes any text-based communication between the two parties relevant material that must be reviewed and analyzed. In some cases, determining the approximate size of the information sought can also be relevant and allows the examination to be even more focused. For instance, suppose an investigating agency submitted one 500-gigabyte external drive and one 8-gigabyte thumb drive to the digital forensic team, and the investigation was seeking documents approximately 30 gigabytes in size that were relevant to a fraud case. In this situation, a forensic practitioner would focus on the external drive, as it is the only device with enough capacity to hold the documents; the 8-gigabyte thumb drive cannot contain them because of its small capacity.
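The capacity reasoning in this example reduces to a simple check; the device names and sizes below mirror the hypothetical scenario above:

```python
# A device whose total capacity is smaller than the data sought cannot
# contain that data in full, so it is deprioritized or excluded.
GB = 10**9

def can_hold(device_capacity_bytes: int, sought_bytes: int) -> bool:
    """True if the device is at least large enough to store the sought data."""
    return device_capacity_bytes >= sought_bytes

devices = {"external_drive": 500 * GB, "thumb_drive": 8 * GB}
sought = 30 * GB  # approximate size of the documents being sought
candidates = [name for name, cap in devices.items() if can_hold(cap, sought)]
# candidates -> ["external_drive"]
```

This check is only a coarse filter: a large device may still hold nothing relevant, and it complements rather than replaces the other profile criteria.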
The next step of the case profile is determining the most relevant type of devices based on the information obtained thus far about the investigation. If the examination is seeking text-based messages sent to the victim, then it is likely that a mobile device was used to send those messages. This does not mean that a computer could not have been used to send such messages, but a mobile device is the most likely source in this situation, so the profile can place further emphasis on the possibility of a mobile device being the source of the relevant messages. It is apparent from the above that building a case profile requires an experienced digital forensic practitioner; using prior experience and knowledge while building the case profile results in a more accurate and realistic conclusion.
The final step of the case profile is setting the goals that the investigation hopes to accomplish by the end of the examination. These goals are based on all the information gathered thus far and on how the evidence is expected to prove or disprove an incident or event. For instance, in the rape investigation mentioned above, retrieving the text messages from the mobile device is the first goal, as they would show that the suspect lured the victim to the crime scene. The timeline of those messages is another piece of information that would corroborate the victim’s statement regarding events prior to the incident. While examining the mobile device, these goals make clear to the digital forensic practitioner what results are expected from the examination of the device.
In civil litigation cases and even criminal cases, it would be beneficial during this phase to generate a list of keywords that could be used during the examination process. Those keywords must relate directly to the evidence and be based on the information obtained about it. When dealing with civil litigation cases, the keyword list becomes the main method used to locate any responsive and relevant electronic information. The words added to the list must also relate to the goals set to be accomplished by the end of the investigation. For instance, if the civil case concerns patent infringement, then the keywords should reveal documents or data that could prove or disprove the allegations submitted by one party against the other.
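A minimal sketch of applying such a keyword list, assuming text has already been extracted from the documents (real matters would rely on indexing, stemming, and review platforms rather than plain substring matching):

```python
def keyword_hits(documents: dict, keywords: list) -> dict:
    """Map each document name to the keywords found in it (case-insensitive).
    `documents` maps names to already-extracted text; documents with no
    matches are omitted, leaving only potentially responsive items."""
    hits = {}
    for name, text in documents.items():
        lowered = text.lower()
        found = [kw for kw in keywords if kw.lower() in lowered]
        if found:
            hits[name] = found
    return hits
```

For a patent-infringement matter, the keyword list would contain terms drawn from the disputed patent and product names, and the resulting hit map tells the reviewer which documents to read first.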
Evaluation of the Evidence. This evaluation of the evidence occurs based on the case profile created during the previous phase. The main goal of this phase is to exclude any device that does not match the case profile and most likely would not hold any of the information sought during the examination. Excluding items that do not have relevant information reduces the amount of information that a digital forensic practitioner examines in each case, which ultimately reduces the time needed to complete the examinations and gives the investigators faster results. As explained above, the case profile determines the type of electronic information, the relevant timeline, the size of the electronic information, and other aspects of the evidence. Using the generated profile, the items of evidence can be listed so that the device most likely to contain the information is placed at the top of the list. The rest of the devices are ordered in the same way, based on the likelihood that they contain the targeted data; the last item on the list is the least likely to contain information relevant to the investigation.
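The relevance-ordered list can be sketched as a simple scoring sort. The scoring criterion used here (overlap between a device's data types and the profile's relevant types) is an illustrative assumption; a real evaluation would also weigh timeline, capacity, and other profile aspects:

```python
def rank_devices(devices, profile_types):
    """Sort devices so the one most likely to hold relevant data comes first.
    Each device is a dict with a 'name' and the data 'types' it can hold;
    the score is the number of profile-relevant types the device supports."""
    def score(device):
        return len(set(device["types"]) & set(profile_types))
    return sorted(devices, key=score, reverse=True)
```

With this ordering in hand, examination proceeds from the top of the list, and devices scoring zero can be set aside entirely.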
Forensic Acquisition. During this phase, any data that was not acquired or preserved on scene would be imaged. The imaging process focuses on the items at the top of the list generated during the Evaluation of the Evidence phase. This means that only items believed to hold the relevant data would be acquired, not the entire list of items, which reduces both the storage space required for the forensic images and the time required for imaging.
Partial Examination/Analysis. Many digital forensic models separate the examination phase from the analysis phase, as is the case with the Abstract Digital Forensic Model (Reith, Carr, and Gunsch, 2002). Those models assume that a digital forensic practitioner searches the evidence for any relevant data during the examination phase, then determines the significance of the information found during the analysis phase. In the FDFM, those two phases are combined into one, as they occur at the same moment during the search of the evidence and cannot be separated. The FDFM proposes creating a case profile prior to examining the evidence, which means that a digital forensic practitioner knows beforehand what to search for and the significance of that information. So, while searching the evidence, examiners can determine the significance of the information as it is being located.
An example of the situation explained above is the rape incident mentioned earlier and the mobile device holding the information related to it. The practitioner had prior knowledge that the targeted information, related to luring the victim to the incident location, was in the text messages. While reviewing the text messages, the digital forensic practitioner found a message showing how the suspect and victim had met. Then, for the day of the incident, other messages were found showing how the victim was lured to the location. As the forensic practitioner reviews the messages, they evaluate the information simultaneously and weigh the significance of each piece as it is found, since each text message adds another piece of information related to the incident. At the end of this phase, the forensic practitioner has a full picture of what happened that day and any other information related to the incident. It would not be practical for the forensic practitioner to go back and reanalyze information that was already analyzed as it was found. It is also worth mentioning that digital forensic practitioners usually mark the relevant data as it is found to ensure that it is included in the final report generated by the tool used to review the evidence.
The FDFM suggests the use of a method referenced by Quick and Choo (2014), which aims at conducting a partial examination and analysis of the evidence as a solution to the data volume issue. According to Quick and Choo (2014), the digital forensic practitioner would be “doing enough examination to answer the required questions, and no more” (p. 282). The same concept can be applied using the FDFM, as digital forensic practitioners conduct examinations only to accomplish the goals set up during the Building Case Profile phase. The examination is performed in the same sequence of relevance determined during the Evaluation of the Evidence phase: it begins with the most relevant items and continues down the list until all the goals are accomplished. One of the greatest benefits of conducting partial examinations of the evidence is maintaining the privacy of the owners. For instance, mobile devices contain a vast amount of private information about their owners; the more information is reviewed, the more privacy is lost. Limiting the examination to the needs of the investigation helps maintain a certain degree of privacy for the owner of the data.
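The stop-when-goals-are-met loop can be sketched as follows, with evidence items consumed in the relevance order set during the Evaluation of the Evidence phase. The `examine` callable stands in for whatever tool-assisted review determines which goals an item satisfies:

```python
def partial_examination(ranked_items, goals, examine):
    """Examine items in relevance order, stopping once every goal is met.
    `examine(item)` returns the set of goals that item satisfied.
    Returns the items actually examined and any goals still outstanding."""
    remaining = set(goals)
    examined = []
    for item in ranked_items:
        if not remaining:
            break  # all goals met; skip the rest of the evidence
        examined.append(item)
        remaining -= examine(item)
    return examined, remaining
```

If the loop ends with goals still outstanding, the examiner knows the partial examination was insufficient and can expand the review rather than report prematurely.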
Reporting. During this phase, all the information found during the examination is placed in a report to inform the investigating agency of the findings. There are many report formats used by different agencies; however, all formats have one thing in common: they are driven by the findings of the examination and not by opinion. Reports that are supported by solid evidence are hard to dispute, as the evidence behind the information in those reports leaves little room for argument.
Conclusions and Future Research
Much of the literature proposes methodologies with different focuses. However, many of those methodologies include phases that are not related to their forensic aspect. Many researchers have also addressed the issue of the volume of data, which causes long delays in digital forensic exams. The proposed methodology, the FDFM, allows digital forensic professionals to focus more on forensics during any digital forensic investigation. It excludes phases that were included in other methodologies but are considered common knowledge within the digital forensic field. The proposed methodology also addresses one of the biggest challenges of digital forensic investigations, which is the volume of data. The FDFM proposes two methods to reduce the volume of data: excluding devices that do not contain relevant information and conducting partial examinations. Both techniques can be applied only after a case profile has been generated based on the information obtained from the investigating agency. Future research can focus on integrating other techniques into the FDFM to eliminate other challenges of digital forensic investigations.
Agarwal, A., Gupta, M., & Gupta, S. (2011). Systematic digital forensic investigation model. International Journal of Computer Science and Security (IJCSS), 5(1), 118-131.
Baryamureeba, V., & Tushabe, F. (2004). The enhanced digital investigation process model. Proceedings of the Digital Forensic Research Conference, Baltimore, MD.
Carrier, B., & Spafford, E. H. (2003). Getting physical with the Digital Investigation Process. International Journal of Digital Evidence, 2(2), 1-20.
Ciardhuain, S. O. (2004). An extended model of cybercrime investigations. International Journal of Digital Evidence, 3(1), 1-22.
Neuner, S., Mulazzani, M., Schrittwieser, S., & Weippl, E. (2015). Gradually improving the forensic process. In the 10th International Conference on Availability, Reliability and Security, 404-410. IEEE.
Perumal, S. (2009). Digital forensic model based on Malaysian investigation process. International Journal of Computer Science and Security (IJCSS), 9(8), 38-44.
Pollitt, M. M. (1995). Computer forensics: An approach to evidence in cyberspace. In the 18th National Information Systems Security Conference, 487-491. Baltimore, MD.
Quick, D., & Choo, K. R. (2014). Impact of increasing volume of digital forensic data: A survey and future research challenges. Digital Investigation, 11(4), 273-294.
Reith, M., Carr, C., & Gunsch, G. (2002). An examination of the digital forensic models. International Journal of Digital Evidence, 1(3), 1-12.
Ruibin, G., Yun, T., & Gaertner, M. (2005). Case-relevance information investigation: binding computer intelligence to the current computer forensic framework. International Journal of Digital Evidence, 4(1), 147-67.
Selamat, S. R., Yusof, R., & Sahib, S. (2008). Mapping process of digital forensic investigation framework. International Journal of Computer Science and Network Security, 8(10), 163-169.
About the Author
Haider Khaleel is a Digital Forensics Examiner with the US Army, previously a field agent with Army CID. Haider received a Master’s Degree in Digital Forensic Science from Champlain College. The ideas presented in this article do not reflect the policies, procedures, and regulations of the author’s agency.
Correspondence concerning this article should be addressed to Haider H. Khaleel, Champlain College, Burlington, VT 05402. email@example.com
Digital Forensics as a Big Data Challenge
Digital Forensics, as a science and part of the forensic sciences, is facing new challenges that may well render established models and practices obsolete. The size of potential digital evidence sources has grown exponentially, be it hard disks in desktops and laptops or solid-state memories in mobile devices like smartphones and tablets, even while access latency lags behind. Cloud services are now sources of potential evidence in a vast range of investigations; network traffic follows the same growing trend; and in cyber security the ability to sift quickly through vast amounts of data is now paramount. On a higher level, investigations and intelligence analysis can profit from sophisticated analysis of datasets such as social network structures or corpora of text to be analysed for authorship and attribution. All of the above highlights the convergence between so-called data science and digital forensics: the fundamental challenge is analysing vast amounts of data (“big data”) in actionable time while preserving forensic principles, so that the results can be presented in a court of law. The paper, after introducing digital forensics and data science, explores these challenges and proposes how techniques and algorithms used in big data analysis can be adapted to the unique context of digital forensics, ranging from managing evidence via Map-Reduce to machine learning techniques for triage and analysis of big forensic disk images and network traffic dumps. In conclusion, the paper proposes a model to integrate this new paradigm into established forensic standards and best practices and tries to foresee future trends.
1.1 Digital Forensics
What is digital forensics? We report here one of the most useful definitions of digital forensics yet formulated. It was developed during the first Digital Forensics Research Workshop (DFRWS) in 2001 and is still very much relevant today:
Digital Forensics is the use of scientifically derived and proven methods toward the preservation, collection, validation, identification, analysis, interpretation, documentation and presentation of digital evidence derived from digital sources for the purpose of facilitating or furthering the reconstruction of events found to be criminal, or helping to anticipate unauthorized actions shown to be disruptive to planned operations. [Pear01]
This formulation stresses first and foremost the scientific nature of digital forensics methods, at a point in time when the discipline was transitioning from a “craft” to an established field and a rightful part of the forensic sciences. Digital forensics was also transitioning from being practised mainly in separate environments, such as law enforcement bodies and enterprise audit offices, to being a unified field. Nowadays this process is well advanced, and digital forensics principles, procedures and methods are shared by a large part of its practitioners, whatever their backgrounds (criminal prosecution, defence consulting, corporate investigation, compliance). Applying scientifically valid methods implies important concepts and principles to be respected when dealing with digital evidence. Among others we can cite:
- Previous validation of tools and procedures. Tools and procedures should be validated by experiment prior to their application on actual evidence.
- Reliability. Processes should yield consistent results and tools should present consistent behaviour over time.
- Repeatability. Processes should generate the same results when applied to the same test environment.
- Documentation. Forensic activities should be well-documented, from the inception to the end of the evidence life-cycle. On the one hand, strict chain-of-custody procedures should be enforced to assure evidence integrity; on the other, complete documentation of every activity is necessary to ensure repeatability by other analysts.
- Preservation of evidence. Digital evidence is easily altered and its integrity must be preserved at all times, from the very first stages of operations, to avoid spoliation and degradation. Both technical (e.g. hashing) and organizational (e.g. clear accountability for operators) measures are to be taken.
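As a minimal illustration of the technical measure just mentioned, the sketch below streams an evidence image through SHA-256 so the digest can be recorded in the chain-of-custody documentation at acquisition time and re-verified later; the function names are our own, not from any standard tool.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so arbitrarily large images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, recorded_digest):
    """Recompute the digest and compare; a mismatch signals alteration."""
    return sha256_of(path) == recorded_digest
```

Recording the digest at the scene and re-checking it before analysis is the simplest way to demonstrate that the image examined is the image acquired.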
These basic tenets are currently being challenged in many ways by the shifting technological and legal landscape practitioners have to contend with. While this paper shall not dwell on the legal side of things, it is obviously something always to be considered in forensics.
Regarding the phases that usually make up the forensic workflow, we refer here again to the only international standard available [ISO12] and describe them as follows:
- Identification. This process includes the search, recognition and documentation of the physical devices on the scene potentially containing digital evidence. [ISO12]
- Collection. Devices identified in the previous phase can be collected and transferred to an analysis facility, or acquired (see the next step) on site.
- Acquisition. This process involves producing an image of a source of potential evidence, ideally identical to the original.
- Preservation. Evidence integrity, both physical and logical, must be ensured at all times.
- Analysis. Interpretation of the data from the evidence acquired. It usually depends on the context, the aims or the focus of the investigation, and can range from malware analysis to image forensics, database forensics and many more application-specific areas. On a higher level, analysis can include content analysis via, for instance, forensic linguistics or sentiment analysis techniques.
- Reporting. Communication and/or dissemination of the results of the digital investigation to the parties concerned.
1.2 Data Science
Data Science is an emerging field growing at the intersection of statistics and machine learning, completed by domain-specific knowledge and fuelled by big datasets. Hal Varian gave a concise definition of the field:
[Data science is] the ability to take data – to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it. [Vari09]
We can see here the complete cycle of data management: data science is concerned with the collection, preparation, analysis, visualization, communication and preservation of large sets of information (a paraphrase of another insightful definition by Jeffrey Stanton of Syracuse University’s School of Information Studies). The parallels with the digital forensics workflow are clear, but the mention of visualization in both definitions deserves to be stressed. Visualization is hardly ever mentioned in digital forensics guidelines and standards, but as the object of analysis moves towards “Big Data” it will necessarily become one of the most useful tools in the analyst’s box, for instance in the prioritization phase but also for dissemination and reporting: visual communication is probably the most efficient way into a human brain, yet this channel is underused by most of today’s forensic practitioners.
If Data Science is concerned with “Big Data”, what is Big Data anyway? After all, “big” is a relative concept, prone to change with time. A working definition is any data that is difficult to manage and work with: datasets so big that conventional tools, e.g. relational databases, are not practical or useful. [ISAC13] From the point of view of data science, the challenges of managing big data can be summarized as three Vs: Volume (size), Velocity (needed for interactivity) and Variety (different sources of data). In the next paragraph we shall see how these three challenges dovetail with the digital forensics context.
“Golden Age” is a common name for the period in the history of digital forensics running roughly from the 1990s to the first decade of the twenty-first century. During that period the technological landscape was dominated by the personal computer, mostly by a single architecture (x86 plus Windows), and data stored in hard drives represented the vast majority of evidence, so much so that “Computer Forensics” was the accepted term for the discipline. Storage sizes also allowed complete bitwise forensic copies of the evidence for subsequent analysis in the lab. The relative uniformity of the evidence facilitated the development of the digital forensic principles outlined above, enshrined in several guidelines and eventually in the ISO/IEC 27037 standard. Inevitably, these lagged behind real-world developments: recent years brought many challenges to the “standard model”, first among them the explosion in the average size of the evidence examined in a single case. Reasons for this include:
- A dramatic drop in hard drive and solid state storage cost (currently estimated at $80 per Terabyte) and consequently an increase in storage size per computer or device;
- Substantial increase in magnetic storage density and diffusion of solid-state removable media (USB sticks, SD and other memory cards etc) in smartphones, notebooks, cameras and many other kinds of devices;
- Worldwide huge penetration of personal mobile devices like smartphones and tablets, not only in Europe and America, but also in Africa – where they constitute the main communication mode in many areas – and obviously in Asia;
- Introduction and increasing adoption by individuals and businesses of cloud services – infrastructure services (IAAS), platform services (PAAS) and applications (SAAS) – made possible in part by virtualization technology enabled in turn by the modern multi-core processors;
- Network traffic is ever more part of the evidence in cases and the sheer size of it has – again – obviously increased in the last decade, both on the Internet and on 3G-4G mobile networks, with practical but also ethical and political implications;
- Connectivity is rapidly becoming ubiquitous and the “Internet of things” is near, especially considering the transition to IPv6 in the near future. Even when not networked, sensors are everywhere, from appliances to security cameras, from GPS receivers to embedded systems in cars, from smart meters to Industrial Control Systems.
To give a few quantitative examples of the trend: in 2008 the FBI Regional Computer Forensics Laboratories (RCFLs) Annual Report [FBI08] explained that the agency’s RCFLs processed 27 percent more data than during the preceding year; the 2010 report [FBI10] gave an average case size of 0.4 Terabytes. According to a recent (2013) informal survey among forensic professionals on Forensic Focus, half of all cases involve more than one Terabyte of data, and one in five is over five Terabytes in size.
The sheer quantity of evidence associated with a case is not the only measure of its complexity, and growth in size is not the only challenge digital forensics is facing: evidence is becoming more and more heterogeneous in nature and provenance, following the evolving trends in computing. The workflow phase most impacted by this is analysis, where, even with proper prioritization, it is necessary to sort through diverse categories and sources of evidence, structured and unstructured. Data sources themselves are far more differentiated than in the past: it is common now for a case to include evidence originating from personal computers, servers, cloud services, phones and other mobile devices, digital cameras, even embedded systems and industrial control systems.
3 Rethinking Digital Forensics
In order to face these many challenges, but also to leverage the opportunities they bring, the discipline of digital forensics will have to rethink established principles in some ways, reorganize well-known workflows, and even adopt tools not previously considered viable for forensic use; concerns about the security of some machine learning algorithms have been voiced, for instance in [BBC+08]. On the other hand, forensic analysts’ skills need to be rounded out, both to make better use of these new tools and to help integrate them into forensic best practices and validate them. The dissemination of “big data” skills will have to include all actors in the evidence lifecycle, starting with Digital Evidence First Responders (DEFRs), as identification and prioritization will grow in importance and skilled operators will be needed from the very first steps of the investigation.
Well-established principles shall need to undergo at least a partial extension and rethinking because of the challenges of Big Data.
- Validation and reliability of tools and methods gain even more relevance in big data scenarios because of the size and variety of datasets, coupled with the use of cutting-edge algorithms that still require validation efforts: a body of test work, first on methods and then on tools, in controlled environments and on test datasets, before their use in court.
- Repeatability has long been a basic tenet in digital forensics, but most probably we will be forced to abandon it, at least in its strictest sense, for a significant part of evidence acquisition and analysis. Repeatability stricto sensu is already impossible to achieve in nearly all instances of forensic acquisition of mobile devices, and the same applies to cloud forensics. As Machine Learning tools and methods become widespread, reliance on previous validation will be paramount. As an aside, this stresses once more the importance of using open methods and tools that can be independently and scientifically validated, as opposed to black-box tools or, worse, tools reserved to law enforcement.
- As for documentation, its importance for a sound investigation is even greater when non-repeatable operations and live analysis routinely become part of the investigation process. Published data about the validation results of the tools and methods used, or at least pointers to it, should be an integral part of the investigation report.
Keeping in mind how the forensic principles may need to evolve, we present here a brief summary of the forensics workflow and how each phase may have to adapt to big data scenarios. The ISO/IEC 27037 International Standard covers the identification, collection, acquisition and preservation of digital evidence (literally, “potential” evidence). Analysis and disposal are not covered by this standard, but will be by future guidelines, currently in development, in the 27xxx series.
Identification and collection
Here the challenge is selecting evidence in a timely manner, right on the scene. Guidelines for proper prioritization of evidence should be further developed, abandoning the copy-all paradigm and strict evidence integrity in favour of appropriate triage procedures: this implies skimming through all the (potential) evidence right at the beginning and selecting the relevant parts. First responders’ skills will be even more critical than they currently are and, in corporate environments, so will preparation procedures.
When classic bitwise imaging is not feasible due to the evidence size, prioritization or “triage” procedures can be conducted, properly justified and documented, because integrity is no longer absolute: the original source has been modified, if only by selecting what to acquire. Visualization can be a very useful tool here, both for low-level filesystem analysis and for higher-level content analysis. Volume is a challenge because dedicated hardware is required for acquisition, be it of storage or of online traffic, while in the not so distant past an acquisition machine could be built with off-the-shelf hardware and software. Variety poses a challenge of a slightly different kind, especially when acquiring mobile devices, due to the huge number of physical connectors and platforms.
Again, preserving all the evidence securely and in compliance with legal requirements calls for quite a substantial investment by forensic labs working on a significant number of cases.
Integrating methods and tools from data science means moving past the “sausage factory” forensics still widespread today, in which under-skilled operators rely heavily on point-and-click all-in-one tools to perform the analysis. Analysts will need to include a plurality of tools in their toolbox, and not only that: they must understand and evaluate the algorithms and implementations those tools are based upon. The need for highly skilled analysts and operators is clear, and suitable professional qualifications will have to develop to certify this.
The final report for an analysis conducted using data science concepts should contain accurate evaluations of the tools and methods used, including data from the validation process; accurate documentation becomes even more fundamental as strict repeatability grows very hard to uphold.
3.3 Some tools for tackling the Big Data Challenge
At this stage, due also to the fast-changing landscape in data science, it is hard to systematically categorize its tools and techniques. We review here some of them.
Map-Reduce is a framework for massively parallel tasks. It works well when the datasets do not involve much internal correlation, which is not the case for digital evidence in general; but a task like file fragment classification is well suited to the Map-Reduce paradigm. Attributing file fragments, whether from a filesystem image or from unallocated space, to specific file types is a common task in forensics, and machine learning classification algorithms (e.g. logistic regression, support vector machines) can be adapted to Map-Reduce if the analyst forgoes the possible correlations among single fragments. A combined approach, where such a classifier is paired with, for instance, a decision tree method, would probably yield higher accuracy.
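The map step just described can be sketched as follows. Each fragment is classified independently, which is exactly what makes the task distributable; a deliberately trivial printable-byte heuristic stands in for a trained classifier such as logistic regression, and the `mapper` parameter is our own placeholder for a real distributed map phase:

```python
def classify_fragment(fragment: bytes) -> str:
    """Toy two-class rule: mostly printable bytes -> 'text', else 'binary'."""
    printable = sum(1 for b in fragment if 32 <= b < 127 or b in (9, 10, 13))
    ratio = printable / len(fragment) if fragment else 0.0
    return "text" if ratio > 0.9 else "binary"

def map_classify(fragments, mapper=map):
    # `mapper` stands in for the distributed map phase; in practice one would
    # swap in multiprocessing.Pool().map or a Hadoop/Spark job.
    labels = list(mapper(classify_fragment, fragments))
    # "Reduce" phase: aggregate per-type counts across all fragments.
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return labels, counts
```

Because no fragment's label depends on any other fragment, the map phase scales linearly with the number of workers, at the cost of the cross-fragment correlations the text mentions.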
Decision trees and random forests are fruitfully brought to bear in fraud detection software, where the objective is to find in a vast dataset the statistical outliers – in this case anomalous transactions, or in another application, anomalous browsing behaviour.
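The outlier-hunting objective can be illustrated with a minimal sketch. A plain z-score test stands in here for the decision-tree and random-forest methods the text names, and the transaction amounts are invented:

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=3.0):
    """Flag values lying more than `threshold` standard deviations
    from the mean: the crudest possible statistical-outlier detector."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]
```

A single anomalous transaction in a stream of routine ones stands out immediately; real fraud-detection systems would use many features per transaction and tree ensembles rather than one amount and a threshold.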
In audio forensics, unsupervised learning techniques under the general heading of “blind signal separation” give good results in separating two superimposed speakers, or a voice from background noise. They rely on mathematical underpinnings to find, among the possible solutions, the least correlated signals.
In image forensics, again, classification techniques are useful for automatically reviewing big sets of hundreds or thousands of image files, for instance to separate suspect images from the rest.
Neural Networks are suited for complex pattern recognition tasks in forensics. In one supervised approach, successive snapshots of the file system are used to train a network to recognize the normal behaviour of an application; after an event, the system can then automatically build an execution timeline from a forensic image of the filesystem. [KhCY07] Neural Networks have also been used to analyse network traffic, but in that case the results do not yet reach high levels of accuracy.
Natural Language Processing (NLP) techniques, including Bayesian classifiers and unsupervised clustering algorithms like k-means, have been successfully employed for authorship verification and for classification of large bodies of unstructured text, emails in particular.
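The Bayesian-classifier idea can be shown compactly: a word-count naive Bayes with Laplace smoothing, applied to a tiny invented authorship corpus. Real systems would use far richer features than raw word counts:

```python
import math
from collections import Counter, defaultdict

def train(labelled_docs):
    """labelled_docs: iterable of (author, text) pairs. Returns a model dict."""
    word_counts = defaultdict(Counter)
    doc_counts = Counter()
    for author, text in labelled_docs:
        word_counts[author].update(text.lower().split())
        doc_counts[author] += 1
    vocab = {w for c in word_counts.values() for w in c}
    return {"wc": word_counts, "dc": doc_counts, "vocab": vocab}

def classify(model, text):
    """Pick the author maximizing log P(author) + sum log P(word|author)."""
    total_docs = sum(model["dc"].values())
    best, best_lp = None, float("-inf")
    for author in model["dc"]:
        lp = math.log(model["dc"][author] / total_docs)      # prior
        counts = model["wc"][author]
        denom = sum(counts.values()) + len(model["vocab"])
        for w in text.lower().split():
            lp += math.log((counts[w] + 1) / denom)          # Laplace smoothing
        if lp > best_lp:
            best, best_lp = author, lp
    return best
```

Trained on a handful of messages per author, the classifier attributes new text to whichever author's vocabulary it most resembles, which is the core of the email-authorship applications mentioned above.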
The challenges of big data evidence already highlight the necessity of revising tenets and procedures firmly established in digital forensics. New validation procedures, analysts’ training and analysis workflows will be needed to confront the changed landscape. Furthermore, few forensic tools implement, for instance, machine learning algorithms, while, from the other side, most machine learning tools and libraries are not suitable and/or validated for forensic work; there is thus still wide space for the development of innovative tools leveraging machine learning methods.
[BBC+08] Barreno, M. et al.: “Open Problems in the Security of Learning”. In: D. Balfanz and J. Staddon, eds., AISec, ACM, 2008, p. 19-26
[FBI08] FBI: “RCFL Program Annual Report for Fiscal Year 2008”, FBI 2008. http://www.fbi.gov/news/stories/2009/august/rcfls_081809
[FBI10] FBI: “RCFL Program Annual Report for Fiscal Year 2010”, FBI 2010.
[ISAC13] ISACA: “What Is Big Data and What Does It Have to Do with IT Audit?”, ISACA Journal, 2013, p. 23-25
[ISO12] ISO/IEC 27037:2012 International Standard: “Guidelines for identification, collection, acquisition and preservation of digital evidence”
[KhCY07] Khan, M., Chatwin, C. and Young, R.: “A framework for post-event timeline reconstruction using neural networks”, Digital Investigation 4, 2007
[Pear01] Pearson, G.: “A Road Map for Digital Forensic Research”. In: Report from DFRWS 2001, First Digital Forensic Research Workshop, 2001.
[Vari09] Varian, Hal in: “The McKinsey Quarterly”, Jan 2009
About the Author
Alessandro Guarino is a senior information security professional and independent researcher. He is the founder and principal consultant of StudioAG, a consultancy firm based in Italy and active since 2000, serving clients in both the private and public sectors and providing cybersecurity, data protection and compliance consulting services. He is also a digital forensics analyst and consultant, as well as an expert witness in court. He holds an M.Sc. in Industrial Engineering and a B.Sc. in Economics, with a focus on information security economics. He is an active ISO expert in JTC 1/SC 27 (IT Security Techniques committee) and contributed in particular to the development of cybersecurity and digital investigation standards. He represents Italy in the CEN-CENELEC Cybersecurity Focus Group and ETSI TC CYBER, and chairs the recently formed CEN/CENELEC TC 8 “Privacy management in products and services”. As an independent researcher, he has delivered presentations at international conferences and published several peer-reviewed papers.
Find out more and get in touch with the author at StudioAG.
Cracking crime just got a lot more innovative.
Police and biometrics researchers at Michigan State University have successfully unlocked the smartphone of a murder victim by using a digitally enhanced print-out of his fingerprint.
Officers from the digital forensics and cyber-crime unit at MSU’s police department approached the college’s biometrics research lab last month, having become aware of the team’s research on how printed fingerprints can spoof mobile-phone sensors.
Police had the fingerprints of the murder victim from a previous arrest, which they gave to the lab to 3D print in a bid to unlock the device—a Samsung Galaxy S6.
Unsure which finger was paired to the phone, the lab printed 2D and 3D replicas of all 10 of the slain man’s fingerprints. None of them unlocked the device, so the team then digitally enhanced the quality of prints by filling in the broken ridges and valleys. Rather than opting for a more expensive 3D model, they printed new 2D versions using a special conductive ink that would create an electrical circuit needed to spoof the phone sensor.
After multiple attempts, possible because the device did not demand a passcode after a certain number of failed tries, the team successfully unlocked the phone with one of the digitally enhanced 2D prints.
An MSU spokesperson told Quartz there were plans to print 3D models to test on other devices—there was no need to do so for the victim’s phone, as the 2D print was successful.
Professor Anil Jain, who led the research team at MSU, says the unlocking demonstrates “a weakness” in smartphones’ fingerprint authentication systems, and that he hoped it would “motivate phone developers to create advanced security measures for fingerprint liveness detection.” He added:
This shows that we need to understand what types of attacks are possible on fingerprint sensors, and biometrics in general, and how to fix them. If we don’t, the public will have less confidence in using biometrics. After all, biometric authentication was introduced in consumer devices to improve security.
According to MSU, this is the first time law enforcement has used such technology as part of an ongoing investigation. A spokesperson said the lead detective “even contacted the company that was asked to help with [unlocking] the San Bernardino shooter’s phone and he kept getting the same answer: can’t do it, the tech doesn’t exist. Well, the tech exists now!”
In a statement, Samsung said:
We are aware of the research from Michigan State University, but would like to remind users that it takes special equipment, supplies and conditions to simulate a person’s fingerprint, including actual possession of the fingerprint owner’s phone, to unlock the device. If there is a potential vulnerability or a new method that challenges our efforts to ensure security at any time, we will respond to issues as quickly as possible to investigate and resolve the issue.