Angus Marshall, Digital Forensic Scientist
Where to begin? I have a lot of different roles these days, but by day I’m a Lecturer in Cybersecurity – currently at the University of York, and also run my own digital forensic consultancy business. I drifted into the forensic world almost by accident back in 2001 when a server I managed was hacked. I presented a paper on the investigation of that incident at a forensic science conference and a few weeks later found myself asked to help investigate a missing person case that turned out to be a murder. There’s been a steady stream of casework ever since.
I’m registered as an expert adviser and most of my recent casework seems to deal with difficult to explain or analyse material. Alongside that, I’ve spent a lot of time (some might say too much) working on standards during my time on the Forensic Science Regulator’s working group on digital evidence and as a member of BSI’s IST/033 information security group and the UK’s digital evidence rep. on ISO/IEC JTC1 SC27 WG4, where I led the work to develop ISO/IEC 27041 and 27042, and contributed to the other investigative and eDiscovery standards.
You’ve recently published some research into verification and validation in digital forensics. What was the goal of the study?
It grew out of a proposition in ISO/IEC 27041 that tool verification (i.e. evidence that a tool conforms to its specification) can be used to support method validation (i.e. showing that a particular method can be made to work in a lab). The idea of the 27041 proposal is that if tool vendors can provide evidence from their own development processes and testing, the tool users shouldn’t need to repeat that. We wanted to explore the reality of that by looking at accredited lab processes and real tools. In practice, we found that it currently won’t work because the requirement definitions for the methods don’t seem to exist and the tool vendors either can’t or won’t disclose data about their internal quality assurance.
The effect is that there appears to be a gap in the accreditation process. Rather than having a selection of methods that are known to work correctly (as we see in calibration houses, metallurgical and chemical labs etc., where the ISO 17025 standard originated), which can be chosen to meet a specific customer requirement, we have methods which satisfy much fuzzier customer requirements. These requirements are almost always non-technical in nature, because the customers are CJS practitioners who simply don’t express things in a technical way.
We’re not saying that anyone is necessarily doing anything wrong, by the way, just that we think they’ll struggle to provide evidence that they’re doing the right things in the right way.
Where do we stand with standardisation in the UK at the moment?
Standardization is a tricky word. It can mean that we all do things the same way, but I think you’re asking about progress towards compliance with the regulations. In that respect, it looks like we’re on the way. It’s slower than the regulator would like. However, our research at York suggests that even the accreditations awarded so far may not be quite as good as they could be. They probably satisfy the letter of the regulator’s documents, but not the spirit of the underlying standard. The technical correctness evidence is missing.
ISO 17025 has faced a lot of controversy since it has been rolled out as the standard for digital forensics in the UK. Could you briefly outline the main reasons why?
Most of the controversy is around cost and complexity. With accreditation costing upwards of £10k for even a small lab, it makes big holes in budgets. For the private sector, where turnover for a small lab can be under £100k per annum, that’s a huge issue. The cost has to be passed on. Then there’s the time and disruption involved in producing the necessary documents, and then maintaining them and providing evidence that they’re being followed for each and every examination.
A lot of that criticism is justified, but adoption of any standard also creates an opportunity to take a step back and review what’s going on in the lab. It’s a chance to find a better way to do things and improve confidence in what you’re doing.
In your opinion, what is the biggest stumbling block either for ISO 17025 specifically, or for standardizing digital forensics in general?
Two things – as our research suggests, the lack of requirements makes the whole verification and validation process harder, and there’s the confusion about exactly what validation means. In ISO terms, it’s proof that you can make a process work for you and your customers. People still seem to think it’s about proving that tools are correct. Even a broken tool can be used in a valid process, if the process accounts for the errors the tool makes.
I guess I’ve had the benefit of seeing how standards are produced and learning how to use the ISO online browsing platform to find the definitions that apply. Standards writers are a lot like Humpty Dumpty. When we use a word it means exactly what we choose it to mean.
Is there a way to properly standardise tools and methods in digital forensics?
It’s not just a UK problem – it’s global. There’s an opportunity for the industry to review the situation, now, and create its own set of standard requirements for methods. If these are used correctly, we can tell the tool makers what we need from them and enable proper objective testing to show that the tools are doing what we need them to. They’ll also allow us to devise proper tests for methods to show that they really are valid, and to learn where the boundaries of those methods are.
Your study also looked at some existing projects in the area: can you tell us about some of these? Do any of them present a potential solution?
NIST and SWGDE both have projects in this space, but specifically looking at tool testing. The guidance and methods look sound, but they have some limitations. Firstly, because they’re only testing tools, they don’t address some of the wider non-technical requirements that we need to satisfy in methods (things like legal considerations, specific local operational constraints etc.).
Secondly, the NIST project in particular lacks a bit of transparency about how they’re establishing requirements and choosing which functions to test. If the industry worked together we could provide some more guidance to help them deal with the most common or highest priority functions.
Both projects, however, could serve as a good foundation for further work and I’d love to see them participating in a community project around requirements definition, test development and sharing of validation information.
Is there anything else you’d like to share about the results?
We need to get away from thinking solely in terms of customer requirements and method scope. These concepts work in other disciplines because there’s a solid base of fundamental science behind the methods. Digital forensics relies on reverse-engineering and trying to understand the mind of a developer in order to work out how to extract and interpret data. That means we have a potentially higher burden of proof for any method we develop. We also need to remember that we deal with a rate of change caused by human ingenuity and marketing, instead of evolution.
Things move pretty fast in DF; if we don’t stop and look at what we’re doing once in a while, we’ll miss something important.
Read Angus Marshall’s paper on requirements in digital forensics method definition here.
[Case Study] Computer Forensics: Data Recovery & Extraction From Platter Scratched Hard Drives
Editor’s note: As a forensic data recovery expert, SalvationDATA receives different data recovery cases every day. Our forensic customers usually turn to us for help when they run into a case they are not able to handle, and among all data loss situations, platter scratch is one of the most difficult problems to deal with. So in this issue, let’s look at the correct forensic process for a platter scratched hard drive.
What is platter scratch?
When platters are damaged, it is usually in the form of scratching caused by debris and/or the read/write heads when they come into contact with the platter surface during the read-write process.
This is also commonly known as a head crash, although that term is often mistakenly used by inexperienced individuals to refer to clicking drives or hard drives that simply need a read/write head replacement.
Once the platters are scratched to a certain degree, this will in turn damage the read/write heads and render the drive unreadable. This often results in clicking, scratching, chirping, or screeching sounds. However, these sounds don’t automatically mean the platters are scratched.
When the platters are severely scratched in this manner, the drive cannot be recovered, and the files and data it contains will be lost forever. This is known as a catastrophic head crash, and most hard drive failure recovery services cannot fix it.
How to work with a hard drive with platter scratch?
Is platter scratch truly unrecoverable? In fact, if the scratches to the platter surface are not too severe, there is still a possibility of recovering and extracting the data, as long as we strictly follow the operating procedures.
Stop attempting to read data immediately to avoid further unrecoverable damage.
Open the hard drive in a dust-free environment and inspect for damage.
Remove the damaged read/write head and replace it with a healthy one. The donor head must be selected according to strict matching rules. For example, for a Western Digital head replacement, the donor drive must match the model number, batch ID, firmware version, and part number.
After repairing the physical damage, we can continue to forensically recover and extract the data from the hard drive with SalvationDATA’s DRS (Data Recovery System).
What tools do you need for this process?
HPE Pro (Head Platter Exchange) is a hard drive repair tool, and the only equipment built to handle head stack and drive motor issues in cases where the drive corruption is caused not by firmware but by the head stack or spindle motor. With its platter exchanger, it can prevent the heads from causing further damage or misalignment due to incorrect operations, keeping the user data intact.
DRS (Data Recovery System) is our next generation intelligent all-in-one forensic data recovery tool that can acquire and recover data from both good and damaged storage media like HDD simply and easily.
How do we know if the hard drive is fixed and we can continue to the next step? DRS’s disk diagnostics feature helps solve this problem: DRS can scan the source disk in advance, and its FastCheck technology allows a rapid check within 5 seconds, avoiding the risk of secondary damage to an important evidentiary storage device.
Insert the hard disk in DRS, and simply click the one-key Diagnose function to complete the process. DRS will tell you the detailed disk health status in no time!
After the physical damage is repaired, the hard drive may still be fragile and prone to failing again. If it is not handled with care, we may permanently lose the opportunity to recover and extract the data, so it is crucial to secure the data stored on the hard drive first. DRS also provides the solution here: its forensic imaging function secures the evidentiary digital data by creating a physical-level, sector-by-sector duplicate of the damaged hard drive. Once finished, the forensic image is exactly the same as the source data and can be stored safely and analyzed at any appropriate time.
When dealing with a defective hard drive, as in this case, it is recommended to use the Advanced Imaging mode in DRS to help bypass bad sectors and extract as much data as possible. Also, remember to set the transmission mode to PIO (low speed) to safely extract the data from such a damaged storage device.
Before imaging, we can also check the raw hexadecimal data view in DRS Sector View to make sure data on this damaged hard drive is accessible. Professional data recovery engineers can even acquire more information from this sector view.
Now, with all the problems dealt with, we have one final step: recover and extract the valuable evidentiary data. Use DRS’s File Recovery & File Carving function to locate and extract important digital files, and generate a forensic report at the end of the process. With DRS’s intelligent recovery technology, investigators can deal with deleted files, formatted partitions, corrupted file systems, and many other data loss situations without needing specialist skills.
Platter scratch is a nightmare for data recovery engineers. However, it is not impossible to recover data from scratched platters. In this issue, we discussed the standard operating procedure for dealing with a hard drive with platter scratch, to maximize the chances of recovering and extracting valuable evidentiary data. We hope these instructions help you in your work!
You can also visit our official YouTube channel for more videos: https://www.youtube.com/user/SalvationDataOfficia/featured
Since the end of the 20th century, law enforcement has been facing a rapid increase in computer-related crimes. Digital forensics has now become an important aspect not only of law enforcement investigations, but also of counter-terrorism investigations, civil litigation, and the investigation of cyber-incidents. Due to rapidly developing and evolving technology, these types of forensic investigations can become complex and intricate. However, creating a general framework for digital forensic professionals to follow during those investigations would lead to successful retrieval of relevant digital evidence. A digital forensic framework or methodology should highlight all the proper phases that a digital forensic investigation would undergo, to ensure accurate and reliable results. One of the challenges that digital forensic professionals have faced in recent years is the volume of data submitted for analysis. Few digital forensic methodologies have been developed that provide a framework for the entire process and also offer techniques to assist digital forensic professionals in reducing the amount of analyzed data. This paper proposes a methodology that focuses mostly on fulfilling the forensic aspect of digital forensic investigations, while also including techniques that can assist digital forensic practitioners in solving the data volume issue.
Focused Digital Forensic Methodology
Modern society has become very dependent on computers and technology to run all aspects of life. Technology has had a very positive impact on humanity, which can easily be seen on a short visit to any hospital, where computers and technology have become tools used to treat and save lives. However, computers have the indisputable disadvantage of being usable as tools to facilitate criminal activities. For instance, the sexual exploitation of children can be carried out over the Internet, which allows criminals to remain anonymous while preying on innocent children. The number of digital-related crimes is increasing, which keeps law enforcement agencies engaged in a constant battle against criminals who use this technology to commit crimes. As a result, digital forensics has become an important part of law enforcement investigations. Digital forensics is not only performed during law enforcement investigations but can also be conducted in the course of civil matters.
The fact that information obtained from digital forensic investigations can and will be used as evidence during legal proceedings means that the entire process must be performed according to legal standards. Perumal (2009) explained that the legal system requires digital forensic processes to be standardized and consistent. One of the issues Perumal highlighted was that digital evidence is very fragile, and the use of improper methods could alter or destroy that evidence. A huge number of methodologies have been developed all over the world, many of them designed to target a specific type of technology (Selamat, Yusof, and Sahib, 2008). Also, many methodologies were developed to address requirements imposed by the legal system in certain jurisdictions. One methodology that did not base its theory on technology or the law is the Integrated Digital Investigation Process (IDIP) Model. Carrier and Spafford (2003) explained that the IDIP Model is based on the Locard Exchange Principle, which is used to retrieve evidence from physical crime scenes. The IDIP Model uses the idea that when software programs are executed in an electronic environment, electronic artifacts will most likely be created on the underlying device. Those artifacts can be retrieved and analyzed to obtain information about a certain incident or event.
This paper examines various literature presenting different types of digital forensic methodologies. Some of these methodologies have taken the focus away from the forensic aspect of digital forensic investigations; instead, they have addressed crime scenes and other processes not related to the digital forensic field. Also, much research has focused on creating solutions to the challenges that digital forensic practitioners face when conducting digital forensic investigations. One of the main challenges that multiple works have addressed is the constant increase in the volume of data that practitioners acquire and examine during investigations. This paper proposes a digital forensic methodology that would allow forensic practitioners to overcome the data volume issue and eliminate the lack of focus found in many methodologies.
As mentioned above, a large number of digital forensic methodologies have been developed all over the world. One of the first serious attempts to develop a standardized methodology that could be used during digital forensic investigations was in 1995. Pollitt (1995) utilized processes originally developed to handle physical evidence as a framework for a digital forensic methodology. The author’s approach to defining the phases conducted during an investigation was inspired by the factors the legal system considers when evaluating any type of evidence. The court would evaluate whether the seizure was properly conducted, whether any alteration of the evidence occurred, and what methods were used to examine it. The proper performance of all these steps would allow any type of evidence to be admitted by the court. Pollitt (1995) developed a methodology that consisted of four phases: acquisition, identification, evaluation, and admission as evidence. This methodology focused mostly on the forensic aspect of the investigation and did not extend to processes that other researchers have considered in their methodologies: preparation, planning, and the search of the physical crime scene.
Many digital forensic methodologies were developed after Pollitt’s. Some of them used different terminology and sequencing for the phases a forensic practitioner performs throughout an investigation. Most of the methodologies agreed on certain processes related to the forensic aspect of digital forensic investigations (Selamat et al., 2008). Selamat et al. studied 11 different digital forensic methodologies and concluded that all of them shared common phases: preservation, collection, examination, analysis, and reporting. This means that all these researchers agreed on these specific phases, and any newly developed methodology would have to include similar ones.
According to Ruibin, Yun, and Gaertner (2005), the proper completion of any digital forensic investigation is directly related to conducting the processes that are similar to the ones highlighted by Selamat et al. (2008). Ruibin, Yun, and Gaertner also explained that in order to properly perform these phases, proper technique must be used. These techniques would ensure that the authenticity and reliability of the evidence is acceptable by the legal standards of the jurisdiction where the methodologies are being implemented. The use of the term legal is not foreign to digital forensic methodologies, as many researchers have addressed the importance of performing digital forensic investigations using the proper legal authorizations. For instance, the Integrated Digital Investigation Process (IDIP) Model has included obtaining legal authorization to perform a digital forensic investigation as one of the sub-phases of the Deployment Phase (Baryamureeba and Tushabe, 2004).
The IDIP Model was revised by Baryamureeba and Tushabe (2004) after noticing some practicality and clarity problems. Baryamureeba and Tushabe named the revised version as the Enhanced Integrated Digital Investigation Process (EIDIP) Model. In the EIDIP Model, the authors focused on two aspects, the sequence of the phases and clarity in regard to processing multiple crime scenes. The EIDIP proposed a modification to the deployment phase of the IDIP Model and included investigation of the physical and digital scenes as a way to arrive at the confirmation phase. Also, Baryamureeba and Tushabe added the Traceback phase, which aims at using information obtained from the victim’s machine to trace back other machines used to initiate the incident.
Challenges Facing Current Methodologies
Technology has been developing on two fronts: hardware and software. For instance, mobile device hardware has seen great advances that allow these devices to perform complex operations efficiently. At the same time, the software that operates on mobile devices has also undergone great advancements to support consumer demand. However, these enormous changes in technology have affected digital forensic methodologies, especially the ones that revolved around a certain set of technologies. Selamat et al. (2008) explained that many digital forensic methodologies have been developed to target certain devices or a specific type of technology. The main problem with this type of methodology is the rapid change of the underlying technology, which renders these methodologies obsolete.
The legal systems in different jurisdictions view digital forensics differently, which is reflected in the methodologies used by forensic practitioners in those jurisdictions. According to Perumal (2009), “As computer forensic is a new regulation in Malaysia, there is a little consistency and standardization in the court and industry sector. As a result, it is not yet recognized as a formal ‘Scientific’ discipline in Malaysia” (p. 40). This means that the most affected jurisdictions are the ones that have not developed a full understanding of computer forensics and the processes used to preserve and analyze electronic evidence. This lack of understanding also reduces the influence the legal system has on shaping methodologies and making them acceptable by legal standards.
Another challenge related to digital forensic methodologies has been caused by the researchers who develop them. Some researchers have used titles and terminology in naming their methodologies that are not related to digital forensics. For instance, Baryamureeba and Tushabe (2004) named the fourth stage of the EIDIP Model the Dynamite Phase, which includes Reconstruction and Communication as sub-phases. This lack of clarity in terminology has a negative impact on any effort to help groups outside the digital forensic arena better understand the processes used in the field. As explained above, some jurisdictions across the globe are still unsure about digital forensics and the evidence obtained from digital forensic investigations. There would therefore be an advantage in researchers using more modest and familiar terminology when addressing digital forensic methodologies; this simplicity would assist the legal system in better understanding digital forensic processes.
The challenges that the digital forensic field is currently facing can also be seen as challenges to digital forensic methodologies. Some of those challenges are: volume of data, encryption, anti-forensic techniques, and lack of standards. These factors are perceived as challenging because they have been slowing down the progression of digital forensic investigations. For instance, a fully encrypted computer system with an undisclosed encryption key would prevent practitioners from gaining access to the data, which imposes a challenge on the entire digital forensic process. One challenge that has been addressed by many researchers is the volume of data examined by forensic practitioners during digital forensic investigations. Examining a large amount of information during an investigation slows down the entire forensic process and affects the flow of operations for forensic laboratories.
Volume of Data
The increase in the volume of data has been caused by the increasing sizes of electronic storage devices and, at the same time, the significant decrease in their prices. According to Quick and Choo (2014), digital forensic practitioners have seen a significant increase in the amount of data being analyzed in every digital forensic examination. The authors explained that three factors have contributed to making the volume of data an issue for digital forensic practitioners: the increase in the number of electronic devices in each investigation, the increase in the storage capacity of those devices, and the number of investigations that require digital forensic examinations.
The significant increase in the volume of data has placed a great burden on forensic laboratories. Digital forensic practitioners are forced to spend longer examining and analyzing larger data sets, which causes huge delays in completing digital forensic investigations. Quick and Choo (2014) explained that forensic laboratories have backlogs of work caused by the amount of time spent on each examination. Some may argue that the increased amount of data can be an advantage, as consumers are not forced to delete any data. However, Quick and Choo (2014) highlighted the seriousness of the data volume issue and stated, “Serious implications relating to increasing backlogs include; reduced sentences for convicted defendants due to the length of time waiting for results of digital forensic analysis, suspects committing suicide whilst waiting for analysis, and suspects denied access to family and children whilst waiting for analysis” (p. 274).
Current Solutions to the Data Volume Issue
Ruibin et al. (2005) offered a way to reduce the amount of data for each case by determining the relevant data based on the preliminary investigation. The relevant data for a certain case can be determined from two sources: the case information and the person investigating the matter. Combining the information from both sources assists in building a case profile, which can then be used to determine what type of information would be relevant for each forensic examination. The solution proposed by Ruibin et al. (2005) did not focus on using technology to reduce the amount of data; however, other proposed solutions suggested using technology to eliminate the data volume issue.
Neuner, Mulazzani, Schrittwieser, and Weippl (2015) proposed a technique that depends on technology to reduce the data volume. Their technique is based on the ideas of file deduplication and file whitelisting. Deduplication and whitelisting allow digital forensic practitioners to reduce the number of files that have to be reviewed, and reducing the number of files reduces the amount of time needed to complete the entire process. The authors defined the whitelisting technique as a way to compare a known list of hash values to the evidence and exclude any matches, as the contents of those files have already been determined. The list of hash values can represent either safe files or known contraband. The method offered by Neuner et al. (2015) is completely automated and requires no human intervention, which means that no manpower is needed for this task. Both solutions explained above are uncontroversial, as they utilize standard methods to locate the evidence. However, that is not the case with all solutions proposed to solve the data volume issue.
Quick and Choo (2014) explained that many researchers have addressed the issue of data volume through the notion of sufficiency of examination. Those researchers did not focus on reducing the amount of data to be examined. Instead, they proposed limiting the search for electronic information to only the amount that would answer the main questions of the investigation. In other words, if a digital forensic practitioner is examining a computer system for evidence related to a child pornography investigation, the examination can be stopped once enough evidence has been found to charge the suspect. This means that not all of the computer system would be examined, and practitioners would be able to complete more examinations in a shorter period of time.
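The sufficiency-of-examination approach amounts to an early-stopping loop. The sketch below is a hypothetical illustration, not a procedure from any of the cited papers: examine items in whatever priority order the case profile suggests, and halt once a sufficiency threshold of relevant findings is reached.

```python
def examine_until_sufficient(items, is_relevant, threshold):
    """Examine items in order, stopping once `threshold` relevant
    findings have been made (the 'sufficiency of examination' idea).

    Returns the findings plus the number of items actually examined;
    the gap between that number and len(items) is the time saved.
    """
    findings = []
    examined = 0
    for item in items:
        examined += 1
        if is_relevant(item):
            findings.append(item)
            if len(findings) >= threshold:
                break                # enough evidence to proceed
    return findings, examined
```

The trade-off discussed next — evidence that is never looked at — is exactly the set of items after the break.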
Some may argue that conducting a partial examination of the data could lead to missing valuable digital evidence. For instance, in the child pornography example above, a full examination might lead to discovering pictures and videos of minors being sexually exploited; those victims could potentially be identified and rescued, so a partial examination could prevent that rescue. Still, the assumption that a full examination could lead to identifying other victims does not stand when confronted with the facts presented by Quick and Choo (2014), who highlighted that delaying the results of a forensic examination has grievous negative effects. These effects can be seen on suspects who wait a long time for forensic practitioners to conclude their examinations. The same goes for the victims, who are unable to get any closure during the investigation while they await its conclusion.
The Proposed Methodology
Digital forensic practitioners continue to face a great deal of pressure from the challenges explained earlier. These challenges place obstacles in the way of examinations, in turn causing practitioners to spend an extensive amount of time on each investigation. Yet researchers do not appear to have invested much effort in creating methodologies that would help practitioners ease the pressure and eliminate the challenges. Some may present a compelling argument that not all challenges can be resolved in the context of methodologies. For instance, data encryption is an issue that prevents practitioners from accessing the evidence and can be categorized as a technical challenge; researchers would not be able to address it in the context of a digital forensic methodology, as methodologies are supposed to address the higher-level processes that occur during digital forensic investigations. Even though this argument seems compelling, it is still possible to create methodologies that at least reduce the workload pressure on practitioners.
Selamat et al. (2008) described the Extended Model presented by Ciardhuain (2004) as a complete framework that provides clear stages for digital forensic investigations. Ciardhuain (2004) suggested that multiple steps must be taken throughout digital forensic investigations: awareness, authorization, planning, notification, search and identification of evidence, collection, transport, storage, examination, hypotheses, presentation, proof/defense, and dissemination. Selamat et al. (2008) also explained that the Extended Model included all five phases that were agreed upon by many researchers. This means the remaining phases of the Extended Model are not related to digital forensics, and removing them would not affect the authenticity or reliability of the electronic evidence. However, some may argue that the Extended Model was not intended to be conducted only by digital forensic practitioners, as many phases of the model can be performed by non-digital forensic practitioners. For instance, in a criminal investigation, the authorization phase can be completed by law enforcement personnel to grant forensic practitioners lawful access to the evidence. This means that the Extended Model is a framework not only for practitioners, but for all investigations that involve digital evidence.
This paper proposes a new methodology, the Focused Digital Forensic Methodology (FDFM), that is capable of eliminating the data volume issue and the lack of focus in current digital forensic methodologies. The FDFM is designed to reflect the current workflow of law enforcement and civil investigations. It focuses on ensuring that digital forensic investigations are conducted properly without overloading practitioners with unrelated activities. Further, the FDFM proposes techniques to reduce the volume of data in order to cut the time required to complete examinations of electronic evidence, especially on larger data sets.
A quick review of the Extended Model leads to the conclusion that it focuses on activities related to evidence handling that are considered common knowledge in the law enforcement and digital forensic fields. For instance, the transportation and storage activities of the EIDIP are two parts of evidence handling in which law enforcement personnel and digital forensic practitioners are already trained. So, it would not be of value to emphasize the idea that digital forensic practitioners must transport and store the evidence after it is collected. For that reason, the FDFM excludes aspects that can be considered common knowledge in the digital forensic field, namely those related to the proper handling of evidence.
Phases of the FDFM
The FDFM is designed in a way similar to the IDIP and EIDIP, as it is broken down into multiple phases, which are further broken down into sub-phases.
Preparation Phases. The preparation phase can be applied in two situations: when creating a new digital forensic team and after completing a digital forensic investigation. This phase is mainly aimed at ensuring that digital forensic teams, whatever their mission might be, are capable of initiating and completing an investigation properly and without any problems. As is the case for the EIDIP, the focus in this phase is ensuring that the forensic team is trained and equipped for its assigned mission.
Training. To ensure the forensic team is executing a forensic methodology properly, they must have all the training that would equip all team members with the knowledge needed. This training does not only focus on how to collect or search evidence properly but also trains the members on how to use the tools needed during those processes. For instance, the team should have the knowledge needed regarding the value of preserving data. They should also be able to utilize any software or hardware tools that are capable of preserving and acquiring data from different platforms.
Equipment. There are many tools that digital forensic professionals utilize during investigations that are capable of serving different purposes. One of the functions that those tools are able to accomplish is automating different processes during the investigation, and completing those jobs in a timely manner. Depending on the main mission of the forensic team, tools and equipment must be available to ensure a proper and fast completion of any digital investigation.
On-Scene Phases. These phases focus on processes that would be accomplished in the event that a digital forensic team is called in to assist with executing search warrants or preserving data for civil litigation.
Physical Preservation. During this phase, digital forensic practitioners ensure that any item that may contain electronic information of value is protected. This protection ensures that no damage can be inflicted on any electronic device that would cause the loss of the information within. For instance, if a digital forensic practitioner finds one of the items sought during the search of a residence, this item must be kept in a location under the control of the searching team. This means that no occupant of the residence would be able to damage the device, which they might attempt if they believed it contained incriminating evidence.
Electronic Isolation. The other part of the preservation is ensuring that the information within the collected device is preserved by eliminating any electronic interference. An item like a mobile device would have to be isolated from the cellular network to ensure that no one is capable of damaging or changing the data remotely. The same case applies to computers or other items with network connectivity, as suspects could connect remotely to those devices and attempt to eliminate any information that is potentially damaging to their case.
Electronic Preservation. The last part of preservation is creating a complete or partial duplicate of the targeted electronic information. The created copy would ensure that the electronic evidence is preserved in a forensic image. In the event that the devices were not supposed to be removed from the scene, forensic images would have to be created on scene, which can then be taken back to the laboratory for examination. A partial collection can also be conducted during this phase, which would be especially advantageous for civil litigation purposes. In those cases, the collection of an entire computer system is not recommended, and practitioners would have to search for and collect only the relevant data.
Laboratory Phases. These phases are conducted when the team is at the laboratory and has all the evidence or forensic images collected from the scene. Prior to interacting with the evidence, an attempt would be made to reduce the volume of data that an examiner has to process and analyze. The reduction of data volume is crucial for cases that involve a large number of electronic devices and a large volume of stored electronic information. As explained earlier, reducing the volume of data would expedite the examination of the evidence and allow for a better workflow. Once the reduction of data has been completed, examiners can begin processing and analyzing data.
Building Case Profile. This phase is somewhat similar to the creation of a case profile presented by Ruibin et al. (2005). These authors suggested using input from the investigation to identify the type of information that a digital forensic practitioner would need to focus on finding during the examination. For instance, if the case is related to pictures of evidentiary value, then a practitioner would need to focus on reviewing all the pictures found on a device, without the need to review any other type of electronic information. For the FDFM, building a case profile would begin, as in Ruibin et al. (2005), with obtaining information from the investigating agency regarding its investigation. The obtained information must focus specifically on the relationship between the electronic evidence and the incident under investigation. In other words, the investigating agency would have to provide information about what role the devices played in the incident. For instance, suppose a mobile device is submitted to the digital forensic team as part of a rape investigation. The investigating agency would have to show how the mobile device was related to the incident. In this situation, the phone could have been used by the suspect to communicate with or lure the victim to a certain location.
Once all the information has been obtained regarding the investigation and the evidence, the type of data relevant to the investigation can be determined. In the example above regarding the rape case, the suspect may have used text messages to communicate with the victim, which makes any text-based communication between both parties relevant; it must be reviewed and analyzed. In some cases, determining the approximate size of the information sought can also be relevant and would allow the examination to be even more focused. For instance, suppose an investigating agency submits one 500-gigabyte external drive and one 8-gigabyte thumb drive to the digital forensic team for examination, and the investigation is seeking documents approximately 30 gigabytes in size relevant to a fraud case. In this situation, a forensic practitioner would focus on the external drive, as it is the only device large enough to contain the documents; the 8-gigabyte thumb drive cannot hold them because of its small size.
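The size heuristic above can be sketched in a few lines of code. This is an illustrative sketch only: the device names and capacities mirror the hypothetical fraud case, and a real triage would also account for file-system overhead, compression, and deleted data.

```python
# Hypothetical sketch of the size-based triage heuristic described above.
# Device records are illustrative assumptions, not real case data.

def triage_by_size(devices, sought_gb):
    """Split devices into those large enough to hold the sought data
    and those that cannot physically contain it."""
    likely = [d for d in devices if d["capacity_gb"] >= sought_gb]
    unlikely = [d for d in devices if d["capacity_gb"] < sought_gb]
    return likely, unlikely

devices = [
    {"name": "external_drive", "capacity_gb": 500},
    {"name": "thumb_drive", "capacity_gb": 8},
]

# Seeking ~30 GB of documents: only the external drive can hold them.
likely, unlikely = triage_by_size(devices, sought_gb=30)
```

The 8 GB thumb drive lands in the `unlikely` list because it cannot physically store 30 GB of documents, which is the reasoning the example above applies.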
The next step of the case profile is determining the most relevant type of devices based on the information obtained thus far about the investigation. If the examination is seeking text-based messages sent to the victim, then it is likely that a mobile device was used to send those messages. This does not mean that a computer cannot be used to send such messages, but a mobile device is the most likely source in this situation. The profile can then place further emphasis on the possibility of a mobile device being the source of the relevant messages. It is apparent from the above that building a case profile requires an experienced digital forensic practitioner; using prior experience and knowledge while building the case profile results in a more accurate and realistic conclusion.
The final step of the case profile is setting the goals that the investigation hopes to accomplish by the end of the examination. These goals are based on all the information gathered thus far and on how the evidence is going to prove or disprove an incident or event. For instance, in the example of the rape investigation mentioned above, retrieving the text messages from the mobile device is the first goal, to show that the suspect lured the victim to the crime scene. The timeline of those messages is another piece of information that would corroborate the victim's statement regarding events prior to the incident. While examining the mobile device, these goals make it clear to the digital forensic practitioner what results are expected from the examination of the device.
In civil litigation cases and even criminal cases, it would be beneficial during this phase to generate a list of keywords that could be used during the examination process. Those keywords must relate directly to the evidence and be based on the information obtained about it. When dealing with civil litigation cases, the keyword list becomes the main method used to locate any responsive and relevant electronic information. The words added to the list must also relate to the goals set up to be accomplished by the end of the investigation. For instance, if the civil case concerns patent infringement, then the keywords should reveal documents or data that could prove or disprove the allegations submitted by one party against the other.
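As a rough illustration, a keyword-list search of the kind described above might look like the following sketch. The documents and keywords are invented; in practice such searches run inside a forensic or eDiscovery suite against indexed images, not loose text files.

```python
# Illustrative keyword-list search over already-extracted text artifacts.
# Document names, contents, and keywords are hypothetical examples.

def keyword_hits(documents, keywords):
    """Return, for each keyword, the documents whose text contains it
    (case-insensitive substring match)."""
    hits = {}
    for kw in keywords:
        hits[kw] = [name for name, text in documents.items()
                    if kw.lower() in text.lower()]
    return hits

documents = {
    "design_notes.txt": "Draft schematic for the patented valve assembly.",
    "holiday_plans.txt": "Flights booked for August.",
}

# In the patent-infringement example, these terms come from the case profile.
hits = keyword_hits(documents, ["patent", "valve"])
```

A substring match like this is the crudest form; real tools add stemming, regular expressions, and indexing, but the principle of letting the case profile drive the term list is the same.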
Evaluation of the Evidence. This evaluation occurs based on the case profile created during the previous phase. The main goal of this phase is to exclude any device that does not match the case profile and most likely would not hold any of the information sought during the examination. Excluding items that do not hold relevant information reduces the amount of information that a digital forensic practitioner examines in each case. This ultimately cuts the amount of time needed to complete the examinations and gives investigators faster results. As explained above, the case profile determines the type of electronic information, the timeline related to the information, the size of the electronic information, and other aspects of the evidence. Using the generated profile, the items of evidence can be listed so that the device most likely to contain the information is placed at the top of the list. The rest of the devices are ordered in the same way, based on the likelihood that they contain the targeted data; the last item on the list is the least likely to contain information relevant to the investigation.
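One way to picture this ordering step is a simple scoring function over the case profile. The scoring weights and device records below are assumptions for illustration only; they are not part of the FDFM itself.

```python
# Hypothetical sketch: rank evidence items by how well they match the case
# profile, so examination starts with the most promising device.

def rank_devices(devices, profile):
    def score(d):
        s = 0
        if d["type"] in profile["relevant_types"]:
            s += 2          # device type matches the profile (weight assumed)
        if d["capacity_gb"] >= profile["sought_gb"]:
            s += 1          # large enough to hold the sought data
        return s
    # Highest score first; the last item is the least likely to be relevant.
    return sorted(devices, key=score, reverse=True)

profile = {"relevant_types": {"mobile"}, "sought_gb": 1}
devices = [
    {"name": "desktop", "type": "computer", "capacity_gb": 500},
    {"name": "suspect_phone", "type": "mobile", "capacity_gb": 64},
]
ranked = rank_devices(devices, profile)
```

In the text-message example, the phone outscores the desktop because the profile marks mobile devices as the likely source, so it is imaged and examined first.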
Forensic Acquisition. During this phase, any data that was not acquired or preserved on scene would be imaged. The imaging process focuses on the items at the top of the list generated during the Evaluation of the Evidence phase. This means that only the items believed to hold the relevant data would be acquired, not the entire list of items. This saves on the storage space required for the forensic images and also reduces the time required for imaging.
Partial Examination/Analysis. Many digital forensic models separate the examination phase from the analysis phase, as is the case for the Abstract Digital Forensic Model (Reith, Carr, and Gunsch, 2002). Those models assume that a digital forensic practitioner searches the evidence for any relevant data during the examination phase; an analysis is then conducted during the analysis phase to determine the significance of the information found. In the FDFM, those two phases are combined into one phase, as they cannot be separated: they occur at the exact same moment during the search of the evidence. The FDFM proposes creating a case profile prior to examining the evidence, which means that a digital forensic practitioner knows what to search for, and the significance of the information, beforehand. So, while searching the evidence, examiners are able to determine the significance of the information as it is being located.
An example of the situation explained above is the rape incident mentioned earlier and the mobile device holding the information related to the incident. The practitioner had prior knowledge that the targeted information, related to luring the victim to the incident location, was in the text messages. While reviewing the text messages, the digital forensic practitioner found a message showing how the suspect and victim had met. Then, for the day of the incident, other messages were found showing how the victim was lured to the location. As the forensic practitioner reviews the messages, they evaluate the information simultaneously, weighing the significance of each piece of information as it is found, since each text message adds another piece of information related to the incident. At the end of this phase, the forensic practitioner has a full picture of what happened on that day and any other information related to the incident. It would not be practical for the forensic practitioner to go back and reanalyze information that was already analyzed as it was being found. It is also worth mentioning that digital forensic practitioners usually mark the relevant data as it is found to ensure that it is included in the final report generated by the tool used to review the evidence.
The FDFM suggests the use of a method referenced by Quick and Choo (2014), which aims at conducting a partial examination and analysis of the evidence as a solution to the data volume issue. According to Quick and Choo (2014), the digital forensic practitioner would be "doing enough examination to answer the required questions, and no more" (p. 282). The same concept can be applied using the FDFM, as digital forensic practitioners conduct examinations only to accomplish the goals set up during the Building Case Profile phase. The examination is performed in the sequence of relevance determined during the Evaluation of the Evidence phase: it begins with the most relevant items and continues down the list until all the goals are accomplished. One of the greatest benefits of conducting partial examinations of the evidence is maintaining the privacy of the owners. For instance, mobile devices contain a vast amount of private information about their owners, so the more information is reviewed, the greater the loss of privacy. Limiting the examination to the needs of the investigation helps maintain a degree of privacy for the owner of the data.
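The "enough and no more" idea can be sketched as a loop that walks the ranked device list from the Evaluation of the Evidence phase and stops as soon as every goal from the case profile is satisfied. The device contents and goal names below are invented for illustration.

```python
# Hypothetical sketch of partial examination: examine devices in priority
# order and stop once every investigative goal has been answered.

def partial_examination(ranked_devices, goals):
    satisfied = set()
    examined = []
    for device in ranked_devices:
        examined.append(device["name"])
        # Record which goals this device's findings answer.
        satisfied |= goals & device["findings"]
        if satisfied == goals:
            break               # all questions answered: stop, no more exam
    return examined, satisfied

ranked_devices = [
    {"name": "suspect_phone", "findings": {"luring_messages", "timeline"}},
    {"name": "old_laptop", "findings": set()},
]
goals = {"luring_messages", "timeline"}

examined, satisfied = partial_examination(ranked_devices, goals)
```

In this sketch the old laptop is never touched, because the phone alone answered both questions; that untouched device is also where the privacy benefit described above comes from.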
Reporting. During this phase, all the information found during the examination is placed in a report to inform the investigating agency of the findings. There are many report formats used by different agencies. However, all formats have one thing in common: they are driven by the findings of the examination, not by opinions. Reports that are supported by solid evidence are hard to dispute, as the evidence behind the information in those reports counters any arguments.
Conclusions and Future Research
A large body of literature proposes different types of methodologies with different focuses. However, many of those methodologies include phases that are not related to their forensic aspects. Many researchers have also addressed the issue of the volume of data that is causing huge delays in digital forensic exams. The proposed methodology, the FDFM, allows digital forensic professionals to focus more on forensics during any digital forensic investigation. This methodology excludes the phases included in other methodologies that are considered common knowledge within the digital forensic field. The proposed methodology also addresses one of the biggest challenges for digital forensic investigations, which is the volume of data. The FDFM proposes two methods that allow for a reduction in the volume of data: excluding devices that do not contain relevant information and conducting partial examinations. Both techniques can be applied only after a case profile has been generated based on the information obtained from the investigating agency. Future research can focus on integrating other techniques into the FDFM to eliminate other challenges of digital forensic investigations.
Agarwal, A., Gupta, M., & Gupta, S. (2011). Systematic digital forensic investigation model. International Journal of Computer Science and Security (IJCSS), 5(1), 118-131.
Baryamureeba, V., & Tushabe, F. (2004). The Enhanced Digital Investigation Process Model. Proceedings of the Digital Forensic Research Conference. Baltimore, MD.
Carrier, B., & Spafford, E. H. (2003). Getting physical with the Digital Investigation Process. International Journal of Digital Evidence, 2(2), 1-20.
Ciardhuain, S. O. (2004). An extended model of cybercrime investigations. International Journal of Digital Evidence, 3(1), 1-22.
Neuner, S., Mulazzani, M., Schrittwieser, S., & Weippl, E. (2015). Gradually improving the forensic process. In the 10th International Conference on Availability, Reliability and Security, 404-410. IEEE.
Perumal, S. (2009). Digital forensic model based on Malaysian investigation process. International Journal of Computer Science and Security (IJCSS), 9(8), 38-44.
Pollitt, M. M. (1995). Computer forensics: An approach to evidence in cyberspace. In the 18th National Information Systems Security Conference, 487-491. Baltimore, MD.
Quick, D., & Choo, K. R. (2014). Impact of increasing volume of digital forensic data: A survey and future research challenges. Digital Investigation, 11(4), 273-294.
Reith, M., Carr, C., & Gunsch, G. (2002). An examination of the digital forensic models. International Journal of Digital Evidence, 1(3), 1-12.
Ruibin, G., Yun, T., & Gaertner, M. (2005). Case-relevance information investigation: binding computer intelligence to the current computer forensic framework. International Journal of Digital Evidence, 4(1), 147-67.
Selamat, S. R., Yusof, R., & Sahib, S. (2008). Mapping process of digital forensic investigation framework. International Journal of Computer Science and Network Security, 8(10), 163-169.
About the Author
Haider Khaleel is a Digital Forensics Examiner with the US Army, previously a field agent with Army CID. Haider received a Master's Degree in Digital Forensic Science from Champlain College. The ideas presented in this article do not reflect the policies, procedures, and regulations of the author's agency.
Correspondence concerning this article should be addressed to Haider H. Khaleel, Champlain College, Burlington, VT 05402. firstname.lastname@example.org
Digital Forensics as a Big Data Challenge
Digital Forensics, as a science and part of the forensic sciences, is facing new challenges that may well render established models and practices obsolete. The size of potential digital evidence media has grown exponentially, be it hard disks in desktops and laptops or solid state memories in mobile devices like smartphones and tablets, even while latency times lag behind. Cloud services are now sources of potential evidence in a vast range of investigations, network traffic also follows a growing trend, and in cyber security the necessity of sifting through vast amounts of data quickly is now paramount. On a higher level, investigations – and intelligence analysis – can profit from sophisticated analysis of datasets such as social network structures or corpora of text to be analysed for authorship and attribution. All of the above highlights the convergence between so-called data science and digital forensics, to tackle the fundamental challenge of analysing vast amounts of data ("big data") in actionable time while at the same time preserving forensic principles so that the results can be presented in a court of law. The paper, after introducing digital forensics and data science, explores the challenges above and proceeds to propose how techniques and algorithms used in big data analysis can be adapted to the unique context of digital forensics, ranging from the management of evidence via Map-Reduce to machine learning techniques for triage and analysis of big forensic disk images and network traffic dumps. In the conclusion the paper proposes a model to integrate this new paradigm into established forensic standards and best practices and tries to foresee future trends.
1.1 Digital Forensics
What is digital forensics? We report here one of the most useful definitions of digital forensics formulated. It was developed during the first Digital Forensics Research Workshop (DFRWS) in 2001 and it is still very much relevant today:
Digital Forensics is the use of scientifically derived and proven methods toward the preservation, collection, validation, identification, analysis, interpretation, documentation and presentation of digital evidence derived from digital sources for the purpose of facilitating or furthering the reconstruction of events found to be criminal, or helping to anticipate unauthorized actions shown to be disruptive to planned operations. [Pear01]
This formulation stresses first and foremost the scientific nature of digital forensics methods, at a point in time when the discipline was transitioning from being a "craft" to an established field and a rightful part of the forensic sciences. At that point digital forensics was also transitioning from being practised mainly in separate environments, such as law enforcement bodies and enterprise audit offices, to a unified field. Nowadays this process is well advanced and it can be said that digital forensics principles, procedures and methods are shared by a large part of its practitioners, coming from different backgrounds (criminal prosecution, defence consultants, corporate investigators and compliance officers). Applying scientifically valid methods implies important concepts and principles to be respected when dealing with digital evidence. Among others we can cite:
- Previous validation of tools and procedures. Tools and procedures should be validated by experiment prior to their application on actual evidence.
- Reliability. Processes should yield consistent results and tools should present consistent behaviour over time.
- Repeatability. Processes should generate the same results when applied to the same test environment.
- Documentation. Forensic activities should be well-documented, from the inception to the end of the evidence life-cycle. On the one hand, strict chain-of-custody procedures should be enforced to assure evidence integrity; on the other hand, complete documentation of every activity is necessary to ensure repeatability by other analysts.
- Preservation of evidence. Digital evidence is easily altered and its integrity must be preserved at all times, from the very first stages of operations, to avoid spoliation and degradation. Both technical (e.g. hashing) and organizational (e.g. clear accountability for operators) measures are to be taken.
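As a minimal sketch of the technical measure mentioned in the last point, the following computes and later re-verifies a SHA-256 digest of an acquired image file. The streaming read is the standard way to hash files too large to fit in memory; the function names are ours for illustration, not drawn from any specific forensic tool.

```python
# Sketch: hash an acquired image at collection time, re-verify it later.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so arbitrarily large images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, recorded_digest):
    """True if the file still matches the digest recorded at acquisition."""
    return sha256_of(path) == recorded_digest
```

The digest recorded alongside the image becomes part of the chain-of-custody documentation; any later mismatch signals that integrity was lost.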
These basic tenets are currently being challenged in many ways by the shifting technological and legal landscape practitioners have to contend with. While this paper does not dwell on the legal side of things, it is obviously something that must always be considered in forensics.
Regarding the phases that usually make up the forensic workflow, we refer here again to the only international standard available [ISO12] and describe them as follows:
- Identification. This process includes the search for, recognition of and documentation of the physical devices on the scene potentially containing digital evidence. [ISO12]
- Collection. Devices identified in the previous phase can be collected and transferred to an analysis facility, or acquired (next step) on site.
- Acquisition. This process involves producing an image of a source of potential evidence, ideally identical to the original.
- Preservation. Evidence integrity, both physical and logical, must be ensured at all times.
- Analysis. Interpretation of the data from the evidence acquired. It usually depends on the context, the aims or the focus of the investigation, and can range from malware analysis to image forensics, database forensics and many more application-specific areas. On a higher level, analysis can include content analysis via, for instance, forensic linguistics or sentiment analysis techniques.
- Reporting. Communication and/or dissemination of the results of the digital investigation to the parties concerned.
1.2 Data Science
Data Science is an emerging field growing at the intersection of statistical techniques and machine learning, complemented by domain-specific knowledge and fuelled by big datasets. Hal Varian gave a concise definition of the field:
[Data science is] the ability to take data – to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it. [Vari09]
We can see here the complete cycle of data management, and understand that data science in general is concerned with the collection, preparation, analysis, visualization, communication and preservation of large sets of information; this is a paraphrase of another insightful definition by Jeffrey Stanton of Syracuse University’s School of Information Studies. The parallels with the digital forensics workflow are clear, but the mention of visualization in both definitions deserves to be stressed. Visualization is almost never mentioned in digital forensics guidelines and standards, but as the object of analysis moves towards “Big Data” it will necessarily become one of the most useful tools in the analyst’s box, for instance in the prioritization phase but also for dissemination and reporting: visual communication is probably the most efficient way into a human’s brain, yet this channel is underused by most of today’s forensic practitioners.
If Data Science is concerned with “Big Data”, what is Big Data anyway? After all, “big” is a relative concept, prone to change with time. Big Data is any data that is difficult to manage and work with: datasets so big that conventional tools – e.g. relational databases – are not practical or useful for them. [ISAC13] From the point of view of data science, the challenges of managing big data can be summarized as three Vs: Volume (size), Velocity (needed for interactivity) and Variety (different sources of data). In the next paragraph we shall see how these three challenges dovetail with the digital forensics context.
“Golden Age” is a common name for the period in the history of digital forensics that ran roughly from the 1990s to the first decade of the twenty-first century. During that period the technological landscape was dominated by the personal computer – mostly by a single architecture, x86 plus Windows – and data stored on hard drives represented the vast majority of evidence, so much so that “Computer Forensics” was the accepted term for the discipline. Storage sizes also allowed for complete bitwise forensic copies of the evidence for subsequent analysis in the lab. The relative uniformity of the evidence facilitated the development of the digital forensic principles outlined above, enshrined in several guidelines and eventually in the ISO/IEC 27037 standard. Inevitably, though, they lagged behind real-world developments: recent years have brought many challenges to the “standard model”, first among them the explosion in the average size of the evidence examined in a single case. Historical motivations for this include:
- A dramatic drop in the cost of hard drive and solid-state storage (currently estimated at $80 per Terabyte) and consequently an increase in storage size per computer or device;
- A substantial increase in magnetic storage density and the diffusion of solid-state removable media (USB sticks, SD and other memory cards, etc.) in smartphones, notebooks, cameras and many other kinds of devices;
- Huge worldwide penetration of personal mobile devices like smartphones and tablets, not only in Europe and America, but also in Africa – where they constitute the main communication mode in many areas – and obviously in Asia;
- The introduction and increasing adoption, by individuals and businesses, of cloud services – infrastructure services (IaaS), platform services (PaaS) and applications (SaaS) – made possible in part by virtualization technology, enabled in turn by modern multi-core processors;
- Network traffic, which is ever more part of the evidence in cases, and whose sheer size has – again – obviously increased in the last decade, both on the Internet and on 3G-4G mobile networks, with practical but also ethical and political implications;
- Connectivity rapidly becoming ubiquitous, with the “Internet of Things” near, especially considering the transition to IPv6 in the near future. Even when not networked, sensors are everywhere: from appliances to security cameras, from GPS receivers to embedded systems in cars, from smart meters to Industrial Control Systems.
To give a few quantitative examples of the trend: in 2008 the FBI Regional Computer Forensics Laboratories (RCFLs) Annual Report [FBI08] explained that the agency’s RCFLs processed 27 percent more data than they did during the preceding year; the 2010 Report [FBI10] gave an average case size of 0.4 Terabytes. According to a recent (2013) informal survey among forensic professionals on Forensic Focus, half of all cases involve more than one Terabyte of data, with one in five over five Terabytes in size.
The sheer quantity of evidence associated with a case is not the only measure of its complexity, and the growth in size is not the only challenge that digital forensics is facing: evidence is becoming more and more heterogeneous in nature and provenance, following the evolving trends in computing. The workflow phase most impacted by this new aspect is clearly analysis, where, even when proper prioritization is applied, it is necessary to sort through diverse categories and sources of evidence, structured and unstructured. Data sources themselves are much more differentiated than in the past: it is common now for a case to include evidence originating from personal computers, servers, cloud services, phones and other mobile devices, digital cameras, and even embedded systems and industrial control systems.
3 Rethinking Digital Forensics
In order to face the many challenges, but also to leverage the opportunities it is encountering, the discipline of digital forensics will have to rethink established principles in some ways, reorganize well-known workflows, and even include and use tools not previously considered viable for forensic use – concerns regarding the security of some machine learning algorithms have been voiced, for instance in [BBC+08]. On the other hand, forensic analysts’ skills need to be rounded out, to make better use of these new tools in the first place, but also to help integrate them into forensic best practices and validate them. The dissemination of “big data” skills will have to include all actors in the evidence lifecycle, starting with Digital Evidence First Responders (DEFRs), as identification and prioritization will grow in importance and skilled operators will be needed from the very first steps of the investigation.
Well-established principles will need to undergo at least a partial extension and rethinking because of the challenges of Big Data.
- Validation and reliability of tools and methods gain even more relevance in big data scenarios because of the size and variety of datasets, coupled with the use of cutting-edge algorithms that still require validation efforts: a body of test work, first on methods and then on tools, in controlled environments and on test datasets, before their use in court.
- Repeatability has long been a basic tenet in digital forensics but most probably we will be forced to abandon it, at least in its strictest sense, for a significant part of evidence acquisition and analysis. Already repeatability stricto sensu is impossible to achieve in nearly all instances of forensic acquisition of mobile devices, and the same applies to cloud forensics. When Machine Learning tools and methods become widespread, reliance on previous validation will be paramount. As an aside, this stresses once more the importance of using open methods and tools that can be independently and scientifically validated as opposed to black box tools or – worse – LE-reserved ones.
- As for documentation, its importance for a sound investigation is even greater when non-repeatable operations and live analysis are routinely part of the investigation process. Published data about the validation results of the tools and methods used – or at least pointers to it – should be an integral part of the investigation report.
Keeping in mind how the forensic principles may need to evolve, we present here a brief summary of the forensic workflow and how each phase may have to adapt to big data scenarios. The ISO/IEC 27037 International Standard covers the identification, collection, acquisition and preservation of digital evidence (or, literally, “potential” evidence). Analysis and disposal are not covered by this standard, but will be addressed by future guidelines in the 27xxx series, currently in development.
Identification and collection
Here the challenge is selecting evidence in a timely manner, right on the scene. Guidelines for proper prioritization of evidence should be further developed, abandoning the copy-all paradigm and strict evidence integrity in favour of appropriate triage procedures: this implies skimming through all the (potential) evidence right at the beginning and selecting the relevant parts. First responders’ skills will be even more critical than they currently are and, in corporate environments, so will preparation procedures.
When classic bitwise imaging is not feasible due to the evidence size, prioritization procedures or “triage” can be conducted, properly justified and documented, because integrity is no longer absolute and the original source has been modified, if only by selecting what to acquire. Visualization can be a very useful tool here, both for low-level filesystem analysis and for higher-level content analysis. Volume of evidence is a challenge because dedicated hardware is required for acquisition – be it of storage or of online traffic – while in the not-so-distant past an acquisition machine could be built with off-the-shelf hardware and software. Variety poses a challenge of a slightly different kind, especially when acquiring mobile devices, due to the huge number of physical connectors and platforms.
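A triage pass like the one described can be sketched as a simple scoring sweep over a seized filesystem. The categories and weights below are purely hypothetical illustrations, not guideline values; real prioritization criteria would be case-specific and documented:

```python
import os
import time

# Hypothetical triage weights: categories more likely to carry
# user-generated evidence score higher. Real values would come from
# documented, case-specific guidelines.
EXTENSION_WEIGHTS = {
    ".eml": 3.5, ".docx": 3.0, ".xlsx": 3.0, ".pdf": 2.5,
    ".jpg": 2.0, ".png": 2.0, ".exe": 1.0, ".dll": 0.5,
}

def triage_score(path: str, now: float) -> float:
    """Score a file for acquisition priority: category weight,
    boosted if it was modified recently."""
    _, ext = os.path.splitext(path.lower())
    weight = EXTENSION_WEIGHTS.get(ext, 1.0)
    age_days = (now - os.path.getmtime(path)) / 86400.0
    recency_boost = 2.0 if age_days < 30 else 1.0
    return weight * recency_boost

def prioritize(root: str) -> list:
    """Return all file paths under root, highest triage score first."""
    now = time.time()
    scored = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            scored.append((triage_score(path, now), path))
    scored.sort(reverse=True)
    return [path for _, path in scored]
```

The point of the sketch is that every scoring decision is explicit and loggable, which is exactly what makes a triage step justifiable and documentable when strict integrity of the whole source cannot be maintained.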
Again, preserving all evidence securely and in compliance with legal requirements calls for quite a substantial investment for forensic labs working on a significant number of cases.
Integrating methods and tools from data science implies surpassing the “sausage factory” forensics still widespread today, where under-skilled operators rely heavily on point-and-click all-in-one tools to perform the analysis. Analysts will need to include a plurality of tools in their panoply and, beyond that, understand and evaluate the algorithms and implementations they are based upon. The absolute need for highly skilled analysts and operators is clear, and suitable professional qualifications will have to develop to certify this.
The final report for an analysis conducted using data science concepts should contain accurate evaluations of the tools and methods used, including data from the validation process; accurate documentation is even more fundamental as strict repeatability becomes very hard to uphold.
3.3 Some tools for tackling the Big Data Challenge
At this stage, due also to the fast-changing landscape in data science, it is hard to categorize its tools and techniques systematically. We review some of them here.
Map-Reduce is a framework for massively parallel tasks. It works well when the datasets do not involve a lot of internal correlation. This does not seem to be the case for digital evidence in general, but a task like file fragment classification is well suited to being modelled in a Map-Reduce paradigm. Attributing file fragments – coming from a filesystem image or from unallocated space – to specific file types is a common task in forensics: machine learning classification algorithms – e.g. logistic regression, support vector machines – can be adapted to Map-Reduce if the analyst forgoes the possible correlations among single fragments. A combined approach, where a classification algorithm is paired for instance with a decision tree method, would probably yield higher accuracy.
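As an illustration of why the task fits the paradigm, a fragment-classification job in the Map-Reduce style can be sketched in a few lines of Python. The “classifier” here is a trivial printable-byte heuristic standing in for a trained model (logistic regression, SVM); the fragments are invented:

```python
from collections import Counter
from functools import reduce

def classify_fragment(fragment: bytes) -> str:
    """Toy stand-in for a trained classifier: label a fragment by its
    ratio of printable bytes (tab, LF and CR count as printable)."""
    printable = sum(1 for b in fragment if 32 <= b < 127 or b in (9, 10, 13))
    return "text" if printable / max(len(fragment), 1) > 0.85 else "binary"

# Map: each fragment is classified independently -- no cross-fragment
# correlation, which is what makes the task Map-Reduce friendly.
def map_phase(fragments):
    return [Counter({classify_fragment(f): 1}) for f in fragments]

# Reduce: merge the per-fragment counts into a single tally.
def reduce_phase(partials):
    return reduce(lambda a, b: a + b, partials, Counter())

fragments = [b"From: alice@example.org\nSubject: hello\n",
             bytes(range(256)),
             b"plain ASCII log line 42\n"]
print(reduce_phase(map_phase(fragments)))
```

Because the map step touches each fragment exactly once and in isolation, the same code distributes naturally over many workers; only the small per-worker tallies travel to the reducer.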
Decision trees and random forests are fruitfully brought to bear in fraud detection software, where the objective is to find in a vast dataset the statistical outliers – in this case anomalous transactions, or in another application, anomalous browsing behaviour.
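Tree-based detectors need a full training pipeline, but the underlying idea – isolating statistical outliers in a mass of transactions – can be shown with a much cruder z-score filter using only the standard library. The transaction amounts are invented, and this is a sketch of the concept, not of a production detector:

```python
from statistics import mean, stdev

def zscore_outliers(amounts, threshold=3.0):
    """Flag amounts that deviate more than `threshold` standard
    deviations from the mean. Far cruder than the random forests
    mentioned above, but the same goal: surface the outliers."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Hypothetical transaction amounts: routine payments plus one anomaly.
transactions = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 5000.0]
print(zscore_outliers(transactions, threshold=2.0))  # -> [5000.0]
```

A tree-based model improves on this by combining many weak splits over several features (amount, time, counterparty, ...) instead of a single univariate threshold, which is why it copes better with subtle fraud patterns.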
In audio forensics, unsupervised learning techniques under the general heading of “blind signal separation” give good results in separating two superimposed speakers, or a voice from background noise. They rely on mathematical underpinnings to find, among the possible solutions, the least correlated signals.
In image forensics, classification techniques are again useful for automatically reviewing big sets of hundreds or thousands of image files, for instance to separate suspect images from the rest.
Neural networks are suited to complex pattern recognition tasks in forensics. A supervised approach can be used in which successive snapshots of the file system train a network to recognize the normal behaviour of an application; after the event, the system can then automatically build an execution timeline from a forensic image of a filesystem. [KhCY07] Neural networks have also been used to analyse network traffic, but in that case the results do not yet show high levels of accuracy.
Natural Language Processing (NLP) techniques, including Bayesian classifiers and unsupervised clustering algorithms like k-means, have been successfully employed for authorship verification and for the classification of large bodies of unstructured text, emails in particular.
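A toy version of such a Bayesian classifier for authorship can be written with the standard library alone. The author names and snippets below are invented placeholders for real email corpora, and real authorship work would use far richer features than raw word counts:

```python
import math
from collections import Counter

def train(corpus):
    """corpus: {author: [texts]} -> per-author word counts plus the
    shared vocabulary, for a naive Bayes classifier with add-one
    smoothing. A toy sketch, not a forensic-grade stylometry tool."""
    models, vocab = {}, set()
    for author, texts in corpus.items():
        counts = Counter(w for t in texts for w in t.lower().split())
        models[author] = counts
        vocab |= set(counts)
    return models, vocab

def classify(models, vocab, text):
    """Return the author whose word distribution best explains text."""
    best, best_score = None, float("-inf")
    for author, counts in models.items():
        total = sum(counts.values())
        score = sum(
            math.log((counts[w] + 1) / (total + len(vocab)))
            for w in text.lower().split()
        )
        if score > best_score:
            best, best_score = author, score
    return best

# Hypothetical snippets standing in for large email collections.
corpus = {
    "alice": ["the quarterly report is attached", "please review the report"],
    "bob":   ["fancy a pint later", "the match was brilliant last night"],
}
models, vocab = train(corpus)
print(classify(models, vocab, "see the attached report"))  # -> alice
```

The add-one smoothing keeps unseen words from zeroing out a candidate author, which matters on short disputed messages; k-means clustering would instead group unlabelled emails without any training authors at all.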
The challenges of big data evidence already highlight the necessity of revising tenets and procedures firmly established in digital forensics. New validation procedures, analyst training and analysis workflows will be needed to confront the changed landscape. Furthermore, few forensic tools implement, for instance, machine learning algorithms, and conversely most machine learning tools and libraries are not suitable and/or validated for forensic work, so there is still wide scope for the development of innovative tools leveraging machine learning methods.
[BBC+08] Barreno, M. et al.: “Open Problems in the Security of Learning”. In: D. Balfanz and J. Staddon, eds., AISec, ACM, 2008, p. 19-26
[FBI08] FBI: “RCFL Program Annual Report for Fiscal Year 2008”, FBI 2008. http://www.fbi.gov/news/stories/2009/august/rcfls_081809
[FBI10] FBI: “RCFL Program Annual Report for Fiscal Year 2010”, FBI 2010.
[ISAC13] ISACA: “What Is Big Data and What Does It Have to Do with IT Audit?”, ISACA Journal, 2013, p. 23-25
[ISO12] ISO/IEC 27037 International Standard
[KhCY07] Khan, M., Chatwin, C. and Young, R.: “A framework for post-event timeline reconstruction using neural networks”, Digital Investigation 4, 2007
[Pear01] Pearson, G.: “A Road Map for Digital Forensic Research”. In: Report from DFRWS 2001, First Digital Forensic Research Workshop, 2001.
[Vari09] Varian, Hal in: “The McKinsey Quarterly”, Jan 2009
About the Author
Alessandro Guarino is a senior Information Security professional and independent researcher. He is the founder and principal consultant of StudioAG, a consultancy firm based in Italy and active since 2000, serving clients both in the private and public sector and providing cybersecurity, data protection and compliance consulting services. He is also a digital forensics analyst and consultant, as well as expert witness in Court. He holds an M.Sc in Industrial Engineering and a B.Sc. in economics, with a focus on Information Security Economics. He is an ISO active expert in JTC 1/SC 27 (IT Security Techniques committee) and contributed in particular to the development of cybersecurity and digital investigation standards. He represents Italy in the CEN-CENELEC Cybersecurity Focus Group and ETSI TC CYBER. He is the chair of the recently formed CEN/CENELEC TC 8 “Privacy management in products and services”. As an independent researcher, he delivered presentations at international conferences and published several peer-reviewed papers.
Find out more and get in touch with the author at StudioAG.