Wednesday, 3 April 2013

Reliability of Digital Evidence


Familiarity with Terminology

Appreciation of digital evidence requires an understanding of both the technology and

the vernacular (Kerr, 2009). Consider the single most important step in computer

forensics, that of making a forensically correct copy of the evidentiary medium (Brown,

2010). To ensure that the original evidence is not compromised in any way, a copy of the

evidence medium is created, and the forensic examination and analysis are performed on

the copy.

The forensic copy of the evidence medium was historically called a mirror image.
This term is generally understood by a computer scientist or computer

forensic examiner to mean an exact copy but can be misunderstood by lay audiences to

mean a reverse copy because mirrors reflect an opposite image (Brown, 2010). The term

mirror image was so confusing to courts that the process is now called a bit-for-bit

forensic copy to avoid any such ambiguity (Casey, 2011).

To safeguard the integrity of the original data, the imaging process often copies the

original evidence in fixed-size blocks to the examination medium, and each block is

individually validated. The imaging process can be shown to produce a faithful replica of

the original, but the copy may not necessarily look exactly the same as the original.

 Further, some imaging formats employ compression so that

the examination copy is smaller than the original evidence (Common Digital Evidence

Storage Format Working Group, 2006). Explaining that the forensically correct

examination copy is not a bit-for-bit identical copy of the original evidence can cause

confusion for some audiences.
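To make the block-by-block validation concrete, the following is a minimal sketch in Python of the idea; the block size, hash algorithm (SHA-256), and file paths are illustrative assumptions rather than the behavior of any particular forensic tool.

import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real imaging tools often use larger blocks

def image_device(source_path, image_path):
    """Copy source to image in fixed-size blocks, hashing each block."""
    block_hashes = []
    with open(source_path, "rb") as src, open(image_path, "wb") as dst:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            dst.write(block)
            # Each block is individually validated by recording its hash,
            # so the copy can later be checked block by block.
            block_hashes.append(hashlib.sha256(block).hexdigest())
    return block_hashes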

According to Brown (2010), creating the forensic copy of the evidence medium is the

only actual science that occurs in the computer forensics process. For imaging

procedures to be accepted by the court as offering valid information of evidentiary value,

these procedures must meet the Daubert reliability test (Daubert, 1993; Kerr, 2005a) and

must be shown to be reproducible, so that two qualified, competent technicians using the

same hardware and software are able to create identical forensic copies given the same

original (Brown; Casey, 2011). This might not actually be the case, however. Suppose,

for example, that the original evidence disk drive has a sector capable of being read just

one more time. After the first technician makes an image of the evidence disk, that sector

is no longer readable. When the second technician makes a forensic copy of the original,

the second copy will not be identical to the first as a consequence of the bad sector, even

though the same process was followed (Brown). Although the impact of this difference

is minimal, only the most astute fact-finder will understand how to make a decision about

the acceptability of this evidence. Further, a procedure for

precisely determining how different the two copies are and whether that difference

actually affects the reliability of the evidence is not available.
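One way to begin reasoning about how different two copies are is simply to compare them block by block. The sketch below, with a hypothetical sector size and file names, reports the indices of the blocks at which two images diverge; in the bad-sector scenario above, it would isolate the single unreadable sector.

def differing_blocks(image_a, image_b, block_size=512):
    """Return the indices of blocks at which two image files differ."""
    diffs = []
    with open(image_a, "rb") as a, open(image_b, "rb") as b:
        index = 0
        while True:
            block_a = a.read(block_size)
            block_b = b.read(block_size)
            if not block_a and not block_b:
                break
            if block_a != block_b:
                diffs.append(index)
            index += 1
    return diffs

print(differing_blocks("copy1.dd", "copy2.dd"))  # hypothetical image files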

The disk imaging process is normally performed on a hard drive that has been

removed from a computer system. In some circumstances, however, it is necessary to

image a disk drive in a running computer, such as when information is needed from an
encrypted drive that may become unrecoverable if the system is shut down, or from an
organization's server whose shutdown would unduly disrupt the business.

 Imaging a running computer system may cause some files

associated with the imaging application to be written to the hard drive, thus altering the

original evidence prior to the completion of the imaging process. Imaging a live system also provides an opportunity to make a forensic

copy of the system's random access memory (RAM). Since the imaging program has to

be loaded into RAM to execute, some of the original contents of RAM are overwritten

prior to the copy being made (Brown; van Baar, Alink, & van Ballegooij, 2008). In both

of these instances, the court must be assured that whatever information is lost due to the

live imaging process will not contain a sufficient amount of incriminating or exculpatory

evidence to make a difference in reaching a just outcome of the case at hand (Kenneally

& Brown, 2005).

Another emerging digital investigative procedure is network forensics, whereby data

packets are read directly from the network itself using packet sniffing hardware or

software. Typically, not all of the packets will be captured because the packet sniffing

equipment may be unable to keep up with the volume of traffic on the network (Casey,

2011; Kessler & Fasulo, 2007). Offering an incomplete record of activity into evidence

at trial must be accompanied by a clear, yet necessarily technical, explanation of the

reasons why the missing data introduce no bias and, therefore, why such evidence

should be admitted (Dinat, 2004; Kenneally, 2005).
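As a rough illustration of why captures can be incomplete, the sketch below uses the third-party scapy library (an assumption; production network forensics normally relies on dedicated capture hardware or libpcap-based tools). A sniffer like this one cannot process packets faster than they arrive, so under heavy traffic some packets are simply never recorded.

from scapy.all import sniff  # requires scapy and capture privileges

packets = sniff(count=100)   # capture 100 packets, then stop
print(f"captured {len(packets)} packets")
for pkt in packets[:5]:
    print(pkt.summary())     # one-line description of each packet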

While a lack of familiarity with digital technology and the resultant impact on court

cases might suggest that new rules of evidence or specialist judges are necessary, the fact

is that no such movement is currently underway within the judicial community (H. B.

Dixon, personal communication, August 1, 2009; Shaw, 2006; N. L. Waters, personal

communication, November 20, 2008). This lack of technical understanding could

possibly inhibit judges from critically evaluating the evidence presented to them as they

perform their gatekeeper role and apply the Daubert test to presented evidence (Losavio,

Adams, & Rogers, 2006; Losavio, Wilson, & Elmaghraby, 2006; Van Buskirk & Liu,

2006).

Reliability of Digital Evidence

The challenge of proving the accuracy and reliability of digital evidence is

exacerbated by the fact that this type of evidence is sometimes neither. According to Van

Buskirk and Liu (2006), a perception exists among many in the legal community that

digital evidence, if accepted and admitted in court, is reliable and correct. However, the

variability in forensics software, errors in the imaging process, and differences in

examiners’ knowledge affect the reliability, accuracy, and integrity of digital evidence

(Casey, 2002; Cohen, 2008, 2010). In fact, Oppliger and Rytz (2003) and Van Buskirk

and Liu make serious arguments that digital evidence is inherently unreliable largely

because completeness cannot be verified and proven.

An example of the unreliability of digital evidence includes the timestamps commonly

associated with files. Timestamps are metadata associated with a file and maintained by

a digital device’s operating system that indicates the date and time that the file was

created, last accessed, and/or last modified (Casey, 2011). File timestamps can be

important evidence because they allow the computer forensics examiner to build a

timeline of activities. The order in which a set of events occurs can dramatically affect

the interpretation of those events (Cohen, 2008, 2010). If the forensics software

inaccurately reports the timestamp information for any reason, the veracity of all of the

information is suspect (Van Buskirk & Liu, 2006). In addition, not all programs update

all instances of file timestamps in a consistent fashion, and even normal file system

operations can provide seemingly contradictory timestamp information, such as when a

file’s reported last access time precedes the file’s creation time (Brown, 2010; Casey,

2002, 2011).
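The platform-dependence of timestamps is easy to demonstrate. In the Python sketch below (the file name is hypothetical), the very meaning of st_ctime differs by operating system: on Windows it is the file's creation time, while on Unix-like systems it is the last metadata-change time, exactly the kind of inconsistency described above.

import os
import datetime

def show_timestamps(path):
    st = os.stat(path)
    for label, ts in (("accessed", st.st_atime),
                      ("modified", st.st_mtime),
                      ("created/changed", st.st_ctime)):
        print(label, datetime.datetime.fromtimestamp(ts).isoformat())

show_timestamps("example.txt")  # hypothetical file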

As part of their gatekeeper role, judges must determine the reliability of reports and

analysis gathered from forensics software (Jones, 2009; Kenneally, 2001b; Kerr, 2005a).

While several well-known commercial and open source computer forensics applications,
such as AccessData’s Forensic Toolkit (FTK), Brian Carrier’s Autopsy, Guidance Software’s
EnCase, and X-Ways Forensics, are generally accepted by the courts, judges can

rightfully question whether a given version of a particular application is as reliable,

verifiable, error-free, and thorough as a previous version that has already been accepted

by the court (Brown, 2010). Competent computer forensics laboratories will validate

software as new versions are released, but the validation process is something that judges

need to understand to properly apply the Daubert criteria to offered evidence (Brown;

Kenneally; Van Buskirk & Liu, 2006).
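One small element of such a validation process can be sketched as follows: re-image a reference drive with the new tool version and confirm that the result hashes to the same value as the image produced by the previously validated version. The file name and baseline value here are hypothetical.

import hashlib

def sha256_of(path):
    """Hash a file in chunks so large images do not exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

BASELINE = "..."  # hash recorded when the previous version was validated
print("matches baseline:", sha256_of("reference_image.dd") == BASELINE)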

An additional factor complicating the reliability of digital evidence is that of specific

attacks on the computer forensics process and the forensics software. The Metasploit

Anti-Forensic Investigation Arsenal (MAFIA), for example, is an open-source toolkit

specifically designed to exploit known vulnerabilities and limitations in computer

forensics software applications. The MAFIA toolkit includes applications that can

change a file's timestamp metadata, hide information in the empty space in a data file,

and alter the metadata that identifies the format of the content in the file (Metasploit LLC,

2010). These tools work at a very low level and require a significant understanding of

the underlying ICTs to appreciate their operation and the reasons why the digital

evidence that was gathered might still yield substantial information of evidentiary value

(Harris, 2006; Newsham, Palmer, Stamos, & Burns, 2007).
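Timestamp alteration in particular requires no special toolkit at all, which underscores how fragile that evidence can be. The sketch below (not the MAFIA tools themselves; the file name and date are hypothetical) rewrites a file's access and modification times with a single standard-library call.

import os
import datetime

path = "invoice.doc"  # hypothetical file
fake = datetime.datetime(2001, 1, 1, 12, 0, 0).timestamp()
os.utime(path, (fake, fake))  # set both atime and mtime to the fake value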

Finally, not all data gathering is performed by trained investigators and computer

forensics examiners. Numerous criminal investigations are initiated after a routine

review at a private company reveals evidence of criminal wrongdoing that is then turned

over to law enforcement (Casey, 2011). In such cases, the initial evidence gathering may

be haphazard, and, therefore, it can be difficult to prove completeness and reliability of

the data.