Jessica Hyde - Direct
MR. BRENNAN: The Commonwealth calls Jessica Hyde.
COURT OFFICER: Just wait. Follow that officer right there. And follow me, please.
COURT CLERK: Raise your right hand, please. Do you swear that the evidence you give to the court and jury in the case now being heard will be the truth, the whole truth, and nothing but the truth, so help you?
MR. ALESSI: Good morning, your honor.
COURT CLERK: Good morning.
JUDGE CANNONE: All right, Mr. Brennan, whenever you're ready.
MR. BRENNAN: Thank you. Good afternoon.
MR. BRENNAN: Could you please introduce yourself to the jury and spell your last name for the record?
MR. BRENNAN: What do you do for a living?
MS. HYDE: I'm a digital forensics examiner. I'm also an adjunct professor at George Mason University where I teach mobile device forensics in their graduate program. I also own a digital forensics company that does training services and research.
MR. BRENNAN: In addition to teaching at Georgetown University, do you teach in other ways?
MS. HYDE: Correction — I teach at George Mason University, not Georgetown. But thank you. I also teach — my company writes and develops training courses. I teach a mobile forensic analysis course and a data structures course there to law enforcement, civilians, et cetera.
MR. BRENNAN: Do you engage in any other type of academics or research?
MS. HYDE: I do other research as part of some volunteer activities in the digital forensics field. I'm the chair of DFIR Review, which is a body that reviews practitioner-created research so that it can go through peer review. I also often serve as a reviewer for the Forensic Science International: Digital Investigation journal, of which I was previously an associate editor.
MR. BRENNAN: What does peer review — oh, I'm so sorry. What does peer review mean?
MS. HYDE: Peer review means that when a study is done or a research paper is done on a topic, that an assessment is done of it. There are three types of assessments that can be done. One is a methodological assessment, where you read through the methodology of the academic or practitioner-created paper and state if it is acceptable or not. The second is you conduct your own testing using the same methodology to conduct a peer review with your own generated data sets. And the third is, if the original author has shared their data sets, reviewing their data sets and checking that against their work.
MR. BRENNAN: Has any of your work been peer-reviewed in the past?
MS. HYDE: I've had two peer-reviewed journal articles. I've had an article that was peer-reviewed and published in Forensic Science International: Digital Investigation regarding the standardization of file classification of recovered items. The layman's term of that would be talking about what "deleted" means. And then I have another paper that's been accepted, which I co-authored, on timelines and correlations of forensic evidence.
MR. BRENNAN: You mentioned you work in forensic data analysis generally. What does that mean?
MS. HYDE: It means that I look at evidence from digital devices, be they mobile phones, computers, or internet-of-things devices, and do analysis to provide meaning and context to the data that's recovered from them, as well as performing data recovery on those same computer-related devices.
MR. BRENNAN: Did you have any specific education or training to learn about forensic data?
MS. HYDE: Absolutely. I have professional training from courses I've taken, and I have a master's degree in digital forensics from George Mason University. I've taken multiple courses — advanced mobile analysis, iOS forensics courses from SANS, computer forensics courses from SANS. I have multiple certifications. I hold the GIAC GCFE, which is the GIAC Certified Forensic Examiner, and I also hold the NW3C CCE, a certified forensic examiner credential.
MR. BRENNAN: Do you, through your company, teach other forensic examiners how to study forensic data?
MS. HYDE: Correct. We teach government agencies — both federal, state, and local — how to do digital forensic analysis, as well as private sector digital forensics examiners.
MR. BRENNAN: How do you teach private digital forensic examiners? What's the method?
MS. HYDE: I've developed an online virtual course that we teach virtually and live, so I'm always there when I'm teaching, but I also have a team of instructors. So for example, the mobile forensic analysis course that I developed — right now, this week — is being taught to a police department in Texas right this moment.
MR. BRENNAN: Forensic data analysis seems like a broad field. Are there certain parts that you focus in?
MR. BRENNAN: I want you to just give us some very basic, very general and brief background. When you're looking at a forensic device and you're analyzing it, do you rely on any tools?
MS. HYDE: That is a really interesting question. Reliance on tools is not precisely what we do. We utilize tools to extract data from a device, and then we utilize a variety of tools to attempt to parse results from those devices. But the forensics examiner has to take the additional steps to understand, validate, and provide meaning to those results.
MR. BRENNAN: Is there any shortcoming or danger in simply relying on a forensic tool for a result or an opinion?
MS. HYDE: There is absolutely a danger in just relying on parsed results. An algorithm can determine how data is stored, but it doesn't tell you what that data means. And that's where you need a forensics examiner. Tools don't necessarily understand how the data got there and what causes the data to exist. That takes deeper and further human analysis. As well — if you think about it, on your phone you can go to the Google Play Store or the App Store and download millions of apps. There are about six million apps between the two stores. Digital forensics tools maybe support, generously, a thousand applications.
MS. HYDE: So in order to be able to parse and understand the data from applications that the tools don't know how to support, you need a forensics examiner who knows how to dig into that data, conduct testing, and determine the meaning of the data.
MR. BRENNAN: Were you asked by the Norfolk County District Attorney's Office to look at and analyze a phone that was related to a person by the name of Jennifer McCabe?
MR. BRENNAN: Were you asked during that scope and date range to analyze it and provide an opinion about whether there were any user-initiated deletions of any web searches or Safari-type searches?
MS. HYDE: Yes, I was asked to look for deletions of Safari searches within the scope of the time frame that was dictated by the purpose of the exam.
MR. BRENNAN: Were you asked to look at the phone device and determine whether or not there were any user-initiated deletions of phone calls on the device?
MS. HYDE: Yes, I was asked to look for deletions of phone calls in that same date range.
MR. BRENNAN: And did you do both of those tasks?
MS. HYDE: Yes, that's correct.
MR. BRENNAN: Did you ultimately have opinions on both of those issues?
MR. BRENNAN: And in engaging in the efforts to answer those two questions, did you necessarily have to review and analyze a web history that included searches on January 29th, 2022 at 2:27 a.m.?
MS. HYDE: I did analyze web searches that occurred at 2:29 a.m. local time here in Norfolk on January 29th, 2022, from the device that was provided. Yes.
MR. BRENNAN: Not the device — the forensic image, for clarity. So before we get to the two opinions that you were asked to consider — or what your opinions were after analyzing the data — I want to begin by giving the background and having you share with us your analysis, your observations, and your opinions about the search on Safari relative to this 2:27 time frame. Before we begin that, let me just ask you one question. Did you do anything to ensure the integrity of the data that you were looking at?
MS. HYDE: Upon receiving the forensic image, the first thing I did was what's called a hash of the image. A hash is an algorithmic representation of a file. A forensic image — all the data from the phone — comes in one file that can then be extracted to get at the rest of the data. That singular file that is received — there are actually three, but focusing on the one that has all of the data — that archive file, you can run what's called a hash algorithm against it, and there are actually multiple hash algorithms. I hashed it, and then I validated that hash against the document that was provided along with the image file — that those hash values matched. Now, just for clarity: if you change one bit in a file, it will not have the same hash. So the hash provides integrity.
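The hash verification described above can be sketched in a few lines of Python. The file name and contents here are placeholder stand-ins, not the actual case values; the point is the workflow — hash the received image with two algorithms, compare against the documented values, and observe that flipping a single bit changes the digests entirely.

```python
import hashlib

def file_hashes(path, chunk_size=1 << 20):
    """Compute MD5 and SHA-256 of a file, reading in chunks so large
    forensic images never have to fit in memory."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

# Stand-in "image" file; in practice this is the acquisition archive.
with open("image.bin", "wb") as f:
    f.write(b"forensic image payload")

# Values an examiner would read from the acquisition report.
documented_md5, documented_sha256 = file_hashes("image.bin")

# Verification: recompute and compare against the documented values.
md5, sha256 = file_hashes("image.bin")
assert (md5, sha256) == (documented_md5, documented_sha256)  # integrity holds

# Flip one bit: the digests no longer match anything documented.
with open("image.bin", "r+b") as f:
    first = f.read(1)[0]
    f.seek(0)
    f.write(bytes([first ^ 0x01]))
assert file_hashes("image.bin")[0] != documented_md5
```

Two independent algorithms are used for the same reason the witness gives: agreement of both digests with the report makes accidental or deliberate alteration after acquisition detectable.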
MS. HYDE: And having run both hashes, which are available in that report, using two different algorithms — they both matched. Additionally, I reviewed the PDF that was received for signs of alteration. The PDF that comes with a GrayKey report is not signed by Adobe Acrobat. However, I was able to take that PDF and examine it just like I would a phone. I'm using the same techniques on the PDF using a tool called ExifTool. I was able to learn a lot about the creation of that particular PDF. What I was able to learn was that it was created — it's labeled as being produced by Grayshift — using a particular code-based application to create the PDF called PyPDF, standing for Python PDF framework.
MS. HYDE: It is a program that you can download, and developers who create tools like Magnet Forensics, which owns GrayKey, utilize that framework — which is why it says it's produced by it — to create the actual PDF. That's what the evidence there in the data tells me. It also contains a time of when it was created, and that time was — I'm sorry — February 2nd, 2022, at 22:49 UTC — that's the global reference time, think of the time in the UK — which in local time would be 5:49 p.m. So it was created approximately 2 hours and 45 minutes after the document said the image started, and it correlated with the document information. So that provided some level of certainty that that document was factual.
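A real exam would use ExifTool, as the witness did; as a self-contained illustration, the same Producer and CreationDate fields can be pulled from a PDF's Info dictionary with a regex over the raw bytes. The byte blob below is a toy stand-in patterned on the values she describes (a pyPDF producer string and a 22:49 UTC creation time), not the actual report.

```python
import re
from datetime import datetime, timezone

# Toy stand-in for the raw bytes of a report PDF (illustrative values only).
pdf_bytes = (
    b"%PDF-1.4\n"
    b"1 0 obj\n<< /Producer (pyPDF) "
    b"/CreationDate (D:20220202224900Z) >>\nendobj\n"
    b"%%EOF\n"
)

def pdf_info(raw):
    """Extract Producer and CreationDate from a PDF Info dictionary.
    PDF dates are encoded as D:YYYYMMDDHHmmSS plus an optional zone suffix."""
    producer = re.search(rb"/Producer \(([^)]*)\)", raw)
    created = re.search(rb"/CreationDate \(D:(\d{14})", raw)
    when = None
    if created:
        when = datetime.strptime(created.group(1).decode(), "%Y%m%d%H%M%S")
        when = when.replace(tzinfo=timezone.utc)  # assuming a Z (UTC) suffix
    return producer.group(1).decode() if producer else None, when

producer, created_utc = pdf_info(pdf_bytes)
print(producer)     # pyPDF
print(created_utc)  # 2022-02-02 22:49:00+00:00
```

22:49 UTC corresponds to 5:49 p.m. US Eastern in winter (UTC-5), matching the cross-check she describes between the PDF's internal creation time and the acquisition paperwork.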
MS. HYDE: And on top of that, I further opened the file of that PDF in what's called a hex editor, which lets me see every byte, every one and zero. And from doing that, I looked for tag markers that are made sometimes in some modifications of PDFs called XMP, which is Adobe's format of markup. And I did not find any evidence of those tags.
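The hex-editor check she describes — scanning every byte for XMP markup — amounts to searching the raw file for XMP's well-known marker strings (`<?xpacket begin=` and the `x:xmpmeta` root element are standard parts of Adobe's XMP packet format). A minimal sketch:

```python
# XMP packets embedded in a PDF begin with well-known marker strings; a
# file edited by Adobe tooling will typically carry one of these.
XMP_MARKERS = (b"<?xpacket begin=", b"<x:xmpmeta", b"xmlns:xmp=")

def has_xmp(raw: bytes) -> bool:
    """Return True if any XMP packet marker appears in the raw bytes."""
    return any(marker in raw for marker in XMP_MARKERS)

# Illustrative byte blobs, not real case files.
clean = b"%PDF-1.4\n1 0 obj\n<< /Producer (pyPDF) >>\nendobj\n%%EOF\n"
edited = clean + b"<?xpacket begin=\xef\xbb\xbf?>\n<x:xmpmeta/>"

print(has_xmp(clean))   # False
print(has_xmp(edited))  # True
```

Absence of these markers is what her testimony treats as the negative finding: no evidence of the Adobe-style modification tags in the report PDF.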
MR. BRENNAN: When you're trying to determine whether there's integrity in the data, are you able to tell whether or not there's been any alterations or tampering before the item is put on the GrayKey machine to make the extraction?
MS. HYDE: That would be when you're talking about if the physical device had been manipulated. Is that correct?
MR. BRENNAN: Yes.
MS. HYDE: I cannot tell if the physical device has been manipulated prior to it being imaged. I can only validate if the image was correct. In theory, all of the data of anything that you did on that device would still show in the evidence of device logs and whatever locations were touched or communicated with. But at that point, it's the same as any user interacting with the device itself.
MR. BRENNAN: After the phone's extracted, or at the time it's extracted, does it have the history of the device on that extraction?
MR. BRENNAN: Yes. At the time the phone is extracted and the copy is made — in the PDF and the digital copy — does the digital copy and the PDF contain the history of searches and calls and other items on that phone?
MS. HYDE: The forensic image itself contains a variety of different artifacts or traces that are left behind by activities that a user does on a phone. Yes.
MR. BRENNAN: You use the term "artifact." What does that mean in a basic sense?
MS. HYDE: An artifact is any type of evidence that is left behind on your phone of an action. So for example, if you receive a call, there are multiple places in the phone that show that a call was received. That can be in the call log, that could be in a notification, that could be in a unified log that tracks the activity that the phone's doing. There's multiple locations, and each one of those — residue of the fact that the call came in — that's what I would refer to as an artifact. So you could have an artifact not just about a call, about a web search, about taking a photo. Any action you take on your phone may create an artifact.
MR. BRENNAN: Before we get into those two questions about any alleged deletions from Safari, a Google search or internet search and any alleged deletions of phone calls, I want to start with your explanation, your sharing how the timestamp works relative to that search beginning at 2:27 a.m. on January 29th, 2022.
MS. HYDE: So for clarity, there's more than one search that actually happens at 2:27. There is some activity that's looking at — and I don't want to mispronounce this — Hockomock sports — looking at activities for the sporting events at that school system, I assume, or county location. There is a timestamp for the search "how long to die in cold." However, that timestamp isn't about active searches. It's the time that a tab was either opened or moved to the background. So "how long to die in cold" is the most current search in the tab that was opened at 2:27. If you're using your phone and you go to your browser, you have some choices. You can just open an existing tab or you can open a new tab. A new tab was opened at 2:27 a.m. and that search was done there.
MS. HYDE: Another tab actually at 2:27 was moved to the background and its last search at the time it was moved to the background was — I believe it's "It's Raining Men" — the YouTube video. So those two searches both exist as what's called browser state searches, but that browser state isn't about the time that it was searched. It's about the time that the browser tab that you opened either went to the background or, if it's never been moved to the background, the current search. So the time — in the instance of the "It's Raining Men" video — is the time that that video was moved to the background and the new tab took over as the tab that's active.
MS. HYDE: And that tab — the last search done in that tab is "how long to die in the cold," because that database holds the current search, it constantly gets updated, and the time that the tab was either opened if it's the first time it's opened, or moved to the background if it's an existing tab.
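The tab-state behavior described above can be modeled with a toy SQLite table. This is not Safari's actual BrowserState.db schema — the table, columns, and search strings are illustrative stand-ins — but it captures the semantics at issue: the record always holds the tab's most recent search, while the timestamp is set only when the tab is opened or backgrounded.

```python
import sqlite3

# Toy stand-in for a browser tab-state table (NOT Safari's real schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab_state "
            "(tab_id INTEGER PRIMARY KEY, last_search TEXT, state_time TEXT)")

def open_tab(tab_id, search, now):
    # Opening a new tab records the current time.
    con.execute("INSERT INTO tab_state VALUES (?, ?, ?)", (tab_id, search, now))

def search_in_tab(tab_id, search):
    # A later search in the tab replaces the text but NOT the timestamp.
    con.execute("UPDATE tab_state SET last_search = ? WHERE tab_id = ?",
                (search, tab_id))

def background_tab(tab_id, now):
    # Backgrounding a tab is what refreshes its timestamp.
    con.execute("UPDATE tab_state SET state_time = ? WHERE tab_id = ?",
                (now, tab_id))

open_tab(1, "its raining men youtube", "02:10")  # illustrative earlier tab
background_tab(1, "02:27")                       # tab 1 backgrounded at 2:27
open_tab(2, "hockomock sports", "02:27")         # new tab opened at 2:27
search_in_tab(2, "how long to die in cold")      # typed hours later

for row in con.execute("SELECT * FROM tab_state ORDER BY tab_id"):
    print(row)
# (1, 'its raining men youtube', '02:27')
# (2, 'how long to die in cold', '02:27')
```

Both rows carry the 2:27 state time even though tab 2's visible search was entered much later — precisely the misreading an untrained examiner can fall into.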
MR. BRENNAN: Did you put together an exhibit that would help explain some of the —
MR. BRENNAN: May I approach?
JUDGE CANNONE: Yes.
MR. BRENNAN: Do you recognize that?
MS. HYDE: I do. This is table one from the first report I created on this case on the phone that I was told was for Jen McCabe.
MR. BRENNAN: Will that help you explain?
MR. BRENNAN: I move this into evidence.
JUDGE CANNONE: Before we look at any chalk, I want you to explain the entire basis for that chalk and your ultimate opinions that are reflected in the chalk. So let's start from the beginning.
JUDGE CANNONE: Why don't I just ask you, see if you can answer the questions, and if you can —
JUDGE CANNONE: When you looked at the phone to analyze it, were you trying to determine when a particular search was made? Specifically, the search "how long to die in cold."
MS. HYDE: Correct. I was looking for two searches, both "how long to die in cold" and "how long to die in cold." [unintelligible variants]
MR. BRENNAN: So as a forensic examiner, how do you begin the process to try to identify and determine when that search is actually made?
MS. HYDE: So the first thing we would want to identify is what application was being used at the time to make the search. Looking at that date and timestamp, I was able to determine that Safari was the application being used as the browser for the Google search. And then you would begin looking at the artifacts for Safari, both those that are parsed by the forensics tools, but then further looking into the data structures that hold that data itself, and then ultimately conducting testing to determine why a certain artifact exists.
MR. BRENNAN: So you mentioned the beginning step is the forensic tool or tools. Did you use forensic tools to get information or parse any data regarding this potential search at 2:27?
MR. BRENNAN: Can you share with us the multiple forensic tools that you used?
MS. HYDE: Absolutely. I used Cellebrite Physical Analyzer. I used Magnet AXIOM. I used — just the tools I used at first. There are further tools I used later, but those were the three that I started with.
MR. BRENNAN: What tools did you use later?
MS. HYDE: Later I used the Forensic Browser for SQLite from Sanderson Forensics, which helps examine SQLite databases, and then I further reviewed in a tool called Rabbit Hole.
MR. BRENNAN: To begin with, Cellebrite — is that a tool that you're familiar with?
MS. HYDE: Yes.
MR. BRENNAN: Is it a commonly used tool in the industry?
MR. BRENNAN: Why did you use Cellebrite?
MS. HYDE: I use Cellebrite because on any mobile exam, I'm going to validate what is captured by Cellebrite and Magnet AXIOM, because they have the most robust parsing of what is in my tool arsenal. And then I would also follow up with iLEAPP closely behind, because it has a lot of artifacts that aren't supported by either of those tools. Using a variety of tools is going to give you different coverage and each tool has a different perspective on how they view and display data.
MR. BRENNAN: You said you also used Magnet AXIOM. Is that correct?
MR. BRENNAN: Is that a leading forensic tool as well?
MR. BRENNAN: Are you familiar with that company?
MS. HYDE: I'm very familiar with Magnet Forensics. I was actually their director of forensics for five years, then continued to consult for them for an additional two years under my company, Hexordia, and my company continues to be a reseller of Magnet's products.
MR. BRENNAN: You mentioned different tools may present information a different way. Do those tools change the underlying data at all?
MS. HYDE: That's a great question. When we're talking about the results that a tool shows, the tools maintain the integrity of that original forensic image, and that's actually something we verify as we continue through our exams, to make sure the underlying data hasn't been changed. I apologize — I did use some other tools as well: ArtX and MHY. ArtX is capital A, lowercase r, lowercase t, capital X, and MHY is M, H, Y. I'm sorry — can you repeat the question? The spelling threw me off.
MR. BRENNAN: I asked you about using multiple tools and whether or not the use of different tools changes the actual underlying data.
MS. HYDE: So the actual underlying data is always maintained and we do validate that. Some tools allow you to see parsed results and then allow you to dig deeper into looking at the actual files and data structures. Those would be tools like Cellebrite Physical Analyzer and Magnet AXIOM. Some tools show you the parsed results and tell you where they got them — they show you the parsed results and tell you the file location, and then you would open that up in an external tool. And then some of the tools I used are not tools that parse the results but are tools that allow me to manually look at the data structures. So they directly allow me to look at the data structures, and all of the tools I use in this instance are tools that do not change the underlying data.
MR. BRENNAN: Is it important or critical to go beyond the tool's presentations and actually analyze the data yourself?
MS. HYDE: It is absolutely critical. Actually, in the NIST scientific foundations paper from the National Institute of Standards and Technology that states the foundations of digital forensic science, it states that it is the examiner's duty to not only verify and validate tool results but also to provide meaning to forensic parsed results.
MR. BRENNAN: Does software update regularly?
MS. HYDE: The software updates all the time. Typically we get an update about once a month. And to be honest, it's not enough to keep up with how many new apps are developed and how many new phones come out and how many of your applications get updates, new features — for example, maybe in Instagram or Facebook. So the tools update very, very regularly.
MR. BRENNAN: When you used the Cellebrite tool to analyze the 2:27 search, did the search "how long to die in the cold" have a timestamp on it?
MS. HYDE: There are actually multiple artifacts for "how long to die in cold." Remember earlier we defined an artifact, and we spoke about it being one of the types of traces. So there are multiple traces for "how long to die in cold." Only one of them is associated with the 2:27 timestamp, but there are other instances of evidence of that that was parsed by Cellebrite.
MR. BRENNAN: Is there any danger for an untrained eye to rely simply on the software when looking at a search like "how long to die in the cold" and seeing the 2:27 timestamp?
MS. HYDE: Absolutely. There's a real danger that an examiner who has not dug into the artifact and tested to see what it means may assume erroneously that that 2:27 timestamp is the time that the search was made. The search in that field of that artifact is going to always be the most recent search in the tab. But that timestamp actually means either the time that that tab was backgrounded, or if it's the first time the tab's been opened, when it was opened. So you could erroneously implicate a search was done hours, or some time period, or even days before it actually occurred.
MR. BRENNAN: Some of us leave our tabs open forever. In your teaching examiners and students, do you teach about this specific concern and concept?
MS. HYDE: Yes. I teach about this concern both in my mobile forensic analysis class, my mobile forensics course that I teach at George Mason University in their digital forensics masters program, and in the data structures course where we specifically learn how to analyze these databases in question.
MR. BRENNAN: In your experience, how commonly does an untrained examiner make this mistake?
MS. HYDE: I wouldn't be able to speak to how common an untrained examiner does it, but regularly my students get those questions wrong on earlier examples I give them in class, to kind of indicate to them that they could easily make mistakes without understanding meaning, and I use it as a teaching aid. So teaching, you know, 90 to 100 students a year, I regularly see that in untrained examiners that I'm teaching in my class, but I wouldn't be able to say that across the discipline.
MR. BRENNAN: You mentioned that Cellebrite showed the search on the report as "how long to die in the cold" with a timestamp of 2:27. Is it 2:27:40?
MR. BRENNAN: Do you know whether or not Cellebrite has updated its software?
MS. HYDE: In May of last year, Cellebrite made an update to their software, actually to remove this artifact because of its ambiguity and the risk that an examiner may overstate or misstate what it is.
MR. BRENNAN: Is this ambiguity reflected in other types of software?
MR. BRENNAN: Sure. For example, when you did a report in Axiom, did it have a similar reflection in the report?
JUDGE CANNONE: Ask it differently.
MR. BRENNAN: Did you run a report regarding this time frame through Axiom software?
MR. BRENNAN: When you did that, was it the same or a different result?
MS. HYDE: Magnet AXIOM and Cellebrite really show their data very, very differently. So Cellebrite takes a perspective of alerting people to possible deletions by annotating either a question mark or a red X next to artifacts that come from different areas. Magnet AXIOM instead marks if the artifact was parsed or carved. I feel like now I need to explain to you parsed or carved. So "parsed" means that the data was where it was expected to be found and the algorithm was able to cleanly see that this is there. "Carve" means that it had to run that algorithm against the data structure as a whole and found it as a partial result. So it carved it out — went and found it — instead of it just sitting there exactly where it was expected to be, and shows you both the results.
MS. HYDE: So it'll show all of the results of — in this instance — the tab sessions. But for the BrowserState.db tab state, instead of telling you this has a red X, it's going to say it was carved. So it's a different presentation of the same data.
MR. BRENNAN: And so I want to talk about deletions in a little bit, but I want to get back to the 2:27 search.
MS. HYDE: Sure.
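The parsed-versus-carved distinction can be demonstrated with a small SQLite database. After a row is deleted, a clean query (parsing) no longer returns it, but its text often remains in freed page space until overwritten, so a raw byte scan (carving) can still recover it. This relies on SQLite's default behavior (`secure_delete` off, no VACUUM); the file name and search string are illustrative.

```python
import os
import sqlite3

DB = "demo_tabs.db"
if os.path.exists(DB):
    os.remove(DB)

con = sqlite3.connect(DB)
con.execute("PRAGMA secure_delete = OFF")  # SQLite's default
con.execute("CREATE TABLE searches (id INTEGER PRIMARY KEY, term TEXT)")
con.execute("INSERT INTO searches (term) VALUES ('how long to die in cold')")
con.commit()
con.execute("DELETE FROM searches")        # user-style deletion
con.commit()
con.close()

# Parsed: a normal query of the live database shows nothing.
con = sqlite3.connect(DB)
parsed = con.execute("SELECT term FROM searches").fetchall()
con.close()
print(parsed)  # []

# Carved: the deleted record's bytes linger in freed page space, so a
# raw scan of the file can still find the search text.
raw = open(DB, "rb").read()
print(b"how long to die in cold" in raw)  # True
```

This is why a tool may show the same record twice with different annotations — one live, one flagged as carved or deleted — and why the examiner, not the tool, has to explain what that means.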
MR. BRENNAN: When you produced — or you looked at the report for Cellebrite and it showed this timestamp as well as the phrase "how long to die in the cold" — where did you look? How did you make an analysis to determine whether or not that search was actually made at 2:27 or another time? What were your next steps?
MS. HYDE: So in looking at that timestamp, the first thing I did was actually go and look at what literature exists on the artifact — if anybody had reviewed this artifact before. I also created my own data set and tested how we could determine what that timestamp means. So if I make a search in a tab and then I make another search in a tab, which timestamp is it? If I make a search in a tab and then open a different tab, what changes? I also looked at the documentation in the artifact reference guide for Magnet AXIOM, which does state — it describes that timestamp as the time in which the tab is backgrounded, and it gives a stipulation that in certain circumstances the timestamp can be earlier than the search.
MR. BRENNAN: Did you specifically look in any databases or for artifacts in the data to help make an analysis of when the search happened?
MS. HYDE: Of when the search happened as a whole? Absolutely. So there are multiple databases that I looked at that show search information for Safari. I looked at the mobile Safari plist. I looked at the BrowserState.db database. I looked for the history database. I also looked at the KnowledgeC.db. The KnowledgeC.db is a little bit different. That isn't a database for Safari — it's a database for all of Apple, so it can predict your behavior. So it takes in information that Apple wants to take in, including browser history, and it saves that information so it can make predictive information. I also looked at the caches in Safari, which are the things that Safari is trying to make quick reference to. This would include suggested terms for commonly searched items.
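Examining those Apple databases raw means handling their timestamp encoding: Safari's history and KnowledgeC store times as seconds since 2001-01-01 UTC (Cocoa/Core Data time) rather than the Unix epoch. The conversion is a fixed offset; the visit time below is an illustrative value, not a case record.

```python
from datetime import datetime, timedelta, timezone

# Apple's Cocoa/Core Data epoch: seconds since 2001-01-01 00:00:00 UTC.
COCOA_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def cocoa_to_utc(seconds: float) -> datetime:
    """Convert a Cocoa timestamp (as stored in KnowledgeC, Safari history)."""
    return COCOA_EPOCH + timedelta(seconds=seconds)

def utc_to_cocoa(dt: datetime) -> float:
    return (dt - COCOA_EPOCH).total_seconds()

# Illustrative: 2022-01-29 11:24:47 UTC is 6:24:47 a.m. US Eastern
# (UTC-5 in January).
visit = datetime(2022, 1, 29, 11, 24, 47, tzinfo=timezone.utc)
raw = utc_to_cocoa(visit)
print(raw)                # 665148287.0
print(cocoa_to_utc(raw))  # 2022-01-29 11:24:47+00:00
```

Misreading a Cocoa value as a Unix timestamp shifts every event by 31 years, which is one reason manual database review requires knowing the epoch each artifact uses.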
MR. BRENNAN: When a URL or a search through Safari is typed in, or if it connects, does it leave a trace in different parts of the computer or the phone?
MS. HYDE: Yeah. So when you actually do any search, you're going to leave traces. Now, it does depend if you are in private browsing or non-private browsing. Non-private browsing is what most of us use regularly. Private browsing might be what you elect to use — some people would elect to use it if they're doing searches they wouldn't want their spouse to see, like maybe they're looking at porn, or maybe you're doing something secure like looking at your banking information. You may intentionally use a private browser. For Google users, this would be the equivalent of incognito in Google Chrome. And you may intentionally use that. And so the artifacts for that are more limited than the artifacts for non-private searches.
MS. HYDE: So some of these artifacts only exist for non-private searches and some exist for private searches.
MR. BRENNAN: And in this case, this phone — was it on non-private or private?
MS. HYDE: So specifically the two searches we're talking about, "how long to die in cold" and "how long to die in cold" — excuse my differentiation there. Both of those searches were done in non-private browsing, so they weren't hidden. They were — I don't understand the question.
MR. BRENNAN: Was there any application to mask those searches?
MR. BRENNAN: We began with the 2:27:40 timestamp. The next recording on that tab — do you recall what time that was?
MS. HYDE: I believe the next — and I'm trying to remember the chart from memory — so I believe the next thing we're looking at is the 6:23 suggestion from Apple search terms for "how long to digest food." Am I matching what you're seeing on your chart?
MR. BRENNAN: Well, let me ask you — when you saw that, do you look on the same tab or a database? Where do you find that information?
MS. HYDE: Fair — understood. So all of those databases I was just mentioning, we're looking at all the parsed results from all of those, and I was looking in the entirety of the scope from midnight till noon — that was the time frame of my scope — of all of the content that was done in the tabs sessions: the tabs for BrowserState.db, cloud tabs artifacts, history DB, mobile Safari plist, KnowledgeC — each one of those — and then I was combining them to timeline out the activity.
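The timelining she describes — combining parsed rows from several databases into one chronological view — is, at its core, a merge-and-sort over artifacts. The rows below are illustrative stand-ins patterned on the sources named in the testimony, not actual case data.

```python
from datetime import datetime

# Illustrative artifact rows: (local time, source database, event).
artifacts = [
    ("06:24:47", "KnowledgeC.db",      "web usage: how long to die in cold"),
    ("02:27:40", "BrowserState.db",    "tab state: how long to die in cold"),
    ("06:24:47", "MobileSafari plist", "session: how long to die in cold"),
    ("06:23:51", "Safari caches",      "suggested term: how long to digest food"),
]

def timeline(rows):
    """Merge artifacts from every source into one chronological view."""
    return sorted(rows, key=lambda r: datetime.strptime(r[0], "%H:%M:%S"))

for t, source, event in timeline(artifacts):
    print(f"{t}  {source:18}  {event}")
```

Laying the sources side by side is what surfaces the discrepancy at issue: a lone 2:27 tab-state record against clustered activity artifacts hours later.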
MR. BRENNAN: At 6:23:51, did you look into "how long to die in cold"?
MS. HYDE: So that would be the next thing that we see after the Apple suggested term of "how long to digest food." That's the next thing we see. There are two entries immediately next to each other. I don't remember the precision on the seconds of which one's first off the top of my head, but one is in KnowledgeC and one is in the mobile Safari plist.
MR. BRENNAN: Can they appear in different places in the database?
MS. HYDE: The same searches appear in many different places. They can leave traces such as that KnowledgeC.db — I love that Apple calls that "knowledge"; it's like what it's trying to learn. So the KnowledgeC.db — what Apple's trying to learn about you — as well as in that mobile Safari plist, as well as at the moment of that search. Which — I don't have the device in that moment, but just so there's an understanding — that open tab, the URL would have been updated — or the URL, the website, would have been updated to that search at that moment. Had we imaged it at that moment, it would have been "how long to die in cold."
MR. BRENNAN: And finally at 6:24:47, did you identify any artifacts for that search — the one that appears with a timestamp of 2:27:40? Did you find the same?
MS. HYDE: Correct. At 6:24, we have two timestamps in the same two locations for "how long to die in cold": the KnowledgeC and the mobile Safari plist.
MR. BRENNAN: So based on your analysis, did you come to an opinion whether or not the phrase "how long to die in cold" was actually searched at 2:27 a.m. on January 29th, 2022?
JUDGE CANNONE: There are no artifacts. So the answer is yes or no.
MR. BRENNAN: So based on your analysis of the phone and the data using all of those softwares, did you come to an opinion whether or not the phrase "how long to die in cold" was searched at that timestamp of 2:27:40?
MR. BRENNAN: And what is your opinion?
MR. ALESSI: Objection, your honor. Rephrase the question as to the opinion. I'm sorry.
JUDGE CANNONE: I'll see you at sidebar. [unintelligible]
MR. BRENNAN: You've explained to us the analysis you engage in — the process you engaged in in forensic data analysis. Is there a methodology that you use, not just looking at the printout of the programs? A methodology you use that is accepted in the industry for experts to analyze this data and come to your opinion?
MS. HYDE: Absolutely. Beyond looking at what was in the tool results, I further looked at those databases independently and conducted testing of the artifacts.
MR. BRENNAN: And in addition to what you did, can you share with us how and why this methodology is accepted in your practice as a forensic analyst?
MS. HYDE: This methodology is accepted by NIST and by organizations like the Scientific Working Group on Digital Evidence. I am intimately familiar with it, both as a member of the Scientific Working Group on Digital Evidence, which works toward the building and development of these consensus-based documents of procedures to be followed in digital forensics analysis, and as a member of the National Institute of Standards and Technology (NIST) Organization of Scientific Area Committees (OSAC) Digital Evidence Subcommittee, where we also produce guidelines such as the data set generation guidelines — which cover how you do your actual testing and creation of data sets. And I was also part of the subcommittee that drafted that.
MS. HYDE: So I'm intimately familiar with the accepted policies and procedures because I am also a member of the groups that help author these, as well as review them and use them.
MR. BRENNAN: Are there articles and guidance in the industry about this particular practice of analysis?
MS. HYDE: For analysis — just for clarification, you're talking about analysis of Safari artifacts? Of mobile forensics? Of BrowserState.db? Just for clarity on the question.
MR. BRENNAN: Let's work our way through it. How about through Safari?
MS. HYDE: For Safari, there are multiple different blogs that talk about Safari, and I also covered analysis of Safari in classes I took, such as SANS FOR518, the Mac and iOS analysis course.
MR. BRENNAN: How about specific guidance regarding searching the different databases for artifacts — in terms of methodology for digging into databases?
MS. HYDE: There are multiple — I've taken courses throughout my master's program on how to analyze digital forensics artifacts, including mobile device forensics. Through my GIAC certifications, I've taken multiple instructor-led courses on how to do this as well, specifically covering mobile Safari history in mobile forensics — at GMU in my master's program and as part of the SANS FOR518 class. And in terms of data structures, I've studied how to analyze databases; SQLite and plists are covered in those courses. There are books on this methodology.
MS. HYDE: There are multiple books on digital forensics, many of which I've read, including books specifically on analysis of SQLite databases, such as Sanderson's SQLite Forensics, which is probably the most authoritative — because not only was it written by Sanderson, who makes a tool, it was tech-edited by three renowned forensics examiners as well as Dr. Richard Hipp, who created SQLite itself.
MR. BRENNAN: The methodology, the technique you use to look past the reports and actually look at the data to make your determination and arrive at your opinions — is that methodology regularly accepted within the forensic data community?
MS. HYDE: Absolutely. Again, I used the methodology from the consensus-based documents from the community — from the Scientific Working Group on Digital Evidence on best practices for analysis — and, specifically for the SQLite databases, I verified my findings in depth and in accordance with the understanding from Sanderson's book.
MR. BRENNAN: Now, you shared with us that you had an opinion regarding the time stamp and whether that was applicable to the search term "how long to die in cold."
MR. BRENNAN: Now I want to ask you, to a reasonable degree of forensic data scientific certainty, what is your opinion about whether the search for the phrase "how long to die in cold" was made at the time stamp 2:27:40?
JUDGE CANNONE: It's the last time, Mr. Brennan, on this issue. Okay. All right, go right ahead.
MR. BRENNAN: I'm now going to ask you for your first opinion. So, you shared that you have an opinion about whether that time stamp 2:27:40 was — whether or not that was the time the search "how long to die in the cold" occurred. Can you tell us, to a reasonable degree of scientific certainty, your opinion about whether that search "how long to die in the cold" occurred at 2:27:40 a.m. on January 29th, 2022?
MR. JACKSON: Same objection.
JUDGE CANNONE: Okay. The objections are overruled.
MS. HYDE: What I can state to a scientific degree of certainty is that that search occurred at 6:24 a.m. and was the last search in the tab that had been opened at 2:27.
MR. BRENNAN: Do you have an opinion to a degree of scientific certainty whether there were any other searches similar to "how long to die in the cold" that evening on that tab?
MR. JACKSON: Your honor, I'm going to — may I first inquire? You meant morning — as I said, sorry — that morning.
MR. BRENNAN: Can you repeat? I apologize — I disrupted my own train of thought. You have an opinion to a reasonable degree of scientific certainty whether there were any other searches on that tab before the final search at 6:24:47 — "how long to die in cold"?
MS. HYDE: Just ensuring that I have the question correct. You're asking if what else occurred in that tab prior?
MR. BRENNAN: Yes.
MR. BRENNAN: Did you develop a chalk that I showed you earlier?
MR. BRENNAN: Like to approach?
JUDGE CANNONE: Yes.
MR. BRENNAN: Again, I show you the same document. You recognize it?
MR. BRENNAN: What is it?
MS. HYDE: It is the exhibit that I created that was labeled Table 1 in the first report I had delivered.
MR. BRENNAN: And does that provide information that will assist us in understanding your opinion?
MR. BRENNAN: I'd move, subject to redaction, that this be introduced into evidence.
MR. JACKSON: Your honor, I have no issue with regard to the redaction aspect, but consistent with my prior objections, I would object to that.
JUDGE CANNONE: Okay. I'm going to allow this into evidence. Your rights are saved.
MR. BRENNAN: Thank you. Exhibit 82.
JUDGE CANNONE: Madam clerk, 82.
COURT CLERK: Exhibit 82. Thank you.
MR. BRENNAN: With the court's permission, I'd like to show Exhibit 82 to the jury.
JUDGE CANNONE: Okay.
MR. BRENNAN: Could we enlarge the first two? And we're going to need to get the right column in if we can, please. Thank you, Miss Gilman. Can you see it from there?
MR. BRENNAN: If you can walk us through how this exhibit assists us in understanding your explanation and the basis for your opinion.
MS. HYDE: This exhibit shows the Safari artifacts and artifacts of Google searches in Safari related to the two searches in question — "how long to die in CKD" and "how long to die in cold" — that occurred on the morning of January 29th, 2022, on the device under examination.
MR. BRENNAN: If I look at this chart and I see in the top left it says 2:27:40 a.m., and then under the search term it says "how long to die in cold" — how do I understand your opinion that that search didn't happen at 2:27:40?
MS. HYDE: This document is showing you the data as the parsed result. The source of that artifact — you see in the last column, it's a little bit cut off — is the BrowserState.db WAL file. That is how the data is stored: the search term "how long to die in cold" has an associated Mac absolute epoch timestamp that translates to 2:27 a.m. That is what is physically stored. But that does not tell us the meaning of that artifact. The meaning was determined through testing, following the NIST OSAC data generation guidelines for testing.
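[Technical note: Mac absolute time, referenced above, counts seconds from January 1, 2001 UTC rather than the more familiar Unix epoch of 1970. A minimal sketch of the conversion follows; the raw value used is hypothetical, chosen only so that it lands on the time discussed in testimony (2:27:40 a.m. EST is 07:27:40 UTC on January 29, 2022). The actual value stored in BrowserState.db is not quoted in the record.]

```python
from datetime import datetime, timedelta, timezone

# Mac absolute (Core Data) time counts seconds since 2001-01-01 00:00:00 UTC,
# unlike Unix time, which counts from 1970-01-01.
MAC_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def mac_absolute_to_datetime(seconds: float) -> datetime:
    """Convert a Mac absolute timestamp to a timezone-aware UTC datetime."""
    return MAC_EPOCH + timedelta(seconds=seconds)

# Hypothetical raw value, chosen to land on the time discussed:
# 665134060 seconds after the Mac epoch is 07:27:40 UTC on Jan 29, 2022,
# i.e. 2:27:40 a.m. EST.
ts = mac_absolute_to_datetime(665134060)
print(ts.isoformat())  # 2022-01-29T07:27:40+00:00
```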
MR. BRENNAN: In the second row, we see a time stamp on the left-hand column, 6:23:49, and it provides the artifact — the cache record. What does that mean under the search term?
MS. HYDE: Mm-hmm. These are iOS Safari cache records. A cache refers to something that a computer program wants quick and easy access to. When we search anything on our phones — a lot of times, be it in Google or in Safari, depending on what type of phone you own — it'll suggest what it thinks you're going to type in, so you can click it. It's trying to be helpful. In this instance, at 6:23:49, we have a cache record that indicates that Apple suggested the phrase "How long does it take to digest food?" And then under that, at 6:23:51, it says a recent web search — and the spelling has changed: "how long to die in cold." So, in reference to these two search terms, this is the first actual search, and it occurs at 6:23:51 a.m.
MS. HYDE: Upon beginning the typing of that phrase, Apple provided the suggested search, and the person inputting into the phone at that time continued to type out a phrase and finished it as "how long to die in cold." That is available to us in the mobile Safari plist, which tracks recent web searches. And then we'll see six seconds later we get that reference in the KnowledgeC database that I mentioned before. That's the next line. So the KnowledgeC database is Apple's way of keeping knowledge about what the user's doing — it sees and holds your knowledge. That's an easy way to remember it. That's what I teach my students. So that KnowledgeC.db tracks a lot of things for predictive purposes. And so six seconds later, it's logging that same search. So that search is being tracked in two places.
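[Technical note: a sketch of how an examiner might query KnowledgeC-style tracking. The table and column names here (ZOBJECT, ZSTREAMNAME, ZVALUESTRING, ZSTARTDATE) follow the Core Data naming commonly reported for KnowledgeC.db, but the exact schema varies by iOS version, so treat them as assumptions; a tiny mock database is built rather than a real extraction, and the timestamp is chosen to land on the time in testimony.]

```python
import sqlite3
from datetime import datetime, timedelta, timezone

MAC_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

# Mock KnowledgeC-style database (schema names are assumptions, not verified
# against this device).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE ZOBJECT (ZSTREAMNAME TEXT, ZVALUESTRING TEXT, ZSTARTDATE REAL)"
)
# 665148237 seconds after the Mac epoch is 11:23:57 UTC on Jan 29, 2022 --
# 6:23:57 a.m. EST, the KnowledgeC entry six seconds after the search.
con.execute(
    "INSERT INTO ZOBJECT VALUES (?, ?, ?)",
    ("/safari/history", "how long to die in cold", 665148237.0),
)

# Pull Safari-related entries and convert their Mac absolute timestamps.
for value, start in con.execute(
    "SELECT ZVALUESTRING, ZSTARTDATE FROM ZOBJECT "
    "WHERE ZSTREAMNAME LIKE '%safari%' ORDER BY ZSTARTDATE"
):
    when = MAC_EPOCH + timedelta(seconds=start)
    print(when.isoformat(), value)
```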
MR. BRENNAN: And so one search can leave artifacts in different locations on the database?
MS. HYDE: Correct. And there can be multiple traces or artifacts of the same user action in different places on a mobile device. And if we go down to the bottom, the last column at 6:24:47, we again see "how long to die in cold." And if you look in the first column, it's the same phrase.
MR. BRENNAN: Is that a coincidence that it appears at the bottom at 6:24:47 and also appears up at 2:27:40?
MS. HYDE: As mentioned before, through testing of the artifact for BrowserState.db, that artifact will hold the most recent search that happened in the tab. So it is logical that "how long to die in cold" was searched at 6:24:18, tracked in that mobile Safari plist, repeated in KnowledgeC — KnowledgeC made its tracking in KnowledgeC.db — and then that table for BrowserState.db, which again holds the time that the tab was opened or last backgrounded, updated just the website that was visited. That is why we get that 2:27 time. So the first row in the exhibit — I put it in chronological time order for exhibit purposes — is, as far as the data storage goes, actually the last thing to happen, because it's the update to what's in the tab.
MR. BRENNAN: And you offered that "how long to die in cold" is the last thing to happen. Do you have an opinion to a reasonable degree of scientific certainty the time when that search was made?
MR. ALESSI: Objection, your honor.
MS. HYDE: To a reasonable degree of certainty, I can say that "how long to die in cold" was searched at approximately 6:24 a.m.
MR. BRENNAN: Now, moving on from the 2:27 search, there are two other issues that you were asked to analyze. We talked about them at the beginning. Let's begin with whether or not you've looked at the data, analyzed it, and come to any opinions about whether that phrase "how long to die in cold" was user-deleted on that device.
MS. HYDE: So there are two really key elements as to whether that was user-deleted. If the question is, did I come to an opinion? Yes.
MR. BRENNAN: And can you explain how you came to that — an opinion that we'll ultimately share with the jury?
MS. HYDE: The first line in that exhibit — the ending of that, if you remember, said BrowserState.db-WAL. That's important, so we'll talk about that in a second. But the first thing I want to bring up is how the database works, because I didn't just look at the tool result. I extracted that database and did a deeper analysis on it using Sanderson's tool, because that information was in that -WAL file. A SQLite database — which is a specific way of storing data, very common on mobile phones, actually one of the most common — has two modes of committing data. The most recent mode — and the one in use here — uses what's called a write-ahead log. So before data is committed, it is written to a write-ahead log. I'll explain it.
MS. HYDE: So let's say we're in a restaurant and we're sitting at a table and we're ordering food. Our table is the database table. Data is going to come to the table, and that's going to be our food. So when we order food, let's say we've got a chicken sandwich, a burger, and a pizza coming to the table. The kitchen puts the chicken sandwich, the burger, and the pizza in the warming area — that's the write-ahead log. The server grabs the food from there. It's where data sits before it goes to the table, just like your food goes to that serving station before your server brings it to your table. The server brings it to the table, and now that table's got its burger, its chicken, and its pizza. Another table orders waffles and pancakes — they think it's breakfast. The waffles and pancakes are made by the kitchen.
MS. HYDE: They're put on the serving area. The waitress comes over. "This pizza has pepperoni. I don't eat pepperoni. Can you send it back?" The waitress picks up that pepperoni pizza and sends it back. That's a deletion. The pepperoni pizza was just removed from the table, right? We're deleting it — we said it's not what we wanted, and the waitress is removing it. She brings it back to that serving area. At that moment, the serving area has the deleted pizza, but also the waffles and pancakes that are waiting to go out to the table. That's how a write-ahead log works. Any changes that are happening sit in the write-ahead log until the database is closed, and then when it's reopened, all those changes are made — both additions and deletions.
MS. HYDE: So often when a phone is imaged — when we make a forensic image — the applications are still open, so we have those WAL files. When we look in our Safari browser, the database knows to read the data to us in its current state. It tells us where the waffles and pancakes are going — they're going to that table. So it shows them as if they're on the table, but they're not on the table yet. So if, when we parse the data, we look at the restaurant and don't look at the serving area, we would only see the chicken and the burger at that time. The pizza got sent back; the waffles and pancakes haven't come out yet. So when we get the data, we get two files. We get the restaurant — that's the database — and we get the serving area — that's the WAL file.
MS. HYDE: Just because it's in the WAL file does not mean it's deleted. It means that that record wasn't where it naturally sits yet. That WAL file can contain deleted data and also data that hasn't yet been delivered to the database — it consists of both. So an assumption that the data was deleted just because it's in the WAL file is actually not true. And the peer-reviewed paper I mentioned earlier, which was accepted in Forensic Science International, actually has a section that explains this exact process for SQLite databases — about how we refer to that as recovered and not deleted. Through that entire document we don't use the phrase "deleted." We refer to things in a state of recovery, because we need to determine why it's there.
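[Technical note: the write-ahead log behavior described above can be observed with any SQLite database. A minimal sketch follows; the file name "BrowserState.db" and the table are illustrative only, not Safari's actual schema. Committed records live in the "-wal" file — the serving area — until a checkpoint moves them into the main database file, the table.]

```python
import os
import sqlite3
import tempfile

# Illustrative file name only; this is not Safari's real schema.
dbpath = os.path.join(tempfile.mkdtemp(), "BrowserState.db")
con = sqlite3.connect(dbpath, isolation_level=None)  # autocommit mode
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE tabs (id INTEGER PRIMARY KEY, last_search TEXT)")
con.execute("INSERT INTO tabs (last_search) VALUES ('example search')")

# The committed row is visible to any reader...
cur = con.execute("SELECT last_search FROM tabs")
print(cur.fetchone()[0])              # example search
cur.close()

# ...yet it still sits in the -wal file (the serving area) until a
# checkpoint delivers it into the main database file (the table).
wal = dbpath + "-wal"
print(os.path.getsize(wal) > 0)       # True: the record lives in the WAL

con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
print(os.path.getsize(wal) == 0)      # True: checkpoint delivered it
con.close()
```

The point mirrors the testimony: a record found in the WAL may simply be the newest data not yet checkpointed, not evidence of deletion.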
MS. HYDE: So we can't say that it's deleted just because it came from the WAL file. Some tools will automatically indicate to the examiner that they need to dig deeper into that. I did do further analysis by looking at the WAL file. That particular item — and it's a little bit more complex — actually exists on multiple pages, because it moves as the database is being created, but all of them have the same unique identifier. So it's the same entry. It's not more than one search of "how long to die in cold." If there was more than one search that had occurred where it became the top element in that database, we would see that.
MS. HYDE: There are, as I mentioned, other elements carved from the database — from the WAL file — such as the search for the YouTube video "It's Raining Men," because at some point that was the last viewed item in a tab when that tab was retired — right before, also at 2:27 a.m., but a couple of seconds before the "how long to die in cold" search. So we can see that that tab is put to the rear. The newest tab that's open is in the WAL file because it's the most recent thing — like our waffles and pancakes, it hasn't been delivered to the table yet. So it can sit in the WAL file for that reason. That doesn't indicate it's deleted. The second reason is there is no user interaction in the interface to delete a tab.
MS. HYDE: You can open and close a tab, and open and close tabs are tracked in that database, but you cannot delete a tab. There's no — if you were to pick up your iPhone, I know you don't have them right now, but if you were to pick up your iPhone and look, that wouldn't be a physical option you have in the interface. So it could not be deleted by a user through the interface, for that most basic reason.
MR. BRENNAN: So a user — could they delete a tab if they wanted to?
MS. HYDE: There's no actual option to delete a tab for your web history. You could clear the cache, for example. Most of us are familiar with clearing the cache in your web history. You might do it because you don't want your kids to see what you searched, or maybe your kids don't want you to see what they searched, so they'll delete it. But you can't delete the tab history. It's just what tabs are there. The device is tracking that.
MR. BRENNAN: When you looked at the software, whether it was AXIOM or Cellebrite, when you look just at the report, not the data itself, do either of those reports have any indication noting that that Safari search is characterized as deleted?
MS. HYDE: Cellebrite denotes it with a red X, which means that it is recovered. AXIOM indicates that it is carved. In the same paper that I referenced earlier that I authored, there's a chart in there that shows the distinction between how different tools — some will use red X's, some will use question marks, some will list "carved" or "parsed." So every tool chooses to do that differently. That does not mean deleted. It means it needs further analysis.
MR. BRENNAN: Should an examiner who looks just at the reports of the software assume that if it is marked as recovered in Cellebrite or carved in AXIOM —
MS. HYDE: An examiner should never assume something's deleted without doing a manual examination.
MR. BRENNAN: The manual examination — is that a method that's used, that's generally accepted in the forensic data field?
MS. HYDE: The NIST Scientific Foundation paper, as well as the data set generation guidelines, both speak to conducting testing to verify and validate, and it states clearly that an examiner is to verify and validate findings and to determine meaning.
MR. BRENNAN: So given that you've used multiple tools to look at reports regarding this phrase "how long to die in cold," and have seen indications or characterizations of recovered and carved, and in addition to the fact that you actually analyzed the data — did you arrive at an opinion to a reasonable degree of scientific certainty whether or not any user deleted the phrase "how long to die in cold"?
MR. BRENNAN: Sorry, trying to clarify. Pardon me. Did you come to an opinion to a reasonable degree of scientific certainty whether any user deleted the phrase "how long to die in cold"?
MR. BRENNAN: And what is your opinion?
MS. HYDE: My opinion is there was no deletion that occurred by the user because it is not something a user can delete.
MR. BRENNAN: Finally, I want to ask you about your analysis of the phone records on this phone attributed to Miss McCabe. Did you go through an analysis of the phone logs and the phone records?
MS. HYDE: I analyzed the call logs and the phone logs on the device I was given, which I was told belonged to Miss McCabe.
MR. BRENNAN: Yes. Would a suggestion or a claim that a user regularly deleted a number of phone calls — would that be accurate or inaccurate in your experience?
MR. BRENNAN: Can you explain to us why and how you came to that conclusion?
MS. HYDE: Yeah, this is actually really interesting because of what appears in the forensics tools and additional artifacts that I looked at that are not parsed by the forensics tools. So the call logs — when you look at it, it appears — I don't remember if it's 8:57 or 8:59 a.m. in my mind at the moment, but approximately either 8:57 or 8:59 is the earliest phone call we see on the 29th of January. We see no call logs before, but we do see FaceTime logs before. So if an examiner again was making assumptions without testing or reviewing, the first assumption might be, well, things must have been deleted because there's data that exists before but not current data. So then you have to ask: how do phone logs work? And again create test data and review.
MS. HYDE: The situation here is that there are three types of call logs on this particular phone: regular calls, incoming and outgoing; FaceTime video chats, incoming and outgoing; and FaceTime audio chats, incoming and outgoing. The storage for each of those is actually 200 records — you can only store up to 200 of each. Now, if you were a user of, let's say, WhatsApp or Signal or Telegram, those would each count too, and they'd get their own logs. So when we look at the database for the number of calls between 8:59 a.m. or 8:57 — again, I apologize, I'm not looking at that precise time — on the 29th of January in 2022 and the imaging of the phone, there are exactly 200 calls still in that record. There are 199 FaceTime video calls and only 27 FaceTime audio calls.
MS. HYDE: So the question is, how do you validate that that's what's happening? We actually have call logs that we can pick up in other places — recent call logs covering the last 7 to 30 days, depending on the exact version of the device and how the biomes are running. We can actually see call logs — the number they go to and whether they're incoming or outgoing — in the biomes. So we can see the history going back for the entirety of that day on January 29th, back to midnight. Again, my scope was from midnight until noon. So we can actually see all of the calls. They're just not all in CallHistory.storedata. Now, CallHistory.storedata from a user perspective — that button on your phone doesn't say "call history." It actually says "Recents."
MS. HYDE: And what it's determining is: "Recents" is the most recent 200 of each category. Now, how did I determine this further? There is a running log that exists on a phone — it lasts about three days. It's called the unified log. The unified log tracks multiple things that are happening on the phone. In manual analysis of the unified log from this phone, I can clearly see that each time a 201st call comes in, the oldest call gets deleted. So it's constantly just the last 200 calls. It may not be typical that we see 200 calls in three days, but on this device, we do see 200 regular phone calls in the three days between 8:59 a.m. on the 29th and when the phone was imaged.
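[Technical note: the "Recents" retention described above behaves like a fixed-size buffer capped at 200 entries per category, where a new call evicts the oldest. A minimal sketch of that eviction behavior — not Apple's actual implementation:]

```python
from collections import deque

# A log capped at 200 entries; deque(maxlen=...) automatically discards
# the oldest item when a new one is appended past the cap.
CAP = 200
recents = deque(maxlen=CAP)

for call_id in range(1, 202):   # 201 calls arrive, oldest first
    recents.append(call_id)

print(len(recents))   # 200: the cap is never exceeded
print(recents[0])     # 2: call 1 was evicted when call 201 arrived
print(recents[-1])    # 201: the newest call is retained
```

This is why the absence of calls before a cutoff time need not indicate user deletion: the system itself rolls records off the end of the log.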
MR. BRENNAN: So, do you have an opinion to a reasonable degree of scientific certainty whether there was any user deletion from the phone call log that morning?
MR. BRENNAN: And what is that opinion?
MS. HYDE: The opinion is that I can see, utilizing the unified logs, that it's done by the device itself — the system is deleting the 201st call every time a new call is received or placed.
MR. BRENNAN: Thank you. I have no further questions.
JUDGE CANNONE: All right. Can I see counsel regarding scheduling and a few other things? All right, Miss Hyde, I'm going to let the jurors have lunch. You can walk out after meeting recess 45.
JUDGE CANNONE: You are unmuted. All right, jurors. Um, as you know, I think I told you in the beginning, one of my functions is to make sure that a case is tried fairly and efficiently and not to waste anybody's time. So, if I get frustrated at lawyers about something, I try not to and I don't think I am, but if I seem to cut somebody short, the lawyers are just doing the job when they request sidebar conferences. They're just doing the job. I am going to try and keep all sidebar conferences to a minimum, but the lawyers are just— Okay. All right. Go ahead.
MR. ALESSI: May I, your honor?