Jessica Hyde - Direct/Cross
COURT CLERK: This court is now in session — case 117, Commonwealth versus Karen Read.
JUDGE CANNONE: Good morning. I appreciate your patience. Let me ask you those three questions. Is everyone able to follow the instructions and refrain from discussing this case with anyone?
PARENTHETICAL: [Jurors responded yes or nodded affirmatively.]
JUDGE CANNONE: Were you also able to follow the instructions and refrain from doing any independent research or investigation into this case?
PARENTHETICAL: [Jurors responded yes and nodded affirmatively.]
JUDGE CANNONE: Did anyone happen to see, hear, or read anything about this case since we left yesterday?
PARENTHETICAL: [Jurors responded no or shook their heads.]
JUDGE CANNONE: So you'll see that there's a juror who is not with us at this point. There are good and sufficient reasons for that juror to be excused. It's personal to that juror. Do not concern yourselves with it any more, okay? And we will move on.
JUDGE CANNONE: All right, your first witness, please, Mr. Lally.
MR. LALLY: Yes, Your Honor. The Commonwealth calls Miss Jessica Hyde to the stand.
COURT OFFICER: Right this way. Watch your step, please.
COURT CLERK: Do you swear to tell the truth, the whole truth, and nothing but the truth?
PARENTHETICAL: [The witness was sworn.]
COURT CLERK: Thank you.
JUDGE CANNONE: All right, Mr. Lally, whenever you're ready.
MR. LALLY: Ma'am, if you want, that microphone is adjustable. You can put it pretty much anywhere you're comfortable with.
MR. LALLY: Good morning.
MR. LALLY: Could you please state your name and spell your last name for the jury?
MR. LALLY: And what do you do for work, ma'am?
MS. HYDE: I'm a digital forensics examiner. I own a digital forensics firm that does training, services, and research.
MR. LALLY: And the digital forensics firm that you own — what's it called?
MR. LALLY: And how long have you had your own firm?
MR. LALLY: Now, Miss Hyde, if I could ask you to talk a little bit about your educational background and your work history, starting with undergraduate — where did you go, and what, if any, degree?
MS. HYDE: My undergraduate degree is a Bachelor of Science in Electronics Engineering Technologies. I was on active duty in the Marine Corps at the time, so I went to several schools. My final graduation was from EC—
MR. LALLY: I'm going to ask you to slow down your speech a little bit if you can.
MS. HYDE: Absolutely. So my final graduation was from ECPI College of Technology. However, I attended several universities because I was on active duty at the time.
MR. LALLY: And following your undergraduate, where did you go from there?
MS. HYDE: From there, I went and completed my tour on active duty. I went into work in the field, and eventually then did my Master's in computer forensics from George Mason University.
MR. LALLY: And when was it that you received your Master's in computer forensics from George Mason?
MR. LALLY: And as far as — you mentioned that you had started working a little bit post-military but before going to grad school. Is that right?
MR. LALLY: And where did you begin when you started working in 2010?
MS. HYDE: I was a contractor for a company called American Systems, working at the Terrorist Explosive Device Analytical Center, called TEDAC.
MR. LALLY: And following — sort of going forward to when you received your graduate degree — if you could speak a little bit about your work history since then.
MS. HYDE: Sure. So I continued to work at TEDAC while I had completed my degree. That was doing digital forensic analysis on phones that were connected to improvised explosive devices. So most of the phones I received were post-blast, and I would recover data from that and analyze that. From there, once I had completed my degree, I went and worked in the regular private sector. I worked for Ernst & Young, now called EY. There I was one of their mobile forensics experts, working a variety of cases — everything from insider threat and insider trading. Then I really missed doing more public service work, so I left to go to the National Media Exploitation Center, which is a — I worked there as a contractor under a company called Basis Technology.
MS. HYDE: That organization supports 22 government agencies in the intelligence community, supporting the things that are either the highest priority or the things that other organizations can't support. So we were the center of excellence for the US government in mobile exploitation, and I ran that team there.
MR. LALLY: And then from there?
MS. HYDE: From there I went and became the Director of Forensics for Magnet Forensics, which is a company that develops tools for digital forensics. There I was in a position where I did a large amount of research into new artifacts, unsupported access to devices, et cetera. I was there for five years before starting my own firm.
MR. LALLY: And then — no, I'm sorry, let me start again. As far as what, if any, teaching experience you have in relation to that?
MS. HYDE: That is an excellent question. I've actually been an adjunct professor at George Mason University since 2016. I've taught 20 terms in their graduate program. I teach the mobile forensic analysis course in their digital forensics master's program.
MR. LALLY: In addition to your work and your teaching and all your training, are there any professional organizations related to your field that you're a member of?
MS. HYDE: Actually, several. I am a member of the Organization of Scientific Area Committees on Digital Evidence, which is a part of NIST, the National Institute of Standards and Technology. I am a member of the Scientific Working Group on Digital Evidence, also called SWGDE. I am the chair of a project called DFIR Review that does peer review of academic work that's done by practitioners in the form of blogs. I've been chairing that project since 2018, when it started, with our first publications in 2019. I am the first VP at the international executive committee level of the High Technology Crime Investigation Association, also called HTCIA, which I've been a member of since 2012. And I've been elected — I'll ascend to president next September.
MS. HYDE: I also am an associate member of the American Academy of Forensic Sciences, AAFS. And I just recently finished my term as an associate editor for Forensic Science International: Digital Investigation, which is a peer-reviewed journal in the academic space of digital forensics. And I'm still a reviewer, but I stepped down from my duties as an associate editor.
MR. LALLY: Bringing me to my next question — what, if any, articles or publications have you published in relation to this?
MS. HYDE: Great question. I've published a paper on the standardization of data recovery. I worked on that with Dr. Eoghan Casey and Dr. Nelson. That was published in the Forensic Science International: Digital Investigation journal. I also was the second author on a paper related to the use of AI — artificial intelligence — in digital forensics. And I've published numerous articles, blogs, white papers. I probably have published about 20 of those. But just speaking to the peer-reviewed work, there are two.
MR. LALLY: Can you explain to the jury a little bit about what "peer-reviewed" means — what that process is?
MS. HYDE: Absolutely. So a peer-reviewed work is when you have a novel piece of research that you then submit — that novel, never-been-presented work — to a journal. It then gets reviewed by numerous academic peers. It goes through cycles of revision where they ask more questions. On the standardization of file recovery paper, that process took about 14 months for it to be peer-reviewed by experts in the field, and then it is published in a journal.
MR. LALLY: So you've had both articles that you've authored that have been peer-reviewed and published, as well as you've performed the role of reviewer of other people's work?
MS. HYDE: Correct. In addition to the publications I've done, I've reviewed at least 20 to 30 articles, in a combination of support for the Forensic Science International: Digital Investigation journal and DFIR Review.
MR. LALLY: Now, if you could explain to the jury — when you say the term "digital forensics," what do you understand that term to mean with relation to what you do?
MS. HYDE: I would describe digital forensics as the analysis of data from any storage medium that can contain data — be that a mobile phone, computer, or cloud — for the intention of that data being used in court.
MR. LALLY: And with reference to your firm, Hexordia — where is that located? What state?
MS. HYDE: We are headquartered in New York State, in Bridgeport, New York, which is outside of Syracuse. We also have an office location in the DC Metro Area in Tysons Corner. But we have employees in six states.
MR. LALLY: And with respect to your duties and responsibilities in regards to your firm, what is it that you do?
MS. HYDE: Great question. So I do perform digital forensic analysis on cases. I currently work on two US government contracts, supporting digital forensic analysis and novel research and exploitation of mobile devices. I'm named personnel on those two contracts, and I manage another three. I also develop training for mobile forensics. My courses have been taken by people all over the country, and Hexordia also delivers that training. I also do research and presentations for journals, conferences, et cetera. So a combination of casework, education, and research.
MR. LALLY: Now Ms. Hyde, if I could turn your attention to May of 2023. At some point during that month, were you contacted by the Norfolk District Attorney's office in relation to this case?
MR. LALLY: And eventually was there a contract that was executed and you agreed to do some work in relation to one specific area of this case?
MR. LALLY: And as far as the specific information in relation to this case, what was it that you looked at? Sorry, let me start by saying — what was it that you were asked to —
MS. HYDE: I was asked to look at two specific Google Search terms that took place on January 29th.
MR. LALLY: And with regards to the question that was asked, was it a question that was posed or were you asked to find some sort of specific response?
MR. LALLY: Now with regard to what, if anything, was then provided to you, or what if anything did you review in the course of your analysis?
MS. HYDE: I received two reports. I received the affidavit from Richard Green and I received the report from Trooper — I don't want to butcher his name, but something along the lines of Nicholas Gino.
MR. LALLY: And as your understanding, Mr. Green was a person that was retained by the defense in this case?
MR. LALLY: So you essentially received reports from the state police and a witness for the defense. Correct?
MR. LALLY: In addition to those two reports, what if anything else did you receive in relation to your analysis here?
MS. HYDE: On May 10th I received via U.S. Postal mail a copy of the drive with data from a phone that was identified to me as belonging to Jennifer McCabe. It was a full file system extraction from a GrayKey.
MR. LALLY: And so when you receive that information, what if anything did you do?
MS. HYDE: The first thing I did with that data set, after having been retained, was I made a forensic copy. I used an ARCpoint Atrio to copy the data from that original disc, which was then placed in our safe, to our working copy drive.
MR. LALLY: And that ARCpoint Atrio — what is that?
MR. LALLY: And then once that process was completed, what did you do?
MS. HYDE: I immediately verified the hash values of my new image to make sure that it matched the hash value of the original image.
MR. LALLY: And what is the hash value?
MS. HYDE: A hash value is an algorithmic numerical representation of the data. So when you have a hash value, that value is unique to a certain set of data. So when you have that same hash value matching, you're ensuring that your copy that you made was correct and not in any way damaged or corrupt.
MR. LALLY: And as far as matching those hash values, were you able to do so?
MS. HYDE: Yes, ma'am. The hash values of both the copy I made to work from and the original matched. They also matched the document that came along with the drive stating the original hash value.
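The verification step Ms. Hyde describes — hashing both the original and the working copy and checking that they match each other and the documented value — can be sketched in Python. This is a generic illustration of the technique, not her actual tooling; SHA-256 is chosen here as one common hash algorithm, and all file names are hypothetical.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 so a large forensic image never loads into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(original, working_copy, documented_hash=None):
    """The copy verifies only if original and copy hash identically and,
    when a documented hash accompanies the evidence, both match it too."""
    h1 = sha256_of(original)
    if h1 != sha256_of(working_copy):
        return False
    return documented_hash is None or h1 == documented_hash.lower()
```

Any single changed byte in the copy produces a completely different hash, which is why a match establishes the copy is bit-for-bit identical.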
MR. LALLY: And then what did you do from there?
MS. HYDE: From there I processed the image in several forensics tools. I used Cellebrite Physical Analyzer, I used Magnet AXIOM, I used a tool called ARTX — A-R-T-X — I utilized a tool called iLEAPP, and I later used a tool called Sanderson Forensics SQLite, but I did not use that tool at this point.
MR. LALLY: Now with reference to the tools that you used that you just went through — are those fairly common tools or are those tools that are commonly used within your industry?
MS. HYDE: Yes, they are commonly used digital forensics tools that are very standard for other forensics examiners to use on mobile exploitation.
MR. LALLY: And as far as those different types of tools — my question is, there were a variety of different tools that you used. Correct? And why were you using the different tools?
MS. HYDE: Different forensics tools have different capabilities and look at data a little bit differently. Phones have thousands of applications. If you look at the Apple App Store or the Google Play Store, there are roughly six million apps available between the two. The commercial forensics tools can only support so many applications, so they each work with and support different bits of data. So it's important to use multiple tools so you can see the results from different tables, different data sets, and be able to compare those results and enhance those with manual analysis.
MR. LALLY: And with respect to using those sort of different varieties of tools, from Cellebrite to AXIOM, ARTX, et cetera — is that something that you typically do as far as your normal process of conducting a forensic analysis?
MS. HYDE: Yeah, that's very typical for me to process with multiple tools, to ensure that I'm getting the most complete interpretations from forensics tools. Of course you go beyond that with your analysis, but it is absolutely pertinent to do that. I would run some of those tools before others because of speed, so I can begin analysis while other tools are running.
MR. LALLY: And if I could ask you just about the last one that you mentioned just briefly — as far as the Sanderson tool — can you explain to the jury what that is and how that interplays with the other tools and analysis?
MS. HYDE: The Sanderson tool is meant to look at a specific type of data structure called a SQLite database. SQLite databases are very nuanced and this particular tool allows you to take that database and explore it at a deeper level than the other forensics tools allow.
MR. LALLY: And just for the record, when you use that term as far as the SQLite database, how is that spelled?
MS. HYDE: It is capital S-Q-L, lowercase i-t-e. SQLite is also commonly pronounced "sequel-lite," so both pronunciations are acceptable.
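The kind of low-level SQLite exploration the Sanderson tool automates can be illustrated with Python's built-in `sqlite3` module. The table and column names below are invented stand-ins; real Safari databases have different schemas.

```python
import sqlite3

# Build a tiny stand-in database in memory; real browser databases differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE history (id INTEGER PRIMARY KEY, url TEXT, visit_time REAL)")
conn.executemany(
    "INSERT INTO history (url, visit_time) VALUES (?, ?)",
    [("https://www.google.com/search?q=example", 700000000.0),
     ("https://example.org/", 700000100.0)],
)

# Enumerate the schema first, the way a low-level viewer would before parsing rows.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]

# Then pull rows in timestamp order, the basic move of timeline analysis.
rows = conn.execute(
    "SELECT url, visit_time FROM history ORDER BY visit_time").fetchall()
```

An examiner working below the level of the commercial tools queries `sqlite_master` and the raw tables directly, rather than trusting a parser's interpretation.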
MR. LALLY: Now once you received the extraction, made the copy, and then ran it through those variety of tools — what is it that you specifically were looking at, and what is it that you were trying to ascertain, or the question that you were trying to answer?
MS. HYDE: So in looking at that analysis, I was focused specifically on the data that took place on January 29th. So the first thing I did was I limited my search within those tools to that period, and I looked at the artifacts that they were parsing pertaining to Safari history, as Safari was the browser that had been in use at that time.
MR. LALLY: Now in addition to looking at those searches, what if any information were you looking at in relation to deletion?
MR. LALLY: In addition to looking at those materials, as far as those searches were concerned, what if anything else were you looking at regarding that data with respect to deletion?
MR. LALLY: Deletion.
MS. HYDE: I apologize. Thank you. Yeah, so one of the things that was asked — one of the things that was in the affidavit from Richard Green — was that it was stated that there was belief that one of the search terms had been deleted. So I was specifically looking at that search term to see what the reason was for the suspicion of deletion. The tools denote things based on how they are running as an automatic process. Tools are designed to parse through large amounts of data to make it easier, so the tools will often flag certain data as "recovered," and sometimes there is confusion between what "recovery" and "deletion" means. That actually was the subject of the paper I referenced earlier that I co-authored.
MS. HYDE: And the tools are sometimes misinterpreted — that that statement of "recovery" means deletion. So I was exploring that.
MR. LALLY: Now you mentioned specifically there were two searches of interest that you were exploring. Is that correct?
MR. LALLY: And what were those two searches, and what if any information did you have in relation to those?
MS. HYDE: I had the information from the two reports that had been written in terms of what they found. And the two search terms — one was "hos long to die in cold" and the other was "how long to die in cold."
MR. LALLY: And if you could walk the jury through, as far as your analysis, what you first observed and what your analysis consisted of as it evolved.
MS. HYDE: Absolutely. So when I first looked at it, some of the tools surfaced the data from a specific storage called — I'm just making sure I get the entire path for you — com.apple.MobileSafari.plist. It is very common in mobile devices for names to look like a reverse domain name. You normally would go to a site like CNN.com; these names are typically the opposite, so com.apple is going to be representative of a native Apple application. This particular plist was related to that. A plist — or property list — is a data structure that's unique to Apple devices. And there is data pertaining to search history — not just Google searches — stored there. There's also — I found evidence of these search terms in KnowledgeC.
MS. HYDE: The KnowledgeC database is an Apple database that is meant to store information about users, so it can determine future functionality — what you're looking for, et cetera. So that's the KnowledgeC DB. It's more of a system-level artifact. And then there was also — and these were the observed search terms — most of the tools picked up both of those search terms in there. Not all tools did pick up the one that was from the WAL, which was the instance of "hos long to die in cold" — that's how I'm pronouncing the "hos" — if you would like me to clarify it a different way, that's fine. That particular search term was recovered from a SQLite database, the one I later explored. And Cellebrite specifically demonstrated and showed that particular search term as a suspended-state tab.
MS. HYDE: And that's a really intricate artifact. Whenever we're ready to explore that — that is the one that was marked as recovered. So those were my initial findings, just looking at what the automated tools parsed. However, Cellebrite was the tool that found that particular search term.
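Property lists of the kind just described can be read with Python's standard `plistlib`. The sketch below serializes and re-parses a small binary plist as a stand-in; the keys shown mirror the search-history structure she describes but are assumptions, and real com.apple.MobileSafari.plist contents vary by iOS version.

```python
import plistlib

# Reverse-domain naming: com.apple.MobileSafari identifies a native Apple app.
# Build a small stand-in property list (keys are illustrative, not Apple's exact schema).
sample = {
    "RecentWebSearches": [
        {"SearchString": "example query", "Date": "2022-01-29"},
    ],
}

# Serialize to the binary plist format found on devices, then parse it back,
# the same round trip a forensic parser performs on an extracted file.
blob = plistlib.dumps(sample, fmt=plistlib.FMT_BINARY)
parsed = plistlib.loads(blob)
searches = [entry["SearchString"] for entry in parsed["RecentWebSearches"]]
```

Binary plists begin with the magic bytes `bplist`, which is how tools identify them inside an extraction regardless of file name.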
MR. LALLY: Now, you used a term in there — as far as a WAL, or a write-ahead log file — is that correct?
MR. LALLY: Can you please explain to the jury what your understanding of that term is, based on your training and experience?
MS. HYDE: Absolutely. This is a really interesting data structure, the way it works. So the way SQLite databases work is: the data, before being committed to the database — which is almost like an Excel spreadsheet, if you think about that; each table is like a page on an Excel spreadsheet — the data before going there goes kind of to almost like a text file, like if you were to have a doc of writing all the changes that need to happen to that Excel spreadsheet, and they're not made to that spreadsheet until it closes. So what happens is, as you're doing things — either adding a website by searching for it, or closing a tab, or deleting a text message — in a SQLite database in general, both things you add are written to that text document, or things you delete. So they all hang out there together.
MS. HYDE: So it's any changes that are going to be made since you started using that application, until it's closed. Then they change there, and then it reopens. So to make an analogy: if you're in a restaurant and you order some food — the food being the data, and you at the table being the table that the data is going to — when you request that data and you say "I'm adding it," it goes to the area where the waiter, or waitress, or server is going to grab it from, right? That warming station — that is almost like our write-ahead log. And so different tables are ordering food, and food is constantly being put there. If somebody — maybe their steak came out rare and it needs to be cooked more — when they send it back, it'll also go to that warming area.
MS. HYDE: So you can see there that you've got things waiting to go out to the tables and things being sent back to the kitchen. So simultaneously, that storage area contains both the newest stuff waiting to go out and the stuff coming back. And that's really what a WAL file winds up containing: your newest Google searches, your newest text messages, et cetera, as well as anything that you've deleted — that you've said was deleted — because the rest in there is going to be "delete this entry." And so then all of those don't happen until the application is closed and reopened. If the application's been closed when we do the extraction, we actually don't get the WAL file. We only get the WAL file if, at the time of extraction, that database had not been closed.
MS. HYDE: So that just means that Safari, in this instance, was still running when the extraction was done. Most of you, when you use your phones, you leave the applications up — so because of that, we have a WAL file. And so that WAL file is going to contain both data that's been requested to be deleted, and new data. Now, when I say "deleted" in this instance, I don't mean like you as a user saying "delete this old text message." What I mean when I say "delete it" in this instance is "removed from the database." So when we're talking about something like a Safari tab — deletion from the database can occur because you close the tab. We all open tabs when we use our browsers on our phone, and we all close them. And so closing, in that instance, would be a deletion in the database.
MS. HYDE: But what that's doing is not a user requesting deletion. So I just want to be clear on that term.
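The write-ahead-log behavior described above can be observed directly with `sqlite3`: in WAL mode, both inserts and deletions accumulate in a side "-wal" file next to the database until the log is checkpointed, exactly the holding area of the restaurant analogy. A minimal sketch, with generic file and table names:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db)
conn.execute("PRAGMA journal_mode=WAL")          # switch to write-ahead logging
conn.execute("CREATE TABLE searches (term TEXT)")
conn.execute("INSERT INTO searches VALUES ('first query')")
conn.execute("DELETE FROM searches")             # deletions land in the WAL too
conn.commit()

# While the connection is open, the pending pages sit in demo.db-wal —
# the file an examiner only receives if the database was still in use
# at the time of extraction.
wal_size_open = os.path.getsize(db + "-wal")

conn.close()  # closing the last connection checkpoints the WAL into the main file
```

Because the WAL holds the changes rather than the final state, an examiner who captures it can see records that were slated for removal alongside the newest additions.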
MR. LALLY: And to that point — so when you say something has been deleted, specifically with reference to the write-ahead log, or the WAL file, that's not something that's necessarily initiated by the user themselves?
MR. JACKSON: Your Honor, just in regard to a side issue, may we approach?
JUDGE CANNONE: Sure.
MR. LALLY: Now, Miss Hyde, you used the term — earlier in your testimony — regarding something within your field called an "artifact." Is that correct?
MR. LALLY: And can you explain to the jury what you understand that term to mean?
MR. LALLY: And can you give the jury an example of what an artifact might be, or what an artifact might look like?
MS. HYDE: Absolutely. An artifact of you sending a text message would be that we would find the database where text messages are stored, and inside that database we would see the time and date that you sent it, who received it and who it was sent to — it might not be their name; it might be a numerical representation that we have to tie to another database. Then we would have the message itself — it may be in plain text, or it may be encrypted or encoded. And then we may have something called a blob, and that would be a reference to data that's too large to store in the database — it would be elsewhere. So that would be if your text message included, let's say, a picture or a video.
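The text-message artifact she describes — a row carrying a timestamp, a numeric reference to a party that must be resolved through another table, the message body, and a pointer to any oversized attachment stored elsewhere — can be mocked up as a small SQLite schema. The schema below is invented for illustration and only echoes the elements she lists, not Apple's actual message database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema: messages reference a numeric handle, not a name,
# and attachments ("blobs" too large for the table) live at an external path.
conn.executescript("""
CREATE TABLE handles (id INTEGER PRIMARY KEY, address TEXT);
CREATE TABLE messages (
    id INTEGER PRIMARY KEY,
    sent_at TEXT,
    handle_id INTEGER REFERENCES handles(id),
    body TEXT,
    attachment_path TEXT
);
INSERT INTO handles VALUES (1, '+15555550123');
INSERT INTO messages VALUES (1, '2022-01-29 06:23:51', 1, 'on my way', NULL);
""")

# Joining resolves the numeric handle back to an address, the tie-to-another-
# database step she mentions.
row = conn.execute("""
    SELECT m.sent_at, h.address, m.body
    FROM messages m JOIN handles h ON m.handle_id = h.id
""").fetchone()
```

This is why an examiner rarely reads one table in isolation: the artifact's meaning comes from joining the pieces back together.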
MR. LALLY: Now, in regard to your analysis and your ultimate conclusions in this case — you wrote a report, is that correct?
MR. LALLY: And within that report there are a couple of tables and a specific figure, is that correct?
MR. LALLY: Your Honor, with the court's permission — if I could, I'd like to publish to the jury Table 1 from Miss Hyde's report.
JUDGE CANNONE: Okay.
MR. LALLY: Miss Hyde, do you have a copy of your report with you as well?
MR. LALLY: Your Honor, with the court's permission — just because it may be a little difficult to see from where she's seated — if she could refer to the table in her—
JUDGE CANNONE: Thank you.
MR. LALLY: Miss Hyde, what's up on the screen? And I'm sorry — there should be a laser pointer on the desk before you. Do you see that?
MR. LALLY: Using that laser pointer, if you could draw the jury's attention — or direct the jury's attention, I should say — with respect to: what are we looking at in this specific data?
MS. HYDE: Absolutely. So this table here has a couple of different elements that I'm showing, in terms of what searches were searched, where and when, and what source of data they came from. So if we look at this first search — I have them in the timestamp order that is associated with the artifact. I want to clarify that we need to discuss the meanings of these timestamps as we go. So this first one that you're going to see — and I'm just going to reference my paper because of the size — is 02:27:40 a.m. So that's 2 o'clock in the morning, 27 minutes, and 40 seconds. And it is a search for the term "hos long to die in cold." And that timestamp in this instance is from the browser_state.db, which is right here. The browser_state.db is an artifact that speaks to when tabs are moved.
MS. HYDE: So when you're using your browser and you open different tabs — you may have a search — this time pertains to the time that that tab moved. It could be lots of things: it could be you switch tabs, it could be you close the tab, it could be you minimize the application. Depending on the version of iOS, what can trigger that browser state tab update — what changes this timestamp — differs. But what's very special to know about this timestamp is that that is not necessarily the time of this search. If a browser is opened and a single search is made, those will match. However, as long as that browser hasn't been closed, moved to the background, et cetera — if that browser has had any activity that happens to that tab in terms of application closing or minimizing — this can update.
MS. HYDE: That is what that timestamp is for. This search term is always going to be the most recent search term in that tab. That means the last item that was opened in that tab was "hos long to die in cold." We cannot tell by this particular artifact what time that search occurred. There is a high likelihood that that tab was opened at this time because there was another search that occurred at 2:27 in the morning. It was a sports site — I cannot pronounce it; it began with an H — "Hockomock?"
MR. LALLY: Okay. Thank you very much for the pronunciation. Hockomock Sports.
MS. HYDE: There were a couple of searches pertaining to that done immediately after. But it appears that that's when this tab was opened and the first search done there. So this search, we know was done by this, and we know that it was the last search in the tab, because it comes from this source, the browser_state.db. And that means that particular timestamp and search pertains to what has happened with the tab and that's what allows you to open your Safari browser on your iPhone and then open it on your iPad or your Mac and have the same tabs — that's the function there. This search here, this is really interesting. So at 6:22:49 a.m. we have an entry in cache DB that is specifically — if we dig into this URL — this is an Apple suggested term.
MS. HYDE: So when you use a browser and you start to type a search you've never typed before, the browser tries to help you and say, are you searching for this? So as the user began to type the phrase in the immediate search after, which is how long to die in cold, this particular suggestion came up, indicating, in all likelihood, that the search term being typed had not previously been searched, because Apple suggested — instead of a previous search — it suggested "how long does it take to digest food." After "how long to digest food" had been suggested, the user then has a search at that time, 6:23:51, for "how long to die in cold," and that is coming from the mobile Safari plist, the com.apple.MobileSafari plist I mentioned earlier.
MS. HYDE: And the search is also shown in the KnowledgeC DB — that's commonly what we would expect. We would expect that search to be in both of those, both that plist and that database. Then — can I stop? Yeah, absolutely. As far as the expectation of that being in both of those, the plist and the other database, why is that? That is because the search is being tracked not only by the plist for Safari, but also by the system itself, to be able to do that predictive coding so in the future when you start to type "how long" it gives you what you searched last time. And with respect to that "how long it takes to digest food" — based on your analysis, was that a search term that was never entered into this phone? Correct.
MS. HYDE: The source of that, if you look at this first line where it says cdn2.smooth.apple.com, that's what indicates that this is an Apple suggested term, not a user input term. I'm sorry, I interrupted — if you could please continue with the next. Absolutely. The next search, done immediately after the suggestion at 6:23, was "how long to die in cold." Again we see that search both in the KnowledgeC DB and the mobile Safari plist. And then we have — yeah, so we see it in both databases, then. So that's "how long to die in cold." Sorry, I was on the next line. Then at 6:24 we get the next search, which is "how long to die in cold," and we have that search again both in the KnowledgeC DB and the plist.
MS. HYDE: If I were timelining this based on activity as opposed to timestamps associated — what I see here is the beginning of a user typing a search, Apple making a suggestion of "how long to digest food," the user does not take that suggestion and rather inputs "how long to die in cold." They then make a second search of "how long to die in cold." That search winds up being the last search in the tab, and that is why we see it in this particular browser_state.db. It should be noted that this is in one of those WAL files and not the raw file.
MS. HYDE: And so as far as toward the top there — and I know we'll get a little more to this in a moment — but as far as that reading of 2:27:40 in the morning, as far as the timestamp associated with that search — why would that timestamp be associated with it if searched later in the morning? Because that table isn't showing the time of that search, it's showing the state of the database. So it's saying that the last time that tab was touched, moved to the background or foreground, was 2:27 a.m. — the website. As you search more websites in the tab, that gets updated. So the current state that this is showing is that the tab was moved to the background, or opened, or some action for the tab at 2:27 a.m., when the Hockomock Sports — I got it right — Hockomock Sports website was visited.
MS. HYDE: And then once the Hockomock Sports website was visited, that tab was in continual use. There are other searches and activity that happens — we don't see that — happens eventually. This search gets made at 6:23 a.m., this search at 6:24 a.m., that winds up being the last search in the tab, and because it's the last search in the tab, that's what the final update — the final status — reflects. When you say "last search in the tab" — in reference to what time specifically? I'm sorry, I don't understand the question. That question — let me rephrase. So as far as the 2:27 — the tab is closed and then another search done at 6:23, is that correct? 2:27 isn't necessarily the time when the tab was closed.
MS. HYDE: In my report I say that it's undetermined, because there are a lot of things that can cause that timestamp to be there, including the tab being moved, tab being minimized — I don't know exactly what caused the tab to get that particular entry. But it is not — that timestamp is not indicative of the time of the search or any URL that's visited there. That time is indicative of movement of the tab, and the search is the most recent search, so they updated at different times. So I guess my question is, as far as from what your analysis is here — is there any use of that tab, or is there any evidence or artifacts of use of that tab between the 2:27 timestamp and the 6:23 timestamp?
MS. HYDE: So the fact that that timestamp exists at 2:27, combined with the fact that the only existence of this search is at 6:24, means that yes, that particular tab was used after 2:27 when the Hockomock Sports site was searched. I apologize for not being familiar with that name. That's absolutely fine. Now if I ask you — at this point you would talk about one particular database having sort of an intricate artifact. If you could remind the jury of what that is and if you could explain that. So the artifact that needed a little bit more explanation and digging was this browser_state tab, and that's because if you look at the end of the file name you'll see it's a db-wal — that means it wasn't in the regular database, it was in the Write-Ahead Log.
MS. HYDE: And to Cellebrite's credit, that tool actually parsed the Write-Ahead Log and displayed it where the other tools did not. If you remember before, we talked about Write-Ahead Logs and we talked about the fact that they can contain data that has been removed from a database and data that has not yet been committed. And this database — we cannot — we do not know which that is. But what I did was I manually went through — and that's in the figure — I manually went through and reconstructed the WAL files using the Sanderson SQLite tool. And in reconstructing the WAL files I found — I'll tell you exactly how many if I can refer to my notes — 1, 2, 3, 4 — approximately 16 instances of that existing in the WAL file.
MS. HYDE: Now what's interesting in that figure is that each entry of something in a Write-Ahead Log gets a unique identifier — it gets a number that makes it unique, just like each one of us has a phone number that's unique to us. Each entry in the database that starts in the Write-Ahead Log gets its own unique identifier, and that unique identifier in this instance — in all 16 entries in the WAL file — is the same, which shows that it's truly only one search, not multiple searches for that search term, in terms of what exists in the WAL file. So that's the first piece of nuance there — that it was only truly searched once, and you would have to really dig into that WAL file to expose that. But I think it was pertinent to know how many times this was searched.
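The deduplication logic described here — many copies in the write-ahead log, one underlying record — can be sketched in Python. The entries and field names below (uuid, offset, term) are illustrative, not the real browser_state.db schema; Sanderson's tool performs this reconstruction against the actual WAL:

```python
# Hypothetical WAL entries: the same record re-written at different offsets
# as the database moved data through the write-ahead log.
wal_entries = [
    {"uuid": "ABC-123", "offset": off, "term": "how long to die in cold"}
    for off in (4096, 8192, 12288, 16384)
]

# Group by the unique identifier: one UUID means one underlying record,
# no matter how many times it appears in the WAL.
unique_records = {e["uuid"] for e in wal_entries}

print(len(wal_entries))     # 4 copies present in the WAL
print(len(unique_records))  # 1 actual record -> one search in this tab
```

The same distinct-identifier test is what supports the conclusion that 16 WAL instances reflect a single search, not 16 searches.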
MR. LALLY: I'm sorry — yes, please stop there for one moment. And with the Court's permission, if I could ask Miss Gilman to publish Table 2 now.
JUDGE CANNONE: Okay.
JUDGE CANNONE: Sure.
MR. LALLY: Thank you, Your Honor.
MR. LALLY: And Miss Hyde, do you recognize what's on the screen?
MS. HYDE: I do. That is Table 2 of my report on page four. And if you could again, using the laser pointer that you have, direct the jury's attention to what if anything is depicted in Table 2. So Table 2 here is designed to share with us what each of the locations that can potentially have Safari data — what kind of data they hold. And one of the things that many of us are interested in sometimes is: do these things hold our private searches? Like if we're in incognito mode in Google, or private searching here. So this table denotes if we're seeing private or non-private searches, if it includes tabs that are open or closed, so that we know which artifacts are giving us what kind of information, as well as what some of the associated timestamps mean.
MS. HYDE: So this was divided as a guide to really understand Table 1 more in depth. So this first one here — this refers to the history DB WAL file, and that file has non-private searches, it includes closed tabs, and the visit time there is the time that a user reopened the tab. We don't always have the artifacts we want, and in this instance we did not have that particular one. Then we have the Safari tabs DB — this has private tabs and non-private tabs, and it shows only tabs that have not been closed, so this is an active tabs database. The next one we have is the browser_state.db and its WAL file, and this is the one we were referencing — the one we keep talking about — where that 2:27 a.m. number is coming from.
MS. HYDE: So this one is going to be non-private searches, it includes closed tabs, which is important, and the last view time — so when an item went to the background. So it's about the tab, not about the search. That's probably the most important demarcation. And one of the issues here is when the tools parse the data, they call the columns what the columns are called raw in the data — those don't necessarily tell us what they mean. We have to verify and validate what they actually mean. So when I'm saying "last view time," that's what the database name is, not necessarily the actual function — not what we would presume it to mean. The next one, the last one here, is the mobile Safari plist, and this one actually tells us the date — it gives us the time that something was queried, reliably.
MR. LALLY: Now if I could ask you, if you could just explain to the jury — as far as these SQLite databases, how exactly do they work, and what if any relevance do they have with regard to data information with the write-ahead log, or the WAL file?
MS. HYDE: So SQLite databases work and function as a storage for anything that the phone may need to reference later, or that you're doing. They're stored for a variety of things — most of your third-party apps use SQLite databases, and multiple applications do. So SQLite gets information in and it allows for that information to be updated, added to, or removed from. So data that's in that situation — sitting in the database — that is what we refer to as live data. Then data that is waiting to be put into the database that is sitting in the WAL, or has been requested to be removed — that is what's in the write-ahead log, and we call that non-live data because it's not in the active database.
MS. HYDE: So when you're using an application and it's putting that data to that temporary storage area — that WAL, that write-ahead log — what happens is, then when you request data by going into your contacts, let's say, and looking at the people who are stored in your contacts — right, your mom, your dad, your brother, your sister, all those people — when you're looking up your best friend, you're looking up their numbers — that data is all stored in that database. So when you add someone, it does that entry to the WAL. So if I just added Sally, how do I know that Sally's in there? Well, what happens is SQLite goes to the WAL, checks and sees if it's in the WAL, and merges that with the database to present you everything as if it's in there.
MS. HYDE: Then when you close the application — you close contacts, you don't just put it away like those switch apps, you actually close it — when it closes, all of those changes get committed. So now the items that you've requested to be deleted — that contact you don't like anymore, that ex, whatever — their data is no longer in there. The data that you've added is now in there, and now that's all live. Sometimes we can get more complex and recover some of those items that are left over because they are marked for deletion but still sit in the database, or they're sitting on a page that is no longer active in a database. So there's a lot of places that we can recover data from in addition to the WAL that exists in a database after something has happened.
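The read-through and commit behavior described in this testimony can be sketched with Python's built-in sqlite3 module. The file, table, and contact names here are illustrative, not from the phone at issue:

```python
import os
import sqlite3
import tempfile

# Work in a temp directory; "contacts.db" is an illustrative name.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "contacts.db")

con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")       # changes go to contacts.db-wal first
con.execute("CREATE TABLE contacts (name TEXT)")
con.execute("INSERT INTO contacts VALUES ('Sally')")
con.commit()

# The new row is readable immediately: SQLite merges the WAL with the main
# file on read, even though the main database file may not yet hold the page.
rows = con.execute("SELECT name FROM contacts").fetchall()
print(rows)                                   # [('Sally',)]
print(os.path.exists(path + "-wal"))          # True while the WAL is live

# Closing the last connection checkpoints the WAL: pending pages are written
# into the main database file — analogous to fully closing the application.
con.close()
```

After the close, the `-wal` file is checkpointed into `contacts.db` and removed, which is the "all of those changes get committed" step in the testimony.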
MS. HYDE: Oh, one of the biggest complexities is the assumption that because data exists in the write-ahead log, a user took an action to delete it. When I teach at the university, when I peer-review my colleagues' reports, this is one of the most common mistakes I see — and it's really because some of the forensics tools, to indicate that the file has been recovered, put a big X on it. And that was actually a big subject of that standardization of file recovery paper I referenced earlier, and it speaks about all kinds of files but actually has a special section on SQLite specifically, because that nuance is often missed by examiners.
MR. LALLY: Now, in reference to that being marked as deleted, or misinterpreted as deleted by user — is there a specific tool, of the tools that you mentioned before, as far as Cellebrite versus [unintelligible] versus — that has that information that others may not?
MS. HYDE: So Cellebrite and Magnet AXIOM — both Cellebrite Physical Analyzer and Magnet AXIOM — both have file system viewers that then have SQLite database viewers. I would say that those SQLite database viewers have more of a basic function in terms of looking at data in the live database. They don't allow for deep analysis of the write-ahead logs, which is why in my analysis I use the specialized Sanderson forensic browser for SQLite.
MR. LALLY: And so when you see something like that in a Cellebrite extraction, is there — what if any further analysis can you do to sort of look behind?
MS. HYDE: Absolutely. Anytime I see an entry marked as recovered — and I'll use the term "recovered" instead of "deleted" — by Cellebrite, and the source is a write-ahead log, I would go and begin reconstruction of the write-ahead logs using Sanderson's forensics tool, to get to the meat of: is this something that has been deleted, or is this something that has yet to be written to the database? You can often see remnants of that based on how many unique entries there are of it. And the other thing we do — so I mentioned that when we have the WAL file, it hasn't been applied — so if I take the SQLite database and the WAL file out, and I make another copy of them: I've got one copy I'm looking at in Sanderson, and I take another copy and I actually open it — what's cool is it does the commits.
MS. HYDE: So then I have a copy of the database that actually has all those changes. So I now know if the entry was deleted by just comparing the two. So I can do that in analysis by basically making the database think it was reopened, by reopening it with a non-forensics tool, in that instance, just to see what it would look like in real life. And then I can look at the difference between the two and determine if something was marked for deletion or addition.
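One way to realize the compare-two-copies approach described here can be sketched in Python, under the assumption that opening and cleanly closing a copy with an ordinary (non-forensic) SQLite client checkpoints its WAL into the copy. File and table names are illustrative:

```python
import os
import shutil
import sqlite3
import tempfile

def checkpoint_copy(db_path):
    """Copy a database plus its WAL, then open and cleanly close the copy
    so SQLite applies (checkpoints) the pending WAL pages into the copy."""
    work = tempfile.mkdtemp()
    copy = os.path.join(work, os.path.basename(db_path))
    shutil.copy(db_path, copy)
    wal = db_path + "-wal"
    if os.path.exists(wal):
        shutil.copy(wal, copy + "-wal")
    con = sqlite3.connect(copy)   # opening merges the WAL on read...
    con.execute("SELECT 1")
    con.close()                   # ...and closing checkpoints it
    return copy

# Build a tiny WAL-mode database to demonstrate on (names illustrative).
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "evidence.db")
con = sqlite3.connect(src)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE searches (term TEXT)")
con.execute("INSERT INTO searches VALUES ('how long to die in cold')")
con.commit()

committed = checkpoint_copy(src)  # this copy now has the WAL applied
rows = sqlite3.connect(committed).execute(
    "SELECT term FROM searches").fetchall()
print(rows)
con.close()
```

Diffing the checkpointed copy against the untouched original (viewed in a forensic tool) then shows whether a WAL record was a pending insertion or a pending deletion.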
MR. LALLY: Now, are there — from your training, your experience, and your knowledge of these systems — different reasons for why it's your conclusion that that particular search, or those search terms, were not deleted by user specifically?
MS. HYDE: Yeah. Well, there's a couple of reasons. First, the way tabs work from a functional perspective — you can close a tab, but there's not really a user deletion function for the tab, and there was no evidence of other deletion, if that makes sense. I also — I don't see evidence that the term was searched prior to that 6:24 time. So since the term wasn't searched prior to that 06:24 time, the existence of the tab being closed would not mean that that search was deleted, because then it would be evident in the other databases pertaining to the actual search, not the databases pertaining to the state of the tab. Is there a possibility that the tab was closed? Absolutely. But that is not a deletion.
MS. HYDE: Secondly, an existence in the WAL file, as we mentioned, could be that it's something new, not something that was deleted. So just its mere existence in the WAL file doesn't mean it's deleted. I should say that that same X in the tools — you have to check the file, because it could be from a free list or a free page, in which case — I know we didn't go into those details, nuance of SQLite databases — that actually doesn't pertain to this. However, those would be truly deleted items. So there are other places where we would expect potential recovery — just not in this instance. And you can have something that's in a WAL file and deleted without being in those other locations.
MS. HYDE: Just stating that its existence in the WAL file itself doesn't mean deletion — but there are other SQLite things that would clearly indicate deletion that aren't applicable to this particular search.
MR. LALLY: Now, your Honor, with the court's permission, if I could ask Miss Gilman to publish Figure 1 from the Hyde report. Thank you, your Honor. And again, do you recognize what's up on the screen?
MS. HYDE: Yes. This is Figure 1. This is a direct screen capture from Sanderson's tool, from the browser_state.db, limited to the "how long to die in cold" search.
MR. LALLY: If you could, using that pointer, direct the jury's attention to what if anything significant or of note that you observed in using the tool — that's Figure 1.
MS. HYDE: Absolutely. So all of the searches are the exact same term. And here we have the order index, and here we have that last view time. And by looking at the last view time — which is the raw title of that artifact for the timestamp — we were looking at these, and these are identical. This, and more importantly, the actual UUID for these are identical, which is right here. Now the offsets are different, and what that means is that this is just appearing several times in the write-ahead log. It has moved as the database has moved data through the write-ahead log to different positions, but has in every instance the same unique identifier — meaning that all of these entries are the same entry, just shown multiple times in the same database through that point.
MR. LALLY: As far as the multiple times that it's listed within there, what if any relationship does that have to how many times it was actually searched by a user?
MS. HYDE: Well, that UUID shows that in this particular instance it existed once — in the last time in the tab — because this artifact does not tell us how many times something was searched by the user. This artifact is limited just to the tab. We would look at the — mobile Safari plist, which has one entry for this timestamp, or knowledgeC.db, which also only has one entry for this search. Those are the two locations where we would determine how many times it had been searched, as opposed to this location, which could only suggest that if there were multiple unique entries — and here there is only one. But since this is limited to just the last search that's showing in a tab, it isn't a good source to say how many times it was searched. Just not this particular artifact.
MR. LALLY: How about from other artifacts — were you able to determine that?
MS. HYDE: Absolutely. You can determine that from both mobile Safari plist and knowledgeC.db, and —
MR. LALLY: I'm sorry, I jumped in there. Just, turning your attention back to Figure 1 — what if anything else of significance do you note?
PARENTHETICAL: [gap — approximately 2 minutes of testimony not captured]
MS. HYDE: That's — that's really what's of significance: it is the same entry there multiple times, and we can tell by the unique identifier that it is the exact same entry. Although it's the same search term and the same timestamp, it's only one instance of it; it's just stored in the database multiple times at multiple offsets.
MR. LALLY: Now, from the timestamp associated with that depicted in this figure, what if anything can you say as to — or what if anything were you able to conclude from your analysis as to the time that the website was viewed?
MS. HYDE: Well, I guess I should really clarify for people who may not be familiar — that doesn't look like a normal time, right? That's because this is an Apple WebKit timestamp, so we actually convert it to a regular time. But when you look at this number raw, you can actually see that it's consistent throughout, even though you don't know what it is. But if we put it into a decoder, this particular timestamp is where we're getting January 29th, 2022, at 02:27 a.m. And, to be clear — just for real clarity — when we actually translate this timestamp, it's in Coordinated Universal Time, not in local time. But for the sake of conversation, I'm using the local time here, as opposed to the time in Coordinated Universal Time, which is how this artifact is particularly stored.
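An Apple WebKit (Cocoa) timestamp counts seconds from January 1, 2001, 00:00:00 UTC, rather than the Unix epoch of 1970. A minimal decoder follows; the raw value shown is an illustrative example consistent with a 2:27:40 a.m. Eastern reading, not the actual value from the report:

```python
from datetime import datetime, timedelta, timezone

# WebKit/Cocoa reference date: 2001-01-01 00:00:00 UTC.
WEBKIT_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def webkit_to_utc(raw_seconds):
    """Decode an Apple WebKit timestamp (seconds since 2001-01-01 UTC)."""
    return WEBKIT_EPOCH + timedelta(seconds=raw_seconds)

# Illustrative value: 665,134,060 seconds after the WebKit epoch decodes to
# 2022-01-29 07:27:40 UTC, i.e. 2:27:40 a.m. US Eastern (UTC-5) that morning.
print(webkit_to_utc(665134060))  # 2022-01-29 07:27:40+00:00
```

This is why the testimony distinguishes the stored UTC value from the local time quoted aloud: the artifact stores UTC, and the local reading is derived by applying the offset.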
MR. LALLY: Now, with your analysis — is there some kind of testing that you conducted? Can you explain to the jury sort of what that is and how it relates to your analysis?
MS. HYDE: Absolutely. So with this artifact, and being able to determine what this timestamp really means, I actually did quite a bit of testing — which is fun, but that's okay. So we went ahead and took a jailbroken iPhone, and the purpose of a phone being jailbroken is we're bypassing the security protocol so we can monitor the data and its storage in real time and bring the data through. We did this both — we did this manually in terms of pulling the data out, and we also did multiple forensic images at different stages. We manipulated tabs, we did some Google searches, and then we would close tabs, move tabs, and perform several different searches that we scripted out first — what we were testing — and then check to see how that timestamp was moving.
MR. LALLY: ...testing that you had conducted — you used the term "we." So, are there other individuals that worked on this particular assignment?
MS. HYDE: I had somebody peer review and test my testing, so they didn't work on the case. But as I was testing, one of my team members just validated that I was following my test plan. That's just normal practice — we use peer review.
MR. LALLY: So the testing that you were testifying about is testing that you conducted, correct?
MS. HYDE: Correct. But I had peer review by my team member on the script I had made to do my testing. So when I do testing, I'm following the NIST guidelines for data set generation and testing, that's outlined by OSAC — which, ironically, I should disclose, I'm a co-author on.
MR. LALLY: And then if you could — I'm sorry, Ms. Hyde, just continue with your test.
MS. HYDE: Absolutely. So then we populated the device and looked at all of the different data that came back from the different states while it was jailbroken, and the changes that happened. What I was saying was interesting is we then tied it to a Mac and saw how the browser state changed if we changed the browser and the tab on a Mac, and how that would affect it. And because we could see the differences — and by "we," I mean I had my testing peer reviewed — because I could see the differences of what the data looked like when I was altering it, both on the Mac and on the phone, I was getting differences in those timestamps in those different conditions.
MS. HYDE: So in other words, I could get a timestamp that showed in the browser_state.db that was earlier than the current search by a variety of means, including manipulating the tabs, closing them, opening them, switching which one was in proper view forward, as well as looking at that tab on an external device, including a Mac and an iPad, on the same account.
MR. LALLY: Now, throughout the course of your work — I think you alluded to it earlier in your testimony — as far as your experience is concerned, you've had occasion to work both as a forensic — excuse me — a forensic investigator, yes, as well as working in research and development. Is that correct?
MR. LALLY: And so from the R&D side, what if any role does that play as far as your analysis — in general terms — when it came to some of the devices, or some of the tools that you were implementing?
MS. HYDE: I believe that my experience in research, development, and reverse engineering throughout my career, pertaining to artifacts of course, influences my analysis processes in terms of wanting to ensure I understand how things work. I think it actually gives me a better understanding, in many ways, of how data moves through devices, as well as understanding why the tools parse and represent data the way they do — in terms of the fact that they are using the proper algorithms to dissect that data and determine what it is, based on testing and knowledge. So I think it actually influences in a positive way the depth that I go through in my analysis.
MR. LALLY: You mentioned earlier in your testimony that you have reviewed both Trio's report as well as an affidavit from Mr. Green — is that correct?
PARENTHETICAL: [Recess. Court reconvenes — portion inaudible.]
MR. LALLY: And with regard to the affidavit from Mr. Green, what if any issues of significance did you observe in his analysis?
MS. HYDE: In Mr. Green's analysis, there is a conclusion that is drawn that a search occurred at 02:27 a.m. on January 29th, based on the artifact recovered from browser_state.db that Cellebrite parses. And there are two errors with that: the assumption that that is the time of the search is a misunderstanding of the browser_state.db artifact, because that artifact is not the time of search — it's the time of movement of the tab — as well as the statement that it was deleted, due to Cellebrite's demarcation of "recovered."
MR. LALLY: And based on the totality of your analysis, your testing, and everything that you looked at in this case, what if any conclusions did you come to?
MS. HYDE: My conclusion is that there was a search at 02:27 a.m. for Hockomock Sports — that that browser was in use and there were continual searches throughout the night. I do agree with Mr. Green's findings that the phone was in use at that time, at 02:27. Later, a web search is done at 6:23 a.m. for "how long," at which moment Apple produces a suggestion for "how long to digest food." The search is instead completed with "how long to die in cold" at 6:23. A new search is conducted at 6:24 of "how long to die in cold." That search is then the last search that is made in that particular tab. And that's my conclusion.
MR. LALLY: What if any conclusion did you come to in regard to items being deleted, pertaining to those artifacts — in regards to those artifacts pertaining to those search terms?
PARENTHETICAL: [Inaudible.] [Sidebar — inaudible]
COURT OFFICER: — thank you. May be seated.
JUDGE CANNONE: Step? Mr. Lally?
MR. LALLY: Yes, Your Honor. The Commonwealth would seek to introduce and admit as the next three exhibits: Table 1, Table 2, and Figure 1 from Miss Hyde's report.
JUDGE CANNONE: Okay. There's no objection, Mr. Yannetti?
MR. LALLY: Thank you. Just Table 1, Table 2 — I think the order I gave. Thank you.
MR. LALLY: Thank you, Your Honor. And thank you very much, ma'am. I have no further questions.
JUDGE CANNONE: All right. Mr. Yannetti?
MR. YANNETTI: Your Honor. Good afternoon, Miss Hyde.
MR. YANNETTI: Miss Hyde, you just spent maybe 45 minutes to an hour — whatever the time period was — explaining to these jurors what is really a complex set of facts that led you to conclude that the 2:27 a.m. timestamp on that Google search was triggered by the closing of a tab. Did I essentially sum that up correctly?
MS. HYDE: Essentially, except it's not a 2:27 a.m. timestamp on a Google search. It's a 2:27 a.m. timestamp on a tab.
MR. YANNETTI: Fair enough. Thank you. And in the context — or in the course of your explanation — you discussed WAL files?
MR. YANNETTI: SQLite database?
MR. YANNETTI: Plist database?
MR. YANNETTI: KnowledgeC database?
MR. YANNETTI: Thank you for the correction. Opening tabs and closing tabs?
MR. YANNETTI: Yes. And other issues related to your analysis of the extraction, correct?
MR. YANNETTI: Other elements?
MR. YANNETTI: I know. I wouldn't be able to. Would you agree with me that another simple explanation for that 2:27 a.m. timestamp was that the user of that iPhone conducted that search at or before 2:27 a.m.?
MR. YANNETTI: I'm going to phrase it the same way. Repeat it, please.
MR. YANNETTI: Would you agree with me that another simple explanation for that 2:27 a.m. timestamp was that the user of that iPhone conducted that search at or before 2:27 a.m. on January 29th of 2022?
MS. HYDE: That timestamp is not indicative of a time of search, so the question is difficult to answer, because I can't say that that timestamp tells me anything about the time of search.
MR. YANNETTI: Okay. So, as I take your answer, you're not denying that the timestamp — and whatever you get from that timestamp — doesn't rule out that the Google search was done at or before 2:27 a.m.?
MS. HYDE: Correct, if I'm understanding your question. Or — there was a double negative, so I just want to clarify. The question is: if the timestamp could exist because of a search happening at that time?
MR. YANNETTI: No. What I'm saying is: your analysis of the phone does not rule out that the user of that phone performed that Google search at or before 2:27 a.m.
MR. YANNETTI: But again, I'm getting to whether you can rule that out, ma'am. That's really the crux of my question. Can you rule out that the user of that phone conducted that search at or before 2:27 a.m. on January 29th?
MS. HYDE: There is a very unlikely possibility, based on the fact that there is no evidence that the search occurred before that time. That's just like saying that the user searched for pandas at 2:27 a.m. — could you rule that out? I can't rule out something that doesn't exist.
MR. YANNETTI: All right. Well, you would agree with me that with regard to your analysis of this phone — and the extraction — you were given very specific instructions in terms of what to look at?
MR. YANNETTI: Correct. They had a very limited scope in this analysis. And who was it that gave you that limited scope?
MR. YANNETTI: Okay. And Detective Tully instructed you specifically to only look at the part of the phone extraction that dealt with the Safari search history — Safari history, search history specific to January 29th, 2022?
MR. YANNETTI: And he instructed you not to look at anything else on the phone?
MS. HYDE: I looked at related artifacts, such as what was being done at that time — the phone being on, the phone being used — but not anything else outside of that. Correct.
MR. YANNETTI: And you were not instructed, for instance, to look at other user activity on the phone, including examining call logs on the phone?
MR. YANNETTI: And you would agree with me that if you were allowed or instructed to examine the call logs, that could reveal deletion of calls that morning?
MR. LALLY: Objection.
JUDGE CANNONE: Sustained.
MR. YANNETTI: Thank you.
JUDGE CANNONE: Any follow-up, Mr. Lally?
MR. LALLY: No. No redirect.
JUDGE CANNONE: You all set? Thank you very much. All right, Mr. Lally. Your next witness?