
Amazon Cloud Drive Forensics: Part 1

Amazon Cloud Drive is yet another way that users can upload and store information in the cloud.  Much like other cloud storage options, an Amazon Cloud Drive can be used for a variety of purposes, including some more nefarious ones such as intellectual property theft.  It’s important that an examiner know what artifacts are left behind when an Amazon Cloud Drive is utilized in order to better explain what actions may have taken place surrounding a user and his or her Cloud Drive.  At the time of this writing, I’m unaware of any commercial forensic tool that interprets/parses Amazon Cloud Drive artifacts.  This post (as well as the next) sets out to highlight some of the forensic artifacts available to an examiner after a user transfers files to and from an Amazon Cloud Drive.

A user may currently interact with an Amazon Cloud Drive in one of three ways:

  1. Via the desktop application
  2. Via the online interface (i.e. using a web browser)
  3. Via the mobile application (iPhone and Android)

Depending on the method used to interact with the Cloud Drive, artifacts will be left in different locations of the associated file system.  At the time of my research, the Amazon Cloud Drive mobile app had not been released, so I currently do not have details of the artifacts found on mobile devices.  This post will cover the artifacts left as a result of the Windows desktop application being utilized for Amazon Cloud Drive file transfers.  In Part Two of this series, I will cover the artifacts available to an examiner as a result of the user accessing his or her Cloud Drive using only a web browser.

Desktop Application Usage

The Amazon Cloud Drive desktop application is a small app that can be installed on a Windows or Macintosh system and helps streamline operations carried out on an Amazon Cloud Drive.  Once installed, the user will be prompted to enter his or her credentials to access the Cloud Drive.  After the credentials have been verified and the desktop app is running, the user may drag and drop files either onto a small window associated with the app (see screenshot below) or onto the app’s icon in the taskbar to initiate an upload.

Uploading files via ACD desktop application

One aspect of an Amazon Cloud Drive that sets it apart from some of the other cloud storage solutions is that there is no “magic folder” or directory within the file system that is set to automatically sync with the Cloud Drive.  Instead, a user must selectively choose which files and folders he or she would like to upload or download from their Cloud Drive. Because of this, an examiner is not able to focus in on a single directory as the source of uploads/downloads on the local system.  Luckily, the app has left us with a log file that is very helpful during forensic examinations.

Desktop Application Forensic Artifacts

ADriveNativeClientService.log
The ADriveNativeClientService.log file is a simple ASCII text file that holds, among other things, a log of completed file transfers made between the Cloud Drive and local machine.  This file is located in the user’s Application Data directory under “Users\<user>\AppData\Local\Amazon\CloudDrive” on a Windows 7 system.  By analyzing the records within this file, an examiner can determine details regarding completed file transfers such as: file name, local path, cloud path, file size, and whether the transfer was an upload or download. Here’s an example of a record you might find within an ADriveNativeClientService.log file (with some potentially more relevant portions bolded for emphasis):

DEBUG [pool-1-thread-2] c.a.a.c.l.PostTransferHandler.handleRequest (PostTransferHandler.java:24) - Task has been retried FileUploadTask:taskId=C:\Users\Public\Pictures\Sample Pictures\Hydrangeas.jpg,parentTaskId=upload,status=FINISHED,taskInfo={"fileCounts":{"PAUSED":0,"CANCELLED":0,"RUNNING":0,"PENDING":0,"FINISHED":1,"ERRORED":0},"byteCounts":{"PAUSED":0,"CANCELLED":0,"RUNNING":0,"PENDING":0,"FINISHED":595284,"ERRORED":0},"taskId":"C:\Users\Public\Pictures\Sample Pictures\Hydrangeas.jpg","timeRemaining":""},children=[],localpath=C:\Users\Public\Pictures\Sample Pictures\Hydrangeas.jpg,cloudpath=/Uploads,conflictResolution=RENAME,filesize=595284
Since this file is in plain text, it’s possible to simply open the file in a text editor and analyze each record line by line or perhaps search for file names of interest within the file.  However, if there was much activity with the Cloud Drive and this log expanded to hundreds or thousands of records, manual analysis would become quite cumbersome.  After manually analyzing a few records from an ADriveNativeClientService.log file, the benefit of a script to parse this file is obvious.  To carry out such automation, I’ve written a Perl script that parses the records within an ADriveNativeClientService.log file and outputs the result in CSV format for easy viewing within a spreadsheet application.

The Perl script for parsing this log file simply accepts the path to the log file (or a directory of log files specified with the “-d” flag) and outputs the result in CSV format.  By using this script, you can translate records similar to the one above into a much more readable format as in the screenshot below. The script is freely available for download here.

Sample output from acdLogParse.pl
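For those curious about the general approach, here is a minimal sketch of the kind of parsing involved. This is not acdLogParse.pl itself; the regular expressions below are based only on the sample record shown above, and the assumption that records lacking “FileUploadTask” represent downloads is mine.

#!/usr/bin/perl
# Minimal sketch only -- not acdLogParse.pl. Extracts a few fields from
# completed-transfer records like the sample above and prints them as CSV.
use strict;
use warnings;

my $log = shift or die "usage: $0 ADriveNativeClientService.log\n";
open my $fh, '<', $log or die "Cannot open $log: $!\n";

print "direction,status,localpath,cloudpath,filesize\n";
while (my $line = <$fh>) {
    # Only consider lines carrying the localpath/cloudpath pair.
    next unless $line =~ /localpath=(.*?),cloudpath=(.*?),/;
    my ($local, $cloud) = ($1, $2);
    my ($status) = $line =~ /status=(\w+)/;
    my ($size)   = $line =~ /filesize=(\d+)/;
    # Assumption: records that are not FileUploadTask entries are downloads.
    my $direction = $line =~ /FileUploadTask/ ? 'upload' : 'download';
    print join(',', $direction, $status // '', qq("$local"), $cloud, $size // ''), "\n";
}
close $fh;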

1319b5c6-2672-49b4-b623-bf5a33fd4c40.db
This is a SQLite database stored in the same directory as ADriveNativeClientService.log that holds the queue of files to be transferred to or from a Cloud Drive. When many files are requested to be uploaded or downloaded at once, a queue will be formed and tasks (one task per file) will be assigned a status of “PENDING”.  When a transfer task’s status is “PENDING”, information about the transfer (local path, cloud path, file size) is written to a record within this database. Additionally, if a transfer is paused while there are still files in the queue, the transfer tasks in the queue will be assigned a status of “PAUSED” and will remain in this database. If network connectivity is lost while a large transfer is taking place, this database allows the transfer to continue when connectivity is regained by supplying the list of files waiting to be transferred to either the local machine or the Amazon Cloud Drive.  When the file transfer is complete, the associated database record is removed and information about the individual transfer is logged to ADriveNativeClientService.log.

As I have not found this database to contain any active records relating to completed file transfers (i.e. where the transfer status is “COMPLETE”), the ADriveNativeClientService.log file appears to contain more useful information regarding completed file transfers.  The information within this SQLite database may be relevant, however, to aid in demonstrating a user’s intent to transfer the files described in the database records to his or her Cloud Drive.
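If you want to poke around in this database yourself, a quick way to do so without assuming anything about its schema is to enumerate the tables and dump their rows. Here is a small sketch using Perl’s DBI module (the table and column names are read at run time rather than assumed; work against a copy of the file, not the original evidence).

#!/usr/bin/perl
# Quick-and-dirty dump of every table in a copy of the queue database.
# No table or column names are assumed; they are read from sqlite_master
# and from the statement handle at run time.
use strict;
use warnings;
use DBI;

my $db = shift or die "usage: $0 <path-to-copy-of-queue.db>\n";
my $dbh = DBI->connect("dbi:SQLite:dbname=$db", '', '', { RaiseError => 1 });

my $tables = $dbh->selectcol_arrayref(
    "SELECT name FROM sqlite_master WHERE type = 'table'");

foreach my $table (@$tables) {
    print "== $table ==\n";
    my $sth = $dbh->prepare(qq(SELECT * FROM "$table"));
    $sth->execute;
    print join('|', @{ $sth->{NAME} }), "\n";    # column headers
    while (my @row = $sth->fetchrow_array) {
        print join('|', map { defined $_ ? $_ : '' } @row), "\n";
    }
    print "\n";
}
$dbh->disconnect;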

The Amazon Cloud Drive desktop application is easy to use and streamlines the process of file transfers to and from an Amazon Cloud Drive.  This method of transfer may be desirable for many users, but for those that do not want to install an application on their workstation, the online interface provides a means of transferring files that requires nothing more than a web browser (and network connectivity of course). The forensic artifacts left behind as a result of file transfers to and from an Amazon Cloud Drive using only a web browser will be discussed in Part Two of this series.

For more detailed coverage of Amazon Cloud Drive forensics, please see my Digital Investigation article on the topic.

Automating USB Device Identification on Mac OS X

One of the common methods examiners may use during USB analysis on Mac OS X machines (running Snow Leopard or above) is to search the kernel log for “USBMSC” entries to identify USB devices that have been connected to the machine.  BlackBag Technologies has a couple of excellent blog posts here and here describing the logging of USB device information in the kernel log.  The “USBMSC” entries appear to have been moved to the system log in Mac OS X 10.8 (as discussed in this blog post), but the information contained within each entry looks to be unchanged.  An example of the approach an examiner may take in attempting to identify USB devices that have been connected to a Macintosh computer is:

  1. Extract the kernel logs from the file system.
  2. Search the kernel logs for “USBMSC Identifier”.
  3. Extract all entries containing “USBMSC Identifier”.
  4. Format the entries to make them easier to read/sort (e.g. Excel format).
  5. Search the USB ID public repository for the vendor and product ID associated with each device.
  6. Update each extracted log entry for which the associated vendor and product ID was found within the USB ID public repository.

Correlating the vendor and product IDs with the USB ID public repository can help to associate a name with the type of device that was connected (e.g. Amazon Kindle, PNY flash drive, etc.).  This process can quickly become tiresome though, especially if there are many log entries for which you must search the USB ID repository.  An examiner performing the same small task over and over again is a good argument for automating that task.  With that in mind, I decided to write a script that would do as much of this for me as possible.  Perl seemed to be a good option to use since it’s cross-platform and great with parsing.  And after a bit of coding, testing, and tweaking, I now have a script that takes care of steps 2-6 listed above in a matter of seconds.

The script is capable of parsing either a single kernel log or a directory full of kernel logs, outputting the “USBMSC” log entries in CSV format. After using a regular expression to locate the “USBMSC” log entries, the script queries the USB ID public repository and attempts to correlate the vendor and product ID of each entry parsed from the kernel log(s) with a particular device listed in the repository.  The script defaults to checking the online version of the repository, but if you don’t have a network connection or want to be able to run the script without querying the online database, you can optionally pass in the path to a local copy of the online database using the “-u” parameter (the text of the file must be in the same format as the online repository).
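To give a sense of the mechanics, here is a stripped-down sketch of the two main steps. This is not usbmsc.pl itself: it only supports a local copy of the usb.ids file, and the log entry layout assumed by the regular expression may need adjusting for your own logs.

#!/usr/bin/perl
# Stripped-down sketch, not usbmsc.pl: pull USBMSC entries from a kernel log
# and correlate the vendor/product IDs against a local copy of the usb.ids
# repository. The regex below assumes entries of the form
# "USBMSC Identifier (non-unique): <serial> 0x<vendor> 0x<product> 0x<release>"
# and may need adjusting for your logs.
use strict;
use warnings;

my ($kernel_log, $usb_ids) = @ARGV;
die "usage: $0 <kernel.log> <usb.ids>\n" unless $kernel_log && $usb_ids;

# Build vendor and product lookup tables from the local usb.ids file.
my (%vendor, %product, $cur_vendor);
open my $ids, '<', $usb_ids or die "Cannot open $usb_ids: $!\n";
while (<$ids>) {
    if (/^([0-9a-f]{4})\s+(.+?)\s*$/) {                                # vendor line
        $cur_vendor = $1;
        $vendor{$cur_vendor} = $2;
    }
    elsif (defined $cur_vendor && /^\t([0-9a-f]{4})\s+(.+?)\s*$/) {    # product line
        $product{"$cur_vendor:$1"} = $2;
    }
}
close $ids;

open my $log, '<', $kernel_log or die "Cannot open $kernel_log: $!\n";
print "serial,vendor_id,product_id,device_release,vendor,product\n";
while (<$log>) {
    next unless /USBMSC Identifier \(non-unique\):\s+(\S+)\s+0x([0-9a-f]+)\s+0x([0-9a-f]+)\s+0x([0-9a-f]+)/i;
    my ($serial, $vid, $pid, $rel) = ($1, $2, $3, $4);
    my $vid4 = sprintf('%04x', hex $vid);    # usb.ids uses zero-padded lowercase hex
    my $pid4 = sprintf('%04x', hex $pid);
    my $v = $vendor{$vid4}          // 'unknown vendor';
    my $p = $product{"$vid4:$pid4"} // 'unknown product';
    print join(',', $serial, "0x$vid", "0x$pid", "0x$rel", qq("$v"), qq("$p")), "\n";
}
close $log;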

When you’re importing the script’s csv file into Excel (or your choice of spreadsheet application), you’ll want to be sure to set the formatting of the USBMSC ID, Vendor ID, Product ID, and Device Release columns to text values in order to avoid the application interpreting the values as numbers and removing leading zeroes, etc.

I’ve only been able to test this script on kernel logs, but it should also work on the system logs from Mountain Lion as the relevant entries appear to be in the same format.  As always, feedback, suggestions, and bug reports are welcome and appreciated.

The script is available for download here.

Console view of usbmsc.pl
Snippet of usbmsc.pl output file after correlation


VSC Toolset Update: File Recovery

I’ve recently added an important piece of functionality that had been missing from VSC Toolset: the ability to systematically extract files from shadow copies.  You can now do this with VSC Toolset either by utilizing the “Copy” command from the main window or by browsing the directory structure of a shadow copy and utilizing the context menu option.

When browsing an individual shadow copy, you can easily verify the location of the files or folders you wish to copy and extract them accordingly.  To extract files in this manner, simply navigate to the folder of interest, highlight the files or folders you wish to extract, and select “Copy” from the right-click context menu.  You will be prompted to select a location to save the data, then a small status window will appear while the data is being extracted (see screenshot below).  The downside to this approach is that you must copy the files of interest from each shadow copy individually.  To alleviate this problem, the option to copy a selected file or folder from multiple shadow copies in a single operation is available from the main window of VSC Toolset.

Copying Files via VSC Browser Context Menu

By utilizing the Copy command from the main VSC Toolset window, you can extract a file or folder from multiple shadow copies in a batch processing manner.  It’s as simple as selecting the shadow copies from which to extract the file or folder, inputting the path (or browsing to it using the Browse button), and clicking the Run Command button.  It’s important that the path to the file or folder of interest be the full path on the drive containing the VSCs.  For example, if the image containing the shadow copies is mounted as the H: drive, the path to the file/folder to copy should be something like H:\folder\subfolder\file.txt.  VSC Toolset will then use the batch files associated with the copy operation to copy the specified file or folder from all selected shadow copies.  The extracted files will be stored in the “VSCToolset_Output\ExtractedFiles” folder (the location of which may be changed under Tools –> Options).

Copying Files from VSC Toolset Main Window

All copy operations issued with VSC Toolset are simply passing parameters to a robocopy batch file that resides in the VSC Toolset “batch” folder.  Robocopy is a powerful copying utility and is a standard feature of Windows Vista and above.  For information on Robocopy options, check out this Microsoft article.  With VSC Toolset copy operations, the /COPYALL flag is passed for file and folder copies to copy all file information (including time stamps).  Additionally, the /E flag is passed during folder copy operations to include empty subdirectories. These options can of course be modified by changing the respective batch files within the “batch” folder used by VSC Toolset.  CopyFile.cmd and CopyFolder.cmd are the batch scripts used to issue the robocopy commands for file and folder copying, respectively. The robocopy log, which can also be customized by modifying the batch files, is saved in the “VSCToolset_Output\RobocopyLogs” directory that is created by VSC Toolset upon issuing a copy operation.
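As an illustration only (the real work is done by the batch files described above), a folder copy built from the same switches would look roughly like the following Perl sketch; the paths are placeholders, and any additional robocopy switches the actual batch files pass are not reproduced here.

#!/usr/bin/perl
# Illustration only -- not CopyFolder.cmd. Builds a robocopy folder-copy
# command using the switches described above.
use strict;
use warnings;

my ($src, $dst, $logfile) = @ARGV;   # e.g. a path inside a mounted VSC, an output folder, a log path
die "usage: $0 <source-folder> <dest-folder> <robocopy-log>\n"
    unless $src && $dst && $logfile;

# /COPYALL copies data, attributes, time stamps, ACLs, owner, and auditing info;
# /E includes empty subdirectories; /LOG: writes the robocopy log for review.
my @cmd = ('robocopy', $src, $dst, '/COPYALL', '/E', "/LOG:$logfile");
system(@cmd) == -1 and die "Failed to launch robocopy: $!\n";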

A couple of other improvements have been made as well, including adding multiple threads for processing.  By making VSC Toolset a multi-threaded application, the user interface remains responsive even when running time-consuming operations such as Diff or a large copy operation.  This allows you to immediately start a process such as running Diff against a couple of shadow copies and then running a RegRipper plugin or profile against one or more shadow copies while Diff is still executing in the background.

You can download the latest version of VSC Toolset here.

For tips on setting up and using VSC Toolset, check out this blog post. To get the most out of the program, you’ll need the accompanying tools below. Also, keep in mind that with the exception of RegRipper, all accompanying executable files and scripts should be stored in the same directory as the VSC Toolset executable in order for the program to see them.

Feedback, suggestions, and bug reports are always welcome and appreciated.

Quickly Find the Date Range of EVTX Event Logs

It’s helpful to know the date range that an event log spans, as that information lets you know whether or not you should expect the events from a particular time to be included in the event log, assuming the events you’re interested in are being audited.  I’ve often used Harlan’s evtrpt.pl script (available on the WFA 2e DVD) to find, among other things, the date range that is covered by an EVT file.  This has proven to be very helpful in identifying whether a particular event log covers the time frame of interest in an examination.  However, to my knowledge, no such script exists for EVTX files.

I originally wrote a batch script for pulling the date range from EVTX files as an add-on to VSC Toolset, but I figured it would be helpful to have the ability to run it against the most current version of event logs (i.e. those not in volume shadow copies) as well.  A couple of modifications to the VSC Toolset batch script made it ready for use on its own.

In writing the batch script, I decided to harness the power of Log Parser to get the job done.  If you’re unfamiliar with Log Parser, it’s a great tool from Microsoft that allows you to interpret data files (event logs, for example) as SQL records and execute SQL queries against them to quickly pull out specific information.  The command that I used to find the oldest event record (by TimeGenerated) in an event log is logparser -i:EVT "SELECT TOP 1 TimeGenerated FROM %1 ORDER BY TimeGenerated ASC".  Walking through the command, I simply notify Log Parser that the input file is an event log and then specify the query that I want to execute against the file.  The “%1” indicates the value passed into the batch file (G:\files\Security.evtx, for example). The query instructs Log Parser to return the top value existing in the TimeGenerated field when that field is listed in ascending order.  You should actually get the same results without “ORDER BY TimeGenerated ASC” since we’re only interested in the first entry of the event log.

To find the newest event record by TimeGenerated, I simply had to sort the event log in reverse order by TimeGenerated.  This can be done by changing the “ASC” in the previous command to “DESC”.  I also gathered the oldest and newest records by TimeWritten to report in addition to the TimeGenerated values.  The bulk of the code and work on my part in writing the batch file came from formatting the output for a very easy-to-read display.  I won’t break down the code I used for that here, but it turned out to be a nice exercise in batch programming for me.
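If you would rather see the two queries side by side outside of the batch file, here is a rough Perl equivalent. This is not evtxdaterange.bat itself; it skips all of the output formatting, and the -q:ON switch is only there to suppress Log Parser’s headers and statistics.

#!/usr/bin/perl
# Rough Perl equivalent of the two Log Parser queries used by the batch
# script (not evtxdaterange.bat itself). -q:ON suppresses Log Parser's
# headers and statistics; paths containing spaces would need extra quoting.
use strict;
use warnings;

my $evtx = shift or die "usage: $0 <path-to-evtx>\n";

sub lp_query {
    my ($order) = @_;
    my $out = `logparser -i:EVT -q:ON "SELECT TOP 1 TimeGenerated FROM $evtx ORDER BY TimeGenerated $order"`;
    $out =~ s/\s+$//;
    return $out;
}

printf "Oldest TimeGenerated: %s\n", lp_query('ASC');
printf "Newest TimeGenerated: %s\n", lp_query('DESC');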

To use the script, download it here and copy the Log Parser executable and DLL into the same folder as the script (or vice versa).  Note that you’ll have to install Log Parser from the MSI before the executable and DLL are available in the Program Files directory.  Then execute the evtxdaterange.bat script from the command line, passing in the path to the extracted event log.  For example, issuing “evtxdaterange k:\files\Security.evtx” should give you output similar to that in the screenshot below.

If you’re interested in learning more about Log Parser, I would recommend taking a look at the Log Parser Toolkit book (however there are also many resources available online, such as this article by Chad Tilbury).  If you’re interested in batch scripting, there are countless online references, including this one by Corey Harrell that goes over getting started with batch scripting.

VSC Toolset Update

I thought it would be helpful to quickly be able to determine the date range covered by an event log within a shadow copy, particularly if the most current version doesn’t go back far enough.  So if you’re interested in finding which shadow copy contains the event log covering the date range of interest, you can simply run the EventLogDateRange command against all shadow copies to pinpoint which event logs you’ll want to parse.  Event log parsing has also been incorporated in the latest update, via Log Parser.  You can read about the other updates and download the latest version of VSC Toolset here.

TypedURLsTime RegRipper Plugin

I mentioned in a previous post that a RegRipper plugin (or something similar) would need to be written in order to easily correlate the contents of the TypedURLs subkey with the TypedURLsTime subkey that is present in Windows 8.  Since I haven’t had the opportunity to do a whole lot with Perl or write a RegRipper plugin, I figured this would be a good learning experience for me and another way to contribute a bit to the community.  Harlan’s Windows Registry Forensics book does a nice job explaining the process of creating a RegRipper plugin, so I decided to start there.

The book mentions the fact that you can create plugins while writing very little code by copying the functionality from other existing plugins. After all, why spend time rewriting something that’s already been put out there (although an argument could be made for the sake of learning)? With that in mind, I thought about the different plugins that I’d executed in the past and what they did.  The typedurls plugin would obviously take care of parsing the TypedURLs subkey for me; I only needed to find code that would parse the TypedURLsTime value data containing the FILETIME structures.  The first plugin that came to mind is also one of my favorites: the userassist2 plugin.

So to create the TypedURLsTime plugin, I started simply by copying the code that parses the contents of the UserAssist key and adding that to the parsing code for the TypedURLs key.  I then went in and removed some unnecessary portions, such as the part decoding the ROT-13 values existing in the UserAssist key.  I was left with code that would parse both of the subkeys I’m interested in; I just needed to correlate the results to make the output easier to read.  There is an abundance of material out there to help you get started with Perl.  Learn.perl.org is a nice way to learn the basics; you can also go out and buy one of the many books that exist on the subject (Learning Perl on Win32 Systems was recommended to me).  After reading through a bit of the basics (comparing values, looping, hashes, etc.), I put together the rest of the plugin to correlate the results and display them appropriately.  I added a couple of variables, but the vast majority of the code and the actual work was completed using functionality from two previously written plugins.  That’s it.  In very little time and roughly 10 lines of code, I’d put something together that can be used to extract and make use of the information from the TypedURLsTime subkey or simply be added as part of a plugin file for future use.
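To show just how small the core of it is, here is a minimal sketch of the correlation and FILETIME-conversion step only. The %urls and %times hashes below are placeholder data standing in for the TypedURLs and TypedURLsTime value name/data pairs a plugin would read from the hive (via Parse::Win32Registry in RegRipper), so this is not the typedurlstime plugin itself.

#!/usr/bin/perl
# Minimal sketch of the correlation step only -- not the typedurlstime plugin.
# %urls and %times are placeholders for the TypedURLs and TypedURLsTime
# value name/data pairs a plugin would read from the NTUSER.DAT hive.
use strict;
use warnings;

my %urls  = ( url1 => 'http://www.example.com/', url2 => 'http://test.local/' );
my %times = ( url1 => "\x00" x 8,                url2 => "\x00" x 8 );  # raw 8-byte FILETIME data

foreach my $name (sort keys %urls) {
    next unless exists $times{$name};
    my ($lo, $hi) = unpack('V V', $times{$name});                        # little-endian 32-bit halves
    my $epoch = int(($hi * (2 ** 32) + $lo) / 10_000_000 - 11_644_473_600);  # FILETIME -> Unix epoch
    $epoch = 0 if $epoch < 0;                                             # guard against empty data
    printf "%-5s  %-30s  %s UTC\n", $name, $urls{$name}, scalar gmtime($epoch);
}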

This obviously required very little coding, but that’s part of the point.  I was surprised at how easy it was and how little Perl knowledge is required to create a working RegRipper plugin.  That’s not to say that other plugins wouldn’t require much more coding and a deeper knowledge of Perl, but this is an example of how easy it can be. So the next time you think about writing a RegRipper plugin or realize that one would be helpful, think about what you’re trying to pull from the registry. What is the format of the data you’re trying to parse?  Are there existing plugins that perform some or all of the required functionality, except applied to a different key?  You might find that nearly everything you need is already out there and available; you just need to piece it together.

If you’re interested in viewing or testing the typedurlstime plugin, it’s included as part of the updated RegRipper plugins archive available from the RegRipper WordPress site (or more precisely, the RR supplemental plugins Google Code page).  I also went ahead and modified the reporting format to allow for outputting in TLN format, which is available with the typedurlstime_tln plugin (included in the download).  The output of the typedurlstime plugin could easily be modified to report in CSV format as well.