Lab 1 - Wireshark
Overview
In this lab, you will use Wireshark to search through a traffic capture and draw conclusions about the captured packets. A pcap file attached in D2L contains the traffic capture for this lab. You can verify that you have the correct file by going to Statistics > Capture File Properties and confirming that 39,924 packets were captured.
Tasking
Answer each question below with both a written answer AND a screenshot showing your search and how you got your results.
Question 1: What is the IP address of the host that is recording the capture? I will call this the "client" for the rest of the lab (except question 5).
How did you reach this conclusion?
Question 2: What is the operating system of the client in the capture? (Do not just give an NT version)
Hint: Some protocols include extra information identifying a device; find one of them.
Question 3: What is the username/password to the FTP server?
Question 4: Take a screenshot of one of the JFIF files that were transferred over FTP (this was not shown in class; you will need to research this).
Take a screenshot of the actual image, not just the packets. You will need to extract the image itself from the capture.
Hint: FTP uses two different ports for control and data, and these are separate filters.
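If you want a command-line starting point, here is a minimal sketch using tshark, Wireshark's command-line counterpart; the filename lab1.pcap is a placeholder for whatever you named the D2L capture:

    # FTP file contents travel over the ftp-data channel, not the port-21 control channel
    tshark -r lab1.pcap -Y "ftp-data"
    # Note the tcp.stream number of the transfer you care about, then in the Wireshark
    # GUI use Follow > TCP Stream on that stream, set "Show data as" to Raw, and
    # Save As to write the bytes out as the image file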
Question 5: There is a lot of traffic related to a method of remote access (other than FTP and SMB - think remote interactive).
What IP address was the SERVER (being accessed) and what IP address was the client (doing the accessing)?
How can you tell which is the client and which is the server? Hint: something about the protocol is registered.
Question 6: There's a file transferred over HTTP called "success.txt".
Where is it from (what URL)?
What are its contents?
What is its purpose (research)?
Question 7: The user manually visited three unsecured websites. What were they?
Only include sites the user explicitly went to, not automated requests from the browser or sites loaded dynamically because of external resources.
Do not use DNS to get your final answer; I am specifically interested in websites, not general name resolution.
Question 8: What "hardware" was the machine involved in this capture running on?
I am not looking for the operating system; was this system Dell, HP, Framework, Apple, Lenovo, virtualized, etc.?
Hint: Nearly any frame involving the client can get you this information.
Question 9: There is a DNS request for "doh.test" in the capture.
What was the response?
What is the purpose of that request? Do some digging on this; you may have to make a slight leap in logic. Think about what the underlying technology is and whether we are currently using it. The specification may be useful.
Question 10: There are a few HTTP 301 responses.
What does that status code mean?
How are the 301 responses being used in this circumstance? What seems to be the end goal of them?
Hint: follow the trail of locations until no more remain.
Lab 2 - Security Onion
Task 01
Many in the course may have an interest in teaching in some capacity, so it seems like a good time for you to make your own lab! You will also need to create the answer key for this lab.
Create or find a packet capture with some amount of "interesting" traffic that you would want a learner to analyze
You may utilize the systems in the IA Lab to do this if you are creating your own
The file may be too large to submit to the dropbox folder, so just explain the interesting elements of your capture
Alternatively, utilize an existing packet capture that is readily available, and document this in your lab
Import this packet capture into the Security Onion instance you have been provided
Do this with scp and so-import-pcap, not through the web interface (a short example sketch follows this list)
Create a lab document with at least five questions that relate to the packet capture in some way; include the answers to all questions in an answer key
These should be answerable by utilizing tools present in Security Onion
I would recommend basing your questions on data present in Kibana in particular, as we will be spending more time in that tool than in the base SOC interface later in this course
Do not use Wireshark as a means to answer any questions; it must be done with something that is part of Security Onion
Consider questions like what was asked in our Wireshark lab
Example: "What malware family seems to be present in the capture? Hint: you would be alerted to find this in your network traffic."
Yes, you can have fun with puns :)
No, you do not need to be looking at malware; it can be a question that is more mundane, such as benign HTTP URIs
Turn in the lab document (with answer key) and packet capture if possible
The answer key should show how you answered those questions utilizing Security Onion and its related tools
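For the import step referenced above, here is a minimal sketch; the username, hostname, and filename are placeholders for your own environment:

    # Copy the capture to your Security Onion instance over SSH
    scp mylab.pcap analyst@your-securityonion-host:/home/analyst/
    # On the Security Onion instance, run the importer
    sudo so-import-pcap /home/analyst/mylab.pcap
    # so-import-pcap keeps the original packet timestamps, so set the time range in
    # the web interface to the range it reports once the import finishes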
Task 02 (10 points)
In this field, we often want to analyze malicious activity in some capacity. There is a really useful site, https://malware-traffic-analysis.net, that has many samples. Be careful, as some malware samples may still be active if you happen to run them!
Select at least one sample from this site and get its packet capture file
Provide a link to the source of the malware in addition to your analysis
Do not include the malicious packet capture itself in your submission
Import it into your Security Onion instance
Do this with scp and so-import-pcap, not through the web interface
Perform analysis on it and report your findings
You may use Wireshark to help point you towards items of interest in Security Onion, but your analysis should primarily be in Security Onion
Some items of interest for this investigation:
Domains (does it use a randomly generated domain, or does it look "normal"?)
Protocols used (HTTP, SSH, SMTP, etc.)
Ongoing communications/C2
Data exfiltration
Indicators of Compromise that could be used to identify the traffic as malicious
Any other pertinent information
For a ballpark on how much information to document, aim for a few relevant screenshots and roughly 400 words
You can go above this if you would like, but please no 10+ page submissions for this task
Bonus (10 points max)
Arkime was introduced in lectures, but it is not required by this lab. For extra credit, incorporate it into the above tasks. It can be installed on the Ubuntu host, and ens192 can be set as your "capture" interface. That interface has been set to off/disconnected (unless it decided to activate itself again, because Ubuntu...), so you will need to turn it on. A few resources that may be helpful for installing and configuring the tool can be found below.
The instructions direct from the tool creators
Demonstrates installing Elasticsearch separately from the Arkime installer
Shows installation on REMnux, but the steps generally apply
Utilizes an existing Elasticsearch host; you would also need to install Elasticsearch somewhere
To get the packet captures fully rendering in Arkime, utilize tcpreplay with the arkimecapture service running.
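A rough sketch of that replay step, assuming the interface and service names above and a placeholder capture filename:

    # Bring up the capture interface, which starts disconnected
    sudo ip link set ens192 up
    # Make sure the Arkime capture service is listening on it
    sudo systemctl start arkimecapture
    # Replay the capture onto the interface so Arkime sees it as live traffic
    sudo tcpreplay -i ens192 mylab.pcap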
Lab 3 - Kibana Querying and Dashboards
Overview
Time for even more NSM! You can use the same environment as last week. We're going to work with querying for useful data and creating dashboards for analysts to monitor during an ongoing threat. The lab environment is available at https://ialab.dsu.edu in the learn organization. It is the same environment as the Security Onion lab.
Do be careful in this lab as you may be handling malware.
Tasking
Provide a screenshot and an explanation/description when needed for each step below.
Identify a malware sample, set of malware samples (maybe even of the same family), or create some custom PCAP(s) to do analysis on
It cannot be one that I demonstrated in a lecture
Share either links to the samples or explain how you created your own data
I do not require the source PCAP to be uploaded to D2L for this lab, just a link/explanation
Import these captures into Security Onion (via scp and so-import-pcap)
Take a screenshot that shows the import worked
Pretend that you are monitoring an outbreak of whatever is causing this network data
Create a dashboard that contains information that would be relevant to an analyst monitoring the outbreak
Create at least six distinct visualizations (different modules that go in your dashboard)
These can use the same source of data, e.g., one represented as a graph over time and another as a table of the actual data
Don't do multiple proportional visualizations (pie, donut) of the exact same data
At least two of these visualizations should also use a query of some sort
The query should be more complicated than just a tag
For example, "tags: dns" on its own does not count
The visualizations must use at least two different data sources/tags
Alerts, HTTP, DNS, connections, etc.
Take a screenshot of the edit page of two of the more complex visualizations
It should have a filter/query at minimum
Take a screenshot of the final dashboard
You can use multiple screenshots if needed so that all visualizations are shown
Lab 4 - Suricata Rules
Overview
Let's go beyond utilizing the tools and alerts provided to us: we are going to write our own rules to detect traffic of interest in our networks. The lab environment (in a vApp named Suricata) is available at https://ialab.dsu.edu in the learn organization. The packet capture is available on D2L in the dropbox folder. Do be careful in this lab, as you may be handling malware. All rule names should include your name or initials.
Tasking
Provide a screenshot of each rule (all 7) and a screenshot of running it against the packet capture showing at least one match. Provide extra information as relevant or required. For each frame below, I also give some extra information that may help you contextualize the message and why it is weird, or nudge you toward something you may want to specifically detect.
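Before the individual frames, here is a minimal sketch of the overall workflow, assuming Suricata 5+ rule syntax; the rule shown is a generic placeholder (not the answer to any frame below), "JD" stands in for your initials, and lab.pcap stands in for the D2L capture:

    # Write your rules to a standalone rules file
    cat > local.rules <<'EOF'
    alert http any any -> any any (msg:"JD - example old user-agent"; http.user_agent; content:"Mozilla/4.0"; sid:1000001; rev:1;)
    EOF
    # Run Suricata against the capture with only that rule file loaded
    suricata -r lab.pcap -S local.rules -l ./logs
    # Matches land in ./logs/fast.log (and ./logs/eve.json)
    grep "JD -" ./logs/fast.log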
Frame 91
The file extension is weird, and the user-agent seems old
Frame 93 may give more context to what is happening here
Frame 495
Make a rule that detects Windows updates!
Write it so that it matches all Windows updates
Involve as much of the URI as you can
We'll call this an "informational" category as it is not malicious
You should get multiple matches with this rule!
Frame 644
Match on both a header that seems weird (one date looks interesting and historical: epoch time) and on the contents of the message itself
Both of these components must be in your rule for full credit!
Frame 799
Content type says it is text/html, but it sure doesn't look like it
Maybe you can search for data you would expect to be present in HTML?
Look for the negation of normal text being present
Use at least 4 HTML tags
One example (that you cannot use) would be "<table>"
You may get multiple matches
Are they all malicious or suspicious?
Track them down using filters in Wireshark to do your analysis
Frame 2834
Over SMB, seems there's some accessing of a share on a domain controller
We are going to consider that file path to be indicative of malware
Use SMB keywords and escape characters, and pay attention to which IPs this communication is between
Start with a weak match and use documentation!
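A generic sketch of the SMB keyword usage, assuming Suricata 5+ (the share name here is a placeholder, not the answer):

    # smb.share matches the share name from the tree connect request; literal
    # backslashes in content matches must be escaped, e.g. as the hex byte |5c|
    alert smb any any -> any any (msg:"JD - smb share of interest"; smb.share; content:"C$"; sid:1000005; rev:1;)
    # Tighten "any any -> any any" to the specific client and DC IPs to cut false positives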
Frame 6971
Note the mismatch between the content type and the actual content
Frame 6969 seems to be the initial request and may help to see more of what is happening here
Frame 9858
To make it more interesting, do not use HTTP keywords for this one; match TCP sessions and do not use http.uri or the like
Match on at least 2 different parts of the message for full credit
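A generic shape for that kind of rule (both content strings are placeholders):

    # Raw TCP matching: no http.* buffers, just ordered content matches in the stream
    alert tcp any any -> any any (msg:"JD - raw tcp session match"; flow:established,to_server; content:"FIRST-PART"; content:"SECOND-PART"; distance:0; sid:1000007; rev:1;)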
Lab 5 - Advanced Suricata Rules
Overview
How about we create even more advanced rules? We can utilize extra functionality to reduce false positives, though it is at the expense of processing and memory on our Suricata host.
Regardless, this is a good exercise for writing rules, so we're doing it! The lab environment (in a vApp named Suricata) is available at https://ialab.dsu.edu in the learn organization. The packet capture is the same as in Lab 04 (the file is present on D2L). Do be careful in this lab, as you may be handling malware. All rule names should include your name or initials.
Tasking
Provide a screenshot of each rule set (all rules involved in the resulting match) and a screenshot of running it against the packet capture showing at least one match. Provide extra information as relevant or required. For each set of frames below, I also give some extra information that may help you contextualize the message and why it is weird, or nudge you toward something you may want to specifically detect.
Frames 91, 93
The file extension is weird, and the user-agent seems old
What type of file appears to be in frame 93?
Discuss the type of file (in writing) and if it is what you would expect given what file the client initially requested
Tie frames 91 and 93 together with flowbits or flowints
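Here is a generic sketch of the flowbits pattern, assuming Suricata 5+ (the content matches are placeholders, not the answer for these frames): one rule silently sets a flag on the flow, and a second rule alerts only if that flag is set.

    # Stage 1: note the suspicious request but stay quiet; just flag the flow
    alert http any any -> any any (msg:"JD - stage 1 request"; flow:established,to_server; http.uri; content:".jpg"; flowbits:set,jd.stage1; flowbits:noalert; sid:1000010; rev:1;)
    # Stage 2: alert on the response only when the flag was set earlier in the same flow
    alert http any any -> any any (msg:"JD - stage 2 response"; flow:established,to_client; flowbits:isset,jd.stage1; http.content_type; content:"text/html"; sid:1000011; rev:1;)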
Frames 495, 15511, 34118 (separate matches, do not tie them together in one alert)
Start with your rule from the last lab
Update your rule so that it uses at least one keyword such as depth, offset, startswith, endswith, etc.
Below are some of the guidelines from the last lab
Make a rule that detects windows updates!
Write it so that it matches all Windows updates (the frames listed above)
Involve as much of the URI as you can
You should get multiple matches with this rule!
Frames 642, 644
Tie these together into a single alert
You can use a weak match for the first message (headers, and does the endpoint of the URI line up with what is actually sent?)
You may also want to involve frames 639 and 641
Frames 797, 799
Tie these together with flowbits or flowints
Utilize at least one keyword such as depth, offset, startswith, endswith, etc.
The first message looks a bit weird
There's a field in use whose value looks like it could be Base64
You do not need to decode the Base64, but the presence of it and the URI seem off
Frame 799's content type says it is text/html, but it sure doesn't look like it
Maybe you can search for data you would expect to be present in HTML?
Look for the negation of normal text being present
Use at least 4 HTML tags
One example (that you cannot use) would be "<table>"
Frames 6969, 6971
Note the mismatch between the content type/requested content and the actual file
Looks somewhat similar to a discussion from class
Again, tie the two frames together into one alert utilizing flowbits or flowints