Hello, everyone, and welcome back to the course, Identifying Web Attacks Through Logs.
After a brief review of Web server logs and log importance in this video, we'll keep talking about logs, but I'll also give you some advice and demonstrate some common issues and mistakes that can occur during log analysis.
The video objectives are to understand the differences between availability and security log analysis, understand that some log fields can be crafted to hide something from the analysis, and show some mistakes that can occur when analyzing logs.
Let's start with the rise of security risk.
Recently, the presence of security staff has increased, and SOC (Security Operations Center) teams have started to play an important role in companies.
Nowadays, it's common to have a NOC (Network Operations Center) and a SOC team working together.
They have the same worries with different perspectives.
NOC and SOC teams both want to keep things functioning, but
the NOC usually worries about whether the systems are up, while the SOC worries about security incidents.
A security incident occurring doesn't mean that a resource is down, though a security incident can affect a resource even if it is working as expected.
Consider this Web server log.
What do you think it is?
Is it malicious, weird, or just okay?
Since we have a 200 status code, a NOC analyst could say that yes, it's OK.
The server is up and it's answering.
They could also check the CPU and memory and say everything's okay,
For a SOC analyst, though, this is suspicious behavior, so it's better to investigate.
During this course, you will learn that this request is an attack, specifically an SQL injection attack.
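To see why a 200 response can still be an attack, here is a minimal sketch in Python. The log line, IP, path, and payload below are invented for illustration, not taken from the course slide:

```python
import re
from urllib.parse import unquote

# Hypothetical access-log line for illustration only; the IP, path,
# and payload are invented, not taken from the course slide.
log_line = (
    '10.0.2.15 - - [10/Oct/2023:13:55:36 +0000] '
    '"GET /products.php?id=1%27%20OR%20%271%27=%271 HTTP/1.1" 200 4523 '
    '"-" "Mozilla/5.0"'
)

# The status code is 200, so the server answered normally -- from an
# availability point of view everything is fine. Decoding the URL tells
# a different story: a classic ' OR '1'='1 SQL injection payload.
decoded = unquote(log_line)
suspicious = re.search(r"'\s*or\s*'1'\s*=\s*'1", decoded, re.IGNORECASE)

print("suspicious request" if suspicious else "looks normal")
```

The NOC view (status 200, server up) and the SOC view (decoded payload) look at the very same line and reach different conclusions.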
In the previous slide, we had a log that was related to an attack.
The logs are generated by the Web server, but
how are the logs generated? Can we trust the log information?
Logs are generated from two actions: the client request and the Web server answer.
The Web server is known. It's under our control.
The client is someone of whom we probably only know the IP address and user ID, and usually we don't know if the IP is from an attacker or a real client.
The conclusion we reach is that
we can't trust the client too much.
Because of this, we have our doubts: are the logs 100% trustworthy?
The answer is No,
But let's see why,
The HTTP protocol consists of basic text commands.
It is easy to craft text packets.
Remember, we have a lot of user agent software, and some of that software can craft packets with HTTP requests.
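Because HTTP/1.1 is plain text, a crafted request can be sketched as a simple string; the host and header values below are made up for illustration:

```python
# HTTP/1.1 is plain text: a request is just lines separated by CRLF.
# Everything after the request line is supplied by the client, so any
# header value -- User-Agent, Referer, and so on -- can be whatever the
# sender chooses. The host and header values here are made up.
crafted_request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: 10.2.0.101\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)\r\n"
    "Referer: http://example.com/\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# An attacker's tool would simply send these bytes over a TCP socket;
# the Web server processes and logs them like any other request.
print(crafted_request.splitlines()[2])
```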
What do you think will happen with the crafted packet?
It's an HTTP request.
As soon as the crafted packet arrives at the Web server, it will be processed.
The Web server's job is to answer the request.
It doesn't care who sent the request.
For example, you can say that it's a different user agent or a different referer.
This happens commonly during attacks because the attacker wants to hide,
so it's better to use a normal user agent than a suspicious user agent.
Web browsers are considered normal user agents.
During this video, you'll see some examples of suspicious user agents, like curl.
During the course, you will see other examples of suspicious user agents, like Python libraries.
Interestingly, for TCP/IP communication, the Web client's IP address is always true.
Since the IP
starts the three-way handshake,
the IP address in the log will be the same IP that established the connection.
One possible problem is when the user connects through a VPN or a Web proxy.
This, too, will hide the Web client's real IP;
in this case, the VPN or Web proxy address replaces the client's IP.
The Web server doesn't care if it's a proxy, a VPN, or an end user, and will log the Web proxy or VPN IP address.
To get the real IP, you need the logs from the VPN or the Web proxy, and then you need to correlate them.
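That correlation step can be sketched like this; all log entries, timestamps, and IP addresses below are invented for illustration:

```python
# Minimal sketch of IP correlation with invented log data: the Web
# server only sees the proxy's IP, so we join on (proxy IP, timestamp)
# against the proxy's own log to recover the real client IP.
web_server_log = [
    {"time": "13:55:36", "src_ip": "203.0.113.50", "request": "GET /login"},
]

# Hypothetical proxy log mapping its own clients to outbound requests.
proxy_log = [
    {"time": "13:55:36", "client_ip": "198.51.100.7", "proxy_ip": "203.0.113.50"},
]

for entry in web_server_log:
    for pentry in proxy_log:
        if entry["src_ip"] == pentry["proxy_ip"] and entry["time"] == pentry["time"]:
            real_ip = pentry["client_ip"]
            print(f"{entry['request']} really came from {real_ip}")
```

In practice you would also match on ports and use a time window rather than an exact timestamp, but the idea is the same: without the proxy or VPN logs, the Web server log alone cannot give you the real client IP.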
Another thing that can't be crafted is the status code.
Can you guess why the status code can't be crafted by the user?
For example, could a request whose status code should be 404
show 200 in the log?
The status code is generated by the Web server,
though it depends on the client request.
You can craft a request to get a 404 status code, but to change the status code from 404 to 200, you would need to change the line inside the Web server log file,
and that cannot be done through an HTTP request.
Based on what we've seen, allow me to show you some quick examples.
Using the Linux machine, we will perform some requests using different user agents.
The IP address of our Web server is 10.2.0.101, and our first access will be a simple Telnet request.
Check the log of the Telnet request.
It didn't show our user agent, but we can still see the status code.
For the second and third requests, we used curl.
curl is a Linux command for requesting Web pages.
In the first curl request, it's easy to see that curl is the user agent.
However, Curl has many options.
One of the options is that you can change the user agent.
If we use the curl option to change the user agent, the Web server will log exactly what we put in the option.
Here, we used Mozilla Firefox.
There are many other options in curl that can be used to craft HTTP packets, and there is lots of other software with the same capabilities.
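The same user agent spoofing the curl demo showed can be done from Python's standard urllib library; the sketch below builds the request object without sending it, and the browser string is made up:

```python
import urllib.request

# Build (without sending) the request a Python client would make.
# Like curl's option to change the user agent, we can set any
# User-Agent header we want; the browser string below is made up,
# and 10.2.0.101 is the demo Web server from the course.
spoofed_req = urllib.request.Request(
    "http://10.2.0.101/",
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0"},
)

# The Web server would log exactly this value in the user agent field.
print(spoofed_req.get_header("User-agent"))
```

Without that header, urllib identifies itself with its own default user agent string, which is exactly the kind of suspicious value an analyst should notice.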
For the summary of this video,
check out this table.
It has the key log fields and whether it's possible for each respective field to be crafted in a request.
When a field is marked as crafted, it's possible to generate and manipulate the HTTP request to hide some information about the request, like the user agent.
Based on this,
the IP address cannot be changed.
As we said before, this is because of the three-way handshake.
Date and time depend on the Web server configuration.
The user ID can be crafted, and an attacker can use this to perform a brute force attack or to steal someone's session.
The method and requested file can be crafted.
But if the requested file doesn't exist, the Web server will always answer with a 404.
We will see in the next video
that 404 errors can help us identify some kind of attacks.
The HTTP status code is generated by the Web server, so it cannot be crafted. It is possible to craft the user agent.
Other client related fields can be crafted.
Crafting packets and HTTP requests is one way that Web servers get compromised.
Some crafted requests can actually trigger vulnerabilities.
Maybe you're thinking now: well, now that I know I can't trust Web server logs, why should I use them to identify an attack?
Even if you don't trust them, you need the logs to identify the attacks.
You always need to take care when doing analysis.
A really important thing is to know your application.
If your Web page isn't compatible with mobile phones,
you should not see user agents related to mobile devices.
Also, think as an end user, and ask whether a real end user would do the same thing you see in the log.
Try to guess whether that information could be fake,
and always get more logs to correlate.
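A "know your application" check like the mobile example can be sketched as a simple filter; the user agent strings and markers below are invented for illustration:

```python
# If the site does not serve mobile clients, mobile user agents in the
# access log deserve a closer look. All strings here are made up.
user_agents = [
    "Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) Safari/604.1",
    "curl/7.88.1",
]

# Substrings commonly found in mobile browser user agents.
MOBILE_MARKERS = ("iPhone", "Android", "Mobile")

flagged = [ua for ua in user_agents
           if any(marker in ua for marker in MOBILE_MARKERS)]
print(f"{len(flagged)} user agent(s) to investigate")
```

Remember that, as shown earlier, the user agent field itself can be crafted, so a check like this raises questions rather than giving proof; correlation with other logs is still needed.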
This will continue in the next video.