Tips and Common Issues Part 1


Time: 2 hours 5 minutes
Difficulty: Beginner
CEU/CPE: 3
Video Transcription
00:00
Hello, everyone, and welcome back to the course, Identifying Web Attacks Through Logs.
00:06
After a brief review of web server logs and their importance, in this video we'll keep talking about logs, but I'll also give you some advice and demonstrate some common issues and mistakes that can occur during log analysis.
00:19
The video objectives are to understand the differences between availability and security log analysis, understand that some log fields can be crafted to hide something from the analysis, and show some mistakes that can occur when analyzing logs.
00:33
Let's start with the rise of security risks.
00:36
Recently, the presence of security staff has increased, and SOC teams have started to play an important role in companies.
00:42
Nowadays, it's common to have a NOC and a SOC team working together.
00:46
They have the same worries, but from different perspectives.
00:48
NOC and SOC teams both want to keep things functioning, but
00:51
the NOC usually worries about whether the systems are up, and the SOC worries about security incidents.
00:58
A security incident occurring doesn't mean that a resource is down, though; a security incident can affect a resource even if it is working as expected.
01:07
For example,
01:07
consider this web server log.
01:11
What do you think it is?
01:12
Is it malicious, weird, or just okay?
01:18
Since we have a 200 status code, a NOC analyst could say that yes, it's OK.
01:23
The server is up and it's answering.
01:25
They could also check the CPU and memory and say everything's okay.
01:29
For a SOC analyst, though, this is suspicious behavior, so it's better to investigate.
01:34
During this course, you will learn that this request is an attack, specifically an SQL injection attack.
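As a rough illustration only (this is not the exact log line from the slide, just one invented here in Apache combined log format), such a request could look like this:

    203.0.113.50 - - [12/Oct/2020:14:22:01 +0000] "GET /products.php?id=1'+OR+'1'='1 HTTP/1.1" 200 5230 "-" "Mozilla/5.0"

The status code is 200 and the server is perfectly healthy, yet the query string carries an SQL injection payload.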
01:44
On the previous slide, we had a log that was related to an attack.
01:48
The logs are generated by the Web server, but
01:49
how are the logs generated? Can I trust all the log information?
01:55
Logs are generated from two actions: the client request and the web server answer.
02:01
The web server is known; it's under our control.
02:05
The client is someone of whom we probably only know the IP address and user ID, and usually we don't know if the IP is from an attacker or a real client.
02:14
The conclusion we reach is that
02:15
we can't trust the client too much.
02:19
Because of this, we have our doubts: are the logs 100% trustworthy?
02:23
The answer is no,
02:24
but let's see why.
02:28
The HTTP protocol consists of basic text commands.
02:32
It is easy to craft text packets.
02:36
Remember, we have a lot of user agent software, and some of that software can craft packets with HTTP requests.
02:43
What do you think will happen with the crafted packet?
02:46
Remember,
02:46
it's an http request.
02:50
As soon as the crafted packet arrives at the Web server, it will be processed
02:53
and answered.
02:54
The web server's job is to answer the request.
02:58
It doesn't care who sent the request.
03:00
For example, you can claim a different user agent or a different referer.
03:05
This happens commonly during attacks because the attacker wants to hide,
03:08
so it's better to use a normal user agent than a suspicious user agent.
03:13
Web browsers are considered normal user agents.
03:16
During this video, you'll see some examples of suspicious user agents, like curl.
03:22
During the course, you will see other examples of suspicious user agents, like Python libraries.
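As a quick sketch of how you might hunt for those in practice (assuming an Apache-style access log; the path /var/log/apache2/access.log is just an example and may differ on your server):

    # List requests whose user-agent string mentions curl or common Python HTTP libraries,
    # which often indicates scripted access rather than a real browser.
    grep -iE 'curl|python-requests|python-urllib' /var/log/apache2/access.log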
03:30
Interestingly, for TCP/IP communication, the web client's IP address is always genuine.
03:36
Since the HTTP transfer
03:38
starts after the three-way handshake,
03:40
the IP address in the log will be the same IP that established the connection.
03:46
One possible problem is when the user connects to a VPN or a Web proxy.
03:51
This will hide the web client's real IP;
03:54
in this case, the VPN or web proxy address replaces the client's IP.
03:59
Again,
04:00
the web server doesn't care whether it's a proxy, a VPN, or an end user, and will log the web proxy or VPN IP address.
04:06
To get the real IP, you need the logs from the VPN or the web proxy, and then you need to correlate them.
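A minimal sketch of that correlation, assuming you have the proxy or VPN log available (the file names and the example IP below are hypothetical):

    # Step 1: find the suspicious requests; the client IP recorded here is the proxy/VPN address.
    grep '203.0.113.10' /var/log/apache2/access.log

    # Step 2: search the proxy/VPN log (its path and format depend on the product)
    # around the same timestamp to find the real client behind the request.
    grep '14:22:01' proxy_access.log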
04:14
Another thing that can't be crafted is the status code.
04:19
Can you guess why the status code can't be crafted by the user?
04:24
For example, tricking the server so that the status code should be 404,
04:28
but it shows 200 in the log.
04:31
The status code is generated by the Web server,
04:34
though it depends on the client request.
04:36
You can craft a request to get a 404 status code, but to change that status code from 404 to 200, you would need to change the entry inside the web server's log file,
04:47
and that cannot be done through the HTTP request.
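A quick way to see this for yourself (a sketch, assuming the demo web server used later in this video at 10.2.0.101):

    # Request a page that does not exist; the server, not the client, decides the status code.
    curl -s -o /dev/null -w "%{http_code}\n" http://10.2.0.101/no-such-page.html
    # Prints 404 -- and the access log records 404 as well; nothing the client sends
    # can make that log entry say 200 instead.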
04:53
Based on what we've seen, allow me to show you some quick examples.
04:57
Using the Linux machine, we will perform some requests using different user agents.
05:01
The IP address of our web server is 10.2.0.101, and our first access will be a simple telnet.
05:11
Check the log of the Telnet request.
05:14
It didn't show our user agent, but we can see the status code
05:17
400
05:18
For the second and third requests, we used curl.
05:23
curl is a Linux command for requesting web pages.
05:28
In the first curl request, it's easy to see that curl is the user agent.
05:31
However, curl has many options.
05:34
One of the options is that you can change the user agent.
05:39
If we use the curl option to change the user agent, the Web server will log exactly what we put in the option.
05:46
Here, we used Mozilla Firefox.
05:47
There are many other options in curl that can be used to craft HTTP packets, and lots of other software with the same capabilities.
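The demo commands look roughly like this (reconstructed from the description above, not copied verbatim from the video):

    # 1. Raw request over telnet -- no User-Agent header is sent, and a malformed
    #    request gets a 400 response in the log.
    telnet 10.2.0.101 80

    # 2. Plain curl request -- the log shows something like "curl/7.x" as the user agent.
    curl http://10.2.0.101/

    # 3. curl with the -A option -- the log records exactly the user agent we supply.
    curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0" http://10.2.0.101/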
05:56
For the summary of this video,
05:58
check out this table.
06:00
It lists the key log fields and whether it's possible for each field to be crafted in a request.
06:04
When a field is marked as crafted, it's possible to generate and manipulate the HTTP request to hide some information about the request, like the user agent.
06:14
Based on this,
06:15
the IP address cannot be changed.
06:18
As we said before, this is because of the three-way handshake.
06:23
Date and time depend on the web server configuration.
06:26
The user ID can be crafted, and an attacker can use this to perform a brute-force attack or to steal someone's session.
06:32
The method and requested file can be crafted,
06:34
but if the requested file doesn't exist, the web server will always answer with a 404.
06:43
We will see in the next video
06:45
that 404 errors can help us identify some kinds of attacks.
06:48
The HTTP status code is generated by the web server, so it cannot be crafted, and it is possible to craft the user agent.
06:55
Other client-related fields can be crafted.
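As a preview of the kind of check the next video builds on, counting 404 responses per client IP could look like this (a sketch, assuming the Apache combined log format, where field 9 is the status code):

    # Which client IPs generate the most 404 responses?
    awk '$9 == 404 {print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head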
06:58
Crafting packets and HTTP requests is one of the ways web servers get compromised.
07:03
Some crafted requests can actually trigger vulnerabilities.
07:09
Maybe you're thinking: now that I know I can't trust web server logs, why should I use them to identify an attack?
07:16
Even if you don't trust them, you need the logs to identify the attacks.
07:21
You always need to take care when doing analysis.
07:25
A really important thing is to know your application.
07:28
For example,
07:29
if your web page isn't compatible with mobile phones,
07:32
you should not see user agents related to mobile devices.
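For instance, a quick (and again hypothetical) check against the same Apache-style log:

    # If the site has no mobile version, mobile user agents in the log deserve a closer look.
    grep -iE 'android|iphone|ipad|mobile' /var/log/apache2/access.log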
07:35
Also, think as an end user and guess whether a real end user would do the same thing you see in the log,
07:43
try to guess if that information could be fake,
07:45
and always get more logs to correlate.
07:49
This will continue in the next video.