In this module we'll talk about the different ways to get data into Splunk. We'll discuss some examples of the many data sources Splunk can work with.
We'll talk about ways to get data in, go through creating an index, add some data by uploading a file, talk about source types, and create some field extractions. Then we'll finish off with a quiz.
Here's an image of examples of what Splunk can index, from splunk.com. It highlights different types of data, such as Windows event logs, events from cloud services, web logs, database queries, NetFlow data, clickstream data, power consumption information, and more.
Down at the bottom here are some common sources of data, and in the middle we see popular forms of data. For example, you might retrieve metrics on web logs or ingest data from tickets opened by intrusion detection systems. All of these types of information can be transformed into events that are searchable and usable in different ways.
There are many ways to get these types of data.
You can, for example, monitor files and directories, run scripts and collect the results, listen on network ports (including listening for syslog messages), collect events using WMI, run queries against connected databases, perform API calls, and use other methods.
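As a point of comparison with the upload we'll do here, continuous file monitoring is usually configured in an inputs.conf stanza on the machine collecting the data. A minimal sketch, where the path, index, and sourcetype names are illustrative assumptions, not values from this video:

```ini
# Monitor a directory of logs (path is an assumption for illustration)
[monitor:///var/log/exchange/]
index = exchange
sourcetype = exchange_logs
disabled = false
```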
We'll be uploading a file for a simple demonstration in this video.
A Splunk index is a data repository.
When raw data is turned into events, it gets put into an index.
These indexes are helpful for running efficient searches.
When you're able to search across one specific index, that can speed up your searches.
For example, you might have an index called Cisco ASA that just contains Cisco ASA logs. When you're looking for ASA logs, you wouldn't want to have to search across all your Windows event logs while trying to find something.
Indexes can also help you apply more control to your data.
For example, if you know you need to keep authentication logs for six months but only need to keep application logs for one month, you can apply those different retention policies by index.
Additionally, you can easily limit users to certain types of data by only allowing them to search across the specific indexes that apply to their jobs.
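Retention like that is typically set per index with frozenTimePeriodInSecs in indexes.conf. A hedged sketch with assumed index names (a real stanza would also need homePath, coldPath, and thawedPath):

```ini
# Keep authentication logs ~6 months, application logs ~1 month
[auth_logs]
frozenTimePeriodInSecs = 15552000   # ~180 days

[app_logs]
frozenTimePeriodInSecs = 2592000    # ~30 days
```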
Source types are used to identify the structure of events, and Splunk uses them to format the data while indexing.
You might have multiple source types in the same index. For example, you might collect all your WebSphere logs in an index called WebSphere, but WebSphere activity logs are formatted differently and are marked with a different source type than WebSphere system error logs.
You can also use source types to narrow down your searches in Splunk.
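For example, a search narrowed by both index and source type might look like this (the index and sourcetype names are assumptions for illustration):

```
index=websphere sourcetype=websphere_activity error
```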
Field extraction is pulling out fields from event data.
Splunk automatically recognizes fields for some source types, and you can also manually extract fields from your data.
In this example, we have a NetScreen firewall event, and in it we have lots of potential fields, including an action field.
In this case, the field name would be action and the field value would be deny. In another event, the field name would still be action, but the field value could be allow.
The field name does not have to be specified in the event. The "Jun" at the beginning could be extracted with a field name of month and a field value of Jun, for June.
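The action and month examples above can be sketched with a small regular expression outside of Splunk. This is purely illustrative: the sample event and the patterns are assumptions, not the exact expressions Splunk would use.

```python
import re

# Assumed, simplified NetScreen-style event; real firewall logs differ.
event = "Jun 01 10:22:31 fw1 NetScreen: action=deny src=10.0.0.5 dst=10.0.0.9"

# A field whose name appears in the event itself (action=deny):
action = re.search(r"action=(\w+)", event).group(1)

# A field whose name does NOT appear in the event: we assign the
# name "month" to the leading three-letter abbreviation ourselves.
month = re.search(r"^([A-Z][a-z]{2})", event).group(1)

print(action, month)  # deny Jun
```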
With that, we're going to jump into a basic example.
I have my Splunk server up, and on it I have a file filled with some example Exchange mail logs.
Normally we probably wouldn't want to upload a file to get this type of event, but we'll do it for this example.
Going to my Splunk Web interface, I'm going to click on Settings. I also have the option on my main page here.
I'm going to scroll down and click Upload.
Then I'm going to select the file we were just looking at.
Splunk automatically did a good job of breaking these events out and identifying the timestamps.
If I click on Source type here, I can try some of the pre-trained source types, like the ones categorized under Email, for example.
If I click procmail as a source type, it no longer breaks the data out into events and leaves them in a clump. That's obviously not the right source type for this, so I'm going to go back to the default here and make my own source type.
There are some other options under here that you can play with, but I'm just going to click Save As, call this exchange logs, and put it in the Email category.
Now, if I want to upload the same type of file in the future, I can pick this source type. I'm going to click Next.
From here, I have to decide what the host is. Since I uploaded the file, leaving this machine as the host makes sense, so I'm just going to keep that as is.
It's also currently set to go to the default index.
It's often a good idea to put things into the default index until you make sure everything is working, but I'm going to create a new index for this data just by clicking here.
For the index name, I'm going to call it exchange, leave the data type as Events, and leave all the rest of the settings at the defaults for now. And I'm going to save this.
from here. I can run a search across my new data.
It's already picked out the index I assigned. We've got all these events, and if you notice, here's the timeline of those events.
I can look at one of these, and it's broken out some of the fields, such as the source type. And along the side here, it's got things like hour, day, and minute broken out.
But I'd like to have more fields, so I'm going to click here on Extract New Fields.
Here I'll just select a sample event. Now I can decide between using a regular expression or delimiters to break out my fields.
Delimiters would be a good option if my fields were separated by something like a tab or a special character. In this case, I'm going to pick regular expression and hit Next.
You have the option of writing your own regex here, or of highlighting a few fields and letting Splunk do the work of trying to generate an extraction. I'm going to highlight a field, so let's look at this first IP. I'm going to name this field ip and hit Add Extraction.
Now, down at the bottom, we can see how this works with other events. We can see that on different events it's pulled out this ip field, even with different values.
I can click here to see if there's anything that doesn't match, which turns up nothing, so I think we're in pretty good shape with this one.
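Behind the scenes, the field extractor generates a regular expression from the highlighted sample and applies it to every event. As a hedged sketch in Python, with made-up events and a hand-written pattern rather than whatever Splunk actually generated here:

```python
import re

# Assumed sample lines; real Exchange mail logs look different.
events = [
    "2019-06-01 10:00:01 connect from relay[10.1.1.205]",
    "2019-06-01 10:00:05 connect from relay[10.1.1.205]",
    "2019-06-01 10:01:12 connect from relay[192.168.0.7]",
]

# The named group plays the role of the extracted field name "ip".
ip_re = re.compile(r"(?P<ip>\d{1,3}(?:\.\d{1,3}){3})")

ips = [ip_re.search(e).group("ip") for e in events]
print(ips)  # ['10.1.1.205', '10.1.1.205', '192.168.0.7']
```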
Now, for another field, I can do the same thing. I'm going to look at this "to" email address, call this field recipient, and add the extraction.
Now, if I look at the sample events... I'm going to remove that, since I got a notification that it failed. I'm going to try my selection again; I think I missed the first letter there, so let's see if this does better.
That worked, but if I scroll down here, there's a problem.
I called it recipient because it said "to" here, and that made sense for the sample event I had, but now I'm also getting it for "from" addresses. I'd probably want to work more on this field extraction. If I wanted to, I could click Show Regular Expression and write my own, but I'm just going to remove it to keep this video short.
We'll keep this ip field. I'll review it, and this all looks good; it points out the IPs on all these different events. Click Next. Everything looks good here, and Finish.
Now I can explore the fields I just created in search. If we were to rerun that search, there should be a new field there that we can use:
index=exchange
I need All Time, since there were events older than the last 24 hours.
So now, if I look on the left side here, there's a new field, our ip field, and it gives us a count. We've got 12 counts of this 205 IP.
We could also do other things, like search specifically for one value and see the events related to it. If I just wanted to see events for that IP, we could pull it out, or I could do things like run statistics by IP, and a lot of other types of searches that we'll get into in the next module.
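Hedged sketches of those two searches, using the exchange index and the ip field from this demo (the specific IP value is an assumption, since the full address isn't spelled out here):

```
index=exchange ip="10.1.1.205"
index=exchange | stats count by ip
```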
Now that we've successfully added data to our Splunk index, it's quiz time.
true or false? You should keep all of your data in the same index.
The answer is false. Breaking your data out into different indexes can help you run searches more efficiently and apply different rules to different types of data.
Next, we'll add even more data to Splunk by modifying a config file on the machine where we installed the Universal Forwarder.