Welcome back to Module 8. This is Lesson 8.2, Execution Plans with EXPLAIN. MySQL has an optimizer, and MySQL automatically attempts to optimize your queries. We can use EXPLAIN to view MySQL's execution plan.
MySQL provides options to override its decisions. So if you use EXPLAIN to view the plan and you notice that it's using an incorrect index, you can add a hint to your query to use a specific index and fix that problem.
The EXPLAIN command also displays the running cost of the query: you can use FORMAT=JSON to reveal what the running cost is, and then you can see whether your numbers are way too high, or whether the query is really fast with a really low cost.
You can use EXPLAIN to identify problems in slow queries, such as full table scans. That brings us to the next bullet point, which is the type column. A lot of the columns in the EXPLAIN output are useful, but the type column will tell you when you have a full table scan, and you want to avoid full table scans whenever possible.
This is indicated by the value ALL in the type column.
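As a minimal sketch (the table and column names here are assumptions for illustration), you simply prepend EXPLAIN to the query you want to analyze:

```sql
-- Assumed example: a users table with no index on the email column
EXPLAIN SELECT * FROM users WHERE email = 'malone@example.com';
-- If the type column shows ALL, MySQL performed a full table scan
```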
SQL professionals are expected to have strong execution-plan knowledge, as this displays an intimate knowledge of the SQL language. So if you can identify and fix slow queries by spotting poorly structured queries, that's really going to show that you know a lot about SQL.
So learning how to use EXPLAIN to fix queries will be very useful in your career if you plan to use SQL professionally.
Now, let's go ahead. I have an additional slide with this slide, so let's take a look at that slide as well.
So this is the next slide: Execution Plans, Continued. This is where I briefly describe the columns of the EXPLAIN command. Now, this isn't the JSON format output, so there is some additional data in the JSON format output that is not mentioned on this slide. The first column in the regular EXPLAIN output is the select identifier, id.
This isn't really a focal point for query optimization.
It's just assigning an ID to each layer of the EXPLAIN output.
select_type: the type of query. This includes values such as SIMPLE, PRIMARY, and UNION, and there are some other values as well.
SIMPLE is probably the best one that you can get in that column.
table: the table name or the alias used for a table.
partitions: the partitions used for the query.
type: the access type. This is an important column; it can indicate full table scans, and it has other possible values too.
possible_keys: the indexes eligible for query execution.
key: the index that was actually used.
key_len: the length of the used index. Shorter lengths are better, with one exception: zero, which would basically mean no index.
ref: the columns compared to the used index.
rows: the row-comparison estimate. It's the number of rows MySQL believes it will have to examine to resolve the query.
filtered: the percentage of rows filtered by a table condition.
extra: additional useful information. For example, you can use the extra column to avoid additional cost where possible, such as "Using temporary" or "Using filesort". Those phrases will appear in the extra column, and "Using filesort", for example, is considered a pretty bad thing to have in a query,
as it's not efficient and should be removed if possible.
And the last bullet point is: use FORMAT=JSON to get additional information, such as query cost. Now let's jump into the development environment and see what this looks like.
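As a rough sketch of what that looks like (the query and cost value are assumed for illustration), the JSON output nests the cost under a cost_info object:

```sql
EXPLAIN FORMAT=JSON
SELECT * FROM users WHERE email = 'malone@example.com';
-- The JSON result includes something like:
-- "cost_info": { "query_cost": "5.75" }
```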
Here we are in the development environment. I have two queries in the script pane. The only current difference between these two queries is that one is much more efficient than the other. If we run the first query,
we'll see that we get one record back, with the username Malone.
If we run the second query,
we'll see that we get the same record back.
However, one is much more efficient than the other, and we can get an idea of what the outcome is going to be by taking a quick look at the table architecture.
So if we go to the users table, go to the properties, and look at the constraints, we'll see that the primary key, on ID and company ID, is mentioned. No mention of email.
If we go to the indexes and look at all the indexes that are listed, we'll see no mention of email again.
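If you prefer SQL over the GUI for this check, a statement like the following (table name taken from the demo) lists the defined indexes:

```sql
-- Show every index on the users table;
-- no row mentioning the email column means email is not indexed
SHOW INDEX FROM users;
```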
Now, if we run the top query with the EXPLAIN command added on,
we'll see that the type is ALL, which is bad. That means that a full table scan was conducted, which means the query had to look at all the rows to make its decision. We can see the rows that it looked at: 55.
And if we go to the users table
and we go to the data, we'll see that it has 55 records. So this query is very inefficient: it had to look at every record in the table to determine whether or not it needed to bring back the target record.
Now, if we run the second query
with the EXPLAIN command added on, we'll see that the type is const, which is much better than a full table scan. We'll see that the rows it looked at was one, so it was able to go pretty precisely to the record it needed and bring it back, which is much more efficient than a full table scan.
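Since only the results are shown on screen, here is a hedged sketch of what the two queries might look like (the WHERE values are assumptions):

```sql
-- Inefficient: filters on a non-indexed column
EXPLAIN SELECT * FROM users WHERE email = 'malone@example.com';
-- type: ALL, rows: 55  (full table scan)

-- Efficient: filters on the primary key columns
EXPLAIN SELECT * FROM users WHERE id = 1 AND company_id = 1;
-- type: const, rows: 1
```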
Now, there are only 55 records in this table currently,
so even when you have an inefficient query, it still appears to be very quick.
However, that changes drastically as you deal with tables that have more and more records, such as transaction tables, which could have millions, maybe even billions, of records in them. You really want to make sure that you're using your indexes and your primary keys correctly, and the EXPLAIN command
lets you quickly analyze a query to identify potential problems.
In this example, we were looking for the type ALL, which indicated a full table scan.
Now, is there any way we could fix this if we wanted to use the email in a more efficient way?
Yes, we could. We could create an index on the email column, which would make this query more efficient. So let's go ahead and do that. Unfortunately, we cannot make it a unique index because our test data has duplicate emails in it,
so we can only make it a regular index. In a lot of database applications, you will see the email column made into either a primary key, a composite key, or a unique index.
So, create new index: company ID, because that's part of the composite key, and email. We're not going to make it unique. I'm just going to click OK, and we're going to save it.
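The GUI steps above correspond roughly to this DDL statement (the index name is an assumption; the columns come from the demo):

```sql
-- Non-unique composite index on company ID and email
CREATE INDEX idx_users_company_email ON users (company_id, email);
```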
Now, let's go take a look at the EXPLAIN command for this top query again.
We can see the type is not ALL; it's ref,
and it was able to get to that record pretty fast.
So we were able to look at two queries and see how one was inefficient, and we could either use the more efficient variant,
or we could change the table architecture to support the query we were writing much better. For example, the email was not an index, so we increased the efficiency by actually changing the table architecture to make it an index. All right, let's take a look at the FORMAT=JSON expression added to the EXPLAIN command.
It does provide some additional useful information to look at, such as query cost,
And in this example, we're going to take a look at query cost before optimization and after optimization. I have removed the index from the users table again, and we're going to look at the query costs before and after adding the index, and do a couple of calculations to really show the difference. And throughout the rest of the module,
I won't be using the FORMAT=JSON expression,
but feel free to look at the query costs before and after an implementation of an optimized solution. Now let's go ahead and run this in DBeaver. If you're using DBeaver, you'll be able to enter the same commands that I do to see useful information.
If you're using some other IDE, you will have to figure out how you would do it in that IDE.
So let me go ahead and run this
You can see that I've gotten a JSON format back. So if I click into this box here and then hit Shift+Enter, I get another box, and I can see the query cost here is 5.75, which doesn't seem too bad. It is actually kind of small because there are so few rows in the table. The rows in the table will
affect the query cost, and you can see that some of that is coming from the read cost, which is 5.65, which makes sense. So let me go ahead and close that. Let's go ahead and apply an index, the same index that we did before.
Right-click, Create New Index: company ID
and email, then hit OK. I'm going to save that and persist it.
Let's go look at the cost now.
So if I run this again,
it ran again; when I click back down here in this box and hit Shift+Enter,
we can now see the query cost is 0.35, which is significantly lower. So what kind of impact does that have? Let's take a look down here.
We have done some calculations. So the slow cost was 5.75; the improved cost was 0.35. Now, the percentage of improvement, if we run through this little formula (not trying to get too math-heavy here, but a little bit of math does help to really calculate and show those differences), comes out to an
improvement of 93.91%.
So what does this mean? Well, let's relate it to time. Let's say process A takes 600 seconds to execute; this is also 10 minutes. And now let's say process B has a 93.91%
improvement and is more efficient.
The same process will take about 37 seconds to execute. Now think about that: you went from 10 minutes
to 37 seconds. That is a huge time saver.
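The arithmetic above can be reproduced directly in SQL, using the costs from the demo:

```sql
-- Percentage improvement: (old cost - new cost) / old cost * 100
SELECT (5.75 - 0.35) / 5.75 * 100 AS improvement_pct;  -- roughly 93.91

-- A 600-second (10-minute) process scaled by the same ratio
SELECT 600 * (0.35 / 5.75) AS optimized_seconds;       -- roughly 36.5, about 37 seconds
```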
Now, maybe you don't mind waiting an extra nine minutes for one item. However, consider that all your processes have been built with the same level of inefficiency.
Now you might be waiting hours, maybe even days, when you should be waiting minutes. And that's really why a lot of professionals get interested in making their processes more efficient.
So keep that in mind as we work through this module. Now, with that said, let's go ahead and take a look at the summary, and that brings us to the 8.2 summary. So what did we do in this lesson? Well, we looked at the MySQL optimizer. And again, MySQL creates an execution plan and optimizes the queries that you write.
You can view this plan
via the EXPLAIN command. You can change the plan or identify problem areas by examining the EXPLAIN command's output.
Full table scans are a common problem found in queries, and they're usually caused by a failure to correctly use an available index, or by filters that are too open, for example: WHERE column LIKE '%e%'.
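As a sketch (the column and values are assumed), the difference between a too-open filter and one that can use an index looks like this:

```sql
-- A wildcard on both sides prevents index use and often forces a full table scan
SELECT * FROM users WHERE email LIKE '%e%';

-- A prefix match, by contrast, can still use an index on email
SELECT * FROM users WHERE email LIKE 'malone%';
```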
So that completes this lesson, and I hope to see you in the next one. Thank you.