Hello, everyone. Welcome to this lesson of the Microsoft Azure Architect Design course.
Here are the objectives for this video:
We'll start by covering the concept of partitioning as it relates to Azure Cosmos DB,
and then proceed to cover the data migration scenarios that Cosmos DB supports for the different APIs.
Let's get into this.
Let's talk about partitioning as it relates to Azure Cosmos DB.
So Azure Cosmos DB uses a concept called dynamic partitioning. Whenever we set up Azure Cosmos DB and create our containers, one of the important things that we need to select is something called a partition key, a partition key value.
To put it very simply,
items with the same partition key are kept together on the same physical node.
Azure Cosmos DB does not store data on a single node; to avoid hot partitions, the data within our containers, or within our collections, will be spread across
different physical nodes within the Azure data centers.
And the way in which data items are grouped together will be based on what was selected for the partition key.
So it's always a good idea to choose a partition key that has a wide range of values and access patterns. A good candidate for a partition key will be something like a property that appears very frequently when you do your queries. You want to group those items together to make your queries much quicker,
so that queries can be efficiently routed.
You want to group them together on the same partitions, so that's a very important decision that we have to make.
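To make the idea concrete, here's a minimal Python sketch of hash-based partition placement. This is an illustrative stand-in, not Cosmos DB's actual internal hashing algorithm; the partition count and the `userId` partition key are assumptions for the example.

```python
import hashlib

def assign_partition(partition_key_value: str, physical_partitions: int) -> int:
    """Hash the partition key value and map it to a physical partition.

    Illustrative stand-in for Cosmos DB's internal hash partitioning,
    not the real algorithm.
    """
    digest = hashlib.md5(partition_key_value.encode("utf-8")).hexdigest()
    return int(digest, 16) % physical_partitions

# Items that share a partition key value (here, "userId") are co-located:
items = [
    {"id": "1", "userId": "alice"},
    {"id": "2", "userId": "alice"},
    {"id": "3", "userId": "bob"},
]
placements = {item["id"]: assign_partition(item["userId"], 4) for item in items}
# Both of alice's items land on the same physical partition.
```

Because placement depends only on the partition key value, all of alice's items always hash to the same partition, which is exactly why frequently queried properties make good partition keys.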
So let's talk about migrating data into Azure Cosmos DB.
Here are some of the tools that are available.
When we're thinking about migrating data from existing NoSQL databases into Cosmos DB, we have the Azure Database Migration Service.
We have the Azure Cosmos DB Data Migration Tool.
We have AzCopy.
And we have the CQL shell COPY command; CQL is the Cassandra Query Language.
Which of these tools we're going to use depends on the API that was selected when we created the Cosmos DB account, and on which database we're moving from.
So, for example, if we're working with the MongoDB API of Cosmos DB,
we can use the Azure Database Migration Service. In fact, this is the only case when it comes to Cosmos DB where we can use the Azure Database Migration Service.
You can see the sources that are supported on the right-hand side. So, for example, we can move databases from a MongoDB server, on premises or on a virtual machine, straight into the MongoDB API of Azure Cosmos DB, and I'll show you a diagram of this.
If we're working with the SQL API, the native API of Cosmos DB, which was formerly called DocumentDB,
we can use the Azure Cosmos DB Data Migration Tool, and the sources are what you can see on the right-hand side. So, for example, we can use the Cosmos DB Data Migration Tool to move data from JSON files, CSV files,
MongoDB, Azure Table storage, Amazon DynamoDB, and the others that you can see, and we can use that to move data into
the SQL API of Cosmos DB.
When talking about the Table API,
we can use the Azure Cosmos DB Data Migration Tool to migrate data from Azure Table storage into the Table API.
We can also use AzCopy to copy data, or to export and import data, from an Azure Table storage account.
If we're referring to the Cassandra API,
we can use the Cassandra Query Language COPY command to move existing Cassandra workloads into the Cassandra API.
But one thing that I want you to be clear on is what's supported with which API.
We can also use Spark
to move existing Cassandra workloads into the Cassandra API.
So here's a visual representation of the things that we just mentioned.
When it comes to the Azure Database Migration Service and Azure Cosmos DB, only the MongoDB API is supported.
When it comes to the Azure Cosmos DB Data Migration Tool, only the SQL API and the Table API are supported, and you can see the different sources that are supported on the right-hand side.
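The compatibility matrix we just walked through can be captured as a small lookup table. This is a hedged Python sketch summarizing the discussion above for quick reference, not an official or exhaustive list from Microsoft:

```python
# Which migration tools apply to which Cosmos DB API, per the discussion above.
MIGRATION_TOOLS = {
    "MongoDB API": ["Azure Database Migration Service"],
    "SQL API": ["Azure Cosmos DB Data Migration Tool"],
    "Table API": ["Azure Cosmos DB Data Migration Tool", "AzCopy"],
    "Cassandra API": ["CQL COPY command", "Spark"],
}

def tools_for_api(api: str) -> list:
    """Return the migration tools discussed for a given Cosmos DB API."""
    return MIGRATION_TOOLS.get(api, [])
```

For example, `tools_for_api("MongoDB API")` returns only the Azure Database Migration Service, matching the point that this is the one API it supports for Cosmos DB.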
Let's talk about Azure Cosmos DB design decisions.
When it comes to availability, the first thing I want to mention is the durability of this service.
So data is durably committed by a quorum of replicas before a write operation is acknowledged. What that means is that if you make a write operation request to Azure Cosmos DB,
that data is going to be committed
in about four different places in the Azure region that you're making the request into before the write operation is acknowledged, which means the data is highly durable. And of course, this changes if you're using multiple write regions; that's even more durable, because it's going to ensure that the data is committed
to another write region
before there's an acknowledgement.
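Here's a rough Python sketch of the quorum idea. The four-replica count comes from the discussion above; the majority-quorum rule is a standard illustration, since Cosmos DB manages the exact mechanics internally:

```python
def write_is_acknowledged(successful_commits: int, replicas: int = 4) -> bool:
    """A write is acknowledged only once a majority quorum of replicas
    has durably committed it. Illustrative sketch, not the actual
    Cosmos DB replication protocol.
    """
    quorum = replicas // 2 + 1  # majority quorum, e.g. 3 out of 4 replicas
    return successful_commits >= quorum
```

With four replicas, three durable commits are enough to acknowledge the write, while two are not; that is what makes an acknowledged write survive the loss of a single replica.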
When it comes to automatic online backup, that's automatically enabled in Azure Cosmos DB. A backup is taken every four hours, and it's kept for 30 days in redundant storage.
There is one caveat to that: if you do want to do a restore from the backup, you need to raise a support ticket with Microsoft, and they're going to be the ones to do the restore for you, and they're going to be doing the restore, in most cases, to another Cosmos DB account.
What that means is that time is of the essence whenever you need to do a restore, because you need to ensure that you raise a request with Microsoft before the retention period of 30 days expires for the data that you want to restore.
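Since backups are only retained for 30 days, you can reason about the restore deadline with simple date arithmetic. A minimal Python sketch, using the retention period mentioned above (the backup timestamp is a made-up example):

```python
from datetime import datetime, timedelta

# 30-day retention window, per the default described above.
BACKUP_RETENTION = timedelta(days=30)

def restore_deadline(backup_taken_at: datetime) -> datetime:
    """Latest time by which a restore of this backup must be requested,
    given the 30-day retention window. Illustrative only."""
    return backup_taken_at + BACKUP_RETENTION

backup_time = datetime(2020, 1, 1, 8, 0)
deadline = restore_deadline(backup_time)  # 2020-01-31 08:00
```

The practical takeaway is the same as in the lesson: raise the support ticket well before that deadline, not on the last day.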
A good practice is to set up at least two regions, preferably at least two write regions, because that will also help you in terms of automatic failover. I'll talk about that in a little bit, but that's best practice right there.
If you're going to be using a single write region with multiple regions, in other words, one write master and the others are just read replicas,
there's an option to enable automatic failover, so that if there were to be a failure in the region where the write master is located, Microsoft automatically fails over write operations to one of your read replicas, and that then becomes the new master, so to speak.
But you have to enable that option.
If you're using multiple write regions, you do not need to enable automatic failover. With multiple write regions, if there were to be a failure in one of your write regions, Microsoft automatically fails the requests over to the other available write regions.
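The two failover behaviors just described can be sketched in Python. This is an illustration of the decision logic from the lesson, not the actual Cosmos DB failover algorithm, and the region names are made up:

```python
def failover_target(regions: dict, failed_region: str,
                    automatic_failover_enabled: bool):
    """Pick the new write region after a regional failure.

    `regions` maps region name -> True if it is a write region.
    Illustrative sketch of the behavior described above.
    """
    write_regions = [r for r, is_write in regions.items() if is_write]
    if len(write_regions) > 1:
        # Multi-write: requests simply shift to a surviving write region.
        survivors = [r for r in write_regions if r != failed_region]
        return survivors[0] if survivors else None
    # Single write region: a read replica is promoted only if the
    # automatic failover option was enabled by the user.
    if not automatic_failover_enabled:
        return None
    replicas = [r for r, is_write in regions.items()
                if not is_write and r != failed_region]
    return replicas[0] if replicas else None

# Example: single write region ("West US") with automatic failover enabled.
regions = {"West US": True, "East US": False}
new_master = failover_target(regions, "West US",
                             automatic_failover_enabled=True)
```

Note that with the option disabled, a single-write-region account gets no promotion at all, which is exactly why enabling it is called out as something you must do yourself.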
When it comes to scalability,
Azure Cosmos DB uses something called request units, and that's how it measures performance. So, for example, one kilobyte of document read costs one request unit, and so on.
You can go up to a million request units per container,
and you can go up to a million request units
per database. Now, you can increase that where you need to by raising a Microsoft support ticket.
The other thing that I want to point out to you on the screen is the maximum storage per container and the maximum storage per database, and you can see that that's unlimited, which is also very important to note.
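Here's a back-of-the-envelope Python helper based on the figure above. The "1 RU per 1 KB read" number applies to a simple read as described in the lesson; actual RU charges vary by operation type, so treat this strictly as an estimate:

```python
import math

def estimated_read_rus(doc_size_kb: float, reads_per_second: int) -> int:
    """Estimate RU/s for reads: roughly 1 RU per 1 KB of document read.

    Illustrative estimate only; real RU charges depend on the operation,
    indexing, and consistency level.
    """
    return math.ceil(doc_size_kb) * reads_per_second

# 500 reads/sec of 1 KB documents is roughly 500 RU/s of provisioned throughput.
load = estimated_read_rus(1, 500)
```

This kind of estimate is what you compare against the per-container and per-database RU limits mentioned above when sizing a workload.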
When it comes to monitoring in Azure,
it's very similar to what we talked about for Azure SQL. There are three main things that we're referring to. We're talking about metrics, which is information coming from the platform about the state of the service at a point in time.
We're talking about activity logs, which is information coming from the management layer, at the subscription level, for the service.
And we're talking about diagnostic logs, which are not enabled by default, and which we have to enable within the service itself.
So metrics and activity logs are automatically collected; there's nothing to do to enable them, and there's no cost associated with them. Diagnostic logs, on the other hand, we have to enable ourselves, and when we go to enable them, we have to select from three options for the destination.
So, for example, we can decide to store that data in an Azure storage account; that's good for archiving use cases, and we can also define the retention period for the logs within the storage account.
Then we have the Azure Event Hub option, where we can send the data to an event hub. This is good if we're looking to build something like database telemetry or pipelines for a monitoring solution.
The final option that we have for diagnostic logs is the Log Analytics workspace. Once we send this information to a Log Analytics workspace, we can then configure our queries and reporting. As an example, we could easily put together a query that's going to identify
database performance issues within Azure Cosmos DB using the Kusto Query Language that Log Analytics supports.
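To summarize the three diagnostic-log destinations and the use case each one suits, here's a small Python sketch of the mapping discussed above. The use-case labels are my own shorthand, not Azure terminology:

```python
def diagnostic_destination(use_case: str) -> str:
    """Map a monitoring use case to the diagnostic-log destination
    discussed above. Illustrative summary, not an Azure API.
    """
    destinations = {
        "archiving": "Azure Storage account",      # long-term retention
        "telemetry": "Azure Event Hub",            # streaming pipelines
        "querying": "Log Analytics workspace",     # Kusto queries, reporting
    }
    return destinations.get(use_case, "unknown")
```

So archiving points at a storage account, streaming telemetry at an event hub, and interactive performance investigation at a Log Analytics workspace, matching the three options we just covered.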