Time
14 hours 28 minutes
Difficulty
Intermediate
CEU/CPE
15

Video Transcription

00:00
Hello, Cybrarians. Welcome to this lesson of the course AZ-301 Microsoft Azure Architect Design.
00:11
We have two objectives in this video.
00:14
We'll start by covering the concept of partitioning as it relates to Azure Cosmos DB,
00:21
and then proceed to cover the data migration scenarios that Cosmos DB supports for the different APIs.
00:30
Let's get into this.
00:32
Let's talk about partitioning as it relates to Azure Cosmos DB.
00:36
So Azure Cosmos DB uses a concept called dynamic partitioning. Whenever we set up Azure Cosmos DB and create our containers, one of the important things that we need to select is something called a partition key, a partition key value.
00:52
To put it very simply,
00:55
items with the same partition key are kept together on the same physical node.
01:02
Because Azure Cosmos DB does not store data on a single node (to avoid hot partitions), what happens is that the data within our containers, or within our collections, will be spread across
01:19
different physical nodes within the Azure data centers.
01:23
And how the data items are grouped together will be based on what was selected for the partition key.
01:32
So it's always a good idea to choose a partition key that has a wide range of values and access patterns. A good candidate for a partition key, for example, would be a property that appears very frequently when you do your queries, right? So you want to group those together in order to make your queries much quicker,
01:51
so that queries can be efficiently routed.
01:55
You want to group them together on the same partitions, so that's one of the design decisions that we have to make.
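To make the partition key idea concrete, here is a minimal sketch using the azure-cosmos Python SDK. The account endpoint, key, database, container, and the /customerId partition key path are illustrative assumptions, not values from the lesson.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Illustrative values -- replace with your own account endpoint and key.
ENDPOINT = "https://<your-account>.documents.azure.com:443/"
KEY = "<your-primary-key>"

client = CosmosClient(ENDPOINT, credential=KEY)

# Create (or reuse) a database and a container whose partition key is /customerId.
database = client.create_database_if_not_exists("retaildb")
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),  # frequently queried property
    offer_throughput=400,
)

# Items sharing a customerId land on the same logical partition, so a query
# scoped to one customer can be routed to a single partition instead of fanning out.
container.upsert_item({"id": "order-1", "customerId": "c-100", "total": 42})

results = container.query_items(
    query="SELECT * FROM c WHERE c.customerId = @cid",
    parameters=[{"name": "@cid", "value": "c-100"}],
    partition_key="c-100",
)
for item in results:
    print(item["id"], item["total"])
```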
02:04
So let's talk about migrating data into Azure Cosmos DB.
02:09
So here are some of the tools that are available.
02:13
So when we're thinking about migrating data from existing NoSQL databases into Cosmos DB, we have the Azure Database Migration Service.
02:23
We have the Azure Cosmos DB Data Migration Tool.
02:27
we have a set a copy.
02:30
We have the CQL shell COPY command, CQL being the Cassandra Query Language,
02:38
and we have Spark.
02:39
Now,
02:40
which of these tools we're going to be using depends on the API that was selected when we created the Cosmos DB account, and on which database we're moving from.
02:53
So, for example, if we're working with the MongoDB API of Cosmos DB,
02:59
we can use the Azure Database Migration Service. Actually, this is the only case when it comes to Cosmos DB where we can use the Azure Database Migration Service.
03:09
You can see the sources that are supported on the right-hand side. So, for example, we can move databases from a MongoDB server on premises or on a virtual machine straight into the MongoDB API of Azure Cosmos DB, and I'll show you a demo of this.
03:25
If we're working with the SQL API, the native API of Cosmos DB, which was formerly called DocumentDB,
03:34
we can use the Azure Cosmos DB Data Migration Tool, and the sources are what you can see on the right-hand side. So, for example, we can use the Cosmos DB Data Migration Tool to move data from JSON files, CSV files,
03:46
MongoDB, Azure Table storage, Amazon DynamoDB, and the others that you can see, and we can use that to move data into
03:54
the SQL API of Cosmos DB.
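Outside the Data Migration Tool itself, a small programmatic import into the SQL API can be done with the azure-cosmos Python SDK. This is a hedged sketch, not the tool's workflow; the file name, database, container, and partition key path are assumptions, and the exported records are assumed to already carry a string "id" property.

```python
import json
from azure.cosmos import CosmosClient, PartitionKey

# Illustrative values -- not from the lesson.
client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential="<your-primary-key>",
)
database = client.create_database_if_not_exists("importdb")
container = database.create_container_if_not_exists(
    id="imported",
    partition_key=PartitionKey(path="/id"),
)

# Read a local JSON array (e.g. exported from another store) and upsert each record.
with open("export.json") as f:
    records = json.load(f)

for record in records:
    # Cosmos DB items need a string "id" property; assumed to be present in the export.
    container.upsert_item(record)
```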
03:59
When talking about the Table API,
04:01
we can use the Azure Cosmos DB Data Migration Tool to migrate data from Azure Table storage into the Table API.
04:11
We can also use AzCopy to copy data, or to export and import data from an Azure Table storage account.
04:18
If we're referring to the Cassandra API,
04:23
we can use the Cassandra Query Language COPY command to move existing Cassandra workloads into the Cassandra API.
04:32
But one thing that I want you to be clear on is what is supported with what API.
04:38
We can also use Spark
04:40
to move existing Cassandra workloads into the Cassandra API.
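Whichever route you take (cqlsh COPY or a Spark/custom job), a programmatic connection to the Cassandra API endpoint looks roughly like the sketch below, using the open-source cassandra-driver package. The account name and key are assumptions; the port (10350) and host suffix follow the documented Cosmos DB Cassandra API pattern.

```python
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Illustrative values -- replace with your Cosmos DB Cassandra API account details.
ACCOUNT = "<your-account>"
auth = PlainTextAuthProvider(username=ACCOUNT, password="<your-primary-key>")

# Cosmos DB's Cassandra API listens on port 10350 and requires TLS.
ssl_ctx = ssl.create_default_context()
cluster = Cluster(
    [f"{ACCOUNT}.cassandra.cosmos.azure.com"],
    port=10350,
    auth_provider=auth,
    ssl_context=ssl_ctx,
)
session = cluster.connect()

# From here a migration job can replay rows exported from an existing Cassandra cluster.
session.execute(
    "CREATE KEYSPACE IF NOT EXISTS app WITH replication = "
    "{'class': 'SimpleStrategy', 'replication_factor': 1}"
)
```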
04:45
So here's a visual representation of the things that we just mentioned.
04:49
So when it comes to the Azure Database Migration Service and Azure Cosmos DB, only the MongoDB API is supported.
05:00
When it comes to the Azure Cosmos DB Data Migration Tool, only the SQL API and Table API are supported, and you can see the different sources that are supported on the right-hand side
05:12
for each.
05:13
Let's talk about Azure Cosmos DB design decisions.
05:16
When it comes to availability, the first one I want to mention is about the durability of the service.
05:24
So data is durably committed by a quorum of replicas before a write operation is acknowledged. What that means is that if you make a write operation request to Azure Cosmos DB,
05:38
that data is gonna be committed
05:41
in about four different places in the Azure region that you're writing into before the write operation is acknowledged, which means the data is highly durable. And of course, this changes if you're using multiple write regions; that's even more durable. It's going to ensure that the data is committed
06:00
to another write region
06:01
before there's an acknowledgement.
06:06
So when it comes to automatic online backup, that's automatically enabled for Azure Cosmos DB. A backup is taken every four hours, and it's kept for 30 days in redundant storage.
06:21
There is one caveat to that, which is that if you do want to do a restore from the backup, you need to raise a support ticket with Microsoft, and they're going to be the ones to do the restore for you, and they're going to be doing the restore in most cases to another Cosmos DB account.
06:42
What that means is that time is of the essence whenever you need to do a restore, because you need to ensure that you raise a request with Microsoft before the retention period of 30 days expires for the data that you want to restore.
06:59
A good practice is to set up at least two regions, and preferably at least two write regions, because that will also help you in terms of automatic failover. I'll talk about that in a minute, but that's best practice right there.
07:14
If you're going to be using a single write region with multiple regions, in other words, one of them is the write master and the others are just read replicas,
07:23
there's an option to enable automatic failover, so that if there were to be a failure in the region where the write master is located, Microsoft automatically fails over write operations to one of your read replicas, and that then becomes the new master, so to speak.
07:42
But you have to enable that option
07:44
If you're using multiple write regions, you do not need to enable automatic failover. If there are multiple write regions and there were to be a failure in one of your write regions, Microsoft automatically fails over requests to the other available write regions.
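On the application side, the SDK can be told which regions to prefer so that requests follow your regional setup. A minimal sketch with the azure-cosmos Python SDK; the endpoint, key, and region names are illustrative assumptions.

```python
from azure.cosmos import CosmosClient

# Illustrative values -- the endpoint, key, and region names are assumptions.
client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential="<your-primary-key>",
    # Ordered list of regions the client should try first; with a multi-region
    # (or multi-write) account, traffic follows this preference and fails over
    # to the remaining regions when one is unavailable.
    preferred_locations=["West Europe", "North Europe"],
)
```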
07:58
When it comes to scalability,
08:00
Azure Cosmos DB uses something called request units, and that's how the performance is determined. So, for example, a one-kilobyte document read costs one request unit, and so on.
08:13
So you can go up to a million request units per container,
08:18
and you can go up to a million request units
08:22
per database. Now, you can increase that, but you need to raise a Microsoft support ticket.
08:28
The other thing that I want to point out to you on the screen is the maximum storage per container and the maximum storage per database, and you can see that that's unlimited. That's also very important to note.
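Request units are provisioned as throughput on a container (or database). Here is a hedged sketch with the azure-cosmos Python SDK, assuming a database "retaildb" and a container "orders" already exist; those names and the target RU/s figure are assumptions.

```python
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential="<your-primary-key>",
)
container = client.get_database_client("retaildb").get_container_client("orders")

# Read the currently provisioned throughput (RU/s) and scale it up.
current = container.get_throughput()
print("Provisioned RU/s:", current.offer_throughput)

container.replace_throughput(1000)  # raise provisioned throughput to 1,000 RU/s
```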
08:43
When it comes to monitoring of Azure
08:48
Cosmos DB,
08:48
it's very similar to what we talked about for Azure SQL. There are three main things that we refer to. We're talking about metrics, which is information that's coming from the platform about the state of the service at a point in time.
09:01
And we're talking about activity logs, which is information that's coming from the management layer, from the subscription level, for the service.
09:11
And we're talking about diagnostic logs, which is something that's not enabled by default and that we have to enable within the service itself.
09:22
So metrics and activity logs are automatically collected; there's nothing to do to enable them, and there's no cost associated with them. With diagnostic logs, we have to enable that ourselves, and when we go to enable it, we have to select from three options for the destination.
09:37
So, for example, we can decide to store that data in an Azure storage account; that's good for archiving use cases, and we can also define the retention period for the logs within the storage account.
09:52
Then we have the Azure Event Hub option, where we can send the data to an event hub. This is good if we're looking to build something like database telemetry pipelines or a monitoring solution.
10:07
The final option that we have for diagnostic logs is the Log Analytics workspace, where we can send this information to a Log Analytics workspace. We can then run queries and set up reporting and alerts. For example, we could easily put together a query that's going to identify
10:26
areas or cases of
10:28
database performance issues with Azure Cosmos DB using the Kusto Query Language that Log Analytics supports.
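As a hedged illustration of that last point, here is a sketch that runs a Kusto query against a Log Analytics workspace using the azure-monitor-query Python package. The workspace ID is a placeholder, and the table and column names (AzureDiagnostics, requestCharge_s, and so on) are assumptions that may differ depending on how diagnostics are configured in your environment.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Illustrative Kusto query: surface expensive Cosmos DB data-plane requests.
QUERY = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
| where Category == "DataPlaneRequests"
| extend ru = todouble(requestCharge_s)
| where ru > 10
| summarize expensiveRequests = count(), avgRU = avg(ru) by OperationName
| order by avgRU desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<your-workspace-id>",  # hypothetical placeholder
    query=QUERY,
    timespan=timedelta(days=1),
)

# Print each returned row (operation name, count, average RU charge).
for table in response.tables:
    for row in table.rows:
        print(list(row))
```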

Up Next

AZ-301 Microsoft Azure Architect Design

This AZ-301 training covers the skills that are measured in the Microsoft Azure Architect Design certification exam. Learn strategies to plan for the exam, target your areas of study, and gain hands-on experience to prepare for the real world.

Instructed By

David Okeyode
Cloud Security Architect
Instructor