Sep 4, 2024
Deploying AKS 2 – Creating and Configuring Containers
  7. Next, we have the Authentication tab. Here, you can set the Authentication method to either Service principal or System-assigned managed identity; AKS uses this identity to manage the infrastructure related to the service. For this exercise, we will leave it at its default setting. Next is the Role-based access control (RBAC) option. By default, this is set to Enabled, which is the best option for managing the service as it allows fine-grained access control over the resource; leave it as Enabled. You also have the choice to enable AKS-managed Azure Active Directory, which lets you manage permissions for users on the service based on their group membership within Azure AD. Note that once this function has been enabled, it can't be disabled again, so leave it unchecked for this exercise. Finally, you can choose an Encryption type; for this exercise, leave it at the default setting. Click Next: Networking >. The process is illustrated in the following screenshot:

Figure 11.47 – Creating a Kubernetes cluster: Authentication tab

  8. For the Networking section, we will leave most of the settings at their default configuration. Note that for Network configuration we have two options: kubenet and Azure CNI. With kubenet, nodes receive an IP address from the cluster's VNet, while Pods are allocated addresses from a logically separate address space and reach external resources through network address translation (NAT) over the node's IP. Azure Container Networking Interface (Azure CNI) connects Pods directly to the VNet: each Pod gets its own IP address from the subnet, removing the need for NAT. Next, we have the DNS name prefix field, which forms the first part of the FQDN for the service. You will also notice the Traffic routing and Security options available for the service; we will discuss these further in a later exercise. Select Calico under Network policy. Click Next: Integrations >. The process is illustrated in the following screenshot:

Figure 11.48 – Creating a Kubernetes cluster: Networking tab
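To make the kubenet versus Azure CNI trade-off concrete, here is a small, illustrative Python sketch of the IP-address planning involved. The sizing formula is a simplified heuristic, not the exact guidance from the AKS documentation, and the function names are our own:

```python
import math

def vnet_ips_needed(nodes: int, max_pods_per_node: int, azure_cni: bool) -> int:
    """Rough count of VNet IPs consumed by an AKS node pool.

    With Azure CNI, every Pod gets an IP straight from the subnet, so each
    node consumes one IP for itself plus one per schedulable Pod. With
    kubenet, only the nodes draw from the subnet; Pods use a separate,
    NAT-ed address space. (Simplified planning heuristic.)
    """
    if azure_cni:
        return nodes * (1 + max_pods_per_node)
    return nodes

def smallest_subnet_prefix(ips_needed: int) -> int:
    """Smallest subnet prefix length that fits the IPs (Azure reserves 5)."""
    size = ips_needed + 5
    return 32 - math.ceil(math.log2(size))

# A 3-node pool with up to 30 Pods per node:
print(vnet_ips_needed(3, 30, azure_cni=True))   # 93 IPs from the VNet
print(vnet_ips_needed(3, 30, azure_cni=False))  # 3 IPs from the VNet
print(smallest_subnet_prefix(93))               # a /25 would suffice
```

The point of the sketch is the order-of-magnitude difference: Azure CNI consumes VNet address space per Pod, so subnets must be sized up front, while kubenet only consumes one address per node.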

  9. On the Integrations tab, you will note the option to select a container registry; we will select the registry that we deployed previously. You also have the option to deploy a new registry directly from this creation dialog. Next, we have the option to deploy container monitoring into the solution on creation. We will leave the default setting here; monitoring is not covered in this chapter. Finally, you have the option of applying Azure Policy directly to the solution; this is recommended where you want to enhance and standardize your deployments, enabling you to deliver consistently and control your deployments on AKS more effectively. Click Review + create, then click Create. The process is illustrated in the following screenshot:

Figure 11.49 – Creating a Kubernetes cluster: deployment

You have just successfully deployed your first Kubernetes cluster; you now know how to deploy and manage containers at scale and in a standardized way. Next, we will look at how we configure storage for Kubernetes and make persistent storage available to our solution.


Deploying AKS

This exercise is intended to help you gain familiarity with AKS. We will deploy our first AKS instance, and then, in the corresponding exercises, explore its different management components. Proceed as follows:

  1. Sign in to the Azure portal at https://portal.azure.com.
  2. Open the resource group you will be using for this exercise, click Overview on the left menu, then click Create.
  3. On the left Category menu, select Containers, then click Create under Kubernetes Service, as illustrated in the following screenshot:

Figure 11.43 – Creating an Azure Kubernetes service

  4. Select your Resource group, enter your Kubernetes cluster name, select your Region, and leave all the other settings at their default configuration. The process is illustrated in the following screenshot:

Figure 11.44 – Creating a Kubernetes cluster

  5. Scroll down the page and note that we can change the Node size value; this lets you choose a size, much as you saw with VMs in the previous chapter. Next, you can select how you would like to manage scaling: either Manual or Autoscale. Manual means you modify the node count yourself whenever you want to change the number of nodes in your pool, whereas Autoscale scales automatically based on a scaling rule. For this option, you select a range: the minimum number of nodes you want and the maximum you would like to scale up to. As you can see, there is a lot of room for scaling with this service. Click Next: Node pools >. The process is illustrated in the following screenshot:

Figure 11.45 – Primary node pool sizing
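The Autoscale range described above simply bounds the node count the autoscaler may choose. Here is a minimal sketch, assuming a plain clamp to the configured minimum and maximum (the real cluster autoscaler also considers pending Pods and node utilization before picking a desired count):

```python
def clamp_node_count(desired: int, minimum: int, maximum: int) -> int:
    """Keep the autoscaler's desired node count inside the configured range."""
    return max(minimum, min(desired, maximum))

# With a range of 1-5 nodes configured:
print(clamp_node_count(10, 1, 5))  # demand spikes: capped at 5
print(clamp_node_count(0, 1, 5))   # demand drops: floor of 1 is kept
```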

  6. On the Node pools tab, you will note the ability to add additional pools, which can be configured to scale in the same fashion as the primary node pool we configured in the previous step. You also have the Enable virtual nodes option, which lets your containers scale beyond the VM size specified in the previous steps by using ACI as additional nodes when AKS needs to scale out. The final option is Enable virtual machine scale sets. You will note that it is already checked for our deployment because it is required to support the Availability zones configuration from Step 4. Scale sets provide scaling VMs to support the container service, which allows better scaling of resources when required, since a single VM is limited in the number of containers it can host. This also enables dynamic scaling (no downtime) through horizontal scaling. Changing the size vertically (that is, more or fewer resources, essentially a different stock-keeping unit (SKU)) would mean temporarily losing access to the resource while it is resized and restarted; this is a more static type of scaling. Next, click Next: Authentication >. The process is illustrated in the following screenshot:

Figure 11.46 – Creating a Kubernetes cluster: Node pools tab


Upgrading an AKS cluster

You have the choice to upgrade your AKS clusters automatically or to manage upgrades manually yourself. As part of your upgrade decisions, you can decide whether to upgrade both the node pools and the control plane, or the control plane only. With automatic upgrades, you can choose the channel that best applies to your requirements; the channels are as follows:

• None: Disables automatic upgrades.

• Patch: The cluster is automatically updated to the latest supported patch version of its current minor version.

• Stable: The cluster is automatically updated to the latest supported patch of minor version N-1, where N is the latest supported minor version.

• Rapid: The cluster is automatically updated to the latest supported patch of the latest supported minor version.

• Node-image: Only the node image is automatically updated to the latest version available.

Top Tip

It's important to note that when upgrading your AKS clusters, you must upgrade to a supported patch version, and where more than one minor version upgrade exists, you must upgrade one minor version at a time.
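The one-minor-version-at-a-time rule can be sketched as follows; the function name is our own, and versions are simplified to major.minor strings:

```python
def upgrade_path(current: str, target: str) -> list[str]:
    """Minor-version hops needed to reach the target, one at a time.

    The AKS control plane cannot skip minor versions, so going from
    1.24 to 1.27 requires stepping through 1.25 and 1.26 first.
    """
    major, cur = (int(p) for p in current.split("."))
    _, tgt = (int(p) for p in target.split("."))
    return [f"{major}.{m}" for m in range(cur + 1, tgt + 1)]

print(upgrade_path("1.24", "1.27"))  # ['1.25', '1.26', '1.27']
```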

We will now perform the exercise of upgrading your cluster with the following steps:

  1. Sign in to the Azure portal at https://portal.azure.com.
  2. Navigate to the AKS cluster you created in the previous section. On the left menu, select the Cluster configuration option and click Upgrade version on the right, as illustrated in the following screenshot:

Figure 11.62 – Cluster configuration

  3. Select your desired Kubernetes version and an Upgrade scope, then click Save. The process is illustrated in the following screenshot:

Figure 11.63 – Upgrading the version of Kubernetes

You have just successfully upgraded your Kubernetes version and learned about the automated upgrade options that are also available. Next, we will run through the chapter summary and all that we have covered in this chapter.

Summary

In this chapter, we discovered what containers are and how we deploy and manage them, we learned about Docker, the limitations of Docker, and container deployments, and finally, we found out how we can extend default container services through orchestration tools such as Kubernetes that greatly enhance the way we manage and scale containers. As part of your learning, you have discovered how to work with ACI Instances and learned how to also attach persistent storage to containers using AKS, how to enhance the security around containers, and about the various networking options available to you as part of AKS. You also experienced working with deployments and administrative tasks such as creating an Azure container registry, deploying Azure container instances, and creating and configuring Azure container groups.

You should now feel confident about the administration of containers within Azure, the methods of deployment, and how to orchestrate and manage these.

In the next chapter, we will explore Azure App Service, what this is, how to configure and deploy it, and becoming confident in how to use this on Azure.


Understanding App Service plans and App Service

When discussing Azure app services and understanding what they are, compared to traditional servers, it's important to understand the relationship between Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), and serverless (such as Function as a Service (FaaS)). As you move through the different service offerings, you have different layers of responsibility that you manage. This is also the easiest way of understanding the differences between the services. IaaS, PaaS, SaaS, and serverless are cloud-based services and fit well into the Azure platform since Microsoft has developed some great ways to manage the services you deploy. This also allows you to choose the level of control you would like to adopt. There are limitations to each model, which is a much deeper topic, but understanding these limitations at a core level will help you succeed in your Azure journey. The following diagram illustrates the management relationships between the cloud-based services:

Figure 12.1 – Shared responsibility model

As you can see, the closer you approach SaaS, the fewer components you are required to manage and, subsequently, can manage. Finally, the serverless component can be a little confusing as it falls between the PaaS and SaaS layers; you can only manage your code and, ideally, split your code into single repeatable components called functions. Serverless components are also classified as microservices since the services are broken down into their most basic forms. Here, you define the functions you need to run your code; you can deploy the code and forget about which server it runs on. This approach does lead to more in-depth and complex discussions that are beyond the scope of this book; you just need to understand that this exists and that in Azure, we often refer to it as FaaS. Now that you understand the relationship between these services and what you manage, we can classify Azure app services. They fall into the PaaS category; therefore, you only have to worry about how you manage your application and its data. You also have the choice to secure your application using controls that have been exposed to Azure. The rest is taken care of by the platform itself.
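The responsibility boundaries described above can be summarized in a small, illustrative mapping. The layer names and groupings below are a simplified reading of the shared-responsibility diagram, not an official Microsoft taxonomy:

```python
# Stack layers, from what you touch most (data) down to physical infrastructure.
LAYERS = ["data", "application", "runtime", "os", "virtualization",
          "servers", "storage", "networking"]

# Which layers the customer still manages under each model (simplified).
CUSTOMER_MANAGED = {
    "on-premises": LAYERS,            # you run everything yourself
    "iaas": LAYERS[:4],               # data through OS; platform runs the rest
    "paas": LAYERS[:2],               # only your data and application
    "faas": ["code (functions)"],     # just the function code you deploy
    "saas": ["data (configuration)"], # you mostly consume the software
}

print(CUSTOMER_MANAGED["paas"])  # ['data', 'application']
```

This matches the classification in the text: an Azure app service is PaaS, so you manage only your application and its data.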

App Service plans

To run your applications, you must deploy and configure your server infrastructure appropriately to suit them. For example, your applications may require the Windows operating system and the .NET Framework. To accommodate these configurations, Azure has App Service plans. A plan defines the server configuration underpinning your application deployments: you can choose an operating system, the number of instances, and server-related security configurations and operations. It also allows you to run several applications against a server with the chosen memory and CPU specifications, scaling only as your requirements or budget dictate.

Top Tip

Although Azure Functions falls into the serverless category, when assigned to an App Service plan, it becomes a PaaS service since it is linked to a server. This increases what you can manage on the service and allows better control over, for example, security features.

Now that you understand more about Azure App Service and App Service plans, let’s dive into some exercises where we will work with these later. In the next section, we will deploy an App Service plan and dive into the available configuration options.


Creating an App Service plan

In this exercise, you will be creating an App Service plan for Azure. This will act as the server configuration for hosting your Azure web applications and function applications. Follow these steps to do so:

  1. Sign in to the Azure portal at https://portal.azure.com.
  2. Open the resource group you will be using for this exercise, click Overview via the left menu, and click Create.
  3. Type app service plan in the search bar and click App Service Plan:

Figure 12.2 – App Service Plan

  4. On the next screen, click Create:

Figure 12.3 – App Service Plan – Create

  5. Enter the name of your Resource Group, then enter a name for your App Service plan. Here, we have used myappserviceplan. Next, choose an Operating System. For this demo, we will deploy a Windows App Service plan. Finally, select your Region, and your SKU and size; we will select Standard S1. Click Review + create, then Create:

Figure 12.4 – Create App Service Plan

With that, you have configured your first App Service plan and are ready to host your first application on the service. In the next section, you will learn how to create an App Service in your newly deployed App Service plan.

Creating an app service

In this exercise, you will deploy your first web application in Azure using the Azure Web Apps service. Follow these steps:

  1. Sign in to the Azure portal at https://portal.azure.com.
  2. Open the resource group you will be using for this exercise, click Overview via the left menu, and click Create.
  3. From the left menu bar, click Web, then click Create under Web App:

Figure 12.5 – Web App

  4. Enter the name of your Resource Group, then enter a name for your web app. Here, we have used myfirstwebapp221221. Next, choose the type of deployment you would like; we will select Code, but note that you could also select Docker Container. Then, select a Runtime stack – this will support the code you are deploying. Now, choose an Operating System. For this demo, we will deploy a Windows web app, matching the App Service plan we deployed previously. Select your Region; this should also be the same as the one you selected for your App Service plan:

Figure 12.6 – Create Web App – Basics

Finally, select a Windows Plan – this is the App Service plan you created previously. Note that when you select this, it automatically configures your SKU and size, which will match what you chose for your App Service plan. Also, note that you have the option to create your App Service plan directly in the Deployment menu. Click Next : Deployment >:

Figure 12.7 – Create Web App – Basics 2

  5. Here, you have the option to configure Continuous deployment. We won't be configuring this in this exercise. Click Next : Monitoring >:

Figure 12.8 – Create Web App – Deployment

  6. On the Monitoring tab, you have the option to deploy Application Insights for your application; note that you can create a new Application Insights resource directly from this blade as part of the deployment. For this exercise, we will select No for Enable Application Insights. Click Review + create, then Create:

Figure 12.9 – Create Web App – Monitoring

  7. Navigate to your application, click Overview via the left-hand menu, and note the URL for your application. The blue text is clickable; you can either click it or copy it into your browser to navigate to your application and confirm that it's working:

Figure 12.10 – Web App – Overview

You will be presented with a screen similar to the following for your application. Congratulations – you have successfully deployed your application using the Azure portal!

Figure 12.11 – Web App – running in your browser

Now that you know how to deploy a web application using the Azure portal, let’s learn how to do the same using PowerShell. This time, we will create a Linux service plan.

PowerShell scripts 2

Here, you scaled down your application from the S1 SKU to the B1 SKU, which shows how easy it is to change its size. Note that the application will restart upon being resized. You will need to resize the application so that it’s a production SKU for the next exercise. When changing its size, select Production and click See additional options. Click S1 and then Apply:

Figure 12.13 – Scaling up to S1

Now, let’s learn how to scale out horizontally:

  1. Navigate to the App Service plan you worked on in the previous exercise.
  2. From the left menu blade, under the Settings context, click Scale out (App Service plan).
  3. Note that you can choose either Manual scale or Custom autoscale. Here, it would be best to manually scale since you are working on Dev / Test workloads, but for production workloads, you should choose Custom autoscale. Change Instance count to 2 and click Save:

Figure 12.14 – Manual scale

  4. Now, change the setting to Custom autoscale. Enter an Autoscale setting name and select your Resource group:

Figure 12.15 – Custom autoscale

  5. For our Default scale condition, we will create the first rule using Scale based on a metric for Scale mode, and we will set Instance limits to 1 for Minimum, 2 for Maximum, and 1 for Default. Then, click Add a rule:

Figure 12.16 – Scale condition setup

  6. For the Criteria section, set Time aggregation to Average, Metric namespace to App Service plans standard metrics, and Metric name to CPU Percentage. Set Operator to = and Dimension Values to All values (this means any web app).

Note the timeline chart at the bottom of the screen, which indicates the average CPU percentage that you have experienced over time, with the average also written below it. In this case, it is 3.78 %:

Figure 12.17 – Scale rule

  7. Below the CPU Percentage (Average) section, you will notice some other configuration options. Set Operator to Greater than and Metric threshold to trigger scale action to 70. Then, set Duration (minutes) to 10 and Time grain statistic to Average. This defines a rule stating that when the average CPU percentage exceeds 70% over 10 minutes, it will trigger an Action.
  8. For the Action section, set Operation to Increase count by, Cool down (minutes) to 5, and Instance count to 1. This will increase the instance count of the running web applications by 1 when the criteria that we configured in step 7 have been identified. Once triggered, a cooldown period will occur, during which no further actions can be performed until the cooldown window has elapsed; in this case, it is 5 minutes. If the criteria for scaling are observed again after this cooldown period, the action will be triggered again. Click Add:

Figure 12.18 – Configuring a scale rule – Thresholds

  9. You have just configured a rule for scaling your application up in terms of its instance count, but what if you would like the application to scale back down when you don't need as many instances anymore? You would need to configure a new scale rule to trigger the scale-down action you would like to perform. Click + Add a rule below the Scale out rule you just created:

Figure 12.19 – Add a rule

  10. For the Criteria section, set Time aggregation to Average, Metric namespace to App Service plans standard metrics, and Metric name to CPU Percentage. Set Operator to = and Dimension Values to All values (this means any web app).

Note that the timeline chart at the bottom of the screen indicates the CPU percentage average that you have experienced over time, with the average also written below it. In this case, it is 2.55 %:

Figure 12.20 – Scale rule

  11. Below the CPU Percentage (Average) section, you will notice some other configuration options. Set Operator to Less than and Metric threshold to trigger scale action to 30. Then, set Duration (minutes) to 10 and Time grain statistic to Average. This defines a rule stating that when the average CPU percentage falls below 30% over 10 minutes, it will trigger an Action.
  12. For the Action section, set Operation to Decrease count by, Cool down (minutes) to 5, and Instance count to 1. This will decrease the instance count of the running web applications by 1 when the criteria that we configured in Step 11 have been identified. Once triggered, there will be a cooldown period during which no further actions can be performed until the cooldown window has elapsed; in this case, it is 5 minutes. If the criteria for scaling are observed again after this cooldown period, the action will be triggered again. Click Add:

Figure 12.21 – Scale rule – Threshold and Action sections

  13. Click Save.
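The scale-out and scale-in rules you just configured can be modeled as a toy simulation. This is an illustrative sketch with names of our own choosing; real Azure Monitor autoscale evaluates rolling 10-minute averages, whereas here each sample stands in for one already-averaged evaluation window:

```python
from dataclasses import dataclass

@dataclass
class Autoscaler:
    """Toy model of the two CPU rules: out above 70%, in below 30%."""
    minimum: int = 1
    maximum: int = 2
    instances: int = 1
    cooldown_left: int = 0  # evaluation windows to skip after an action

    def evaluate(self, avg_cpu: float) -> int:
        if self.cooldown_left > 0:
            self.cooldown_left -= 1       # cooldown: take no action
            return self.instances
        if avg_cpu > 70 and self.instances < self.maximum:
            self.instances += 1           # scale out by 1
            self.cooldown_left = 1
        elif avg_cpu < 30 and self.instances > self.minimum:
            self.instances -= 1           # scale in by 1
            self.cooldown_left = 1
        return self.instances

scaler = Autoscaler()
print([scaler.evaluate(cpu) for cpu in [85, 90, 20, 20]])  # [2, 2, 1, 1]
```

Note how the second high-CPU sample is ignored because the cooldown is still active; this is exactly why the cooldown setting prevents runaway scaling.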

Now that you have configured your autoscale rules using the Azure portal, let’s learn how to use PowerShell to do the same.


Configuring a backup for an app service

Your application is running well, but you’re concerned that if something should fail or data is lost, you can’t restore your application. You decide that backing it up is a good idea and start to explore different ways to back up your application. Thankfully, Azure makes this a simple process, where you just need to think about what your backup strategy needs to look like and then configure the service accordingly. Remember that using a backup is different from performing DR in that DR restores operational services, whereas backups enable point-in-time restorations of data to recover from loss or accidental deletion. Follow these steps to configure a backup for your application:

  1. Navigate to the App Service plan you worked on in the previous exercises.
  2. From the left menu blade, under Settings, click Backups. From the blade that appears, click Configure at the top of the screen. The Backup Configuration blade will appear.
  3. You will need a storage account to store your backups. Since we haven’t pre-created an account, we will create it as part of this exercise. Click the Storage Settings button:

Figure 12.49 – Storage Settings

  4. Create your storage account and click OK. Next, you will be prompted for a container; this doesn't exist yet since we created a new storage account. Click + Container, name the container backups, and click Create. Click the new container and click Select.
  5. For backups, you can decide whether you would like an automated schedule or to back up manually as and when needed. Preferably, you want an automated schedule, which prevents mistakes such as forgetting to back up. Enable Scheduled backup. Configure your backup so that it runs every day at a set time from the date you would like it to start; in this example, we have set this to 28/12/2021 at 7:05:38 pm. Set your Retention period (in days) and set Keep at least one backup to Yes:

Figure 12.50 – Backup Schedule

  6. Note that you also have the option to configure a backup for your database. We won't configure this for this exercise. Click Save:

Figure 12.51 – Backup Database

  7. You will see that your first backup is currently in progress and that the light blue box reflects the configuration of your backup schedule. You will also see two other blue buttons: Backup manually initiates a backup, while Restore allows you to recover data when required:

Figure 12.52 – Backup overview
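The Retention period and Keep at least one backup settings interact in a way worth spelling out. Here is a minimal, illustrative model; the function name and pruning logic are our own simplification, not Azure's exact algorithm:

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, today, retention_days, keep_at_least_one=True):
    """Apply the 'Retention (days)' and 'Keep at least one backup' settings.

    Backups older than the retention window are pruned, but if that would
    leave nothing, the newest backup is retained when the flag is set.
    """
    cutoff = today - timedelta(days=retention_days)
    kept = sorted(d for d in backup_dates if d >= cutoff)
    if not kept and keep_at_least_one and backup_dates:
        kept = [max(backup_dates)]  # retain the newest backup regardless
    return kept

dates = [date(2021, 12, d) for d in (1, 10, 20, 28)]
print(backups_to_keep(dates, today=date(2021, 12, 30), retention_days=7))
# only the 28 Dec backup survives a 7-day retention window
```

The keep-at-least-one safeguard matters when backups stop running for longer than the retention period; without it, the last good backup would eventually be pruned too.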

You now understand how to back up your Azure App Service and should feel confident in configuring this going forward. In the next section, you will learn about the various network settings. Since we covered some of the available networking configurations in the previous sections, we will focus predominantly on how to configure a private endpoint.


Configuring networking settings

You learned how to perform VNet integration in the Securing an app service section. In this section, you will learn how to configure your app service behind a private endpoint:

  1. Navigate to the App Service plan you worked on in the previous exercises.
  2. From the left menu blade, under Settings, click Scale up (App Service plan). On the blade that appears, ensure that you have chosen the Premium V2, Premium V3, or Elastic Premium SKU to continue with this exercise. Click Apply.
  3. From the left menu blade, under Settings, click Networking. From the blade that appears, click Private endpoints in the Inbound Traffic section:

Figure 12.53 – Private endpoints

  4. Click Add:

Figure 12.54 – Private Endpoint connections – Add

  5. Enter a Name, ensure that you have the right Subscription selected, and select the correct Virtual network your private endpoint will connect to. Then, select the Subnet you would like to connect to. Finally, select Yes for Integrate with private DNS zone. This feature allows Azure to create a Fully Qualified Domain Name (FQDN) for your private endpoint that can be reached by your resources. If you select No, you will need to ensure that your DNS zone is maintained by another DNS service, such as Active Directory (the on-premises version), and that your VNet is configured to forward DNS lookup queries to your DNS server(s):

Figure 12.55 – Add Private Endpoint

  6. On the Private Endpoint connections screen, which you will see after deploying your resource, click the new endpoint you have created. Click the name of your Private endpoint (where the text is highlighted in blue) to open the Private endpoint blade:

Figure 12.56 – Backup overview

  7. From the left menu blade, under the Settings context, click Networking. From the blade that appears, scroll down to Customer Visible FQDNs and note the FQDNs associated with your service. Note that these are now associated with a private IP that belongs to the subnet you selected previously:

Figure 12.57 – Customer Visible FQDNs

  8. Scrolling down further, you will see Custom DNS records. Note that the FQDN that's been assigned is much the same as the azurewebsites.net FQDN you have for your website, except that it contains privatelink as an additional label. So, you now have an FQDN of [app name].privatelink.azurewebsites.net, which is also associated with the private IP we saw previously. Note that if you perform an NSLookup on the preceding FQDNs from outside the VNet, you will still get a public IP address for your service:

Figure 12.58 – Custom DNS records

  9. Attempting to access your site now will deliver a 403 Forbidden error, since public access has now been revoked:

Figure 12.59 – Error 403 – Forbidden
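The DNS behavior described in the steps above can be sketched briefly. This is an illustrative model, not a DNS client; the IP addresses are made up for the example, and only the privatelink naming pattern reflects what Azure actually registers:

```python
def private_endpoint_fqdn(app_name: str) -> str:
    """Privatelink FQDN Azure registers for a web app's private endpoint."""
    return f"{app_name}.privatelink.azurewebsites.net"

def resolve(app_name: str, inside_vnet: bool) -> str:
    """Sketch of how resolution differs inside and outside the VNet.

    Inside the VNet, the private DNS zone answers with the endpoint's
    private IP; outside it, public DNS still resolves, but the service
    itself now returns 403. (IPs below are illustrative only.)
    """
    if inside_vnet:
        return "10.0.1.4"    # example private IP from the chosen subnet
    return "20.50.100.1"     # example public front-end IP

print(private_endpoint_fqdn("myfirstwebapp221221"))
# myfirstwebapp221221.privatelink.azurewebsites.net
```

This is the complexity the Top Tip below warns about: name resolution succeeds everywhere, but only requests that resolve to the private IP from inside the VNet reach the application.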

Top Tip

If you have applied DNS to the VNet you are associating with and have configured a private DNS zone, you will need to ensure that your DNS servers have been configured to forward lookup to Azure for the private endpoint namespace related to your service.

With that, you have just configured a private endpoint and should feel confident in how to deploy one. You are also aware of some of the DNS complexities you should look out for to ensure you can resolve the host correctly by your resources.


Configuring deployment settings

There are several deployment settings related to your app service that you should be aware of. These allow you to upload your code or manage source control and deployment slots.

Deployment slots are logical segmentations of your application that can pertain to different environments and versions. Let's say you have an application that is running in production mode (meaning it's live and operational), and you want to work on some new code updates to introduce new features to the next version of your application. Typically, you would work on this in a test environment and deploy it to the production environment only once adequate testing had been performed.

Well, deployment slots provide a solution that allows you to deploy code to these slots to test the different functions and features of your applications, as well as code updates. You can run your primary deployment slot as the native application and deploy additional slots, such as TEST, that can be used for your new code. You have the option to swap deployment slots and revert at any time. The transition period is quick and enables a different paradigm in app management. You can, for instance, switch to the TEST slot and find that your application is not connecting to the required services and is slow. In this case, you can quickly flip back to the original code you had before any changes were made.
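The swap-and-revert behavior described above can be captured in a toy model. This sketch only exchanges which build each slot serves; a real slot swap also warms up the source slot and handles slot-sticky settings:

```python
def swap(slots: dict, source: str, target: str) -> None:
    """Exchange the deployed builds between two slots (toy model)."""
    slots[source], slots[target] = slots[target], slots[source]

slots = {"production": "v1.0", "TEST": "v1.1"}

swap(slots, "TEST", "production")   # promote the TEST build to production
print(slots)  # {'production': 'v1.1', 'TEST': 'v1.0'}

swap(slots, "TEST", "production")   # something broke? swap straight back
print(slots)  # {'production': 'v1.0', 'TEST': 'v1.1'}
```

Because a swap is just an exchange of what each slot points at, reverting is as fast as the original promotion, which is what makes this rollback story so attractive.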

Let’s look at a brief configuration of a deployment slot before proceeding to the next part of this section:

  1. Navigate to the App Service plan you worked on in the previous exercises.
  2. From the left menu blade, under Deployment, click Deployment slots.
  3. From the top of the blade, click + Add Slot. Enter a Name – in this case, TEST – and leave Clone settings from set to Do not clone settings. Click Add, then Close:

Figure 12.60 – Add a slot

  4. The name you chose previously will form part of the FQDN for the deployment slot so that it can be accessed as a normal application, as shown in the preceding screenshot.
  5. Click Swap and set Source as the new deployment slot you just created and Target as the current slot. Click Swap, then Close:

Figure 12.61 – Swap

Now that you know about deployment slots, let’s explore the Deployment Center:

  1. Navigate to the App Service plan you worked on in the previous exercises.
  2. From the left menu blade, under Deployment, click Deployment Center. Click the Settings tab.
  3. Here, you have the option to deploy code from a Continuous Integration/Continuous Deployment (CI/CD) tool. At the time of writing, the available options are GitHub, Bitbucket, and Local Git. Once you have chosen your Source CI/CD tool, you must Authorize your account and click Save:

Figure 12.62 – Deployment Center – Settings

  4. Click the FTPS credentials tab and note the FTPS endpoint. Application scope provides an automatically generated Username and Password limited to your application and deployment slot; you can use these to connect to your FTPS endpoint. You can also define a User scope and create your own username and password:

Figure 12.63 – Deployment Center – FTPS credentials

With that, you have learned about the deployment settings that are available to you for your app services. You should now feel comfortable navigating this component of Azure App Service as you know where to integrate CI/CD and where to find your FTPS credentials so that you can modify your application code. Next, we will summarize what we covered in this chapter.

Summary

In this chapter, we covered what an App Service is within Azure, the role of App Service plans and why they are essential to your App Service, and how to deploy an application, including how to manage its settings and configurations and how to secure it. Then, we explored and discussed various networking configurations for your application and the considerations you need to have when configuring these settings. You should now feel confident working with applications on Azure using App Service.

In the next chapter, we will cover some examples of deploying and managing compute services within Azure. There will be a VM lab, a container lab, and an App Service lab. After following these examples, you will feel more comfortable working with Azure compute services.

Downloading and extracting files for labs – Practice Labs – Deploying and Managing Azure Compute Resources

Downloading and extracting files for labs

Follow these steps to download and extract the required files:

  1. Navigate to the following URL and download the archive folder (.zip): https://github.com/MicrosoftLearning/AZ-104-MicrosoftAzureAdministrator/archive/master.zip.
  2. Depending on the browser you are using, you will likely be presented with different versions of the following dialog. Click Save File and OK at the bottom of the screen:

Figure 13.1 – Downloading files (ZIP)

  3. Right-click the ZIP file you downloaded and click Extract All… (on Windows systems):

Figure 13.2 – Extract All… (ZIP)

  4. Navigate to the folder you extracted and use the files inside it whenever a lab's instructions require them.

You have now downloaded all the files you need for performing the labs later in the chapter.
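The manual download-and-extract steps above can be scripted in one go. This is a sketch for a Linux/macOS shell (or WSL); the target folder ~/az104-labs is a placeholder, and the URL is the one from step 1:

```shell
# Create a working folder for the lab files
mkdir -p ~/az104-labs && cd ~/az104-labs

# Download the lab archive (same URL as step 1); -L follows redirects
curl -L -o master.zip \
  https://github.com/MicrosoftLearning/AZ-104-MicrosoftAzureAdministrator/archive/master.zip

# Extract it quietly (the equivalent of Extract All... on Windows)
unzip -q master.zip
```

On Windows without WSL, the portal steps above or PowerShell's Invoke-WebRequest and Expand-Archive achieve the same result.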

Managing virtual machines lab

This lab will guide you through creating standalone Virtual Machines (VMs) and VMs in a scale set, as well as exploring storage for these different deployments and how both solutions can be scaled. Furthermore, you will explore how the VM custom script extension can be assigned and used to automatically configure your VMs.

Estimated time: 50 minutes.

Lab method: PowerShell, ARM templates, and the Azure portal.

Lab scenario: In this lab, you play the role of an administrator evaluating different methods for deploying VMs for scale and resiliency. You are also exploring how VMs manage storage to support your scale. You need to determine whether standalone VMs or VMs deployed as a scale set are best suited to your deployments, and understand the differences between them so that you can ascertain when to use each deployment type. As part of your exploration, you want to see whether there is any mechanism that can reduce the administrative effort involved in deploying your VMs or automatically complete configuration tasks. You have heard that a custom script extension can assist with this, and you want to see how it can help you achieve the expected result.

Visit the following Lab URL for the official Microsoft Learning GitHub labs, where you will be guided step by step through each task to achieve the following objectives.

Lab objectives:

  1. Task one: Deploy two VMs in two different zones for resiliency.
  2. Task two: Use VM extensions to configure your VMs.
  3. Task three: Configure and attach data disks to your VMs.
  4. Task four: Register the required resource providers for your subscription.
  5. Task five: Deploy your VM scale sets.
  6. Task six: Use VM extensions to configure your scale set.
  7. Task seven: Configure autoscale for your scale set and attach data disks.

Lab URL: https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_08-Manage_Virtual_Machines.html.
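The custom script extension mentioned in Tasks two and six can be sketched with Azure CLI. This is a hedged illustration, not the lab's actual steps: the resource names (az104-rg, az104-vm0, az104-vmss0) and the Linux nginx install command are placeholder assumptions:

```shell
# Apply the Custom Script Extension to a Linux VM so it runs a
# configuration command at deployment time (Task two)
az vm extension set \
  --resource-group az104-rg \
  --vm-name az104-vm0 \
  --publisher Microsoft.Azure.Extensions \
  --name CustomScript \
  --settings '{"commandToExecute": "apt-get update && apt-get install -y nginx"}'

# The equivalent for a VM scale set (Task six); new and upgraded
# instances run the same configuration command
az vmss extension set \
  --resource-group az104-rg \
  --vmss-name az104-vmss0 \
  --publisher Microsoft.Azure.Extensions \
  --name CustomScript \
  --settings '{"commandToExecute": "apt-get update && apt-get install -y nginx"}'
```

Windows VMs use a different extension (publisher Microsoft.Compute, name CustomScriptExtension) with the same overall pattern; the lab walks you through the exact values to use.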

Lab architecture diagram: The following diagram illustrates the different steps and deployment components involved in the exercise. The tasks are numbered 1 to 7 to correlate with the steps in the exercise:

Figure 13.3 – Managing VMs – architecture diagram

You have now experienced working with VMs both as individual resources and scale sets and should feel confident in working with these in your environments. It’s best practice to delete your resources from the lab to prevent unnecessary spending.
