Learn to configure virtual networks and subnets, including IP addressing.
Azure networking components offer a range of functionalities and services.
Azure Virtual Network (VNet) is a representation of your own network in the cloud.
Each VNet you create has its own CIDR block and can be linked to other VNets and on-premises networks if the CIDR blocks do not overlap.
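Before linking or peering, you can sanity-check two CIDR blocks for overlap with Python's standard `ipaddress` module (the address ranges below are illustrative):

```python
import ipaddress

def can_link(cidr_a: str, cidr_b: str) -> bool:
    """Two networks can be linked or peered only if their CIDR blocks don't overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(can_link("10.0.0.0/16", "10.1.0.0/16"))  # distinct ranges -> True
print(can_link("10.0.0.0/16", "10.0.1.0/24"))  # contained range -> False
```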
A virtual network can be segmented into one or more subnets. Subnets provide logical divisions within your network. Subnets can help improve security, increase performance, and make it easier to manage the network.
- Service requirements. Each service directly deployed into a virtual network has specific requirements for routing and for the types of traffic that must be allowed into and out of its subnets.
- Virtual appliances. Azure routes network traffic between all subnets in a virtual network, by default. You can override Azure’s default routing to prevent Azure routing between subnets, or to route traffic between subnets through a network virtual appliance.
- Network security groups. You can associate zero or one network security group to each subnet in a virtual network. You can associate the same, or a different, network security group to each subnet.
- Private Links. Azure Private Link provides private connectivity from a virtual network to Azure platform as a service (PaaS), customer-owned, or Microsoft partner services.
Note: Azure reserves the first four IP addresses and the last IP address in each subnet, for a total of five reserved IP addresses per subnet.
Note: Plan to use an address space that is not already in use in your organization, either on-premises or in the cloud. Even if you plan for cloud-only virtual networks, you may later decide to connect an on-premises site.
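The reservation rule is easy to quantify — a short sketch using Python's `ipaddress` module:

```python
import ipaddress

def usable_addresses(cidr: str) -> int:
    """Azure reserves the first four addresses and the last address of every
    subnet, so usable addresses = subnet size - 5."""
    return ipaddress.ip_network(cidr).num_addresses - 5

print(usable_addresses("10.0.0.0/24"))  # 256 - 5 = 251
print(usable_addresses("10.0.0.0/29"))  # 8 - 5 = 3
```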
Plan IP addressing
Private IP addresses: For communication within an Azure virtual network (VNet) and with your on-premises network when you use a VPN gateway or ExpressRoute circuit to extend your network to Azure.
Public IP addresses: For communication with the Internet, including Azure public-facing services.
Static vs dynamic addressing
Static IP addresses don’t change and are best for certain situations such as:
- DNS name resolution, where a change in the IP address would require updating host records.
- IP address-based security models that require apps or services to have a static IP address.
- TLS/SSL certificates linked to an IP address.
- Firewall rules that allow or deny traffic using IP address ranges.
- Role-based VMs such as Domain Controllers and DNS servers.
Note: You may decide to separate dynamically and statically assigned IP resources into different subnets.
IP Version. Select IPv4, IPv6, or Both. Selecting Both results in two public IP addresses being created: one IPv4 address and one IPv6 address.
- Dynamic. Dynamic addresses are assigned only after a public IP address is associated to an Azure resource, and the resource is started for the first time.
- Static. Static addresses are assigned when a public IP address is created. Static addresses aren’t released until a public IP address resource is deleted.
Associate public IP addresses
Associate private IP addresses
- Dynamic. Azure assigns the next available unassigned or unreserved IP address in the subnet’s address range. For example, Azure assigns 10.0.0.10 to a new resource, if addresses 10.0.0.4-10.0.0.9 are already assigned to other resources. Dynamic is the default allocation method.
- Static. You select and assign any unassigned or unreserved IP address in the subnet’s address range. For example, if a subnet’s address range is 10.0.0.0/16 and addresses 10.0.0.4-10.0.0.9 are already assigned to other resources, you can assign any address between 10.0.0.10 – 10.0.255.254.
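A minimal model of the dynamic allocation behavior described above (this sketch only mimics the documented rule; it is not how Azure's allocator is implemented):

```python
import ipaddress

def next_dynamic_address(cidr: str, assigned: set) -> str:
    """Return the next available address, skipping Azure's four reserved
    leading addresses and any address already assigned."""
    net = ipaddress.ip_network(cidr)
    first_usable = net.network_address + 4  # first four addresses are reserved
    for host in net.hosts():  # hosts() already excludes network/broadcast
        if host < first_usable:
            continue
        if str(host) not in assigned:
            return str(host)
    raise ValueError("subnet exhausted")

in_use = {f"10.0.0.{i}" for i in range(4, 10)}  # 10.0.0.4-10.0.0.9 taken
print(next_dynamic_address("10.0.0.0/16", in_use))  # -> 10.0.0.10
```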
Interactive lab simulation 1
Configure network security groups
Learn how to implement network security groups and ensure network security group rules are correctly applied.
Implement network security groups
A network security group (NSG) contains a list of security rules that allow or deny inbound or outbound network traffic. An NSG can be associated to a subnet or a network interface.
Determine network security group rules
There are three default inbound security rules:
The rules deny all inbound traffic except from the virtual network and Azure load balancers.
There are three default outbound security rules:
The rules only allow outbound traffic to the Internet and the virtual network.
Determine network security group effective rules
If there were incoming traffic on port 80:
1. You would need the NSG at the subnet level to ALLOW port 80.
2. You would also need another NSG with an ALLOW rule on port 80 at the NIC level.
For incoming traffic:
the NSG set at the subnet level is evaluated first, then the NSG set at the NIC level is evaluated.
For outgoing traffic, it’s the reverse.
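The two-stage evaluation reduces to a logical AND — a trivial sketch:

```python
def inbound_allowed(subnet_nsg_allows: bool, nic_nsg_allows: bool) -> bool:
    """Inbound traffic is evaluated by the subnet-level NSG first, then the
    NIC-level NSG; a deny at either level blocks the traffic."""
    return subnet_nsg_allows and nic_nsg_allows

# The port-80 scenario above: both NSGs must ALLOW for traffic to reach the VM.
print(inbound_allowed(True, True))   # True
print(inbound_allowed(True, False))  # False - blocked at the NIC
print(inbound_allowed(False, True))  # False - blocked at the subnet
```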
If you have several NSGs and aren’t sure which security rules are being applied, you can use the Effective security rules link. For example, you could verify the security rules being applied to a network interface.
Here we have an NSG, nsgglobal, attached to the subnet, and a VM deployed to that subnet. Check the VM blade | Networking:
Create network security group rules
- Source. The source filter can be Any, an IP address range, an Application security group, or a default tag.
- Destination. The destination filter can be Any, an IP address range, an application security group, or a default tag.
- Service. The service specifies the destination protocol and port range for this rule.
- Priority. Rules are processed in priority order; the lower the number, the higher the priority. We recommend leaving gaps between rules – 100, 200, 300, etc. – so that it’s easier to add new rules without having to edit existing rules.
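First-match processing in priority order can be sketched like this (the rule set is hypothetical):

```python
# Each rule: (priority, destination_port, action). Lower priority number wins.
rules = [
    (300, 80, "Deny"),
    (100, 443, "Allow"),
    (200, 80, "Allow"),
]

def evaluate(port: int, rules: list) -> str:
    """Process rules lowest-priority-number first; stop at the first match."""
    for priority, rule_port, action in sorted(rules):
        if rule_port == port:
            return action
    return "Deny"  # stand-in for the built-in default deny rules

print(evaluate(80, rules))   # "Allow" - priority 200 matches before 300
print(evaluate(22, rules))   # "Deny"  - no match, defaults apply
```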
Implement Application Security Groups
- Logically group virtual machines by workload and define network security rules based on those groups.
- ASGs work in the same way as NSGs but provide an application-centric way of looking at your infrastructure. You join virtual machines to the ASG, and then use the ASG as a source or destination in NSG rules.
Advantages of using an application security group
- The configuration doesn’t require specific IP addresses.
- It would be difficult to specify IP addresses because of the number of servers and because the IP addresses could change. You also don’t need to arrange the servers into a specific subnet.
- This configuration doesn’t require multiple rule sets. You don’t need to create a separate rule for each VM. You can dynamically apply new rules to ASG. New security rules are automatically applied to all the VMs in the Application Security Group.
- The configuration is easy to maintain and understand since it is based on workload usage.
Interactive lab simulation
Note: I had to add a public IP address, since I forgot it.
Configure Azure Firewall
Your company is spread across multiple Azure regions.
The networking infrastructure includes multiple virtual networks and connections to an on-premises network.
The IT staff is concerned about malicious actors trying to infiltrate the network.
You need to implement Azure Firewall.
Determine Azure Firewall uses
Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources.
You can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks.
Azure Firewall features
- Built-in high availability
- Azure Firewall can be configured during deployment to span multiple Availability Zones for increased availability.
- Unrestricted cloud scalability.
- Application FQDN filtering rules.
- Network traffic filtering rules
- Threat intelligence, alert and deny traffic from/to known malicious IP addresses and domains.
- Multiple public IP addresses. You can associate multiple public IP addresses with your firewall.
Create firewall rules
There are three kinds of rules that you can configure in Azure Firewall. Remember, by default, Azure Firewall denies all traffic unless you explicitly allow it.
You can configure Azure Firewall Destination Network Address Translation (DNAT) to translate and filter inbound traffic to your subnets. Each rule in the NAT rule collection is used to translate your firewall public IP and port to a private IP and port. Scenarios where NAT rules might be helpful are publishing SSH, RDP, or non-HTTP/S applications to the Internet.
Any non-HTTP/S traffic that will be allowed to flow through the firewall must have a network rule. For example, if resources in one subnet must communicate with resources in another subnet, then you would configure a network rule from the source to the destination.
Application rules define fully qualified domain names (FQDNs) that can be accessed from a subnet. For example, specify the Windows Update network traffic through the firewall. Configuration settings include:
Name, source address, protocol, port, and target FQDNs.
When a packet is being inspected to determine if it is allowed or not, the rules are processed in this order:
1. DNAT rules
2. Network rules
3. Application rules
Once a rule is found that allows the traffic through, no more rules are checked.
Configure Azure DNS
Azure DNS enables you to host the DNS records for your domains on Azure infrastructure.
Initial domain name
This instance of the domain has an initial domain name in the form domainname.onmicrosoft.com. The initial domain name is intended to be used until a custom domain name is verified.
Custom domain name
The initial domain name can’t be changed or deleted. You can, however, add a routable custom domain name that you control.
Verify custom domain names
After adding the custom domain name, you must verify ownership of the domain name. Verification is performed by adding a DNS record. The DNS record can be MX or TXT. Once the DNS record is added, Azure will query the DNS domain for the presence of the record. This could take several minutes or several hours. When Azure verifies the presence of the DNS record, it will then add the domain name to the subscription.
Create Azure DNS zones
Azure DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without needing to add a custom DNS solution.
A DNS zone hosts the DNS records for a domain. So, to start hosting your domain in Azure DNS, you need to create a DNS zone for that domain name. Each DNS record for your domain is then created inside this DNS zone.
The name of the zone must be unique within the resource group, and the zone must not exist already.
The same zone name can be reused in a different resource group or a different Azure subscription.
Where multiple zones share the same name, each instance is assigned different name server addresses.
Root/Parent domain is registered at the registrar and pointed to Azure DNS.
Child domains are registered in Azure DNS directly.
Delegate DNS domains
To delegate your domain to Azure DNS, you first need to know the name server names for your zone. Each time a DNS zone is created Azure DNS allocates name servers from a pool. Once the Name Servers are assigned, Azure DNS automatically creates authoritative NS records in your zone.
The easiest way to locate the name servers assigned to your zone is through the Azure portal. In this example, the zone has been assigned four name servers: ‘ns1-02.azure-dns.com’, ‘ns2-02.azure-dns.net’, ‘ns3-02.azure-dns.org’, and ‘ns4-02.azure-dns.info’:
Once the DNS zone is created, and you have the name servers, you need to update the parent domain. Each registrar has their own DNS management tools to change the name server records for a domain. In the registrar’s DNS management page, edit the NS records and replace the NS records with the ones Azure DNS created.
The term registrar refers to the third party domain registrar. This is the company where you registered your domain.
When delegating a domain to Azure DNS, you must use the name server names provided by Azure DNS. You should always use all four name server names, regardless of the name of your domain.
Add DNS record sets
DNS record sets
A record set is a collection of records in a zone that have the same name and are the same type.
The Add record set page will change depending on the type of record you select. For an A record, you will need the TTL (Time to Live) and IP address. The time to live, or TTL, specifies how long each record is cached by clients.
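Grouping records by (name, type) can be modeled in a few lines (the records here are made up):

```python
from collections import defaultdict

# Hypothetical records for one zone: (name, type, value).
records = [
    ("www", "A", "10.0.0.4"),
    ("www", "A", "10.0.0.5"),
    ("mail", "MX", "10 mail.contoso.com"),
]

# A record set is the collection of records sharing the same name and type.
record_sets = defaultdict(list)
for name, rtype, value in records:
    record_sets[(name, rtype)].append(value)

print(record_sets[("www", "A")])  # both A records live in one record set
print(len(record_sets))           # 2 record sets in total
```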
Plan for private DNS zones
Use your own custom domain names rather than the Azure-provided names
The DNS records for the private zone are not viewable or retrievable. But, the DNS records are registered and will resolve successfully.
Azure private DNS benefits
Removes the need for custom DNS solutions.
Use all common DNS records types. Azure DNS supports A, AAAA, CNAME, MX, PTR, SOA, SRV, and TXT records.
Automatic hostname record management. Along with hosting your custom DNS records, Azure automatically maintains hostname records for the VMs in the specified virtual networks.
Hostname resolution between virtual networks. Unlike Azure-provided host names, private DNS zones can be shared between virtual networks. This capability simplifies cross-network and service-discovery scenarios, such as virtual network peering.
Familiar tools and user experience. To reduce the learning curve, this new offering uses well-established Azure DNS tools (PowerShell, Azure Resource Manager templates, and the REST API).
Available in all Azure regions.
Scenario 1: Name resolution scoped to a single virtual network
In the above diagram, VNET1 contains two VMs (VM1 and VM2). Each VM has a private IP address. When you create a Private Zone (contoso.lab) and link it to VNet1, Azure DNS will automatically create two A records in the zone if you enable auto registration in the link configuration. DNS queries from VM1 to resolve VM2.contoso.lab will receive a DNS response that contains the Private IP of VM2. And, a Reverse DNS query (PTR) for the Private IP of VM1 (10.0.0.4) issued from VM2 will receive a DNS response that contains the FQDN of VM1, as expected.
Scenario 2: Name resolution for multiple networks
Name resolution across multiple virtual networks is probably the most common usage for DNS private zones. The following diagram shows a simple version of this scenario where there are only two virtual networks – VNet1 and VNet2.
VNet1 is designated as a Registration virtual network and VNET2 is designated as a Resolution virtual network.
The intent is for both virtual networks to share a common zone contoso.lab.
The Resolution and Registration virtual networks are linked to the zone.
DNS records for the Registration VNet VMs are automatically created. You can manually add DNS records for VMs in the Resolution virtual network.
Configure virtual network peering
The company has deployed services into separate virtual networks. It hasn’t configured private connectivity between the virtual networks.
Once peered, the virtual networks appear as one, for connectivity purposes.
There are two types of VNet peering.
Regional VNet peering connects Azure virtual networks in the same region.
Global VNet peering connects Azure virtual networks in different regions.
Benefits of virtual network peering
- Private. Network traffic between peered virtual networks is private and kept on the Microsoft backbone network.
- Performance. A low-latency, high-bandwidth connection.
- Seamless. The ability to transfer data across Azure subscriptions, deployment models, and across Azure regions.
- No disruption.
Determine gateway transit and connectivity
When virtual networks are peered, you can configure the VPN gateway in the peered virtual network as a transit point.
A virtual network can have only one gateway; gateway transit is supported for both regional VNet peering and Global VNet peering.
Create virtual network peering
Here are the steps to configure VNet peering.
- Create two virtual networks.
- Peer the virtual networks.
- Create virtual machines in each virtual network.
- Test the communication between the virtual machines.
A ping or tnc (Test-NetConnection) should fail before we peer.
When you add a peering on one virtual network, the second virtual network configuration is automatically added.
Refresh and it should be connected
Network watcher topology
vmtest1 is in Virtual network/subnet; vnet004799/default, 10.0.x.x
vmtest2 is in Virtual network/subnet; vnet004798/default2, 10.1.x.x
Now we install RabbitMQ on vmtest2.
NSGs in Azure are a way for you to control (similar to access lists) what traffic is allowed to pass through. Remember that NSGs can be applied to either a subnet or a VM NIC, so you can control inbound/outbound traffic at different points.
The Windows firewall is what we're all used to. However, other Azure resources in a VNet don't have their own firewall the way Windows does, so it's important to have NSGs to protect them. You will need to allow traffic on the port both at the NSG level and in the Windows firewall.
By default, when you configure a peering, there is full access between the VNets. You can use an NSG (network security group) to block specific traffic.
No NSG and no firewall edits on vmtest2:
No NSG, just edit advanced firewall on vmtest2 add inbound 5672.
tnc from vmtest1:
Add a deny rule on vmtest2 nsg
and the tnc now returns False, as expected.
Determine service chaining uses
VNet Peering is nontransitive. When you establish VNet peering between VNet1 and VNet2 and between VNet2 and VNet3, VNet peering capabilities do not apply between VNet1 and VNet3. However, you can configure user-defined routes and service chaining to provide the transitivity. This allows you to:
- Implement a multi-level hub and spoke architecture.
- Overcome the limit on the number of VNet peerings per virtual network.
When you deploy hub-and-spoke networks, the hub virtual network can host infrastructure components like the network virtual appliance or VPN gateway. All the spoke virtual networks can then peer with the hub virtual network. Traffic can flow through network virtual appliances or VPN gateways in the hub virtual network.
User-defined routes and service chaining
Virtual network peering enables the next hop in a user-defined route to be the IP address of a virtual machine in the peered virtual network, or a VPN gateway.
Service chaining lets you define user routes. These routes direct traffic from one virtual network to a virtual appliance, or virtual network gateway.
Interactive lab simulation
Task 1: Create the infrastructure environment. In this task, you’ll deploy three virtual machines. Virtual machines will be deployed in different regions and virtual networks.
Use a template to create the virtual networks and virtual machines in the different regions. You can review the lab template.
Use Azure PowerShell to deploy the template.
Task 2: Configure local and global virtual network peering.
Create a local virtual network peering between the two virtual networks in the same region.
This was done in the example above.
Create a global virtual network peering between virtual networks in different regions.
Task 3: Test intersite connectivity between virtual machines on the three virtual networks.
Test the virtual machine connections in the same region (done above).
Test the virtual machine connections in different regions.
testit-vnet3, vnet004797, East US, default3, 172.16.x.x
testit-vnet2, vnet004799, West Europe, default 10.0.x.x
Peer vnets and check port 80
There are now two peered in vnet004799
And browsing IIS is success
View topology, search for network watcher
We recommend that you use the address ranges enumerated in RFC 1918, which have been set aside by the IETF for private, non-routable address spaces:
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
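A quick check that a planned address space falls inside one of these RFC 1918 blocks:

```python
import ipaddress

# The three RFC 1918 private address blocks.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr: str) -> bool:
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in RFC1918)

print(is_rfc1918("10.1.0.0/16"))    # True
print(is_rfc1918("172.32.0.0/16"))  # False - 172.16/12 ends at 172.31.255.255
```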
Configure VPN Gateway
You need to create VPN gateways to securely connect your company sites to Azure.
A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet.
Each virtual network can have only one VPN gateway (but multiple connections to the same VPN gateway)
Site-to-site connections connect on-premises datacenters to Azure virtual networks
VNet-to-VNet connections connect Azure virtual networks (custom)
Point-to-site (User VPN) connections connect individual devices to Azure virtual networks
A virtual network gateway is composed of two or more VMs that are deployed to a specific subnet you create called the gateway subnet.
Virtual network gateway VMs contain routing tables and run specific gateway services.
These VMs are created when you create the virtual network gateway. You can’t directly configure the VMs that are part of the virtual network gateway.
Note: Creating a virtual network gateway can take up to 45 minutes to complete.
Create site-to-site connections
High level steps
Reserve an IP address range for this virtual network that does not overlap with on-prem. No duplicate ranges.
Specify the DNS server (optional)
Create the gateway subnet
Before creating a virtual network gateway for your virtual network, you first need to create the gateway subnet.
Use a CIDR block of /27 or /28 to provide enough IP addresses.
VMs are deployed to the gateway subnet and configured with the required VPN gateway settings
Never deploy other resources to the gateway subnet
The gateway subnet must be named GatewaySubnet.
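To see the headroom each prefix gives the gateway subnet after Azure's five reserved addresses (the 10.0.255.0 range is just an example):

```python
import ipaddress

# Compare gateway subnet sizes; Azure reserves 5 addresses in every subnet.
for prefix in (29, 28, 27):
    net = ipaddress.ip_network(f"10.0.255.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses} total, {net.num_addresses - 5} usable")
# /29: 8 total, 3 usable
# /28: 16 total, 11 usable
# /27: 32 total, 27 usable
```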
Create the VPN gateway
The VPN gateway settings that you chose are critical to creating a successful connection.
Determine the VPN gateway type
Point-to-Site (P2S) connection requires a Route-based VPN type.
Site-to-Site (S2S) configurations require a VPN device. Some VPN devices only support a certain VPN type.
Route-based VPNs. Route-based VPNs use routes in the IP forwarding or routing table to direct packets into their corresponding tunnel interfaces.
The tunnel interfaces then encrypt or decrypt the packets in and out of the tunnels.
Are configured as any-to-any (or wild cards).
Policy-based VPNs. Policy-based VPNs encrypt and direct packets through IPsec tunnels based on the IPsec policies configured with the combinations of address prefixes between your on-premises network and the Azure VNet.
Defined as an access list in the VPN device configuration
Policy-based VPNs are supported only on the Basic gateway SKU, and only for S2S connections.
Most VPN Gateway configurations require a Route-based VPN.
Determine gateway SKU and generation
SKUs differ by bandwidth, S2S tunnels, and P2S tunnels.
Create the local network gateway
The local network gateway typically refers to the on-premises location.
Set up the on-premises VPN gateway
There is a validated list of standard VPN devices that work well with the VPN gateway. This list was created in partnership with device manufacturers like Cisco, Juniper, Ubiquiti, and Barracuda Networks.
To configure your VPN device, you need:
A shared key. The same shared key that you specify when creating the VPN connection.
The public IP address of your VPN gateway. The IP address can be new or existing.
Create the VPN connection
Once your VPN gateways are created, you can create the connection between them. If your VNets are in the same subscription, you can use the portal.
Name. Enter a name for your connection.
Connection type. Select Site-to-Site (IPSec) from the drop-down.
Shared key (PSK). In this field, enter a shared key for your connection. You can generate or create this key yourself.
In a site-to-site connection, the key you use is the same for your on-premises device and your virtual network gateway connection.
Verify the VPN connection
Determine high availability scenarios
Every Azure VPN gateway consists of two instances in an active-standby configuration. For any planned maintenance or unplanned disruption that happens to the active instance, the standby instance would take over (failover) automatically, and resume the S2S VPN or VNet-to-VNet connections.
The switch over will cause a brief interruption. For planned maintenance, the connectivity should be restored within 10 to 15 seconds. For unplanned issues, the connection recovery will be longer, about 1 minute to 1 and a half minutes in the worst case. For P2S VPN client connections to the gateway, the P2S connections will be disconnected and the users will need to reconnect from the client machines.
You can now create an Azure VPN gateway in an active-active configuration, where both instances of the gateway VMs will establish S2S VPN tunnels to your on-premises VPN device.
Configure ExpressRoute and Virtual WAN
Determine ExpressRoute uses
Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud.
Use Azure ExpressRoute to create private connections between Azure datacenters and infrastructure on your premises or in a colocation environment.
ExpressRoute connections don’t go over the public Internet, and they offer more reliability, faster speeds, and lower latencies than typical Internet connections.
ExpressRoute gives you a fast and reliable connection to Azure with bandwidths up to 100 Gbps.
The high connection speeds make it excellent for scenarios like periodic data migration, replication for business continuity, and disaster recovery.
Determine ExpressRoute capabilities
ExpressRoute is supported across all Azure regions and locations.
Microsoft uses BGP to exchange routes, Layer 3 connectivity.
Each ExpressRoute circuit consists of two connections (Redundancy) to two Microsoft Enterprise edge routers (MSEEs) from the connectivity provider/your network edge.
ExpressRoute connections enable access to Microsoft Azure services.
Connect to Microsoft in one of our peering locations and access regions within the geopolitical region.
(For example, connecting to Microsoft in Amsterdam through ExpressRoute gives you access to all Microsoft cloud services hosted in Northern and Western Europe.)
Coexisting ExpressRoute and VPN gateway
You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute, or use Site-to-Site VPNs to connect to sites that are not part of your network but are connected through ExpressRoute.
ExpressRoute connection models
Colocated at a cloud exchange
If you're colocated in a facility with a cloud exchange, you can order virtual cross-connections to the Microsoft cloud through the colocation provider's Ethernet exchange.
Point-to-point Ethernet Connection
Connect your on-premises datacenters/offices to the Microsoft cloud through point-to-point Ethernet links.
Integrate your WAN with the Microsoft cloud
Intersite connection options
Determine Virtual WAN uses
Azure Virtual WAN is a networking service that provides optimized and automated branch connectivity to, and through, Azure.
Azure regions serve as hubs that you can choose to connect your branches to.
Azure Virtual WAN brings together many Azure cloud connectivity services such as site-to-site VPN, User VPN (point-to-site), and ExpressRoute into a single operational interface.
Virtual WAN advantages
Integrated connectivity solutions in hub and spoke.
Automated spoke setup and configuration.
Intuitive troubleshooting. You can see the end-to-end flow within Azure, and then use this information to take required actions.
Virtual WAN types
Configure network routing and endpoints
Azure uses system routes to direct network traffic:
Traffic between VMs in the same subnet.
Between VMs in different subnets in the same virtual network.
Data flow from VMs to the Internet.
A route table contains a set of rules, called routes, that specifies how packets should be routed in a virtual network. Routing tables are associated to subnets, and each packet leaving a subnet is handled based on the associated route table. Packets are matched to routes using the destination. The destination can be an IP address, a virtual network gateway, a virtual appliance, or the internet. If a matching route can’t be found, then the packet is dropped.
Identify user-defined routes
Azure automatically handles all network traffic routing.
But, what if you want to do something different?
Configure user-defined routes (UDRs). UDRs control network traffic by defining routes that specify the next hop of the traffic flow. The hop can be a virtual network gateway, virtual network, internet, or virtual appliance.
Each route table can be associated to multiple subnets, but a subnet can only be associated to a single route table.
There are no charges for creating route tables in Microsoft Azure.
Determine service endpoint uses
A virtual network service endpoint provides the identity of your virtual network to the Azure service.
Once service endpoints are enabled in your virtual network, you can secure Azure service resources to your virtual network by adding a virtual network rule to the resources.
With service endpoints, service traffic switches to use virtual network private addresses as the source IP addresses when accessing the Azure service from a virtual network.
Why use a service endpoint?
Improved security for your Azure service resources.
When service endpoints are enabled in your virtual network, you secure Azure service resources to your virtual network by adding a virtual network rule. The rule improves security by fully removing public Internet access to resources, and allowing traffic only from your virtual network.
Optimal routing for Azure service traffic from your virtual network.
Endpoints always take service traffic directly from your virtual network to the service on the Microsoft Azure backbone network.
Simple to set up with less management overhead. You no longer need reserved, public IP addresses in your virtual networks to secure Azure resources through IP firewall. There are no NAT or gateway devices required to set up the service endpoints. Service endpoints are configured through the subnet. There’s no extra overhead to maintaining the endpoints.
Determine service endpoint services
Several services are available including: Azure Active Directory, Azure Cosmos DB, EventHub, KeyVault, Service Bus, SQL, and Storage.
Adding service endpoints can take up to 15 minutes to complete. Each service endpoint integration has its own Azure documentation page.
Identify private link uses
Azure Private Link provides private connectivity from a virtual network to Azure platform as a service (PaaS), customer-owned, or Microsoft partner services. It simplifies the network architecture and secures the connection between endpoints in Azure by eliminating data exposure to the public internet.
Private connectivity to services on Azure.
Integration with on-premises and peered networks.
Protection against data exfiltration for Azure resources.
Services delivered directly to your customers’ virtual networks.
Interactive lab simulation
Task 1: Create and configure a virtual network in Azure.
Create a virtual network, az104-04-vnet1.
Add two subnets, Subnet0 and Subnet1, to the virtual network.
Task 2: Deploy virtual machines into different subnets of the virtual network.
Task 3: Configure private and public IP addresses of Azure VMs. Ensure the IP addresses don’t change over time
Task 4: Configure network security groups. Protect the virtual machine public endpoints from being accessible from the internet.
Task 5: Configure Azure DNS for internal name resolution. Ensure internal Azure virtual machines names and IP addresses can be resolved.
Create a private DNS zone for your organization.
Add a virtual network link to the virtual network.
Verify the virtual machines DNS records are registered.
Verify internal DNS name resolution is working.
Task 6: Configure Azure DNS for external name resolution. Ensure a publicly available domain name can be resolved by external queries.
Create a DNS zone for a publicly available domain name.
Add a DNS record for each virtual machine.
Verify external DNS name resolution is working.
Private DNS zone: VM records only.
Configure Azure Load Balancer
Determine Azure load balancer uses
Load Balancer can be used for inbound and outbound scenarios.
Keep this diagram in mind since it covers the four components that must be configured for your load balancer: Frontend IP configuration, Backend pools, Health probes, and Load-balancing rules.
Implement a public load balancer
Two types of load balancers: public and internal.
A public load balancer maps the public IP address and port number of incoming traffic to the private IP address and port number of the VM.
Implement an internal load balancer
Directs traffic to resources that are inside a virtual network or that use a VPN to access Azure infrastructure. Frontend IP addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources.
An internal load balancer supports the following types of load balancing:
Within a virtual network.
For a cross-premises virtual network.
For multi-tier applications (internet-facing tier where the backend is not internet-facing).
For line-of-business applications.
A public load balancer could be placed in front of the internal load balancer to create a multi-tier application.
Determine load balancer SKUs
Create backend pools
To distribute traffic, a back-end address pool contains the IP addresses of the virtual NICs that are connected to the load balancer.
Create load balancer rules
The rule maps a given frontend IP and port combination to a set of backend IP addresses and port combination. Before configuring the rule, create the frontend, backend, and health probe.
By default, Azure Load Balancer distributes network traffic equally among multiple VM instances.
Session persistence specifies how traffic from a client should be handled. The default behavior (None) is that successive requests from a client may be handled by any virtual machine.
Can change this:
None (default) specifies any virtual machine can handle the request.
Client IP specifies that successive requests from the same client IP address will be handled by the same virtual machine.
Client IP and protocol specifies that successive requests from the same client IP address and protocol combination will be handled by the same virtual machine.
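The three session-persistence modes can be sketched as a toy hash over the flow's fields. This is only an illustration in Python; the backend names and the hash itself are made up, not Azure's actual implementation:

```python
import hashlib

backends = ["vm0", "vm1", "vm2"]  # hypothetical backend pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto, mode="None"):
    # Build the hash key according to the session-persistence mode.
    if mode == "None":            # five-tuple: each new flow may land anywhere
        key = (src_ip, src_port, dst_ip, dst_port, proto)
    elif mode == "ClientIP":      # two-tuple: same client IP -> same VM
        key = (src_ip, dst_ip)
    else:                         # "ClientIPProtocol": three-tuple
        key = (src_ip, dst_ip, proto)
    digest = hashlib.sha256("|".join(map(str, key)).encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# With Client IP affinity the source port no longer matters, so two
# connections from the same client reach the same VM:
a = pick_backend("203.0.113.7", 50001, "20.0.0.4", 80, "TCP", mode="ClientIP")
b = pick_backend("203.0.113.7", 62344, "20.0.0.4", 80, "TCP", mode="ClientIP")
assert a == b
```

With the default mode, the source port is part of the key, so a new connection from the same client may be hashed to a different VM.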
Create health probes
Health probes monitor the status of your application.
The health probe dynamically adds or removes VMs from the load balancer rotation based on their response to health checks.
HTTP custom probe: the load balancer probes the endpoint (every 15 seconds by default); an instance is healthy if it responds with HTTP 200 within the timeout period.
TCP custom probe: relies on establishing a successful TCP session to a defined probe port.
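The probe behavior can be sketched as a small state machine over probe results. This is a simplified model; the consecutive-failure threshold of 2 is an assumption for illustration, not the Azure default:

```python
def evaluate_probe(responses, unhealthy_threshold=2):
    # Track health across a sequence of probe results (HTTP status codes).
    healthy, failures, states = True, 0, []
    for status in responses:
        if status == 200:         # HTTP 200 marks the instance healthy again
            failures, healthy = 0, True
        else:
            failures += 1
            if failures >= unhealthy_threshold:
                healthy = False   # removed from the load balancer rotation
        states.append(healthy)
    return states

# Probing every 15 seconds: repeated failures take the VM out of
# rotation, and a later 200 brings it back in.
print(evaluate_probe([200, 200, 500, 500, 200]))
```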
Implement traffic management
Task 1: Provision the lab environment. In this task, you’ll deploy four virtual machines into the same Azure region. The first two will reside in a hub virtual network, while each of the remaining two will reside in a separate spoke virtual network.
$rgname = "az104-06-rg1-682093"
New-AzResourceGroup -Name $rgname -Location "west europe" -Force
New-AzResourceGroupDeployment -ResourceGroupName $rgname -TemplateFile .\az104-06-vms-loop-template.json -TemplateParameterFile .\az104-06-vms-loop-parameters.json -WhatIf
Task 2: Configure the hub and spoke network topology. In this task, you’ll configure local peering between the virtual networks you deployed in the previous tasks in order to create a hub and spoke network topology.
Configure virtual network peering between the virtual networks.
Ensure to allow forwarded traffic to facilitate routing between spoke virtual networks.
In the hub and spoke model, the hub is a virtual network that acts as a central location for managing external connectivity and hosting services used by multiple workloads. The spokes are virtual networks that host workloads and connect to the central hub through virtual network peering.
All traffic passing in or out of the workload spoke networks is routed through the hub network where it can be routed, inspected, or otherwise managed by centrally managed IT rules or processes.
Task 3: Test transitivity of virtual network peering. In this task, you’ll test transitivity of virtual network peering by using Network Watcher.
Network watcher | Connection troubleshoot
This result is expected, since the two spoke vnets are not peered with each other (vnet peering is not transitive).
vnet2 and vnet3 can only talk to vnet1, not each other.
Task 4: Configure routing in the hub and spoke topology. In this task, you’ll configure and test routing between the two spoke virtual networks.
vm az-104-06-nic | IP configuration | IP forwarding = Enabled
vm az-104-06-vm0 | Run command | RunPowershellScript | Install-WindowsFeature RemoteAccess -IncludeManagementTools
Additional commands are needed to add and enable the route.
In Azure, add two route tables and associate them with the spoke vnet subnets.
Network watcher and test:
The network path shows that the traffic was routed via the IP address of the hub virtual machine that was configured as a router.
This result is expected, since traffic between the spoke vnets is now routed via the VM located in the hub vnet, which acts as a router.
Task 5: Implement Azure Load Balancer.
Create a load balancer with a public IP address, add a backend pool containing the two VMs, configure an inbound rule and a health probe, then copy the public IP and open it in a browser.
Task 6: Implement Azure Application Gateway.
Add a new subnet to the hub vnet, create an application gateway with a public IP address, add a backend pool containing the private IPs of the VMs in the spoke vnets, configure routing rules and backend settings, then copy the public IP and open it in a browser.
They behave the same, what is the difference?
The main difference is the layer at which they operate: Azure Load Balancer works at layer 4 (TCP and UDP) and routes traffic based on source IP address and port to a destination IP address and port, while Application Gateway works at layer 7 (HTTP/HTTPS) and can make routing decisions based on attributes of the HTTP request, such as the URL path or host header.
Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 – TCP and UDP) and route traffic based on source IP address and port, to a destination IP address and port.
Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI) model. It’s the single point of contact for clients. Load balancer distributes inbound flows that arrive at the load balancer’s front end to backend pool instances. These flows are according to configured load-balancing rules and health probes. The backend pool instances can be Azure Virtual Machines or instances in a Virtual Machine Scale Set.
Configure Azure Application Gateway
The vehicle registration website has been running on a single server, and has suffered multiple outages because of server failures. This has resulted in frustrated drivers trying to register their vehicles by month’s end before their registrations expire.
You would like to improve resiliency by adding multiple web servers to its site, and distribute the load across them. You would also like to centralize their site on a single load-balancing service. This will simplify the URLs for site visitors.
Application Gateway manages the requests that client applications send to a web app.
Uses round-robin to load balance requests to the servers in each back-end pool.
Provides session stickiness.
Support for the HTTP, HTTPS, HTTP/2 and WebSocket protocols.
A web application firewall to protect against web application vulnerabilities.
End-to-end request encryption.
Autoscaling, to dynamically adjust capacity as your web traffic load changes.
Determine Application Gateway routing
Clients send requests for your web apps to the IP address or DNS name of the gateway.
The gateway routes using a set of rules.
There are two primary methods of routing traffic, path-based routing and multiple site routing.
Path-based routing sends requests with different URL paths to different pools of back-end servers.
Requests with path /video/* go to back-end pool-1.
Requests with path /images/* go to back-end pool-2.
Multiple site routing:
Multiple site routing configures more than one web application on the same application gateway instance.
you could direct all requests for http://contoso.com to servers in one back-end pool,
and requests for http://fabrikam.com to another back-end pool.
Multi-site configurations are useful for supporting multi-tenant applications
Redirection. Redirection can send traffic to another site, or from HTTP to HTTPS.
Rewrite HTTP headers. HTTP headers allow the client and server to pass parameter information with the request or the response.
Custom error pages. Application Gateway allows you to create custom error pages instead of displaying default error pages.
Application Gateway component set up
Application gateway components:
A frontend IP address: a public IP address, a private IP address, or both.
Listener accepts traffic arriving on a specified combination of protocol, port, host, and IP address.
A Basic listener only routes a request based on the path in the URL. A Multi-site listener can also route requests using the hostname element of the URL.
Listeners also handle TLS/SSL certificates.
A routing rule binds a listener to the back-end pools.
Routing rules use HTTP or HTTPS. Other configuration information includes protocol, session stickiness, connection draining, request timeout period, and health probes.
A back-end pool references a collection of web servers.
Each pool can specify a fixed set of virtual machines, a virtual machine scale-set, an app hosted by Azure App Services, or a collection of on-premises servers. Each back-end pool has an associated load balancer that distributes work across the pool.
Web application firewall:
Handles incoming requests before they reach a listener.
Checks each request for many common threats, based on the Open Web Application Security Project (OWASP). Common threats include SQL-injection, Cross-site scripting, Command injection, HTTP request smuggling, HTTP response splitting, Remote file inclusion, Bots, crawlers, and scanners, and HTTP protocol violations and anomalies.
WAF is enabled on your Application Gateway by selecting the WAF tier when you create a gateway.
Health probes determine which servers are available for load-balancing in a back-end pool, based on the HTTP response.
If you don’t configure a health probe, Application Gateway creates a default probe that waits for 30 seconds before deciding that a server is unavailable.
Design an IP addressing schema for your Azure deployment
A good Azure IP addressing schema provides flexibility, room for growth, and integration with on-premises networks.
You need to plan the public and private IP addresses for the network carefully, so you don’t run out of addresses and will have capacity for future growth.
Network IP addressing and integration
On-premises IP addressing
A typical on-premises network design includes these components:
There are three ranges of non-routable IP addresses that are designed for internal networks that won’t be sent over internet routers:
10.0.0.0 to 10.255.255.255
172.16.0.0 to 172.31.255.255
192.168.0.0 to 192.168.255.255
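These three ranges correspond to the CIDR blocks 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. As a quick sketch, Python's standard ipaddress module can verify that an address falls inside one of them:

```python
import ipaddress

# The three RFC 1918 ranges, in CIDR form:
for block in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    net = ipaddress.ip_network(block)
    print(block, net.is_private, net.num_addresses)

# Quick check whether a given address belongs to an internal range:
assert ipaddress.ip_address("10.1.2.3").is_private
assert not ipaddress.ip_address("8.8.8.8").is_private
```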
Azure IP addressing
Azure virtual networks use private IP addresses. The ranges of private IP addresses are the same as for on-premises IP addressing.
In a typical Azure network design, we usually have these components:
Network security groups
The Azure network does not follow the typical on-premises hierarchical network design. The Azure network provides the ability to scale up and scale down infrastructure based on demand. Provisioning in the Azure network happens in a matter of seconds. There are no hardware devices, like routers or switches. The entire infrastructure is virtual, and you can slice it into chunks that suit your requirements.
Basic properties of Azure virtual networks
A virtual network is your network in the cloud. You can divide your virtual network into multiple subnets. Each subnet has a portion of the IP address space that is assigned to your virtual network. You can add, remove, expand, or shrink subnets.
By default, all subnets in an Azure virtual network can communicate with each other. However, you can use a network security group to deny communication between subnets.
Integrate Azure with on-premises networks
Before you start integrating Azure with on-premises networks, it’s important to identify the current private IP address scheme used in the on-premises network. There can be no IP address overlap for interconnected networks.
For example, you can’t use 192.168.0.0/16 on your on-premises network and use 192.168.10.0/24 on your Azure virtual network. These ranges both contain the same IP addresses, and won’t be able to route traffic between each other.
You can use the 10.10.0.0/16 address space for your on-premises network and the 10.20.0.0/16 address space for your Azure network
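The overlap rule can be checked programmatically, for example with Python's ipaddress module:

```python
import ipaddress

# 192.168.10.0/24 sits inside 192.168.0.0/16, so these networks collide:
onprem = ipaddress.ip_network("192.168.0.0/16")
azure = ipaddress.ip_network("192.168.10.0/24")
print(onprem.overlaps(azure))

# 10.10.0.0/16 and 10.20.0.0/16 are disjoint, so they can be interconnected:
print(ipaddress.ip_network("10.10.0.0/16").overlaps(ipaddress.ip_network("10.20.0.0/16")))
```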
Public and private IP addressing in Azure
In Azure, you can use two types of IP addresses:
Public IP addresses
- A public IP address can be assigned to a VM, an internet-facing load balancer, a VPN gateway, or an application gateway.
Private IP addresses
Both types of IP addresses can be allocated in one of two ways:
Dynamic: the address can change over the lifespan of the Azure resource. This is the default.
Static: the address won’t change over the lifespan of the Azure resource.
SKUs for public IP addresses
Basic public IPs can be assigned by using static or dynamic allocation methods. Basic public IPs can be assigned to any Azure resource that can be assigned a public IP address, including network interfaces, VPN gateways, application gateways, and internet-facing load balancers.
By default, Basic SKU IP addresses:
Are open. Network security groups are recommended but optional for restricting inbound or outbound traffic.
Are available for inbound only traffic.
Are available when using the instance metadata service (IMDS).
Don’t support Availability Zones.
Don’t support routing preferences.
By default, Standard SKU IP addresses:
Always use static allocation.
Are secure, and thus closed to inbound traffic. You must enable inbound traffic by using a network security group.
Are zone-redundant; and optionally zonal (they can be created as zonal and guaranteed in a specific availability zone).
Can be assigned to network interfaces, Standard public load balancers, application gateways, or VPN gateways.
Can be utilized with the routing preference to enable more granular control of how traffic is routed between Azure and the Internet.
Can be used as anycast frontend IPs for cross-region load balancers.
Public IP address prefix
A public IP address prefix is a reserved, static range of public IP addresses.
The benefit of a public IP address prefix is that you can specify firewall rules for a known range of IP addresses.
Private IP addresses
Private IP addresses are used for communication within an Azure Virtual Network, including virtual networks and your on-premises networks. Private IP addresses can be set to dynamic (DHCP lease) or static (DHCP reservation).
IP addressing for Azure virtual networks
You choose the private IP addresses that are reserved by Internet Assigned Numbers Authority (IANA) based on your network requirements:
A subnet is a range of IP addresses within the virtual network.
For all subnets in Azure, the first three IP addresses are reserved by default. For protocol conformance, the first and last IP addresses of all subnets are also reserved, for a total of five reserved addresses per subnet.
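The five reserved addresses can be illustrated for an example /24 (the subnet range below is arbitrary):

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.1.0/24")  # example subnet
# Azure reserves the first four addresses (network address, default gateway,
# and two for Azure DNS) plus the last (broadcast) address of every subnet.
reserved = [subnet.network_address + i for i in range(4)] + [subnet.broadcast_address]
usable = subnet.num_addresses - len(reserved)
print([str(ip) for ip in reserved])
print(usable)
```

A /24 therefore provides 251 usable addresses, not the 254 you would expect on-premises.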
In Azure virtual networks, IP addresses can be allocated to the following types of resources:
Virtual machine network interfaces
Plan IP addressing for your networks
Gather your requirements
How many devices do you have on the network?
How many devices are you planning to add to the network in the future?
When your network expands, you don’t want to redesign the IP address scheme. Here are some other questions you could ask:
Based on the services running on the infrastructure, what devices do you need to separate?
How many subnets do you need?
How many devices per subnet will you have?
How many devices are you planning to add to the subnets in future?
Are all subnets going to be the same size?
How many subnets do you want or plan to add in future?
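Given answers to these questions, the smallest subnet that fits a device count can be computed, remembering the five addresses Azure reserves. A small helper (hypothetical, for illustration only):

```python
import math

def smallest_subnet_prefix(devices, reserved=5):
    """Smallest IPv4 prefix length whose subnet fits `devices` hosts,
    allowing for the five addresses Azure reserves in every subnet."""
    needed = devices + reserved
    host_bits = math.ceil(math.log2(needed))
    return 32 - host_bits

# 60 devices need 65 addresses, which requires 7 host bits: a /25.
print(smallest_subnet_prefix(60))
# Even 3 devices need a /29, the smallest subnet size Azure supports.
print(smallest_subnet_prefix(3))
```

Leave headroom for growth: sizing exactly to today's device count forces a redesign later.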
Isolation of services provides an additional layer of security, but also requires good planning. For example, your front-end servers can be accessed by public devices, but the back-end servers need to be isolated.
All subnets within a virtual network can communicate with each other in Azure.
To provide further isolation, you can use a network security group.
Remember that Azure uses the first three addresses on each subnet. The first and last IP addresses of the subnets are also reserved for protocol conformance, for a total of five reserved addresses.
Distribute your services across Azure virtual networks and integrate them by using virtual network peering
Connect services by using virtual network peering
When you use peering to connect virtual networks, virtual machines (VMs) in these networks can communicate with each other as if they’re in the same network.
With peered virtual networks, traffic between virtual machines is routed through the Azure network. The traffic uses only private IP addresses. It doesn’t rely on internet connectivity, gateways, or encrypted connections. The traffic is always private, and it takes advantage of the high bandwidth and low latency of the Azure backbone network.
Virtual network peering connects virtual networks in the same region (for example, North Europe to North Europe).
Global virtual network peering connects virtual networks in different regions (for example, North Europe to West Europe).
When you create a peering with PowerShell or the Azure CLI, only one side of the peering is created; you must also create the reverse peering.
When you create a peering in the portal, both sides are configured at the same time.
Cross-subscription virtual network peering.
Virtual network peering can be used between virtual networks in different subscriptions.
Administrators of each subscription must grant the peer subscription’s administrator the Network Contributor role on their virtual network.
Only virtual networks that are directly peered can communicate with each other.
Virtual networks can’t communicate with peers of their peers.
With three vnets A, B, and C peered as A<->B<->C:
Resources in A cannot reach C; to allow that, you must also peer A to C.
A peered vnet can reach your on-premises network if you enable gateway transit on the vnet that has the VPN gateway.
By using virtual network peering with gateway transit, you can configure a single virtual network as a hub network. Connect this hub network to your on-premises datacenter and share its virtual network gateway with peers.
To enable gateway transit, configure the Allow gateway transit option in the hub vnet where the gateway connection to on-premises is deployed.
Also configure the Use remote gateways option in any spoke vnet.
If you want to enable the Use remote gateways option in a spoke network peering, you can’t deploy a virtual network gateway in the spoke virtual network.
Overlapping address spaces
The IP address spaces of connected networks, whether within Azure or between Azure and your on-premises network, can’t overlap. The same is true for peered virtual networks.
Alternative connectivity methods
Connect virtual networks together through an ExpressRoute circuit.
VPNs use the internet to connect your on-premises datacenter to the Azure backbone through an encrypted tunnel.
You can use a site-to-site configuration to connect virtual networks together through VPN gateways.
When to choose virtual network peering
Virtual network peering should be your first choice when you need to integrate Azure virtual networks.
Peering might not be your best option if you have existing VPN or ExpressRoute.
Exercise – Prepare virtual networks for peering by using Azure CLI commands
Host your domain on Azure DNS
Azure DNS lets you host your DNS records for your domains on Azure infrastructure. With Azure DNS, you can use the same credentials, APIs, tools, and billing as your other Azure services.
Let’s say that your company recently bought the custom domain name wideworldimporters.com from a third-party domain name registrar. The domain name is for a new website that your organization plans to launch. You need a hosting service for DNS domains. This hosting service would resolve the wideworldimporters.com domain to the IP address of your web server.
You’re already using Azure to build your website. You decide to use Azure DNS to manage your domain.
What is DNS?
DNS, or the Domain Name System, is a protocol within the TCP/IP standard.
A DNS server translates human-readable domain names, such as www.wideworldimports.com, to IP addresses.
A DNS server is also known as a DNS name server, or just a name server.
How does DNS work?
A DNS server has two primary functions:
Maintains a cache of recently resolved domain names and their IP addresses.
Maintains a key-value database of IP addresses and the hosts/subdomains for which the DNS server has authority (for example mail, web, and other internet domain services).
DNS server assignment
Computer or server must reference a DNS server.
Domain lookup requests
First, check whether the name is stored in the short-term cache; if so, return the cached IP address.
Otherwise, contact other DNS servers to look it up; if the name is found, cache it and return it.
Otherwise, return a "domain cannot be found" error.
IPv4 and IPv6
IPv4: four numbers in the range 0-255, separated by dots. It's the most widely used, but with the growth of IoT devices the IPv4 address space can't keep up.
IPv6: intended to replace IPv4; eight groups of hexadecimal numbers, separated by colons.
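Python's ipaddress module illustrates the difference between the two formats, including IPv6's compressed notation:

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.10.20")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(v4.version, v6.version)              # address families 4 and 6
print(v6)                                  # printed in compressed form
print(v4.max_prefixlen, v6.max_prefixlen)  # 32-bit vs 128-bit address space
```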
Dns settings for your domain
Must be configured for each host type including web, email and more.
The DNS server will act as the start of authority (SOA) for the domain.
DNS record types
A is the host, maps domain to IP.
CNAME: an alias from one domain name to another (use this when many domain names should reach the same website).
TXT: associates text strings with a domain name.
Other record types: wildcards, CAA (certificate authority), NS (name server), SOA (start of authority), SPF (sender policy framework), and SRV (service location).
SOA and NS records are automatically created when you create a DNS zone with Azure DNS.
A record set allows for multiple resources to be defined in a single record.
www.wideworldimports.com. 3600 IN A 127.0.0.1
www.wideworldimports.com. 3600 IN A 127.0.0.2
What is Azure DNS
Azure DNS hosts and manages your domains on a global DNS infrastructure, acting as the SOA.
You must use a third-party registrar to register the domain itself.
Alias record sets
CNAME (alias from one domain to another)
Configure Azure DNS to host your domain
Step 1: Create a DNS zone in Azure
Configure a public DNS zone
You’ll use a DNS zone to host the DNS records for a domain, such as wideworldimports.com.
You used a third-party domain-name registrar to register the wideworldimports.com domain.
To host the domain name with Azure DNS, you first need to create a DNS zone for that domain.
Create DNS zone:
Subscription, rg, name (wideworldimports.com), rg location
Step 2: Get your Azure DNS name servers
Get the name server details from the name servers (NS) record.
Step 3: Update the domain registrar setting
Changing the NS details is called domain delegation. When you delegate the domain, you must use all four name servers provided by Azure DNS.
Step 4: Verify delegation of domain name services
Can take 10 min or longer.
Query the SOA record.
SOA record was automatically set up when DNS zone was created.
Can use nslookup
The SOA record represents the domain and is the reference point when other DNS servers query for the domain.
Step 5: Configure your custom DNS settings
The domain name is wideworldimports.com. When it’s used in a browser, the domain resolves to your website. But what if you want to add in web servers or load balancers? These resources need to have their own custom settings in the DNS zone, either as an A record or a CNAME.
A record: Name, type=A, TTL, IP address
CNAME record: an alias for an A record, used when many domain names should point to the same IP address.
Example: a web function could use a CNAME.
Configure a private DNS zone
Provides name resolution for VMs within a vnet and between vnets.
Step 1: Create private DNS zone
Something like private.wideworldimports.com
Step 2: Identify virtual networks
Step 3: Link your virtual network to a private DNS zone
Go to the private zone, and select Virtual network links.
Select Add to pick the virtual network you want to link to the private zone.
Dynamically resolve resource name by using alias record
You’ve now successfully delegated the domain from the domain registrar to your Azure DNS and configured an A record to link the domain to your web server.
You know that the A record and CNAME record don’t support direct connection to Azure resources like your load balancers. You’ve been tasked with finding out how to link the apex domain with a load balancer.
What is an apex domain?
The highest level of the domain, in this case wideworldimports.com (also called the zone or root apex).
CNAME records are not supported at the apex level.
What are alias records?
Alias records enable zone apex domain to reference other Azure resources from the DNS zone.
No need for complex redirection policies.
The Azure alias record can point to the following Azure resources:
A Traffic Manager profile
Azure Content Delivery Network endpoints
A public IP resource
A front door profile
Alias records also provide support for load-balanced applications in the zone apex.
The alias record set supports the following DNS zone record types:
A: The IPv4 domain name-mapping record.
AAAA: The IPv6 domain name-mapping record.
CNAME: The alias for your domain, which links to the A record.
Uses for alias records
Prevents dangling DNS records (records that are no longer up to date).
Updates the DNS record set automatically when the IP address changes.
Host Load-balanced applications at the zone apex.
Points zone apex to Azure CDN endpoints.
Manage and control traffic flow in your Azure deployment with routes
A virtual network lets you implement a security perimeter around your resources in the cloud.
You can control the information that flows in and out of a virtual network.
You can also restrict access to allow only the traffic that originates from trusted sources.
Identify routing capabilities of an Azure virtual network
Learn the purpose and benefits of custom routes.
Learn how to configure the routes to direct traffic flow through a network virtual appliance (NVA).
Network traffic in Azure is automatically routed across Azure subnets, virtual networks, and on-premises networks.
This routing is controlled by system routes, which are assigned by default to each subnet in a virtual network.
You can’t create or delete system routes, but you can override the system routes by adding custom routes to control traffic flow to the next hop.
Every subnet has the following default system routes:
Within Azure, there are other system routes.
Virtual network peering and service chaining
Virtual network peering and service chaining let virtual networks within Azure be connected to one another.
With this connection, virtual machines can communicate with each other within the same region or across regions.
Service chaining lets you override these routes by creating user-defined routes between peered networks.
Virtual network gateway
To send encrypted traffic between Azure and on-premises over the internet and to send encrypted traffic between Azure networks.
Virtual network service endpoint
Virtual network service endpoints extend your private address space in Azure by providing a direct connection to your Azure resources.
System routes might make it easy for you to quickly get your environment up and running, but there are many scenarios in which you’ll want to more closely control the traffic flow within your network. For example, you might want to route traffic through an NVA or through a firewall. This control is possible with custom routes.
Two options for implementing custom routes: create a user-defined route, or use Border Gateway Protocol (BGP) to exchange routes between Azure and on-premises networks.
Override the default system routes so traffic can be routed through firewalls or NVAs.
You might have a network with two subnets and want to add a virtual machine in the perimeter network to be used as a firewall.
You can create a user-defined route so that traffic passes through the firewall and doesn’t go directly between the subnets.
When creating user-defined routes, you can specify these next hop types:
Virtual appliance: A virtual appliance is typically a firewall device used to analyze or filter traffic that is entering or leaving your network.
Virtual network gateway: Use to indicate when you want routes for a specific address to be routed to a virtual network gateway.
Virtual network: Use to override the default system route within a virtual network.
Internet: Use to route traffic to a specified address prefix that is routed to the internet.
None: Use to drop traffic sent to a specified address prefix.
With user-defined routes, you can’t specify the next hop types VirtualNetworkServiceEndpoint or VNet peering; Azure adds these routes automatically.
Border gateway protocol
BGP is the standard routing protocol normally used to exchange routing information between two or more networks, such as between autonomous systems on the internet.
You’ll typically use BGP to advertise on-premises routes to Azure when you’re connected to an Azure datacenter through Azure ExpressRoute. You can also configure BGP if you connect to an Azure virtual network by using a VPN site-to-site connection.
Route selection and priority
If multiple routes are available in a route table, Azure uses the route with the longest prefix match.
The longer the route prefix, the shorter the list of IP addresses available through that prefix. When you use longer prefixes, the routing algorithm can select the intended address more quickly.
For example, if a message is sent to the IP address 10.0.0.2 and two routes with the prefixes 10.0.0.0/16 and 10.0.0.0/24 are available, Azure selects the route with the 10.0.0.0/24 prefix because it's more specific.
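Longest-prefix-match selection can be sketched with a toy route table (the next hop values are illustrative):

```python
import ipaddress

# Toy route table: prefix -> next hop (values are illustrative only).
routes = {
    "10.0.0.0/16": "Virtual network",
    "10.0.0.0/24": "Virtual appliance",
    "0.0.0.0/0": "Internet",
}

def select_route(dest_ip):
    # Among all routes whose prefix contains the destination,
    # pick the one with the longest (most specific) prefix.
    dest = ipaddress.ip_address(dest_ip)
    matches = [ipaddress.ip_network(p) for p in routes
               if dest in ipaddress.ip_network(p)]
    best = max(matches, key=lambda n: n.prefixlen)
    return str(best), routes[str(best)]

print(select_route("10.0.0.2"))  # the /24 wins over the /16
print(select_route("10.0.5.9"))  # only the /16 (and the default route) match
```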
What is an NVA? (builds a DMZ layer in the perimeter network)
A network virtual appliance (NVA) typically provides capabilities such as:
A WAN optimizer
Can deploy from marketplace, providers include Cisco, Check Point, Barracuda and more.
Use an NVA to filter traffic inbound to a vnet, blocking malicious and unexpected requests.
Network virtual appliance
NVAs are VMs that control the flow of network traffic by controlling routing.
They typically route traffic from the perimeter network to other networks or subnets to manage traffic.
For most environments, the default system routes already defined by Azure are enough to get the environments up and running. In certain cases, you should create a routing table and add custom routes. Examples include:
Access to the internet via on-premises network using forced tunneling
Using virtual appliances to control traffic flow
Network virtual appliances in a highly available architecture
If traffic is routed through an NVA, the NVA becomes a critical piece of your infrastructure. Any NVA failures will directly affect the ability of your services to communicate. It’s important to include a highly available architecture in your NVA deployment.
Exercise – Create an NVA and virtual machines
In this exercise, you’ll deploy the nva network appliance to the dmzsubnet subnet. Then you’ll enable IP forwarding so that traffic from publicsubnet and traffic that uses the custom route is sent to the privatesubnet subnet.
Exercise – Route traffic through the NVA
Now that you’ve created the network virtual appliance (NVA) and virtual machines (VMs), you’ll route the traffic through the NVA.
You’ve now configured routing between subnets to direct traffic from the public internet through the dmzsubnet subnet before it reaches the private subnet. In the dmzsubnet subnet, you added a VM that acts as an NVA. You can configure this NVA to detect potentially malicious requests and block them before they reach their intended targets.
Improve application scalability and resiliency by using Azure Load Balancer
Spread user requests across multiple virtual machines or services.
Use a load balancer to scale applications and create high availability for VMs and services.
By default, a five-tuple hash is used to map traffic to available servers. The hash is made from the following elements:
Source IP: The IP address of the requesting client.
Source port: The port of the requesting client.
Destination IP: The destination IP of the request.
Destination port: The destination port of the request.
Protocol type: The specified protocol type, TCP or UDP.
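The mapping above can be sketched with a toy hash over the five fields (the backend IPs and the hash function are made up; Azure's real hash differs, but the deterministic per-flow mapping is the point):

```python
import hashlib

backends = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]  # hypothetical pool

def route_flow(src_ip, src_port, dst_ip, dst_port, proto):
    # Hash the five elements of the flow; the mapping is deterministic,
    # so every packet of one flow reaches the same backend instance.
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}"
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return backends[digest % len(backends)]

# Same five-tuple, same backend; a new connection from the same client
# typically uses a new source port and may land elsewhere.
flow = ("203.0.113.7", 50001, "20.50.1.1", 443, "TCP")
assert route_flow(*flow) == route_flow(*flow)
```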
Support in and outbound.
Scales up ti millions of flows for TCP/UDP.
With Load Balancer you can use availability sets (99.95% SLA) for protection from hardware failures within a datacenter, or availability zones (99.99% SLA) for protection from an entire datacenter failure.
An availability set is a logical grouping that isolates VMs from each other and runs them across multiple physical servers. If hardware or software fails, only a subset of the VMs is affected.
Availability zones are groups of one or more datacenters placed at different physical locations within the same region. If an entire datacenter fails, you can continue to serve users.
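The SLA difference between the two options is easier to feel as a downtime budget. A quick back-of-the-envelope calculation (illustrative arithmetic only; the binding SLA terms are defined by Azure):

```python
# Rough downtime budget implied by each SLA over a 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def downtime_minutes(sla_percent):
    """Maximum minutes of downtime per month allowed by the SLA."""
    return MINUTES_PER_MONTH * (1 - sla_percent / 100)

print(f"99.95% (availability set):  {downtime_minutes(99.95):.2f} min/month")
print(f"99.99% (availability zones): {downtime_minutes(99.99):.2f} min/month")
```

So 99.95% permits roughly 21.6 minutes of downtime per month, while 99.99% permits roughly 4.3 minutes.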
Select the right Load Balancer product
Basic:
- Outbound connections through source NAT (SNAT)
- Diagnostics with Azure Log Analytics
Standard, everything in Basic plus:
- HTTPS health probes
- Diagnostics through Azure Monitor
- High availability (HA) ports
- Guaranteed SLA of 99.99% for two or more VMs
An external load balancer operates by distributing client traffic across multiple virtual machines. An external load balancer permits traffic from the internet.
An internal load balancer distributes a load from internal Azure resources to other Azure resources.
Configure a public load balancer
By default, Azure Load Balancer distributes network traffic equally among virtual machine instances. The following distribution modes are also possible if a different behavior is required:
Five-tuple hash. The default distribution mode for Load Balancer is a five-tuple hash. The tuple is composed of source IP, source port, destination IP, destination port, and protocol type.
Source IP affinity. This distribution mode is also known as session affinity or client IP affinity. To map traffic to the available servers, the source IP affinity mode uses a two-tuple hash (from the source IP address and destination IP address) or a three-tuple hash (from the source IP address, destination IP address, and protocol type). The hash ensures that requests from a specific client are always sent to the same virtual machine behind the load balancer.
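The session-stickiness of source IP affinity follows directly from what is left out of the hash. In this illustrative sketch (names are assumptions, not Azure's implementation), the two-tuple hash ignores ports entirely, so every new connection from the same client IP necessarily maps to the same backend:

```python
import hashlib

def affinity_backend(src_ip, dst_ip, backends):
    """Illustrative two-tuple (source IP, destination IP) affinity hash.

    Because source and destination ports are excluded from the hash,
    every new connection from the same client IP is mapped to the same
    backend, which preserves the client's session.
    """
    digest = int(hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["vm-a", "vm-b", "vm-c"]
# Two connections from the same client use different source ports, but
# ports never enter the hash, so both land on the same backend:
assert affinity_backend("203.0.113.7", "10.0.0.4", backends) == \
       affinity_backend("203.0.113.7", "10.0.0.4", backends)
```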
If the load balancer must provide source IP affinity to maintain a user’s session:
$lb = Get-AzLoadBalancer -Name MyLb -ResourceGroupName MyResourceGroup
$lb.LoadBalancingRules[0].LoadDistribution = 'sourceIp'
Set-AzLoadBalancer -LoadBalancer $lb
Load Balancer and Remote Desktop Gateway
The default five-tuple hash in Load Balancer is incompatible with this service. If you want to use Load Balancer with your Remote Desktop servers, use source IP affinity.
Load Balancer and media upload
Another use case for source IP affinity is media upload. In many implementations, a client initiates a session through a TCP protocol and connects to a destination IP address. This connection remains open throughout the upload to monitor progress, but the file is uploaded through a separate UDP protocol.
Internal load balancer
Distribute traffic from front-end servers evenly among back-end servers.
You can configure an internal load balancer in almost the same way as an external load balancer, but with these differences:
When you create the load balancer, select Internal for the Type value. When you select this setting, the front-end IP address of the load balancer isn’t exposed to the internet.
Assign a private IP address instead of a public IP address for the front end of the load balancer.
Place the load balancer in the protected virtual network that contains the virtual machines you want to handle the requests.