Before Azure Stack, Windows Azure Pack (WAP) was tightly integrated with other products such as System Center and was restricted to the traditional three-tier architecture, with compute, storage, and networking as separate parts of the infrastructure. Azure Stack, on the other hand, is a true next-generation enterprise private cloud platform.
Azure Stack is designed to provide a consistent experience between public Azure and Azure Stack.
Features and services are intended to be identical to their counterparts in public Azure: when a function is added to Azure Stack, it has the same “look and feel” as the corresponding feature in the public Azure cloud. From a developer standpoint, this translates into only small adjustments if you have applications or ARM templates built for public Azure that you want to reuse on Azure Stack.
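As a hedged illustration of those “small adjustments,” the Python sketch below pins each resource in an ARM template to an API version the stamp is assumed to support. The version table and resource names here are made up for illustration; real supported versions vary per stamp and resource provider, so always check your own environment.

```python
# Illustrative only: this API-version table is an assumption, not an
# authoritative mapping; check the versions your stamp actually supports.
AZURE_STACK_API_VERSIONS = {
    "Microsoft.Storage/storageAccounts": "2016-01-01",
    "Microsoft.Compute/virtualMachines": "2016-03-30",
}

def adapt_for_azure_stack(template: dict) -> dict:
    """Return a copy of an ARM template with apiVersions pinned for the stamp."""
    adapted = dict(template)
    adapted["resources"] = [
        {**res, "apiVersion": AZURE_STACK_API_VERSIONS.get(res["type"], res["apiVersion"])}
        for res in template.get("resources", [])
    ]
    return adapted

template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "resources": [
        {"type": "Microsoft.Storage/storageAccounts", "apiVersion": "2017-10-01", "name": "demosa"},
    ],
}
print(adapt_for_azure_stack(template)["resources"][0]["apiVersion"])  # 2016-01-01
```

The original template is left untouched, so the same source template can still be deployed to public Azure.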
Azure Stack comes as a bundled platform from an OEM vendor, which means it cannot be installed on arbitrary infrastructure. Logically, Microsoft wants to take total responsibility for the life-cycle management of the platform as well as ensure optimal performance: if the OEM vendor releases a firmware update, a BIOS update, or a hardware change, Microsoft wants to make sure the upgrade process goes as smoothly as possible and that the patch or firmware has been re-validated in testing. You can, however, use the Azure Stack Development Kit as a test environment to explore the platform and build preliminary expertise.
AzureStack High-level Architecture, Source: Microsoft
When we deconstruct Azure Stack, we can see several different layers. This is the way we will walk through the rest of the day, so we thought it would make sense to spend a little time on it now. At the very top are the guest workload resources that are created in Azure Stack and that end users (devs/IT pros) use to get their work done. At the next layer down we get into the actual bits of Azure Stack, starting with the end-user experiences. It is very important to remember that this is “just Azure,” so the experiences that you have come to know and count on there are the same in Azure Stack. This means the same Azure Portal, the same support for a variety of open-source technologies, and the same support for development tools, including integration with Visual Studio.
So, in short, the benefits you gain from using Azure Stack are:
AzureStack logical architecture
The architecture of Azure Stack is split into scale units: a scale unit is a set of nodes that form a Windows failover cluster and act as a fault domain. An Azure Stack stamp consists of one or more scale units, and one or more stamps can be added to a region.
Currently, Azure Stack is limited to 16 nodes per stamp, grouped into scale units of four nodes each.
Azure Stack consists of a hyper-converged platform running Windows Server 2016 from one of four OEMs (Dell, Cisco, HPE, or Lenovo). In a hyper-converged setup, servers with locally attached disks are connected to form a distributed file system. This is not unique to Microsoft; there are many vendors in this market space already, such as Nutanix, VMware, and SimpliVity, though each takes a different approach to how data is stored and accessed. The hyper-converged setup also comes with other features such as auto-tiering and deduplication, and delivering these features purely in software makes this a software-defined architecture. Because it is a hyper-converged setup, compute will always scale together with the storage attached to it, since this is the current design with Storage Spaces Direct.
Azure Stack needs an intelligent, centralized controller that keeps track of all networking details and routing records and keeps the network functioning. This requirement is handled by the Network Controller, a feature of Windows Server 2016.
On Azure Stack, the Network Controller runs as a highly available set of three virtual machines that operate as a single cluster across the different nodes. The Network Controller has two API interfaces. The “Northbound API” accepts requests over a REST interface; for instance, if we change a firewall rule or create a software load balancer in the Azure Stack UI, the Northbound API receives that request.
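To make the Northbound flow concrete, here is a hedged Python sketch that only builds the REST request a client might send to the Network Controller to update an access control list. The URL path and JSON shape follow the general pattern of the Network Controller REST API, but treat them as an illustrative approximation rather than the exact schema; the hostname and resource IDs are made up.

```python
import json

def build_acl_put_request(nc_host: str, acl_id: str, rules: list) -> dict:
    """Build (but do not send) a PUT request for the Northbound REST API.

    Resource path and property names mimic the Network Controller's
    northbound pattern; treat them as an illustrative approximation.
    """
    return {
        "method": "PUT",
        "url": f"https://{nc_host}/networking/v1/accessControlLists/{acl_id}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "resourceId": acl_id,
            "properties": {"aclRules": rules},
        }),
    }

req = build_acl_put_request(
    "nc.contoso.local",   # hypothetical Network Controller endpoint
    "web-tier-acl",
    [{"properties": {"action": "Allow", "protocol": "TCP", "destinationPortRange": "443"}}],
)
print(req["url"])
```

A real client would then send this request with its cluster credentials; the controller validates the goal state and pushes it out southbound.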
The “Southbound API” then propagates the changes to the virtual switches on the various hosts. The Network Controller is intended to be the centralized management component for both the physical and the virtual network, since it uses the Open vSwitch Database (OVSDB) schema. However, do note that the schema still lacks some key features needed to fully manage the physical network.
Additionally, the Network Controller is responsible for managing VPN connections, advertising BGP routes, and maintaining session state across the hosts.
The Network Controller can also be integrated with Microsoft System Center, but that is not part of Azure Stack.
Network Controller architecture with Azure Stack, Source: Microsoft
To have a full cloud platform, you need to abstract away the physical network as well and move toward network virtualization so that tenant configuration can be fully automated. In the early days of Azure Pack, it used a tunneling protocol named NVGRE. This protocol encapsulates IP packets within a GRE segment, which removed the restrictions of traditional layer-2 (L2) networking, such as the limited VLAN space, and allowed tenants with overlapping IP ranges.
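The encapsulation idea can be sketched as follows: each tenant packet is wrapped in an outer IP header addressed between physical hosts, with a per-tenant Virtual Subnet ID (VSID) carried in the GRE key field, and that VSID is what lets two tenants reuse the same inner IP ranges. The field layout below is heavily simplified for illustration (real NVGRE is a wire format, not a dict).

```python
def nvgre_encapsulate(inner_packet: dict, vsid: int, src_host: str, dst_host: str) -> dict:
    """Wrap a tenant packet in a simplified NVGRE-style envelope."""
    assert 0 <= vsid < 2 ** 24, "VSID is a 24-bit identifier"
    return {
        "outer_ip": {"src": src_host, "dst": dst_host},  # physical host addresses
        "gre": {"key": vsid},                            # tenant's virtual subnet ID
        "inner": inner_packet,                           # untouched tenant packet
    }

# Two tenants with identical, overlapping inner ranges coexist on the wire:
a = nvgre_encapsulate({"src": "10.0.0.4", "dst": "10.0.0.5"}, vsid=5001,
                      src_host="192.168.1.10", dst_host="192.168.1.11")
b = nvgre_encapsulate({"src": "10.0.0.4", "dst": "10.0.0.5"}, vsid=5002,
                      src_host="192.168.1.10", dst_host="192.168.1.11")
print(a["gre"]["key"] != b["gre"]["key"])  # True: the VSID tells them apart
```

The physical fabric only ever routes on the outer header, so it never needs to know about tenant address spaces at all.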
The distributed firewall is a virtualized network feature that runs on each of the Hyper-V switches in an Azure Stack environment, where it is surfaced as network security groups inside the platform. The feature works regardless of the operating system inside the guest virtual machine and can be attached directly to a vNIC or to a virtual subnet, acting as a security layer for a VM or a subnet. It covers most basic access-list configurations (source and destination IP, port, and protocol) but does not replace a stateful or packet-inspection firewall.
The software load balancer is a feature of the Windows Server 2016 Hyper-V switch, running as a host agent service, and is also managed centrally by the Network Controller. The load balancer works at layer 4 and is used to map a public IP and port to a backend pool on a specific port.
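The public-IP-to-backend-pool mapping can be illustrated with a small Python sketch: a stable hash over a flow's 5-tuple picks a pool member, so every packet of the same flow lands on the same backend. This is a simplification of how flow distribution tends to work in such load balancers, with made-up addresses.

```python
import hashlib

def pick_backend(backends: list, proto: str, src_ip: str, src_port: int,
                 dst_ip: str, dst_port: int) -> str:
    """Deterministically map a flow's 5-tuple onto one backend in the pool."""
    key = f"{proto}|{src_ip}|{src_port}|{dst_ip}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

pool = ["10.0.2.4:8080", "10.0.2.5:8080", "10.0.2.6:8080"]
flow = ("Tcp", "203.0.113.7", 51514, "131.107.0.10", 443)  # client -> public VIP
first = pick_backend(pool, *flow)
print(first == pick_backend(pool, *flow))  # True: same flow, same backend
```

Because the mapping is a pure function of the 5-tuple, no per-flow state needs to be shared for packets of an established flow to keep reaching the same backend.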
Azure Stack runs on servers with Windows Server 2016 and Hyper-V as the underlying virtualization platform. The same servers also run a feature called Storage Spaces Direct (S2D), Microsoft's software-defined storage feature. S2D allows the servers to share their internal storage with one another to provide a highly available virtual storage solution as the base storage for the virtualization layer. S2D is then used to create virtual volumes with a defined resiliency type (two-way mirror, three-way mirror, or parity) that host the CSV shares, and a Windows cluster role maintains quorum among the nodes.
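As a rough illustration of what the resiliency choice costs in capacity, the sketch below uses approximate storage efficiencies (a two-way mirror stores two copies of each block, a three-way mirror three). The parity figure in particular varies with node count, so the 50% used here is only an assumed floor, not a documented constant.

```python
# Approximate storage efficiency per resiliency type; illustrative values only.
EFFICIENCY = {
    "two-way-mirror": 1 / 2,    # two copies of every block
    "three-way-mirror": 1 / 3,  # three copies, tolerates two failures
    "dual-parity": 1 / 2,       # assumed floor; real efficiency grows with node count
}

def usable_tb(raw_tb: float, resiliency: str) -> float:
    """Estimate usable capacity of an S2D volume from raw pool capacity."""
    return raw_tb * EFFICIENCY[resiliency]

print(usable_tb(100, "two-way-mirror"))  # 50.0 TB usable from 100 TB raw
```

The trade-off is the usual one: mirroring burns more raw capacity but gives better write performance, while parity is more space-efficient at the cost of write amplification.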
S2D can use a combination of regular HDDs and SSDs (or be all-flash) to provide capacity and caching tiers that are automatically balanced, so hot data is placed on the fast tier and cold data on the capacity tier. When a virtual machine is created and its storage is placed on a CSV share, the virtual hard drive of the VM is striped into interleaved blocks across the pool.
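The interleaving can be sketched like this: the virtual disk is divided into fixed-size interleaves (256 KiB is commonly cited as the Storage Spaces default and is assumed here) that are distributed across the nodes' disks. Real S2D placement is considerably more sophisticated; this round-robin sketch only shows the shape of the striping.

```python
def stripe_layout(vhd_size_kib: int, nodes: list, interleave_kib: int = 256):
    """Map each interleave of a virtual disk onto a node, round-robin."""
    layout = []
    for offset in range(0, vhd_size_kib, interleave_kib):
        layout.append((offset, nodes[(offset // interleave_kib) % len(nodes)]))
    return layout

nodes = ["node1", "node2", "node3", "node4"]
layout = stripe_layout(1024, nodes)  # a 1 MiB disk splits into four interleaves
print([n for _, n in layout])        # ['node1', 'node2', 'node3', 'node4']
```

Spreading the interleaves this way is what lets reads and writes for a single virtual disk use the disks and network bandwidth of many nodes at once.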
Further Reference – https://azure.microsoft.com/mediahandler/files/resourcefiles/c512ccc0-0b86-4569-831d-5d7ec0a9a34f/Azure%20Stack%20-%20Building%20an%20end-to-end%20validation%20environment.pdf