
Scenario: Deploy Multi-Region Elasticsearch Cluster on Azure

Previously, I wrote about my proposal for modular ARM templates. Here I'm going to walk through the scenario of deploying a full, multi-region Elasticsearch node cluster with a single public endpoint and pretty DNS.

This is a sequel to Developing Azure Modular ARM Templates. Study that first. You'll need the Deploy ARM Template PowerShell script at the bottom of that article to complete this deployment.

Let's begin by reviewing the solution:

This solution is at https://github.com/davidbetz/azure-elasticsearch-nodes-scenario

solution files

Here we see:

  • A 3-phase Azure deployment
  • 3 PowerShell files
  • A VM setup script (install.sh)
  • A script that install.sh will run (create_data_generation_setup.sh)
  • A file that create_data_generation_setup.sh will indirectly use (hamlet.py)

The modular deployment phases are:

  • Setup of general resources (storage, vnet, IP, NIC, NSG)
  • Setup of VNet gateways
  • Setup of VMs

The first phase exists to lay out the general components. This is a very quick phase.

The second phase creates the gateways.

After the second phase, we will create the VNet connections with PowerShell.

The third phase will create the VMs.

Then we will generate sample data and test the ES cluster.

Then we will create an Azure Traffic Manager and test.

Finally we will add a pretty name to the traffic manager with Azure DNS.

Let's do this...

Creating Storage for Deployment

The first thing we're going to do is create a storage account for deployment files. In various examples online, you'll see these files pulled from GitHub. We're not going to do that. Here we create a storage account that future phases will reference. You only need one... ever. Here I'm creating one just for this specific example deployment.


    $uniquifier = $([guid]::NewGuid().tostring().substring(0, 8))
    $rg = "esnodes$uniquifier"
    _createdeploymentAccount -rg $rg -uniquifier $uniquifier

Reference the Deploy ARM Template PowerShell file at the end of Developing Azure Modular ARM Templates for the above and later code.
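For reference, here's a minimal sketch of roughly what such a helper does, pieced together from the output below. The real _createdeploymentAccount ships with the previous article's script, so treat this as illustrative, not authoritative:

    function _createdeploymentAccount { param($rg, $uniquifier)
        # illustrative sketch only; use the real helper from the previous article
        New-AzureRmResourceGroup -Name $rg -Location 'centralus' -Force
        New-AzureRmStorageAccount -ResourceGroupName $rg -Name "files$uniquifier" `
            -Location 'centralus' -SkuName Standard_LRS -Kind BlobStorage -AccessTier Hot
        $context = (Get-AzureRmStorageAccount -ResourceGroupName $rg -Name "files$uniquifier").Context
        # 'support' container with Blob public access, matching the output below
        New-AzureStorageContainer -Name 'support' -Permission Blob -Context $context
    }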

Output

VERBOSE: Performing the operation "Replacing resource group ..." on target "".
VERBOSE: 11:01:28 PM - Created resource group 'esnodesfbdac204' in location 'centralus'


ResourceGroupName : esnodesfbdac204
Location          : centralus
ProvisioningState : Succeeded
Tags              : 
ResourceId        : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204


ResourceGroupName      : esnodesfbdac204
StorageAccountName     : filesfbdac204
Id                     : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Storage/storageAccounts/filesfbdac204
Location               : centralus
Sku                    : Microsoft.Azure.Management.Storage.Models.Sku
Kind                   : BlobStorage
Encryption             : 
AccessTier             : Hot
CreationTime           : 7/22/2017 4:01:29 AM
CustomDomain           : 
Identity               : 
LastGeoFailoverTime    : 
PrimaryEndpoints       : Microsoft.Azure.Management.Storage.Models.Endpoints
PrimaryLocation        : centralus
ProvisioningState      : Succeeded
SecondaryEndpoints     : 
SecondaryLocation      : 
StatusOfPrimary        : Available
StatusOfSecondary      : 
Tags                   : {}
EnableHttpsTrafficOnly : False
Context                : Microsoft.WindowsAzure.Commands.Common.Storage.LazyAzureStorageContext
ExtendedProperties     : {}


CloudBlobContainer : Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer
Permission         : Microsoft.WindowsAzure.Storage.Blob.BlobContainerPermissions
PublicAccess       : Blob
LastModified       : 7/22/2017 4:02:00 AM +00:00
ContinuationToken  : 
Context            : Microsoft.WindowsAzure.Commands.Storage.AzureStorageContext
Name               : support

Phase 1 Deployment

Now we're ready for phase 1 of deployment. This phase is quick and easy. It simply creates the basic components that future phases will use.

Part of what phase 1 will deploy is a virtual network per region that we're requesting. In this example, we have "central us", "west us", and "east us". We need a virtual network in each.

But, to make this work we have to remember our IP addressing:

Very short IP address review

Given a 10.x.x.x network, and a 0.y.y.y mask, 10 is your network and the x.x.x is your host area.

Given a 10.1.x.x network, and a 0.0.y.y mask, 10.1 is your network and x.x is your host area.

The concept of subnetting is relative to the network and only shows up in discussions of larger scale networks and supernetting. Tell the pedantic sysadmins to take a hike when they try to confuse you by overemphasizing the network vs. subnetwork aspects. This is a semantic concept, not a technical one. That is, it relates to the design, not the bits themselves.

The virtual networks in our modular deployment use the following addressSpace:


    "addressSpace": {
        "addressPrefixes": [
            "[concat('10.', mul(16, add(copyIndex(), 1)), '.0.0/12')]"
        ]
    },

We see that our networks follow a 10.(16*(n+1)).0.0/12 pattern, where n is the copyIndex.

This takes n to generate networks: n=0 => 10.16.0.0, n=1 => 10.32.0.0, and n=2 => 10.48.0.0.

Azure allows you to split your networks up into subnets as well. This is great for organization. Not only that, when you specify a NIC, you put it on a subnet. So, let's look at our subnet configuration:

    
    "subnets": [
        {
            "name": "subnet01",
            "properties": {
                "addressPrefix": "[concat('10.', add(mul(16, add(copyIndex(), 1)), 1), '.0.0/16')]"
            }
        },
        {
            "name": "subnet02",
            "properties": {
                "addressPrefix": "[concat('10.', add(mul(16, add(copyIndex(), 1)), 2), '.0.0/16')]"
            }
        },
        {
            "name": "GatewaySubnet",
            "properties": {
                "addressPrefix": "[concat('10.', mul(16, add(copyIndex(), 1)), '.0.', 16,'/28')]"
            }
        }
    ]

The NICs for our VMs will be on subnet01. We will not be using subnet02, but I always include it for future experiments and as an example of further subnetting.

GatewaySubnet is special and is used only by the VPN gateways. Don't mess with that.

Zooming into subnet01, we see a 10.(16*(n+1)+1).0.0/16 pattern. It's basically the network + 1, with the next four bits defining the subnet (in our case the subnet effectively is the network; it's only a subnet from the perspective of the /12 network, and we're not viewing it from that perspective).

This takes n to generate networks: n=0 => 10.17.0.0, n=1 => 10.33.0.0, and n=2 => 10.49.0.0.
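If you want to sanity-check the math without deploying anything, you can mirror the template expressions in plain PowerShell:

    (0..2).foreach({
        $n = $_
        # mirrors mul(16, add(copyIndex(), 1)) and friends from the template
        'vnet-{0}:     10.{1}.0.0/12'  -f $n, (16 * ($n + 1))
        'subnet01-{0}: 10.{1}.0.0/16'  -f $n, (16 * ($n + 1) + 1)
        'subnet02-{0}: 10.{1}.0.0/16'  -f $n, (16 * ($n + 1) + 2)
        'gateway-{0}:  10.{1}.0.16/28' -f $n, (16 * ($n + 1))
    })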

End of mini-lesson.

Now to deploy phase 1...
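The exact call comes from the deploy script in the previous article; from memory it looks something like this (the deploy function name and -phase switch are that script's conventions, so treat this as a hypothetical invocation):

    # hypothetical invocation; see the previous article's script for the real signature
    deploy -rg $rg -phase 1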

Output

(filtering for phase 1)
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\networkInterfaces\nic-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\networkSecurityGroups\nsg-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\publicIPAddresses\pip-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\virtualNetworks\vnet-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\storage\storageAccounts\storage-copyIndex.json...
(excluding \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\publicIPAddresses\2.pip-gateway-copyIndex.json)
(excluding \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\virtualNetworkGateways\2.gateway-copyIndex.json)
(excluding \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\compute\virtualMachines\3.vm-copyIndex.json)
(excluding \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\compute\virtualMachines\extensions\3.script.json)
------------------------------------

------------------------------------
Creating \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\deploy\07212017-110346.1...
Deploying template \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\deploy\azuredeploy-generated.json
VERBOSE: Performing the operation "Creating Deployment" on target "esnodesfbdac204".
VERBOSE: 07:03:49 PM - Template is valid.
VERBOSE: 07:03:51 PM - Create template deployment 'elasticsearch-secure-nodes07212017-110346'
VERBOSE: 07:03:51 PM - Checking deployment status in 5 seconds
VERBOSE: 07:03:56 PM - Checking deployment status in 5 seconds
VERBOSE: 07:04:01 PM - Checking deployment status in 5 seconds
VERBOSE: 07:04:06 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204alpha' provisioning status is running
VERBOSE: 07:04:06 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-gamma' provisioning status is running
VERBOSE: 07:04:06 PM - Checking deployment status in 5 seconds
VERBOSE: 07:04:12 PM - Resource Microsoft.Network/publicIPAddresses 'pip-gamma' provisioning status is succeeded
VERBOSE: 07:04:12 PM - Resource Microsoft.Network/publicIPAddresses 'pip-beta' provisioning status is succeeded
VERBOSE: 07:04:12 PM - Resource Microsoft.Network/publicIPAddresses 'pip-alpha' provisioning status is succeeded
VERBOSE: 07:04:12 PM - Resource Microsoft.Network/virtualNetworks 'vnet-gamma' provisioning status is succeeded
VERBOSE: 07:04:12 PM - Resource Microsoft.Network/virtualNetworks 'vnet-alpha' provisioning status is succeeded
VERBOSE: 07:04:12 PM - Resource Microsoft.Network/virtualNetworks 'vnet-beta' provisioning status is succeeded
VERBOSE: 07:04:12 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-alpha' provisioning status is running
VERBOSE: 07:04:12 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204gamma' provisioning status is running
VERBOSE: 07:04:12 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-beta' provisioning status is running
VERBOSE: 07:04:12 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204beta' provisioning status is running
VERBOSE: 07:04:12 PM - Checking deployment status in 5 seconds
VERBOSE: 07:04:17 PM - Checking deployment status in 5 seconds
VERBOSE: 07:04:22 PM - Resource Microsoft.Network/networkInterfaces 'nic-alpha' provisioning status is succeeded
VERBOSE: 07:04:22 PM - Resource Microsoft.Network/networkInterfaces 'nic-gamma' provisioning status is succeeded
VERBOSE: 07:04:22 PM - Resource Microsoft.Network/networkInterfaces 'nic-beta' provisioning status is succeeded
VERBOSE: 07:04:22 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-alpha' provisioning status is succeeded
VERBOSE: 07:04:22 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-beta' provisioning status is succeeded
VERBOSE: 07:04:22 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-gamma' provisioning status is succeeded
VERBOSE: 07:04:22 PM - Checking deployment status in 5 seconds
VERBOSE: 07:04:27 PM - Checking deployment status in 5 seconds
VERBOSE: 07:04:33 PM - Checking deployment status in 5 seconds
VERBOSE: 07:04:38 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204gamma' provisioning status is succeeded
VERBOSE: 07:04:38 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204beta' provisioning status is succeeded
VERBOSE: 07:04:38 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204alpha' provisioning status is succeeded
VERBOSE: 07:04:38 PM - Checking deployment status in 5 seconds


DeploymentName          : elasticsearch-secure-nodes07212017-110346
ResourceGroupName       : esnodesfbdac204
ProvisioningState       : Succeeded
Timestamp               : 7/23/2017 12:04:32 AM
Mode                    : Incremental
TemplateLink            : 
Parameters              : 
                          Name             Type                       Value     
                          ===============  =========================  ==========
                          admin-username   String                     dbetz     
                          ssh-public-key   String                     ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCxbo0LWWXCHEEGxgtraIhHBPPnt+kJGMjYMC6+9gBIsYz8R8bSFfge7ljHRxvJoye+4IrdSf2Ee2grgm2+xT9HjMvVR2/LQjPY+ocdinYHlM6miqvMgMblOMVm6/WwY0L
                          ZkozPKuSXzhO+/Q6HTZBr2pig/bclvJuFPBtClrzZx5R3NfV33/2rZpFZH9OdAf28q55jbZ1t9AJhtD27s34/cRVBXNBQtc2Nw9D8cEJ+raRdJitAOX3U41bjbrO1u3CQ/JtXg/35wZTJH1Yx7zmDl97cklfiArAfaxkgpWkGhob6A6Fu7LvEgLC25gO5NsY+g4CDqGJT5kzbcyQDDh
                          bf dbetz@localhost.localdomain
                          script-base      String                               
                          
Outputs                 : 
DeploymentDebugLogLevel : 

Now we have all kinds of goodies set up. Note how fast that was: template validation was at 07:03:49 PM and it finished at 07:04:38 PM.

post phase 1

Phase 2 Deployment

Now for phase 2. Here we're creating the VPN gateways. Why? Because we have multiple virtual networks in multiple regions. We need to create VPN connections between them to allow communication. To create VPN connections, we need VPN gateways.

Output

Be warned: this takes forever.

(filtering for phase 2)

Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\networkInterfaces\nic-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\networkSecurityGroups\nsg-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\publicIPAddresses\pip-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\virtualNetworks\vnet-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\storage\storageAccounts\storage-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\publicIPAddresses\2.pip-gateway-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\virtualNetworkGateways\2.gateway-copyIndex.json...
(excluding \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\compute\virtualMachines\3.vm-copyIndex.json)
(excluding \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\compute\virtualMachines\extensions\3.script.json)
------------------------------------

------------------------------------
Creating \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\deploy\07222017-074129.2...
Deploying template \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\deploy\azuredeploy-generated.json
VERBOSE: Performing the operation "Creating Deployment" on target "esnodesfbdac204".
VERBOSE: 7:41:42 PM - Template is valid.
VERBOSE: 7:41:43 PM - Create template deployment 'elasticsearch-secure-nodes07222017-074129'
VERBOSE: 7:41:43 PM - Checking deployment status in 5 seconds
VERBOSE: 7:41:49 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-beta' provisioning status is succeeded
VERBOSE: 7:41:49 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-alpha' provisioning status is succeeded
VERBOSE: 7:41:49 PM - Resource Microsoft.Network/virtualNetworks 'vnet-gamma' provisioning status is succeeded
VERBOSE: 7:41:49 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204alpha' provisioning status is succeeded
VERBOSE: 7:41:49 PM - Checking deployment status in 5 seconds
VERBOSE: 7:41:54 PM - Resource Microsoft.Network/networkInterfaces 'nic-gamma' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Resource Microsoft.Network/networkInterfaces 'nic-alpha' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Resource Microsoft.Network/publicIPAddresses 'pip-alpha' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204gamma' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204beta' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Resource Microsoft.Network/publicIPAddresses 'pip-beta' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Resource Microsoft.Network/publicIPAddresses 'pip-gamma' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Resource Microsoft.Network/virtualNetworks 'vnet-beta' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Resource Microsoft.Network/virtualNetworks 'vnet-alpha' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-gamma' provisioning status is succeeded
VERBOSE: 7:41:54 PM - Checking deployment status in 5 seconds
VERBOSE: 7:41:59 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-gamma' provisioning status is running
VERBOSE: 7:41:59 PM - Resource Microsoft.Network/publicIPAddresses 'pip-gateway-alpha' provisioning status is succeeded
VERBOSE: 7:41:59 PM - Resource Microsoft.Network/networkInterfaces 'nic-beta' provisioning status is succeeded
VERBOSE: 7:41:59 PM - Resource Microsoft.Network/publicIPAddresses 'pip-gateway-gamma' provisioning status is succeeded
VERBOSE: 7:41:59 PM - Resource Microsoft.Network/publicIPAddresses 'pip-gateway-beta' provisioning status is succeeded
VERBOSE: 7:41:59 PM - Checking deployment status in 10 seconds
VERBOSE: 7:42:09 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-beta' provisioning status is running
VERBOSE: 7:42:09 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-alpha' provisioning status is running
VERBOSE: 7:42:10 PM - Checking deployment status in 11 seconds

takes forever

VERBOSE: 8:11:11 PM - Checking deployment status in 11 seconds
VERBOSE: 8:11:22 PM - Checking deployment status in 11 seconds
VERBOSE: 8:11:33 PM - Checking deployment status in 10 seconds
VERBOSE: 8:11:43 PM - Checking deployment status in 5 seconds
VERBOSE: 8:11:48 PM - Checking deployment status in 9 seconds
VERBOSE: 8:11:58 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-beta' provisioning status is succeeded
VERBOSE: 8:11:58 PM - Checking deployment status in 11 seconds
VERBOSE: 8:12:09 PM - Checking deployment status in 5 seconds
VERBOSE: 8:12:14 PM - Checking deployment status in 7 seconds
VERBOSE: 8:12:21 PM - Checking deployment status in 11 seconds
VERBOSE: 8:12:33 PM - Checking deployment status in 11 seconds
VERBOSE: 8:12:44 PM - Checking deployment status in 5 seconds
VERBOSE: 8:12:49 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-alpha' provisioning status is succeeded
VERBOSE: 8:12:49 PM - Checking deployment status in 8 seconds
VERBOSE: 8:12:57 PM - Checking deployment status in 11 seconds
VERBOSE: 8:13:08 PM - Checking deployment status in 5 seconds
VERBOSE: 8:13:14 PM - Checking deployment status in 7 seconds
VERBOSE: 8:13:21 PM - Checking deployment status in 5 seconds
VERBOSE: 8:13:26 PM - Checking deployment status in 7 seconds
VERBOSE: 8:13:33 PM - Checking deployment status in 11 seconds
VERBOSE: 8:13:45 PM - Checking deployment status in 5 seconds
VERBOSE: 8:13:50 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-gamma' provisioning status is succeeded


DeploymentName          : elasticsearch-secure-nodes07222017-074129
ResourceGroupName       : esnodesfbdac204
ProvisioningState       : Succeeded
Timestamp               : 7/23/2017 1:13:44 AM
Mode                    : Incremental
TemplateLink            : 
Parameters              : 
                          Name             Type                       Value     
                          ===============  =========================  ==========
                          admin-username   String                     dbetz     
                          ssh-public-key   String                     ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCxbo0LWWXCHEEGxgtraIhHBPPnt+kJGMjYMC6+9gBIsYz8R8bSFfge7ljHRxvJoye+4IrdSf2Ee2grgm2+xT9HjMvVR2/LQjPY+ocdinYHlM6miqvMgMblOMVm6/WwY0LZkozPKuSXzhO
                          +/Q6HTZBr2pig/bclvJuFPBtClrzZx5R3NfV33/2rZpFZH9OdAf28q55jbZ1t9AJhtD27s34/cRVBXNBQtc2Nw9D8cEJ+raRdJitAOX3U41bjbrO1u3CQ/JtXg/35wZTJH1Yx7zmDl97cklfiArAfaxkgpWkGhob6A6Fu7LvEgLC25gO5NsY+g4CDqGJT5kzbcyQDDhbf 
                          dbetz@localhost.localdomain
                          script-base      String                               
                          
Outputs                 : 
DeploymentDebugLogLevel : 

At this point the gateways have been created.

gws
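You can also confirm from PowerShell; this simply lists each gateway and its provisioning state:

    Get-AzureRmVirtualNetworkGateway -ResourceGroupName $rg |
        Select-Object Name, ProvisioningState, GatewayType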

Creating a VPN Connection Mesh

You can whip out various topologies to connect the networks (e.g. hub-and-spoke, point-to-point, etc.). In this case I'm going for a full mesh topology. This connects everyone directly to everyone else. It's the most hardcore option.

Given that a connection is unidirectional, each topological link between areas requires both a to and a from connection. So, A->B and B->A for a 2-point mesh. For a 3-point mesh, it's all over the board. The formula everyone who goes through network engineering training memorizes is n*(n-1). So, for n=3, you have 3 * 2 (6) connections. For n=5, that's 20 connections. That's a lot, but there's no lame bottleneck from tunneling traffic through a central hub (i.e. a hub-and-spoke topology).
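A quick sanity check of that formula:

    (2..5).foreach({ '{0}-point mesh => {1} connections' -f $_, ($_ * ($_ - 1)) })
    # 2-point mesh => 2 connections
    # 3-point mesh => 6 connections
    # 4-point mesh => 12 connections
    # 5-point mesh => 20 connections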

When creating Azure VPN connections, you specify a shared key. This just makes sense: there needs to be a private passcode to enable them to trust each other. In this example, I'm cracking open ASP.NET to auto-generate a wildly complex password. This thing is crazy. Here are some of the passwords it spits out:

  • rT64%nr*#OX/CR)O
  • XwX3UamErI@D)>N{
  • Ej.ZHSngc|yenaiD
  • @*KUz|$#^Jvp-9Vb
  • _7q)6h6/.G;8C?U(

Goodness.

Anyway, on to the races...


    function createmesh { param([Parameter(Mandatory=$true)]$rg,
                                [Parameter(Mandatory=$true)]$key)
    
        # gateway names look like "gateway-alpha"; grab the suffix
        function getname { param($id)
            $parts = $id.split('-')
            return $parts[$parts.length-1]
        }
    
        $gateways = Get-AzureRmVirtualNetworkGateway -ResourceGroupName $rg
    
        # full mesh: create a connection in each direction for every gateway pair
        ($gateways).foreach({
            $source = $_
            ($gateways).foreach({
                $target = $_
                $sourceName = getname $source.Name
                $targetName = getname $target.Name
                if($source.name -ne $target.name) {
                    $connectionName = ('conn-{0}2{1}' -f $sourceName, $targetName)
                    Write-Host "$sourceName => $targetName"
                    New-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName $rg -Location $source.Location -Name $connectionName `
                        -VirtualNetworkGateway1 $source `
                        -VirtualNetworkGateway2 $target `
                        -ConnectionType Vnet2Vnet `
                        -RoutingWeight 10 `
                        -SharedKey $key
                }
            })  
        })
    }
    function _virtualenv {
    
    # every connection in the mesh shares the same key
    Add-Type -AssemblyName System.Web
    $key = [System.Web.Security.Membership]::GeneratePassword(16,2)
    
    createmesh -rg $rgGlobal -key $key
    
    } _virtualenv

Output

beta => gamma
beta => alpha
gamma => beta
gamma => alpha
alpha => beta
alpha => gamma



Name                    : conn-beta2gamma
ResourceGroupName       : esnodesfbdac204
Location                : westus
Id                      : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/connections/conn-beta2gamma
Etag                    : W/"d9644745-03f4-461c-9efa-1477cd5e13d1"
ResourceGuid            : bfd89895-af3d-44ee-80c8-74413a18f6c4
ProvisioningState       : Succeeded
Tags                    : 
AuthorizationKey        : 
VirtualNetworkGateway1  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-beta"
VirtualNetworkGateway2  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-gamma"
LocalNetworkGateway2    : 
Peer                    : 
RoutingWeight           : 10
SharedKey               : oF[sa4n^sq)aIYSj
ConnectionStatus        : Unknown
EgressBytesTransferred  : 0
IngressBytesTransferred : 0
TunnelConnectionStatus  : []

Name                    : conn-beta2alpha
ResourceGroupName       : esnodesfbdac204
Location                : westus
Id                      : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/connections/conn-beta2alpha
Etag                    : W/"893dc827-a077-4003-b086-ec3aba8344ee"
ResourceGuid            : c8780c08-9678-4720-a90b-42a8509c059e
ProvisioningState       : Succeeded
Tags                    : 
AuthorizationKey        : 
VirtualNetworkGateway1  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-beta"
VirtualNetworkGateway2  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-alpha"
LocalNetworkGateway2    : 
Peer                    : 
RoutingWeight           : 10
SharedKey               : oF[sa4n^sq)aIYSj
ConnectionStatus        : Unknown
EgressBytesTransferred  : 0
IngressBytesTransferred : 0
TunnelConnectionStatus  : []

Name                    : conn-gamma2beta
ResourceGroupName       : esnodesfbdac204
Location                : eastus
Id                      : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/connections/conn-gamma2beta
Etag                    : W/"a99e47f4-fd53-4583-811f-a868d1c0f011"
ResourceGuid            : 50b8bc36-37b9-434f-badc-961266b19436
ProvisioningState       : Succeeded
Tags                    : 
AuthorizationKey        : 
VirtualNetworkGateway1  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-gamma"
VirtualNetworkGateway2  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-beta"
LocalNetworkGateway2    : 
Peer                    : 
RoutingWeight           : 10
SharedKey               : oF[sa4n^sq)aIYSj
ConnectionStatus        : Unknown
EgressBytesTransferred  : 0
IngressBytesTransferred : 0
TunnelConnectionStatus  : []

Name                    : conn-gamma2alpha
ResourceGroupName       : esnodesfbdac204
Location                : eastus
Id                      : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/connections/conn-gamma2alpha
Etag                    : W/"4dd2d765-4bb0-488f-9d28-dabbf618c28f"
ResourceGuid            : e9e4591f-998b-4318-b297-b2078409c7e9
ProvisioningState       : Succeeded
Tags                    : 
AuthorizationKey        : 
VirtualNetworkGateway1  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-gamma"
VirtualNetworkGateway2  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-alpha"
LocalNetworkGateway2    : 
Peer                    : 
RoutingWeight           : 10
SharedKey               : oF[sa4n^sq)aIYSj
ConnectionStatus        : Unknown
EgressBytesTransferred  : 0
IngressBytesTransferred : 0
TunnelConnectionStatus  : []

Name                    : conn-alpha2beta
ResourceGroupName       : esnodesfbdac204
Location                : centralus
Id                      : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/connections/conn-alpha2beta
Etag                    : W/"aafef4bf-d241-4cdd-88b7-b6ecd793a662"
ResourceGuid            : ef5bb61b-fcbe-4452-bf1f-b847f32dfa95
ProvisioningState       : Succeeded
Tags                    : 
AuthorizationKey        : 
VirtualNetworkGateway1  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-alpha"
VirtualNetworkGateway2  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-beta"
LocalNetworkGateway2    : 
Peer                    : 
RoutingWeight           : 10
SharedKey               : oF[sa4n^sq)aIYSj
ConnectionStatus        : Unknown
EgressBytesTransferred  : 0
IngressBytesTransferred : 0
TunnelConnectionStatus  : []

Name                    : conn-alpha2gamma
ResourceGroupName       : esnodesfbdac204
Location                : centralus
Id                      : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/connections/conn-alpha2gamma
Etag                    : W/"edf5f85a-0f7d-4883-8e45-433de9e045b2"
ResourceGuid            : 074c168c-1d42-4704-b978-124c8505a35b
ProvisioningState       : Succeeded
Tags                    : 
AuthorizationKey        : 
VirtualNetworkGateway1  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-alpha"
VirtualNetworkGateway2  : "/subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodesfbdac204/providers/Microsoft.Network/virtualNetworkGateways/gateway-gamma"
LocalNetworkGateway2    : 
Peer                    : 
RoutingWeight           : 10
SharedKey               : oF[sa4n^sq)aIYSj
ConnectionStatus        : Unknown
EgressBytesTransferred  : 0
IngressBytesTransferred : 0
TunnelConnectionStatus  : []
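Note the ConnectionStatus of Unknown; the tunnels take a little while to negotiate after creation. If you want to watch one flip to Connected, poll it:

    Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName $rgGlobal -Name 'conn-alpha2beta' |
        Select-Object Name, ConnectionStatus, EgressBytesTransferred, IngressBytesTransferred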

Phase 3 Deployment

Now to create the VMs...

In our scenario, it's really important to do this phase after creating the VPN connection mesh. During VM creation, Elasticsearch is automatically set up and the nodes will attempt to connect to each other.

No mesh => no connection => you-having-a-fit.

During this deploy, you're going to see that everything from phases 1 and 2 is validated. That's just the idempotent nature of ARM template deployment.

Output

(filtering for phase 3)
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\networkInterfaces\nic-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\networkSecurityGroups\nsg-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\publicIPAddresses\pip-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\virtualNetworks\vnet-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\storage\storageAccounts\storage-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\publicIPAddresses\2.pip-gateway-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\network\virtualNetworkGateways\2.gateway-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\compute\virtualMachines\3.vm-copyIndex.json...
Merging \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\template\resources\compute\virtualMachines\extensions\3.script.json...
Uploading elasticsearch-secure-nodes\07222017-084857\create_data_generation_setup.sh
Uploading elasticsearch-secure-nodes\07222017-084857\install.sh
Uploading elasticsearch-secure-nodes\07222017-084857\generate\hamlet.py
Blob path: https://filesfbdac204.blob.core.windows.net/support/elasticsearch-secure-nodes/07222017-084857
------------------------------------

------------------------------------
Creating \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\deploy\07222017-084857.3...
Deploying template \\10.1.20.1\dbetz\azure\components\elasticsearch-secure-nodes\deploy\azuredeploy-generated.json
VERBOSE: Performing the operation "Creating Deployment" on target "esnodesfbdac204".
VERBOSE: 8:49:02 PM - Template is valid.
VERBOSE: 8:49:03 PM - Create template deployment 'elasticsearch-secure-nodes07222017-084857'
VERBOSE: 8:49:03 PM - Checking deployment status in 5 seconds
VERBOSE: 8:49:08 PM - Resource Microsoft.Network/publicIPAddresses 'pip-gateway-alpha' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Network/publicIPAddresses 'pip-gamma' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-alpha' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Network/publicIPAddresses 'pip-gateway-gamma' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Network/virtualNetworks 'vnet-alpha' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Network/virtualNetworks 'vnet-beta' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204gamma' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Network/publicIPAddresses 'pip-gateway-beta' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204alpha' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Network/publicIPAddresses 'pip-alpha' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Resource Microsoft.Network/publicIPAddresses 'pip-beta' provisioning status is succeeded
VERBOSE: 8:49:08 PM - Checking deployment status in 5 seconds
VERBOSE: 8:49:13 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204beta' provisioning status is succeeded
VERBOSE: 8:49:13 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-beta' provisioning status is running
VERBOSE: 8:49:13 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-beta' provisioning status is succeeded
VERBOSE: 8:49:13 PM - Resource Microsoft.Network/virtualNetworks 'vnet-gamma' provisioning status is succeeded
VERBOSE: 8:49:13 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-alpha' provisioning status is running
VERBOSE: 8:49:13 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204beta' provisioning status is succeeded
VERBOSE: 8:49:13 PM - Resource Microsoft.Network/networkSecurityGroups 'nsg-gamma' provisioning status is succeeded
VERBOSE: 8:49:13 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204gamma' provisioning status is succeeded
VERBOSE: 8:49:13 PM - Resource Microsoft.Network/networkInterfaces 'nic-alpha' provisioning status is succeeded
VERBOSE: 8:49:13 PM - Resource Microsoft.Storage/storageAccounts 'esnodesfbdac204alpha' provisioning status is succeeded
VERBOSE: 8:49:13 PM - Checking deployment status in 6 seconds
VERBOSE: 8:49:19 PM - Resource Microsoft.Compute/virtualMachines 'vm-beta' provisioning status is running
VERBOSE: 8:49:19 PM - Resource Microsoft.Network/networkInterfaces 'nic-gamma' provisioning status is succeeded
VERBOSE: 8:49:19 PM - Resource Microsoft.Compute/virtualMachines 'vm-alpha' provisioning status is running
VERBOSE: 8:49:19 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-gamma' provisioning status is running
VERBOSE: 8:49:19 PM - Resource Microsoft.Network/networkInterfaces 'nic-beta' provisioning status is succeeded
VERBOSE: 8:49:19 PM - Checking deployment status in 11 seconds
VERBOSE: 8:49:30 PM - Resource Microsoft.Compute/virtualMachines 'vm-gamma' provisioning status is running
VERBOSE: 8:49:30 PM - Checking deployment status in 11 seconds
VERBOSE: 8:49:42 PM - Checking deployment status in 11 seconds
VERBOSE: 8:49:53 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-beta' provisioning status is succeeded
VERBOSE: 8:49:53 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-alpha' provisioning status is succeeded
VERBOSE: 8:49:53 PM - Checking deployment status in 11 seconds
VERBOSE: 8:50:04 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:09 PM - Resource Microsoft.Network/virtualNetworkGateways 'gateway-gamma' provisioning status is succeeded
VERBOSE: 8:50:09 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:14 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:19 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:25 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:30 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:35 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:40 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:45 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:51 PM - Checking deployment status in 5 seconds
VERBOSE: 8:50:56 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:01 PM - Resource Microsoft.Compute/virtualMachines/extensions 'vm-gamma/script' provisioning status is running
VERBOSE: 8:51:01 PM - Resource Microsoft.Compute/virtualMachines 'vm-gamma' provisioning status is succeeded
VERBOSE: 8:51:01 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:06 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:11 PM - Resource Microsoft.Compute/virtualMachines 'vm-alpha' provisioning status is succeeded
VERBOSE: 8:51:11 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:16 PM - Resource Microsoft.Compute/virtualMachines/extensions 'vm-beta/script' provisioning status is running
VERBOSE: 8:51:16 PM - Resource Microsoft.Compute/virtualMachines/extensions 'vm-alpha/script' provisioning status is running
VERBOSE: 8:51:16 PM - Resource Microsoft.Compute/virtualMachines 'vm-beta' provisioning status is succeeded
VERBOSE: 8:51:16 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:22 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:27 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:32 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:37 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:43 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:48 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:53 PM - Checking deployment status in 5 seconds
VERBOSE: 8:51:58 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:03 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:08 PM - Resource Microsoft.Compute/virtualMachines/extensions 'vm-gamma/script' provisioning status is succeeded
VERBOSE: 8:52:08 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:14 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:19 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:24 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:29 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:35 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:40 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:45 PM - Resource Microsoft.Compute/virtualMachines/extensions 'vm-beta/script' provisioning status is succeeded
VERBOSE: 8:52:45 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:50 PM - Checking deployment status in 5 seconds
VERBOSE: 8:52:55 PM - Resource Microsoft.Compute/virtualMachines/extensions 'vm-alpha/script' provisioning status is succeeded


DeploymentName          : elasticsearch-secure-nodes07222017-084857
ResourceGroupName       : esnodesfbdac204
ProvisioningState       : Succeeded
Timestamp               : 7/23/2017 1:52:51 AM
Mode                    : Incremental
TemplateLink            : 
Parameters              : 
                          Name             Type                       Value     
                          ===============  =========================  ==========
                          admin-username   String                     dbetz     
                          ssh-public-key   String                     ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCxbo0LWWXCHEEGxgtraIhHBPPnt+kJGMjYMC6+9gBIsYz8R8bSFfge7ljHRxvJoye+4IrdSf2Ee2grgm2+xT9HjMvVR2/LQjPY+ocdinYHlM6miqvMgMblOMVm6/WwY0LZkozPKuSXzhO
                          +/Q6HTZBr2pig/bclvJuFPBtClrzZx5R3NfV33/2rZpFZH9OdAf28q55jbZ1t9AJhtD27s34/cRVBXNBQtc2Nw9D8cEJ+raRdJitAOX3U41bjbrO1u3CQ/JtXg/35wZTJH1Yx7zmDl97cklfiArAfaxkgpWkGhob6A6Fu7LvEgLC25gO5NsY+g4CDqGJT5kzbcyQDDhbf 
                          dbetz@localhost.localdomain
                          script-base      String                     https://filesfbdac204.blob.core.windows.net/support/elasticsearch-secure-nodes/07222017-084857
                          
Outputs                 : 
DeploymentDebugLogLevel : 

~4 minutes total to set up 3 VMs is pretty good. Keep in mind that this time frame includes running the post-VM-creation script (install.sh). I loaded that with a bunch of stuff. You can see this part of the deploy in lines like the following:

Resource Microsoft.Compute/virtualMachines/extensions 'vm-beta/script' provisioning status is running

and

Resource Microsoft.Compute/virtualMachines/extensions 'vm-gamma/script' provisioning status is succeeded

Why's it so fast? Two reasons: first, the storage accounts, VNets, IPs, NICs, and NSGs were already set up in phase 1. Second, Azure will deploy in parallel whatever it can. Upon validating that the dependencies (dependsOn) are already in place, Azure will deploy the VMs. This means that phase 3 is a parallel deployment of three VMs.
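If you want to see that parallelism for yourself, dump the deployment's operations and their timestamps (using the deployment name from the output above):

    Get-AzureRmResourceGroupDeploymentOperation -ResourceGroupName $rgGlobal `
        -DeploymentName 'elasticsearch-secure-nodes07222017-084857' |
        ForEach-Object { $_.Properties } |
        Select-Object ProvisioningState, Timestamp, TargetResource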

Inspection

At this point the entire core infrastructure is in place, including VMs. We can verify this by looking at the Elasticsearch endpoint.

While we could easily derive the endpoint addresses ourselves, let's let PowerShell tell us directly:


    (Get-AzureRmPublicIpAddress -ResourceGroupName $rgGlobal -Name "pip-alpha").DnsSettings.Fqdn 
    (Get-AzureRmPublicIpAddress -ResourceGroupName $rgGlobal -Name "pip-beta").DnsSettings.Fqdn 
    (Get-AzureRmPublicIpAddress -ResourceGroupName $rgGlobal -Name "pip-gamma").DnsSettings.Fqdn 

From this we have the following:

esnodesfbdac204-alpha.centralus.cloudapp.azure.com
esnodesfbdac204-beta.westus.cloudapp.azure.com
esnodesfbdac204-gamma.eastus.cloudapp.azure.com

With this we can access the following endpoints to see the node pairing. You only need to check one, but because these are public, let's look at all three:

http://esnodesfbdac204-alpha.centralus.cloudapp.azure.com:9200
http://esnodesfbdac204-beta.westus.cloudapp.azure.com:9200
http://esnodesfbdac204-gamma.eastus.cloudapp.azure.com:9200

Always use SSL. Consider HTTP deprecated.

nodes

Let's see if we have any indices.

w/o data

Nope. That makes sense because... well... I didn't create any yet.

Logging into alpha

We want to generate some data for Elasticsearch. I've provided a generation tool which the VMs set up during their provisioning.

Before we get to that point, we have to log in to a VM.

Choose a VM DNS name and try to ssh to it. I don't care which one. I'm going with alpha.

[dbetz@core ~]$ ssh esnodesfbdac204-alpha.centralus.cloudapp.azure.com
The authenticity of host 'esnodesfbdac204-alpha.centralus.cloudapp.azure.com (52.165.135.82)' can't be established.
ECDSA key fingerprint is 36:d7:fd:ab:39:b1:10:c2:88:9f:7a:87:30:15:8f:e6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'esnodesfbdac204-alpha.centralus.cloudapp.azure.com,52.165.135.82' (ECDSA) to the list of known hosts.
[dbetz@alpha ~]$

Troubleshooting

If you get a message like the following, then you don't have the private key that goes with the public key you gave the VM in the ARM template.

[dbetz@core ~]$ ssh esnodesfbdac204-alpha.westus.cloudapp.azure.com
The authenticity of host 'esnodesfbdac204-alpha.westus.cloudapp.azure.com (13.87.182.255)' can't be established.
ECDSA key fingerprint is 94:dd:1b:ca:bf:7a:fd:99:c2:70:02:f3:0c:fa:0b:9a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'esnodesfbdac204-alpha.westus.cloudapp.azure.com,13.87.182.255' (ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

To get your private key you can dump it out:

[dbetz@core ~]$ cat ~/.ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
...base64 stuff here...
-----END RSA PRIVATE KEY-----

You can take that and dump it into a different system:

[dbetz@core ~]$ export HISTCONTROL=ignorespace
[dbetz@core ~]$         cat > ~/.ssh/id_rsa <<\EOF
-----BEGIN RSA PRIVATE KEY-----
...base64 stuff here...
-----END RSA PRIVATE KEY-----
EOF

When HISTCONTROL is set to ignorespace and a command has a space in front of it, it won't be stored in shell history.

When you try it again, you'll get a sudden urge to throw your chair across the room:

[dbetz@core ~]$ ssh esnodesfbdac204-alpha.eastus.cloudapp.azure.com       
The authenticity of host 'esnodesfbdac204-alpha.eastus.cloudapp.azure.com (13.87.182.255)' can't be established.
ECDSA key fingerprint is 94:dd:1b:ca:bf:7a:fd:99:c2:70:02:f3:0c:fa:0b:9a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'esnodesfbdac204-alpha.eastus.cloudapp.azure.com,13.87.182.255' (ECDSA) to the list of known hosts.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0664 for '/home/dbetz/.ssh/id_rsa' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /home/dbetz/.ssh/id_rsa
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

Chill. Your permissions suck. Default file permissions come from your umask settings, and in this case they're too open. Only you need access, and read-only at that: 400.

You just need to drop the permissions:

[dbetz@core ~]$ chmod 400 ~/.ssh/id_rsa 

Now you can get in:

[dbetz@core ~]$ ssh esnodesfbdac204-alpha.eastus.cloudapp.azure.com  
Last login: Sat Jul 22 17:39:07 2017 from 136.61.130.214
[dbetz@alpha ~]$ 

We're in.

Adding data

To generate sample data for this scenario, run my data generation tool (based on my GitHub project Hamlet).

It's in the /root folder.

[root@alpha ~]# ./setup_data_generation.sh

Azure does its automated setup as root. That's why we're here.

Running this installs all the required tools and writes instructions.

Follow the instructions the tool provides:

[root@alpha ~]# cd /srv/hamlet
[root@alpha hamlet]# source bin/activate
(hamlet) [root@alpha hamlet]# cd content
(hamlet) [root@alpha content]# python /srv/hamlet/content/hamlet.py
http://10.17.0.4:9200/librarygen
^CStopped (5.8595204026280285)

I let it run for a few seconds, then hit CTRL-C to exit.

Now let's refresh our Elasticsearch endpoints (the :9200 endpoints).

with data

The data is there and has replicated across all servers.

Looking at all three systems at once is just for the purposes of this demo. In reality, all you have to do is look at /_cat/shards on any node in the cluster and be done with it:

shards

You can even see that there are multiple shards and replicas (p => primary, r => replica).
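Incidentally, you don't need a browser for any of this; Invoke-RestMethod against any node works just as well:

    $fqdn = (Get-AzureRmPublicIpAddress -ResourceGroupName $rgGlobal -Name 'pip-alpha').DnsSettings.Fqdn
    Invoke-RestMethod "http://${fqdn}:9200/_cat/shards?v"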

Create Traffic Manager

At this point we want to create a single point of contact for search. We do this with Traffic Manager. You create the traffic manager, then add an endpoint for each system:

    function createtrafficmanager { param([Parameter(Mandatory=$true)]$rg,
                                          [Parameter(Mandatory=$true)]$count)
        clear
            
        $names = @("alpha", "beta", "gamma", "delta", "epsilon")
    
        $uniqueName = (Get-AzureRmStorageAccount -ResourceGroupName $rg)[0].StorageAccountName
    
        $tmProfile = New-AzureRmTrafficManagerProfile -ResourceGroupName $rg -name "tm-$rg" `
                        -TrafficRoutingMethod Performance `
                        -ProfileStatus Enabled `
                        -RelativeDnsName $uniqueName `
                        -Ttl 30 `
                        -MonitorProtocol HTTP `
                        -MonitorPort 9200 `
                        -MonitorPath "/"
    
        (1..$count).foreach({
            $name = $names[$_ - 1]
            $pip = Get-AzureRmPublicIpAddress -ResourceGroupName $rg -Name "pip-$name"
            Add-AzureRmTrafficManagerEndpointConfig -TrafficManagerProfile $tmProfile -EndpointName $name -TargetResourceId $pip.id -Type AzureEndpoints -EndpointStatus Enabled
        })
        Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $tmProfile
        
    }
    
    createtrafficmanager -rg 'esnodes4dede7b0' -count 3

Output

Id                               : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodes4dede7b0/providers/Microsoft.Network/trafficManagerProfiles/tm-esnodes4dede7b0
Name                             : tm-esnodes4dede7b0
ResourceGroupName                : esnodes4dede7b0
RelativeDnsName                  : esnodes4dede7b0alpha
Ttl                              : 30
ProfileStatus                    : Enabled
TrafficRoutingMethod             : Performance
MonitorProtocol                  : HTTP
MonitorPort                      : 9200
MonitorPath                      : /
MonitorIntervalInSeconds         : 30
MonitorTimeoutInSeconds          : 10
MonitorToleratedNumberOfFailures : 3
Endpoints                        : {alpha, beta, gamma}

Id                               : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodes4dede7b0/providers/Microsoft.Network/trafficManagerProfiles/tm-esnodes4dede7b0
Name                             : tm-esnodes4dede7b0
ResourceGroupName                : esnodes4dede7b0
RelativeDnsName                  : esnodes4dede7b0alpha
Ttl                              : 30
ProfileStatus                    : Enabled
TrafficRoutingMethod             : Performance
MonitorProtocol                  : HTTP
MonitorPort                      : 9200
MonitorPath                      : /
MonitorIntervalInSeconds         : 30
MonitorTimeoutInSeconds          : 10
MonitorToleratedNumberOfFailures : 3
Endpoints                        : {alpha, beta, gamma}

Id                               : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodes4dede7b0/providers/Microsoft.Network/trafficManagerProfiles/tm-esnodes4dede7b0
Name                             : tm-esnodes4dede7b0
ResourceGroupName                : esnodes4dede7b0
RelativeDnsName                  : esnodes4dede7b0alpha
Ttl                              : 30
ProfileStatus                    : Enabled
TrafficRoutingMethod             : Performance
MonitorProtocol                  : HTTP
MonitorPort                      : 9200
MonitorPath                      : /
MonitorIntervalInSeconds         : 30
MonitorTimeoutInSeconds          : 10
MonitorToleratedNumberOfFailures : 3
Endpoints                        : {alpha, beta, gamma}

Id                               : /subscriptions/20e08d3d-d5c5-4f76-a454-4a1b216166c6/resourceGroups/esnodes4dede7b0/providers/Microsoft.Network/trafficManagerProfiles/tm-esnodes4dede7b0
Name                             : tm-esnodes4dede7b0
ResourceGroupName                : esnodes4dede7b0
RelativeDnsName                  : esnodes4dede7b0alpha
Ttl                              : 30
ProfileStatus                    : Enabled
TrafficRoutingMethod             : Performance
MonitorProtocol                  : HTTP
MonitorPort                      : 9200
MonitorPath                      : /
MonitorIntervalInSeconds         : 30
MonitorTimeoutInSeconds          : 10
MonitorToleratedNumberOfFailures : 3
Endpoints                        : {alpha, beta, gamma}

Just access your endpoint:

http://esnodes4dede7b0alpha.trafficmanager.net:9200

shards

Now you have a central endpoint. In this case traffic will be sent to whichever Elasticsearch endpoint is closest to the end user (the performance traffic-routing method).
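As a quick check that the profile is actually routing, hit the Traffic Manager name (per the RelativeDnsName in the output above) directly; whichever node is closest to you answers:

    Invoke-RestMethod 'http://esnodes4dede7b0alpha.trafficmanager.net:9200'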

You'll want SSL for that. Well, after you go through the following DNS section.

Adding DNS

At this point everything is functional. Let's go beyond bare functionality. Normally, you'd want something like search.davidbetz.net as an endpoint.

The following will give my davidbetz.net domain a subdomain: esnodes4dede7b0.davidbetz.net.


    function creatednscname { param([Parameter(Mandatory=$true)]$dnsrg,
                                          [Parameter(Mandatory=$true)]$zonename,
                                          [Parameter(Mandatory=$true)]$cname,
                                          [Parameter(Mandatory=$true)]$target)
    
        New-AzureRmDnsRecordSet -ResourceGroupName $dnsrg -ZoneName $zonename -RecordType CNAME -Name $cname -Ttl 3600 -DnsRecords (
            New-AzureRmDnsRecordConfig -Cname $target
        )
    }
    
    function _virtualenv {
    
    $dnsrg = 'davidbetz01'
    $zone = 'davidbetz.net'
    
    creatednscname $dnsrg $zone $rgGlobal "$rgGlobal.trafficmanager.net"
    
    } _virtualenv

Done.
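You can verify the record by walking the chain from the CNAME to the Traffic Manager name:

    Resolve-DnsName 'esnodes4dede7b0.davidbetz.net' -Type CNAME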

shards

Additional Thoughts

Remember: SSL. You do this with SSL termination. Putting SSL between each system internally (e.g. between Nginx and an internal web server) is naive and foolish; you only need SSL to external systems. Nginx handles the termination. See my secure Elasticsearch lab at https://linux.azure.david.betz.space/_/elasticsearch-secure for details.

You'll also want to protect various Elasticsearch operations with a password (or whatever). See my article Running with Nginx for more information.

You can learn more about interacting with Elasticsearch directly via my article Learning Elasticsearch with PowerShell.

Developing Azure Modular ARM Templates

Cloud architectures are nearly ubiquitous. Managers are letting go of their FUD and embracing a secure model that can extend their reach globally. IT guys, who don't lose any sleep over the fact that their company's finance data is on the same physical wire as their public data because the data is separated by VLANs, are realizing that VNets on Azure function on the same principle. Developers are embracing a cross-platform utopia where Python and .NET can live together as citizens in a harmonious cloud solution where everyone realizes that stacks don't actually exist. OK... maybe I'm dreaming about that last one, but the cloud is widely used.

With Azure 2.0 (aka Azure ARM), we finally have a model for managing our resources (database, storage account, network card, VM, load balancer, etc.): a declarative model where we throw nouns at Azure and let it verb them into existence.

JSON templates give us a beautiful 100% GUI-free environment to restore the sanity stolen from us by years of dreadful button clicking. Yet, there's gotta be a better way of dealing with our ARM templates than scrolling up and down all the time. Well, there is... what follows is my proposal for a modular ARM template architecture.

Below is a link to a template that defines all kinds of awesome:

Baseline ARM Template

Take this magical spell and throw it at Azure and you'll get a full infrastructure of many Elasticsearch nodes, all talking to each other, each with their own endpoint, and a traffic manager to unify the endpoints to make sure everyone in the US gets a fast search connection. There are also multiple VNets, a mesh VPN, and the administrative VM and all that stuff.

Yet, this isn't even remotely how I work with my templates. This is:

ARM Components

Synopsis

Before moving on, note that there are a lot of related concepts going on here. It's important that I give you a quick synopsis of what follows:

  • Modularly splitting ARM templates into manageable, mergeable, reusable JSON files
  • Deploying ARM templates in phases
  • A proposal for symlinking for reusable architectures
  • Recording production deployments
  • Managing deployment arguments
  • Automating support files

Let's dive in...

Modular Resources

Notice that the above screenshot does not show a monolith. Instead, I manage individual resources, not the entire template at once. This lets me find, add, remove, enable, disable, and merge things quickly.

Note that each folder represents "resource provider/resource type/resource.json". The root is where you would put the optional sections variables.json, parameters.json, and outputs.json. In this example, I have a PS1 file there just because it supports this particular template.

My deployment PowerShell script combines the appropriate JSON files together to create the final azuredeploy-generated.json file.
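
My script does the merge in PowerShell, but the concept fits in a few lines of Bash with jq (a sketch, not the real thing; it ignores the optional variables.json, parameters.json, and outputs.json, and it skips the underscore-prefixed files discussed below):

# sketch: slurp every resource JSON file into the resources array of one template
find template/resources -name '*.json' ! -name '_*' -print0 \
    | xargs -0 jq -s '{
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
        contentVersion: "1.0.0.0",
        resources: .
      }' > azuredeploy-generated.json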

I originally started with grunt to handle the merging. grunt-contrib-concat + grunt-json-format worked for a while, but my Gruntfile.js became rather long, and the entire process was wildly unreliable anyway. Besides, it was just one extra moving part that I didn't need. I was already deploying with PowerShell. So, might as well just do that...

You can get my PowerShell Azure modular JSON magical script at the end of this article.

There's a lot to discuss here, but let's review some core benefits...

Core Benefits

Aside from the obvious benefit of modularity to help you sleep at night, there are at least two other core benefits:

First is the ability to add and remove resources via files, but a much greater benefit is the ability to enable or disable resources. In my merge script, I exclude any file that starts with an underscore. This acts as a simple way to comment out a resource.

Second is the ability to version and merge individual resources in Git (I'm assuming you're living in 2016 or beyond and are using Git, not that one old subversive version control thing or Terrible Foundation Server). The ability to diff and merge individual resources, not entire JSON monoliths, is great.

Phased Deployment

When something is refactored, fringe benefits often appear naturally. In this case, modular JSON resources allow for programmatically enabling and disabling resources. More specifically, I'd like to mention a concept I integrate into my deployment model: phased deployment.

When deploying a series of VMs and VNets, it's important to make sure your dependencies are set up correctly. That's fairly simple: just make sure dependsOn is set right in each resource. Azure will take that information into account to decide what to deploy in parallel.

That's epic, but I don't really want to wait around forever if part of my dependency tree is a network gateway. Those things take forever to deploy. Not only that, but I have some phases that are simply done in PowerShell.

Go back and look at the screenshot we started with. Notice that some of the resources start with 1., 2., etc.... So, starting a JSON resource with "#." states at what phase that resource will deploy. In my deployment script, I state which phase I'm currently deploying. I might specify that I only want to deploy phase 1; this will deploy everything at phase 1 and below. If I like what I see, I'll deploy phase 2.

In my example, phase 2 is my network gateway phase. After I've aged a bit, I'll come back and run some PowerShell to create a VPN mesh (not something I'd try to declare in JSON). Then, I'll deploy phase 3 to set up my VMs.
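
To make the phase selection concrete, here's a rough Bash sketch of the filter (my actual script does this in PowerShell; the assumption here is that files without a numeric prefix belong to phase 1):

PHASE=${1:-1}
find template/resources -name '*.json' ! -name '_*' | while read -r f; do
    prefix=$(basename "$f" | cut -d. -f1)
    [[ "$prefix" =~ ^[0-9]+$ ]] || prefix=1    # no "#." prefix => phase 1 (assumption)
    (( prefix <= PHASE )) && echo "including $f"
done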

Crazy SymLink Idea

This section acts more as an extended sidebar than part of the main idea.

Most benefits of this modular approach are obvious. What might not be obvious is the following:

You can use symbolic links for reuse. For any local Hyper-V Windows VM I spin up, I usually have a Linux VM to go along with it. For my day-to-day stuff, I have a Linux VM that I use for general development, which I never turn off. I keep all my templates/Git repos on it.

On any *nix-based system, you can create symbolic links to expose the same file under multiple file names (similar to how myriad Git filenames can point to the same blob via a common SHA1 hash).

Don't drift off simply because you think it's some crazy fringe idea.

For this discussion, this can mean the following:

./storage/storageAccounts/storage-copyIndex.json
./network/publicIPAddresses/pip-copyIndex.json
./network/networkInterfaces/nic-copyIndex.json
./network/networkSecurityGroups/nsg-copyIndex.json
./network/virtualNetworks/vnet-copyIndex.json

These resources could be some epic, pristine awesomeness that you want to reuse somewhere. Now, use the following Bash script:

#!/bin/bash

if [ -z "$1" ]; then
    echo "usage: link_common.sh type"
    exit 1
fi

TYPE=$1

mkdir -p `pwd`/$TYPE/template/resources/storage/storageAccounts
mkdir -p `pwd`/$TYPE/template/resources/network/{publicIPAddresses,networkInterfaces,networkSecurityGroups,virtualNetworks}

ln -sf `pwd`/_common/storage/storageAccounts/storage-copyIndex.json `pwd`/$TYPE/template/resources/storage/storageAccounts/storage-copyIndex.json
ln -sf `pwd`/_common/network/publicIPAddresses/pip-copyIndex.json `pwd`/$TYPE/template/resources/network/publicIPAddresses/pip-copyIndex.json
ln -sf `pwd`/_common/network/networkInterfaces/nic-copyIndex.json `pwd`/$TYPE/template/resources/network/networkInterfaces/nic-copyIndex.json
ln -sf `pwd`/_common/network/networkSecurityGroups/nsg-copyIndex.json `pwd`/$TYPE/template/resources/network/networkSecurityGroups/nsg-copyIndex.json
ln -sf `pwd`/_common/network/virtualNetworks/vnet-copyIndex.json `pwd`/$TYPE/template/resources/network/virtualNetworks/vnet-copyIndex.json

Run this:

chmod +x ./link_common.sh
./link_common.sh myimpressivearchitecture

This won't create duplicate files, but it will create files that point to the same content. Change one => change all.

Doing this, you might want to make the source-of-truth files read-only. There are a few ways to do this, but the simplest is to give root ownership of the common stuff, then give yourself file-read and directory-list rights.

sudo chown -R root:$USER _common
sudo chmod -R 755 _common 

LINUX NOTE: directory-list rights are set with the directory execute bit

If you need to edit something, you'll have to do it as root (e.g. sudo). This will protect you from doing stupid stuff.

Linux symlinks look like normal files and folders to Windows. There's nothing to worry about there.

This symlinking concept will help you link to already established architectures. You can add/remove symlinks as you need to add/remove resources. This is an established practice in the Linux world. It's very common to create folders for ./sites-available and ./sites-enabled. You never delete from ./sites-available; you simply add or remove links in ./sites-enabled to enable or disable, as shown below.
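
For the record, the pattern is just this (assuming the usual /etc/nginx layout):

# enable a site
ln -s /etc/nginx/sites-available/mysite.conf /etc/nginx/sites-enabled/mysite.conf

# disable it again; the file in sites-available stays put
rm /etc/nginx/sites-enabled/mysite.conf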

Hmm, OK, yes, that is a crazy fringe idea. I don't even do it. Just something you can try on Linux, or on Windows with some sysinternals tools.

Deployment

When you're watching an introductory video or following a hello world example of ARM templates, throwing variables at a template is great, but I'd never do this in production.

In production, you're going to archive each script that is thrown at the server. You might even have a Git repo for each and every server. You're going to stamp everything with files and archive everything you did together. Because this is how you work anyway, it's best to keep that as an axiom and let everything else mold to it.

To jump to the punchline, after I deploy a template twice (perhaps once with gateways disabled, and once with them enabled, to verify in phases), here's what my ./deploy folder looks like:

./09232016-072446.1/arguments-generated.json
./09232016-072446.1/azuredeploy-generated.json
./09232016-072446.1/success.txt
./09242016-051529.2/arguments-generated.json
./09242016-051529.2/azuredeploy-generated.json
./09242016-051529.2/success.txt

Each deployment archives the generated files with the timestamp. Not a whole lot to talk about there.

Let's back up a little bit and talk about dealing with arguments and that arguments-generated.json listed above.

If I'm doing phased deployment, the phase will be suffixed to the deploy folder name (e.g. 09242016-051529.1).

Deployment Arguments

Instead of setting up parameters in the traditional ARM manner, I opt to generate an arguments file. So, my model is to not only generate the "azuredeploy.json", but also the "azuredeploy-parameters.json". Once these are generated, they can be stamped with a timestamp, then archived with the status.

Sure, zip them and throw them on a blob store if you want. Meh. I find it a bit overkill and old school. If anything, I'll throw my templates at my Elasticsearch cluster so I can view the archives that way.

While my azuredeploy-generated.json is generated from myriad JSON files, my arguments-generated.json is generated from my ./template/arguments.json file.

Here's my ./template/arguments.json file:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "admin-username": {
            "value": "{{admin-username}}"
        },
        "script-base": {
            "value": "{{blobpath}}/"
        },
        "ssh-public-key": {
            "value": "{{ssh-public-key}}"
        }
    }
}

My deployment script will add in the variables to generate the final arguments file.

$arguments = @{
    "blobpath" = $blobPath
    "admin-username" = "dbetz"
    "ssh-public-key" = (cat $sshPublicKeyPath -raw)
}
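
The replacement itself is nothing fancy. Conceptually it's just a search-and-replace like this (a Bash/sed sketch with made-up paths and variable names; my actual script does it in PowerShell):

sed -e "s|{{admin-username}}|dbetz|" \
    -e "s|{{blobpath}}|$BLOB_PATH|" \
    -e "s|{{ssh-public-key}}|$(cat ~/.ssh/id_rsa.pub)|" \
    template/arguments.json > deploy/arguments-generated.json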

Aside from the benefits of automating the public key creation for Linux, there's that blobpath argument. That's important. In fact, dynamic arguments like this might not even make sense until you see my support file model.

Support Files

If you are going to upload assets/scripts/whatever to your server during deployment, you need to get them to a place they are accessible. One way to do this is to commit to Git every 12 seconds. Another way is to simply use blob storage.

Here's the idea:

You have the following folder structure:

./template
./support

You saw ./template in VS Code above; in this example, ./support looks like this:

support/install.sh
support/create_data_generation_setup.sh
support/generate/hamlet.py (see https://netfxharmonics.com/n/2015/03/brstrings)

These are files that I need to get on the server. Use Git if you want, but Azure can handle this directly:

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $deploymentrg -Name $deploymentaccount)[0].value
$ctx = New-AzureStorageContext -StorageAccountName $deploymentaccount -StorageAccountKey $key
$blobPath = Join-Path $templatename $ts
$supportPath = (Join-Path $projectFolder "support")
(ls -File -Recurse $supportPath).foreach({
    $relativePath = $_.fullname.substring($supportPath.length + 1)
    $blob = Join-Path $blobPath $relativePath
    Write-Host "Uploading $blob"
    Set-AzureStorageBlobContent -File $_.fullname -Container 'support' -Blob $blob -BlobType Block -Context $ctx -Force > $null
})

This PowerShell code takes what's in my ./support folder and replicates the structure to blob storage.

You ask: "what blob storage?"

Response: I keep a resource group named deploy01 around with a storage account named files (plus 8 random characters to make it unique). I reuse this account for all my Azure deployments. You might duplicate this per client. Upon deployment, blobs are uploaded with a fully qualified path that includes the template I'm using and my deployment timestamp.

The result is that by the time the ARM template is thrown at Azure, the following URL has been generated and the files are in place to be used:

https://files0908bf7n.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-052804 

For each deployment, I'm going to have a different set of files in blob storage.

In this case, the following blobs were uploaded:

elasticsearch-secure-nodes/09232016-072446/generate/hamlet.py                                
elasticsearch-secure-nodes/09232016-072446/install.sh                                        
elasticsearch-secure-nodes/09232016-072446/create_data_generation_setup.sh 

SECURITY NOTE: For anything sensitive, disable public access, create a SAS token policy, and use that policy to generate a SAS token URL. Give this a few hours to live so your entire template can successfully complete. Remember, gateways take a while to create. Once again: this is why I do phased deployments.
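
With the Azure CLI, generating such a token looks roughly like this (a sketch; the account name is the one from this example, and you'd either be logged in or supply the account key):

# read-only SAS on the support container, good for a few hours
az storage container generate-sas \
    --account-name files0908bf7n \
    --name support \
    --permissions r \
    --expiry 2016-09-23T12:00Z
# append the returned token to the blob URL as a query string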

When the arguments-generated.json is used, the script-base parameter is populated like this:

"setup-script": {
    "value": "https://files0c0a8f6c.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446"
},

You can then use this parameter to do things like this in your VM extensions:

"fileUris": [
    "[concat(parameters('script-base'), '/install.sh')]"
],
"commandToExecute": "[concat('sh install.sh ', length(variables('locations')), ' ''', parameters('script-base'), ''' ', variables('names')[copyindex()])]"

Notice that https://files0908bf7n.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446/install.sh is the script to be called, but https://files0908bf7n.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446 is also sent in as a parameter. This tells the script itself where to pull the other files. Actually, in this case, that endpoint is passed a few levels deep.
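
On the VM side, install.sh just reads these as positional arguments; something like this (a hypothetical sketch based on the commandToExecute above, not my actual script):

# invoked as: sh install.sh <location-count> '<script-base>' <node-name>
LOCATION_COUNT=$1
SCRIPT_BASE=$2
NODE_NAME=$3
# pull the next-level script from the same blob path
wget "$SCRIPT_BASE/create_data_generation_setup.sh"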

In my script, when I'm doing phased deployment, I can set uploadSupportFilesAtPhase to whatever phase I want to upload support files. I generally don't do this at phase 1, because, for me, that phase is everything up to the VM or gateway. The support files are for the VMs, so there's no need to play around with them while doing idempotent updates to phase 1.

Visual Studio Code

I use a lot of different editors. Yeah, sure, there's Visual Studio, whatever. For me, it's .NET only. It's far too bulky for most anything else. For ARM templates, it's absolutely terrible. I feel like I'm playing with VB6 with its GUI-driven resource seeking.

While I use EditPlus or Notepad2 (Scintilla) for most everything, this specific scenario calls for Visual Studio Code (Atom). It allows you to open a folder directly without the need for pointless SLN files and lets you view the entire hierarchy at once. It also lets you quickly CTRL-C/CTRL-V a JSON file to create a new one (File->New can die). F2 also works for rename. Not much else you need in life.

Splitting a Monolith

Going from an existing monolithic template is simple. Just write a quick tool to open the JSON and dump it into various files. Below is a subpar script I wrote in PowerShell to make this happen:

$templateBase = '\\10.1.40.1\dbetz\azure\armtemplates'
$template = 'python-uwsgi-nginx'
$templateFile = Join-Path $templateBase "$template\azuredeploy.json"
$json = cat $templateFile -raw
$partFolder = 'E:\Drive\Code\Azure\Templates\_parts'
$counters = @{ "type"=0 }

((ConvertFrom-Json $json).resources).foreach({
    $index = $_.type.indexof('/')
    $resourceProvider = $_.type.substring(0, $index).split('.')[1].tolower()
    $resourceType = $_.type.substring($index+ 1, $_.type.length - $index - 1)
    
    $folder = Join-Path $partFolder $resourceProvider
    if(!(Test-Path $folder)) {
        mkdir $folder > $null
    }

    $netResourceType = $resourceType
    while($resourceType.contains('/')) {    
        $index = $resourceType.indexof('/')
        $parentResourceType = $resourceType.substring(0, $index)
        $resourceType = $resourceType.substring($index+ 1, $resourceType.length - $index - 1)
        $netResourceType = $resourceType
        $folder = Join-Path $folder $parentResourceType
        if(!(Test-Path $folder)) {
            mkdir $folder > $null
        }
    }
    $folder = Join-Path $folder $netResourceType
    if(!(Test-Path $folder)) {
        mkdir $folder > $null
    }
    
    $counters[$_.type] = $counters[$_.type] + 1
    $file = $folder + "\" + $netResourceType + $counters[$_.type] + '.json'
    Write-Host "saving to $file"
    (ConvertTo-Json -Depth 100 $_ -Verbose).Replace('\u0027', '''') | sc $file
})

Here's a Python tool I wrote that does the same thing, but the JSON formatting is much better: https://jampadcdn01.azureedge.net/netfx/2016/09/modulararm/armtemplatesplit.py

This is compatible with Python 3 and legacy Python (2.7+).

Deploy script

Here's my current armdeploy.ps1 deploy script:

Deploy ARM Template (ps1)

LFCS Exam Tips

If you're going for the Linux on Azure certification, you'll be taking the LFCS exam. This is a practicum, so you're going to be going through a series of requirements that you'll have to implement.

Ignore the naive folk who say that this is a real exam, "unlike those that simply require you to memorize a bunch of stuff". The problem today is NOT with people having too much book knowledge, but nowhere near enough. Good for you for going for a practicum exam. Now study for the cross-distro LPIC exams so you don't get tunnel vision in your own little world. Those exams will expand your horizons into areas you may not have known about. Remember: it's easy to fool yourself into thinking that you have skill when you do the same thing every day. Perhaps your LFCS exam will simply match your day job and you'll think you're simply hardcore. Go study the books and get your humility with the written exams. You know, the ones where you can't simply type man every time you have a problem.

Here are some tips:

noclobber

In whatever account (e.g. some user or root) you'll be using, set the following:

set -o noclobber

This will make it so that if you accidentally use > instead of >>, you won't automatically fail. It will simply prevent overwriting some critical file.
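
Quick demo; note that >| is the escape hatch for when you really do want to overwrite:

set -o noclobber
echo first > /tmp/thing     # fine; the file is new
echo second > /tmp/thing    # error: cannot overwrite existing file
echo second >| /tmp/thing   # explicit override works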

For example, if you're trying to send a new line to /etc/fstab, you may want to do the following:

echo `blkid | grep sdb1`    /mng/taco    xfs    defaults    0 0 >> /etc/fstab

Yeah, you'll need to edit the UUID format after that, but regardless: the >> means append. If you accidentally use >, you're dead. You've already failed the exam. The system needs to boot. No fstab => no boot.

/root/notes

The requirements you'll be given, like all requirements, require careful thought and interpretation. Your thought process may be clearer later. This is why giving estimates on the job RIGHT THEN AND THERE is what we technically call "full retard". In this exam, you've got 2 hours. It's best not to sit there and dwell. Go onto other things and come back.

In this process, you'll want to record your thought process, but you can't write anything down and there's no in-exam notepad. So, I like to throw notes in some random file.

echo review ulimits again >> /root/notes

Again, if you accidentally used > instead of >>, you'd overwrite your notes.

The forward/backward buttons in the exam are truly horrible. It's like trying to find a scene on a VHS tape: there's no seek; you have to scroll all the way through the questions to get back to what you want to see. Just write the question in your notes.

Using command comments

If you're in the middle of a command and you realize "uhhh.... I have no idea how to do this next part", you may want to hit the manpages.

BUT! What about the line you're currently on? Just throw # in front of it and hit enter. It will go into your history so you can pick it up later to finish.

Example:

$ #chmod 02660 /etc/s

You can't remember the rest of the path, so you have to look it up. CTRL-A to the beginning of the line and put # in front. Hitting enter will put it into history without an error.

Consider using sed

Your initial thought may be "hmm, vi!" That's usually a good bet for any question, but you'd better know sed. With sed you can do something like this:

sed "/^#/d" /etc/frank/data.txt 

This will delete all lines that start with #, but it will only send the output to YOU; it won't actually update the file.

If you like what you see, do this:

sed -i.original "/^#/d" /etc/frank/data.txt

This will update the file, but save the original to data.txt.original

Remember the absolute basics of awk

Always remember at least the following, this single command accounts for the 80/20 of all my awk usage:

awk '{print $2}'

Example:

ls -l /etc/hosts* | awk '{ print $9 }'

Output:

/etc/hosts
/etc/hosts.allow
/etc/hosts.deny

Probably the worst example in the world, but the point is: you get the 9th column of the output. Pipe that to whatever and move on.

Need two columns?

ls -l /etc/hosts* | awk '{ print $5, $9 }'

Output:

338 /etc/hosts
370 /etc/hosts.allow
460 /etc/hosts.deny

Know vi

If you don't know vi, you aren't going to even take the LFCS exam. It's a moot point. The shortcuts, text editing capabilities, and absolute ubiquity of vi makes it something you do not have the option to ignore.

Know find

Believe me, the following command pattern will save you in the exam and on the job:

find . -name "*.txt" -exec cp {} /tmp \;

The stuff after -exec runs once per file found. Instead of doing stuff one at a time, you can use find to process a bunch of stuff at once. The {} represents the file. So, this copies each file found to /tmp.
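
Related: if the command can take many files at once, terminate with + instead of \; and find will batch them into as few invocations as possible (GNU cp's -t flag puts the destination first):

find . -name "*.txt" -exec cp -t /tmp {} +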

Know head / tail (and all the other standard tools!)

If you're like me, when you're in a test, you could read "How many moons does Earth have?" and you'll quickly doubt yourself. Aside from the fact that Steven Fry on QI sometimes says Earth has two moons, and other times says Earth has none, the point is that it's easy to forget everything you think is obvious.

So, if you need to do something with certain files in a list and can't remember for the life of you how to deal with files 90-130, perhaps you do this:

ls -l | head -n130 | tail -n40

Then do some type of awk to get the file name and do whatever.

That's one thing that's easy about this exam: you can forget your own name and still stumble through it since only the end result is graded.

Know for

I love for. I use this constantly. For example:

for n in `seq 10`; do touch $n; done

That just made 10 files.

Need to create 10 5MB files?

for n in `seq 10`; do dd if=/dev/zero of=/tmp/$n.img bs=1M count=5; done

I love using that to test synchronization.

Know how to login as other users

If you're asked to do something with rights, you'll probably want to jump over to that user to test what you've done.

su - dbetz

As root, that will get me into the dbetz account. Once in there I can make sure the rights I supposedly assigned are applied properly.

Not only that, but if I'm playing with sudo, I can go from root to dbetz then try to sudo from there.

Know how to figure stuff out

Obviously, there's the man pages. I'm convinced these exist primarily so random complete jerks on the Internet can tell you to read the manual. They are often a great reference, but they are just that: a reference. Nobody sits down and reads them like a novel. Often, they are so cryptic, that you're stuck.

So, have alternate ways of figuring stuff out. For example, you might want to know where other docs are:

find / -name "*share*" | grep docs

That's a wildly hacky way to find some docs, but that's the point. Just start throwing searches out there.

You'll also want to remember the mere existence of various files. For example, /etc/bashrc and /etc/profile. Which calls which? Just open them and look. This isn't a written test where you have to actually know this stuff. The system itself makes it an open book exam. For the LPIC exams, you need to know this upfront.

Running with Nginx

Stacks don't exist. As soon as you change your database you're no longer LAMP or MEAN. Drop the term. Even then, the term only applies to technology; it doesn't describe you. If you are a "Windows guy", learn Linux. If you are a "LAMP" dude, you have to at least have some clue about .NET. Don't marry yourself to only AWS or Azure. Learn both. Use both. Some features of Azure make me drool, while others remind me of VB6 (update: Azure ARM is perfectly solid, modern, and awesome!). Some features of AWS make me feel like a kid in a candy store, while others make me wonder if they are actually April Fool's jokes.

Regardless of whatever you're into, you really should learn this epic tool called Nginx. I've been using it for a while and now have almost all my web sites touching it in some way.

So, what is it? The marketing says it's a "reverse proxy". While I used to like this term, it's become fodder for mockery. No, Nginx is a web server. It serves content for the web. Sometimes it gets the content from a file system, at other times it gets it from HTTP. Regardless, it's a web server.

In the old days of web servers, your web server would handle BOTH the processing of the content AND the HTTP serving. It did too much. As much as IIS7 was an improvement over IIS6 (no more ISAPI), it still suffers from this. It's trying both to run .NET and to serve the content.

Modern web servers handle things differently: UWSGI runs Python, PM2 runs Node, and Kestrel runs .NET Core. In front of this is Nginx handling the HTTP traffic and dealing with all the SSL certs. The days of having to deal with both IIS and Apache are gone. Python, Node, and .NET Core each know how to run their own code and Nginx knows HTTP. The concepts are separate, now the processes are separate.

Adding SSL and Authentication

I'm going to start off with a classic example: adding SSL and username / password authentication to an existing web API.

Elasticsearch is one of my favorite database systems; yet, it doesn't have native support for SSL or authorization. There's a tool called Shield for that, but it's overkill when I don't care about multiple users. Nginx came to the rescue. Below is my basic Nginx config. You should be able to look at the following config to get an idea of what's going on. Of course, I'll add some commentary.

server {
    listen 10.1.60.3:443 ssl;
    # ssl_certificate, ssl_certificate_key, and cipher config trimmed for brevity

    auth_basic "ElasticSearch";
    auth_basic_user_file /etc/nginx/es-password;

    location / {
        proxy_pass http://127.0.0.1:9200;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}

In this example, I have a listener set up on port 443. In the context of this listener, I'm setting configuration for /. I'm passing all traffic on to port 9200. That port is only bound locally, so Elasticsearch's plain HTTP endpoint isn't even publicly accessible. You can also see I'm setting some optional headers.

443 is SSL, so I have my SSL cert and SSL key configured (in my real config, there's a lot more SSL config; just stuff to configure the ciphers).

Finally, you can see that I've set up basic user authentication. Prior to creating this config, I used the Apache htpasswd command to create a password file:

sudo htpasswd -c /etc/nginx/es-password searchuser

If you stare at the config enough, it will be demystified. Nginx is simply adding SSL and username/password auth to an existing, working, open HTTP-only server.
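
A quick test from another box looks like this (-k only if you're on a self-signed cert; _cluster/health is a standard Elasticsearch endpoint):

curl -k -u searchuser:yourpassword https://10.1.60.3/_cluster/health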

SSL Redirect

Let's lighten up a bit with a simpler example...

There are myriad ways to redirect from HTTP to HTTPS. Nginx is my new favorite way:

server {
    listen 222.222.222.222:80;

    server_name mydomain.net;
    server_name www.mydomain.net;

    return 301 https://mydomain.net$request_uri;
}

Accessing localhost only services

The other day I needed to download some files from my Google Drive to my Linux Server. rclone seemed to be an OK way to do that. During setup, it wanted me to go through the OpenID/OAuth stuff to give it access. Good stuff, but...

If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code

Uhh... 127.0.0.1? Dude, that's a remote server. I tried to go there with the text-based Lynx browser, but, wow... THAT. WAS. HORRIBLE. Then I had a random realization: Nginx! Here's what I did real quick:

server {
    listen 111.111.111.111:8080;

    location / {
        proxy_pass http://127.0.0.1:53682;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host 127.0.0.1;
    }
}

Then I could access the server in my own browser using http://111.111.111.111:8080/auth.

BOOM! I got the Google authorization screen right away and everything came together.

Making services local only

This brings up an interesting point: what if you had a public service that you didn't want to be public, but you didn't have a way to lock it down-- or, perhaps, you just wanted to change the port?

In a situation where I had to cheat, I'd cheat by telling iptables (Linux firewall) to block that port, then use Nginx to open the new one.

For example:

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
iptables -A INPUT -j DROP

This says: allow localhost and traffic to port 8080, but block everything else.

If you do this, you need to save the rules using something like iptables-save > /etc/iptables/rules.v4. On Ubuntu, you can get this via apt-get install iptables-persistent.

Then, you can do something like the previous Nginx example to take traffic from a different port.

Better yet, use firewalld. iptables is as old and obsolete as Apache.

File Serving

My new architecture for my websites involves a few components: public Azure Blob Storage for my assets, ASP.NET WebAPI for all backend processing, and Python/Django for all my websites (+Elasticsearch for queries, and Redis always preloaded with a full mirror of my database). My netfxharmonics.com follows this exact architecture. I don't like my websites existing in the same world as anything that serves content to them. The architecture I've promoted for years finally has a name: microservices (thank goodness for a non-lame name! *cough* AJAX *cough*). I take a clean architectural approach: no assets on my website, no database access on my website, and no backend processing on my website. My websites only display content. Databases (thus all connection strings) are behind my WebAPI wall.

OK... I said no assets. That's not completely true, which brings us to the point: how do I serve robots.txt and favicon.ico if I don't allow local assets? Answer: Nginx.

location /robots.txt {
    alias /srv/robots.txt;
}

location /favicon.ico {
    alias /srv/netfx/netfxdjango/static/favicon.ico;
}

Azure

So, you've got a free/shared Azure Web App. You've got your free hosting, free subdomain, and even free SSL. Now you want your own domain and your own SSL. What do you do? Throw money at it? Uh... no. Not if you were proactive and keep a Linux server around.

This is actually a true story of how I run some of my websites. You only get so much free bandwidth and computing with the free Azure Web Apps, so you have to be careful. The trick to being careful is Varnish.

The marketing for Varnish says it's a caching server. As with all marketing, they're trying to make something sound less cool than it really is (though that's never their goal). Varnish can be a load-balancer or something to handle fail-over as well. In this case, yeah, it's a caching server.

Basically: I tell Varnish to listen to port 8080 on localhost. It will take traffic and provide responses. If it needs something, it will go back to the source server to get the content. Most hits to the server will be handled by Varnish. Azure breathes easy.

Because the Varnish config is rather verbose and because it's only tangentially related to this topic, I really don't want to dump a huge Varnish config here. So, I'll give snippets:

backend mydomain {
    .host = "mydomain.azurewebsites.net";
    .port = "80";
    .probe = {
        .interval = 300s;
        .timeout = 60s;
        .window = 5;
        .threshold = 3;
    }
    .connect_timeout = 50s;
    .first_byte_timeout = 100s;
    .between_bytes_timeout = 100s;
}

sub vcl_recv {
    #++ more here
    if (req.http.host == "123.123.123.123" || req.http.host == "www.mydomain.net" || req.http.host == "mydomain.net") {
        set req.http.host = "mydomain.azurewebsites.net";
        set req.backend = mydomain;
        return (lookup);
    }
    #++ more here
}

This won't make much sense without the Nginx piece:

server {
        listen 123.123.123.123:443 ssl;

        server_name mydomain.net;
        server_name www.mydomain.net;
        ssl_certificate /srv/cert/mydomain.net.crt;
        ssl_certificate_key /srv/cert/mydomain.net.key;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Forwarded-Port 443;
            proxy_set_header Host mydomain.azurewebsites.net;
        }
}

Here's what to look for in this:

proxy_set_header Host mydomain.azurewebsites.net

Nginx sets up a listener for SSL on the public IP. It will send requests to localhost:8080.

On the way, it will make sure the Host header says "mydomain.azurewebsites.net". This does two things:

* First, Varnish will be able to detect that and send it to the proper backend configuration (above it).

* Second, Azure will give you a website based on the `Host` header. That needs to be right. That one line is the difference between getting your correct website or getting the standard Azure website template.

In this example, Varnish is checking the host because Varnish is handling multiple IP addresses, multiple hosts, and caching for multiple Azure websites. If you have only one, then these Varnish checks are superfluous.

Verb Filter

Back to Elasticsearch...

It uses various HTTP verbs to get the job done. You can POST, PUT, and DELETE to insert, update, and delete respectively, or you can use GET to do your searches. How about a security model where I only allow searches?

It might be a poor man's method, but it works:

server {
    listen 222.222.222.222:80;

    location / {
        limit_except GET {
            deny all;
        }
        proxy_pass http://127.0.0.1:9200;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}
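
A quick curl sanity check (hypothetical index name; note that nginx's limit_except GET implicitly allows HEAD as well):

curl -i -XGET 'http://222.222.222.222/myindex/_search?q=*'     # proxied through to Elasticsearch
curl -i -XPOST 'http://222.222.222.222/myindex/doc' -d '{}'    # 403 Forbidden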

Verb Filter (advanced)

When using Elasticsearch, you have the option of accessing your data directly without the need for any server-side middleman. In fact, your AngularJS (or whatever) applications can get data directly from ES. How? It's just an HTTP endpoint.

But, what about updating data? Surely you need some type of .NET/Python bridge to handle security, right? Nah.

Check out the following location blocks:

location ~ /_count {
    proxy_pass http://elastic;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}

location ~ /_search {
    proxy_pass http://elastic;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}

location ~ /_ {
    limit_except OPTIONS {
        auth_basic "Restricted Access";
        auth_basic_user_file /srv/es-password;
    }

    proxy_pass http://elastic;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}

location / {
    limit_except GET HEAD {
        auth_basic "Restricted Access";
        auth_basic_user_file /srv/es-password;
    }

    proxy_pass http://elastic;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}

Here I'm saying: you can access anything with _count (this is how you get counts from ES), and anything with _search (this is how you query), but if you are accessing something else containing an underscore, you need to provide creds (unless it's an OPTIONS request, which allows CORS to work). Finally, if you're accessing / directly, you can send GET and HEAD, but you need creds to do a POST, PUT, or DELETE.

You can add credential handling to your AngularJS/JavaScript application by sending creds via https://username:password@mydomain.net.

It works fine. Now you can throw away all your server-side code and stick with raw AngularJS (or whatever). If something requires a preprocessor, postprocessor, or server-side code at all (e.g. couldn't be developed in jsfiddle/plunkr directly), it's not web development (and you might not be a web developer). Here, you have solid, direct web development without the middleman. Just the browser and the server infrastructure. It's SPA with your own IAAS setup.

Domain Unification

In the previous example, we have an Elasticsearch service. What about our website? Do we really want to deal with both domain.com and search.domain.com, and the resulting CORS nonsense? Do we really REALLY want to deal with multiple SSL certs?

No, we don't.

In this case, you can use Nginx to unify your infrastructure to use one domain.

Let's just update the / in the previous example:

location / {
    limit_except GET HEAD {
        auth_basic "Restricted Access";
        auth_basic_user_file /srv/es-password;
    }

    proxy_pass http://myotherwebendpoint;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}

Now / gets its content from a different place than the other locations.

Let's really bring it home:

location /api {
    proxy_pass http://myserviceendpoint;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}

Now /api points to your API service.

Now you only have to deal with domain.com while having three different services / servers internally.

Killing 1990s "www."

Nobody types "www.", it's never on business cards, nobody says it, and most people forgot it exists. Why? This isn't 1997. The most important part of getting a pretty URL is removing this nonsense. Nginx to the rescue:

server {
    listen 222.222.222.222:80;

    server_name mydomain.net;
    server_name www.mydomain.net;

    return 301 https://mydomain.net$request_uri;
}

server {
    listen 222.222.222.222:443 ssl http2;

    server_name www.mydomain.net;

    # ... ssl stuff here ...

    return 301 https://mydomain.net$request_uri;
}

server {
    listen 222.222.222.222:443 ssl http2;

    server_name mydomain.net;

    # ... handle here ...
}

All three server blocks listen on the same IP, but the first listens on port 80 to redirect to the actual domain (there's no such thing as a "naked domain"-- it's just the domain; "www." is a subdomain), the second listens for the "www." subdomain on the HTTPS port (in this case using HTTP2), and the third is where everyone is being directed.

SSL

This example simply expands the previous one by showing the actual SSL implementation. Keep in mind that to use HTTP2, you have to have at least Nginx 1.9 (at the time of writing, this meant compiling it yourself-- not a big deal).

server {
    listen 222.222.222.222:80;

    server_name mydomain.net;
    server_name www.mydomain.net;

    return 301 https://mydomain.net$request_uri;
}

server {
    listen 222.222.222.222:443 ssl http2;

    server_name www.mydomain.net;

    ssl_certificate /srv/_cert/mydomain/mydomain.net.chained.crt;
    ssl_certificate_key /srv/_cert/mydomain/mydomain.net.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

    ssl_prefer_server_ciphers on;

    ssl_dhparam /srv/_cert/dhparam.pem;

    return 301 https://mydomain.net$request_uri;
}

server {
    listen 222.222.222.222:443 ssl http2;

    server_name mydomain.net;

    ssl_certificate /srv/_cert/mydomain/mydomain.net.chained.crt;
    ssl_certificate_key /srv/_cert/mydomain/mydomain.net.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

    ssl_prefer_server_ciphers on;

    ssl_dhparam /srv/_cert/dhparam.pem;

    location / {
        add_header Strict-Transport-Security max-age=15552000;
        add_header Content-Security-Policy "default-src 'none'; font-src fonts.gstatic.com; frame-src accounts.google.com apis.google.com platform.twitter.com; img-src syndication.twitter.com bible.logos.com www.google-analytics.com 'self'; script-src api.reftagger.com apis.google.com platform.twitter.com 'self' 'unsafe-eval' 'unsafe-inline' www.google.com www.google-analytics.com; style-src fonts.googleapis.com 'self' 'unsafe-inline' www.google.com ajax.googleapis.com; connect-src search.jampad.net jampadcdn.blob.core.windows.net mydomain.net";

        include         uwsgi_params;
        uwsgi_pass      unix:///srv/mydomain/mydomaindjango/content.sock;

        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host mydomain.net;
    }
}

The certs that I use require the chain certs to get a solid A rating on ssllabs.com; this is just a matter of merging your cert with the chain cert (just Google it):

cat mydomain.net.crt ../positivessl.ca-bundle > mydomain.net.chained.crt
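
You can check that the full chain is actually being served with openssl:

openssl s_client -connect mydomain.net:443 -servername mydomain.net < /dev/null
# look for the "Certificate chain" section in the output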

Verb Routing

Speaking of verbs, you could whip out a pretty cool CQRS infrastructure by splitting GET from POST.

This is more of a play-along than a visual aid. You can actually try this one at home.

Here's a demo using a quick node server:

const http = require('http');
const port = parseInt(process.argv[2], 10);
const host = '127.0.0.1';
const server = http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(req.method + ' server ' + port);
});
server.listen(port, host);

Here's our nginx config:

server {
    listen 222.222.222.222:8192;

    location / {
        limit_except GET {
            proxy_pass http://127.0.0.1:6001;
        }
        proxy_pass http://127.0.0.1:6002;
    }
}

use nginx -s reload to quickly reload config without doing a full service restart

Now, to spin up two of them:

node server.js 6001 &
node server.js 6002 &

& runs something as a background process

Now to call them (PowerShell and curl examples provided)...

(wget -method Post http://192.157.251.122:8192/).content

curl -XPOST http://192.157.251.122:8192/

Output:

POST server 6001

(wget -method Get http://192.157.251.122:8192/).content

curl -XGET http://192.157.251.122:8192/

Output:

GET server 6002

Cancel background tasks with fg then CTRL-C. Do this twice to kill both servers.

There we go: your inserts go to one location; you read from a different one.

Development Environments

Another great thing about Nginx is that it's not Apache ("a patchy" web server, as the joke goes). Aside from Apache simply trying to do far too much, it's an obsolete product from the 90s that needs to be dropped. It's also often very hard to setup. The security permissions in Apache, for example, make no sense and the documentation is horrible.

Setting up Apache in a dev environment almost never happens, but Nginx is seamless enough not to interfere with day-to-day development.

The point: don't be afraid to use Nginx in your development setup.

Raw Python HTTP Processing

Python HTTP processing (it's not "web development" unless there's a web browser) is all about WSGI: the Web Server Gateway Interface. It's a pointless term, but the implementation is beautiful: it's a single interface that handles everything web-related for Python. The signature is as follows (with an example):

def name_does_not_matter(environment, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('Your content type was {}'.format(environment['CONTENT_TYPE'])).encode('utf-8')]

This is even what Django does deep down.

You can use a service like UWSGI to do the processing for this. Like other things in Linux, this tool does one thing, does it well, and relies on other tools for other things. In the case of hosting, Nginx is a solid way to handle the HTTP hosting for UWSGI.

In addition to the config for UWSGI (not shown-- not relevant!), you have the following Nginx config:

server {
    listen 222.222.222.222:80;

    location / {
        include            uwsgi_params;
        uwsgi_pass         unix:/srv/raw_python_awesomeness/content/content.sock;

        proxy_redirect     off;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
    }
}

You could make UWSGI serve up something on localhost:8081 (or whatever port you want), but it's best to use sockets where you can.

You can see my WebAPI for Python project at https://github.com/davidbetz/pywebapi for a fuller example.

Bulk Download in Linux

Want to download a huge list of files on Linux? No problem...

Let's get a sample file list:

wget http://www.linuxfromscratch.org/lfs/view/stable/wget-list

Now, let's download them all:

sed 's/^/wget /e' wget-list

This says: execute wget for each line
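
That e flag is GNU sed executing the pattern space as a command. If that feels too clever, wget and xargs can do the same job:

wget -i wget-list

# or
xargs -n1 wget < wget-list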

Done.
