
LCSA Exam Tips

If you're going for the Linux on Azure certification, you'll be taking the LCSA exam. This is a practicum: you'll work through a series of requirements that you have to implement.

Ignore the naive folk who say that this is a real exam, "unlike those that simply require you to memorize a bunch of stuff". The problem today is NOT people having too much book knowledge, but nowhere near enough. Good for you for going for a practicum exam. Now study for the cross-distro LPIC exams so you don't get tunnel vision in your own little world. Those exams will expand your horizons into areas you may not have known about. Remember: it's easy to fool yourself into thinking you have skill when you do the same thing every day. Perhaps your LCSA exam will simply match your day job and you'll think you're hardcore. Go study the books and earn your humility with the written exams. You know, the ones where you can't simply type man every time you have a problem.

Here are some tips:


In whatever account (e.g. some user or root) you'll be using, set the following:

set -o noclobber

This will make it so that if you accidentally use > instead of >>, you won't automatically fail. It will simply prevent overwriting some critical file.
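A quick way to convince yourself it works, using a throwaway file (the path here is just an illustration):

```shell
rm -f /tmp/noclobber-demo.txt
set -o noclobber
echo first > /tmp/noclobber-demo.txt                      # creates the file
echo second > /tmp/noclobber-demo.txt || echo "blocked"   # overwrite refused
echo third >> /tmp/noclobber-demo.txt                     # append still works
cat /tmp/noclobber-demo.txt
```

If you ever genuinely do need to overwrite with noclobber on, bash's >| operator forces it; on the exam, though, the refusal is exactly what you want.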

For example, if you're trying to send a new line to /etc/fstab, you may want to do the following:

echo `blkid | grep sdb1`    /mng/taco    xfs    defaults    0 0 >> /etc/fstab

Yeah, you'll need to edit the UUID format after that, but regardless: the >> means append. If you accidentally use >, you're dead. You've already failed the exam. The system needs to boot. No fstab => no boot.


The requirements you'll be given will, like all requirements, require careful thought and interpretation. Your thought process may be clearer later. This is why giving estimates on the job RIGHT THEN AND THERE is what we technically call "full retard". In this exam, you've got 2 hours. It's best not to sit there and dwell. Move on to other things and come back.

In this process, you'll want to store your mental state, but you can't write anything down and there's no in-exam notepad. So, I like to throw notes into some random file.

echo review ulimits again >> /root/notes

Again, if you accidentally used > instead of >>, you'd overwrite your notes.

The forward/backward buttons in the exam are truly horrible. It's like trying to find a scene on a VHS tape. There's no seek; you have to scroll all the way through the questions to get back to what you want to see. Just write the question in your notes.

Using command comments

If you're in the middle of a command and you realize "uhhh.... I have no idea how to do this next part", you may want to hit the manpages.

BUT! What about the line you're currently on? Just throw # in front of it and hit enter. It will go into your history so you can pick it up later to finish.


$ #chmod 02660 /etc/s

You can't remember the rest of the path, so you have to look it up. CTRL-A to the beginning of the line and put # in front. Hitting enter will put it into history without an error.

Consider using sed

Your initial thought may be "hmm, vi!" That's usually a good bet for any question, but you'd better know sed too. With sed you can do something like this:

sed "/^#/d" /etc/frank/data.txt 

This will delete all lines that start with #, but it will only send the output to YOU, it won't actually update the file.

If you like what you see, do this:

sed -i.original "/^#/d" /etc/frank/data.txt

This will update the file, but save the original to data.txt.original
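To see both halves of that in one shot, using a throwaway file rather than the /etc/frank path above:

```shell
printf '#comment\ndata line\n' > /tmp/data.txt
sed -i.original "/^#/d" /tmp/data.txt
cat /tmp/data.txt            # just: data line
cat /tmp/data.txt.original   # both lines, untouched
```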

Remember the absolute basics of awk

Always remember at least the following; this single command accounts for the 80/20 of all my awk usage:

awk '{print $2}'


ls -l /etc/hosts* | awk '{ print $9 }'



Probably the worst example in the world, but the point is: you get the 9th field of the output. Pipe that to whatever and move on.
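As a sanity check, and one extra trick worth the memory: $NF is always the last field, so you don't even have to count columns.

```shell
echo "alpha beta gamma" | awk '{ print $2 }'    # beta
echo "alpha beta gamma" | awk '{ print $NF }'   # gamma (the last field)
```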

Need two columns?

ls -l /etc/hosts* | awk '{ print $5, $9 }'


338 /etc/hosts
370 /etc/hosts.allow
460 /etc/hosts.deny

Know vi

If you don't know vi, you aren't going to even take the LFCS exam. It's a moot point. The shortcuts, text editing capabilities, and absolute ubiquity of vi make it something you do not have the option to ignore.

Know find

Believe me, the following command pattern will save you in the exam and on the job:

find . -name "*.txt" -exec cp {} /tmp \;

The stuff after -exec runs once per file found. Instead of handling files one at a time, you can use find to process a bunch of them at once. The {} represents the file. So, this copies each file found to /tmp.
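A quick sandbox run of that pattern (all paths here are throwaway):

```shell
mkdir -p /tmp/find-demo/src /tmp/find-demo/dest
cd /tmp/find-demo/src
touch a.txt b.txt c.log
find . -name "*.txt" -exec cp {} /tmp/find-demo/dest \;
ls /tmp/find-demo/dest    # a.txt and b.txt; the .log file stays behind
```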

Know head / tail (and all the other standard tools!)

If you're like me, when you're in a test, you could read "How many moons does Earth have?" and you'll quickly doubt yourself. Aside from the fact that Stephen Fry on QI sometimes says Earth has two moons, and other times says Earth has none, the point is that it's easy to forget everything you think is obvious.

So, if you need to do something with certain files in a list and can't remember for the life of you how to deal with files 91-130, perhaps you do this:

ls -l | head -n130 | tail -n40

Then do some type of awk to get the file name and do whatever.
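You can verify the slice arithmetic with seq instead of doubting yourself mid-exam:

```shell
seq 200 | head -n130 | tail -n40 | head -n1   # 91
seq 200 | head -n130 | tail -n40 | tail -n1   # 130
```

(Remember that ls -l prints a "total" line first, which shifts everything down by one.)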

That's one thing that's easy about this exam: you can forget your own name and still stumble through it since only the end result is graded.

Know for

I love for. I use this constantly. For example:

for n in `seq 10`; do touch $n; done

That just made 10 files.

Need to create 10 5MB files?

for n in `seq 10`; do dd if=/dev/zero of=/tmp/$n.img bs=1M count=5; done

I love using that to test synchronization.

Know how to login as other users

If you're asked to do something with rights, you'll probably want to jump over to that user to test what you've done.

su - dbetz

As root, that will get me into the dbetz account. Once in there I can make sure the rights I supposedly assigned are applied properly.

Not only that, but if I'm playing with sudo, I can go from root to dbetz then try to sudo from there.

Know how to figure stuff out

Obviously, there's the man pages. I'm convinced these exist primarily so random complete jerks on the Internet can tell you to read the manual. They are often a great reference, but they are just that: a reference. Nobody sits down and reads them like a novel. Often they're so cryptic that you're stuck.

So, have alternate ways of figuring stuff out. For example, you might want to know where other docs are:

find / -name "*share*" | grep docs

That's a wildly hacky way to find some docs, but that's the point. Just start throwing searches out there.

You'll also want to remember the mere existence of various files. For example, /etc/bashrc and /etc/profile. Which calls which? Just open them and look. This isn't a written test where you have to actually know this stuff. The system itself makes it an open book exam. For the LPIC exams, you need to know this upfront.

Generating MongoDB Sample Data

Last year I wrote about generating better random strings.

Lorem Ipsum is the devil. It messes with us who are students of Latin; Cicero is hard enough without people throwing randomized Cicero in our faces. It's better to use something that isn't part of a linguistic insurgency. Use my Hamlet generator instead.


Because MongoDB is a standard component of any modern architecture these days, we need the ability to generate, not simply strings, but full objects for our test databases.

The following MongoDB script will do just that. Change the argument of the run call at the bottom to set the number of objects to throw at MongoDB.

You run this with the MongoDB shell:

./mongo < hamlet.js

Note: The third-party tool Robomongo, while awesome for day-to-day usage, will not work for this. It doesn't play nicely with initializeUnorderedBulkOp, which you need for bulk data import. It's like the BULK INSERT command in SQL.

You can use the following abridged data, or swap in the full hamlet lexicon.

var raw = "o my offence is rank it smells to heaven hath the primal eldest curse upont a brothers murder pray can i not though inclination be as sharp will stronger guilt defeats strong intent and like man double business bound stand in pause where shall first begin both neglect what if this cursed hand were thicker than itself with blood there rain enough sweet heavens wash white snow whereto serves mercy but confront visage of whats prayer two-fold force forestalled ere we come fall or pardond being down then ill look up fault past form serve turn forgive me foul that cannot since am still possessd those effects for which did crown mine own ambition queen may one retain corrupted currents world offences gilded shove by justice oft tis seen wicked prize buys out law so above no shuffling action lies his true nature ourselves compelld even teeth forehead our faults give evidence rests try repentance can yet when repent wretched state bosom black death limed soul struggling free art more engaged help angels make assay bow stubborn knees heart strings steel soft sinews newborn babe all well";
var data = raw.split(" ");

function hamlet(count) {
    return data[parseInt(Math.random() * data.length)] + (count == 1 ? "" : " " + hamlet(count - 1));
}

function randrange(min, max) {
    if(!max) { max = min; min = 1; }
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

function createArray(count, generator) {
    var list = [];
    for(var n=0; n<count; n++) {
        list.push(generator());
    }
    return list;
}

function pad(number) {
    return ("0" + number).substr(-2);
}

function createItem() {
    var item = {
        "_id": "9780" + randrange(100000000, 999999999),
        "title": hamlet(randrange(4, 8)),
        "authors": createArray(randrange(4), function() { return hamlet(2); }),
        "metadata": {
            "pages": NumberInt(randrange(1, 400)),
            "genre": createArray(randrange(2), function() { return hamlet(1); }),
            "summary": hamlet(randrange(100, 400))
        },
        "published": new Date(randrange(1960, 2016) + "-" + pad(randrange(12)) + "-" + pad(randrange(28)))
    };

    if (randrange(4) == 1) {
        item.editor = hamlet(1);
    }

    return item;
}

function run(count) {
    var bulk = db.book.initializeUnorderedBulkOp();
    for (var n = 0; n < count; n++) {
        bulk.insert(createItem());
    }
    bulk.execute();
}

run(1000); // number of objects to generate -- change as needed


Using Azure DNS

Though you might be used to doing DNS via bind, there's nothing about DNS that requires bind. You can do DNS anywhere; the implementation details are blackboxed behind the protocol.

As such, Azure DNS is perfectly compatible with being a super Linux geek.

You can review the docs for Azure DNS on your own, but I'd like to provide a few "recipes".

Everything that follows assumes that you've done the prerequisite work of getting an active Azure account, creating your resource group (New-AzureRmResourceGroup) and creating a new DNS zone (New-AzureRmDnsZone).

New-AzureRmResourceGroup -Name spartan01 -Location "Central US"
New-AzureRmDnsZone -ResourceGroupName spartan01 -Name "example.com"

In moving to Azure DNS, you have a few things that you need to do. These have NOTHING to do with Azure -- they're just facts of DNS.

PowerShell will be used for the most part, but the Azure CLI syntax will be provided where it's particularly interesting.

E-Mail (MX Records)

First, you need to realize that DNS affects your e-mail. This should be obvious since your e-mail has a domain name right in it. So, you need to make sure your MX records are legit.

Here's a block of PowerShell (UpdateAzureDNSMX.ps1) you should keep around:

    New-AzureRmDnsRecordSet -ResourceGroupName spartan01 -ZoneName "example.com" -RecordType A -Name "mail" -Ttl 3600 -DnsRecords (New-AzureRmDnsRecordConfig -Ipv4Address '')
    New-AzureRmDnsRecordSet -ResourceGroupName spartan01 -ZoneName "example.com" -RecordType MX -Name "mail" -Ttl 3600 -DnsRecords @(
        (New-AzureRmDnsRecordConfig -Preference 10 -Exchange smtp.example.com),
        (New-AzureRmDnsRecordConfig -Preference 20 -Exchange mailstore1.example.com)
    )

This creates a record set, then adds the specific config.

Single IP Address

Let's say you have a bunch of sites and want a single server to handle them all. Let's also assume that you only have one IP address.

You should look into using WebApps, but let's pretend you absolutely need a VM for some wild reason.

What to do?

Simple: CNAME it.

After creating your shiny new VM (perhaps running ./create simple hosting01 from https://linux.azure.david.betz.space), you need to get the system's public DNS name:

Azure CLI

[dbetz@callisto ~]$ az network public-ip list --query "[?dnsSettings.domainNameLabel!=null]|[?starts_with(dnsSettings.domainNameLabel,'hosting01')].{ dns: dnsSettings.fqdn }"
[
  {
    "dns": "hosting01figrg-alpha.centralus.cloudapp.azure.com"
  }
]

PowerShell

    (Get-AzureRmPublicIpAddress -ResourceGroupName hosting01 -Name hosting01-ip-alpha).DnsSettings.Fqdn

Let's use that Azure domain name to create our CNAME records for our domains:

    function create { param($name)
        New-AzureRmDnsRecordSet -ResourceGroupName spartan01 -ZoneName "example.com" -RecordType CNAME -Name $name -Ttl 3600 -DnsRecords (New-AzureRmDnsRecordConfig -Cname "hosting01figrg-alpha.centralus.cloudapp.azure.com")
    }
    create "subdomain01"
    create "subdomain02"
    create "subdomain03"
    create "subdomain04"

For the sake of this example, all domains will be subdomains. Doing full, separate domains is the same, but the examples would be longer... and less fun. It's the same idea, though.

If you already have the records (e.g. you move to another VM later), you update:

    function update { param($name)
        $rs = Get-AzureRmDnsRecordSet -ResourceGroupName spartan01 -ZoneName "example.com" -RecordType CNAME -Name $name
        $rs.Records[0] = (New-AzureRmDnsRecordConfig -Cname "hosting01figrg-alpha.centralus.cloudapp.azure.com")
        Set-AzureRmDnsRecordSet -RecordSet $rs
    }
    update "subdomain01"
    update "subdomain02"
    update "subdomain03"
    update "subdomain04"

Now you have your domains all pointing to the same place.

Stopping here doesn't help you, though. You still have to figure out how to get your server to handle the multiple sites! Though that's not directly Azure related, I find partial examples distasteful and a sign of extreme ignorance on the part of an author.

So, what do you do on the server?

Let's assume you're reading this after 2016 (I'm writing this in 2017, sooooo...), therefore you're not using Apache. You're using Nginx:

server {
    listen 80;

    server_name mysubdomain01.example.com;
    return 301 https://mysubdomain01.example.com$request_uri;
}

server {
    listen 443 ssl http2;

    server_name mysubdomain01.example.com;

    ssl_certificate /etc/letsencrypt/live/mysubdomain01.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysubdomain01.example.com/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_prefer_server_ciphers on;

    ssl_dhparam /srv/_cert/dhparam.pem;

    location / {
        add_header Strict-Transport-Security max-age=15552000;
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host mysubdomain01.example.com;
        proxy_pass http://127.0.0.1:8081;    # point at your backend; port is an example
    }
}
I'm not into trivial examples, so the full SSL example is provided, with the letsencrypt paths.

The point is this:

  • listen on the port without the IP
  • set the server_name to your full domain name


Well, no, we're not done to my satisfaction. You still need some way to test this. So, here's a quick Node.js application you can use to listen on a port. Just curl from somewhere else and see if it works.

curl --silent --location https://rpm.nodesource.com/setup_8.x | bash -
yum -y install nodejs

cat > server.js << EOF
http = require('http');
port = parseInt(process.argv[2]);
server = http.createServer( function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.end(req.method + ' server ' + port);
});
host = '$PUBLIC_IP';
server.listen(port, host);
EOF

node server.js 8081 &

Azure CDN

When using assets (e.g. png, css) on your website, you'll want to be sure to serve them via an Azure CDN.

I tend to avoid the term "blob" as that's a binary entity -- the moment you start talking about "SVG blobs" or "CSS blobs", you lose any connection to reality. They're assets.

If your CDN provider (Azure uses both Akamai and Verizon) supports SSL for custom domains, then great. If not, you're stuck with the mycdnendpoint.azureedge.net address. Consider HTTP obsolete: use SSL or do not host anything.

Assuming your CDN provider supports SSL for custom domains, you have to tell it about your custom domain.

It requires that you prove ownership by creating a cdnverify CNAME entry. Most of the docs are deeply cryptic on this (honestly, RTFM means FU; it's rarely helpful), so I'll make it simple:

If you want to use cdn01.mydomain.net instead of http://mycdnendpoint.azureedge.net, you can do this in two ways (choose one):

  • create a cdn01 CNAME pointing to mycdnendpoint.azureedge.net

New-AzureRmDnsRecordSet -ResourceGroupName spartan01 -ZoneName "example.com" -RecordType CNAME -Name 'cdn01' -Ttl 3600 -DnsRecords (New-AzureRmDnsRecordConfig -Cname "mycdnendpoint.azureedge.net")


  • create a cdnverify.cdn01 CNAME pointing to cdnverify.mycdnendpoint.azureedge.net.

New-AzureRmDnsRecordSet -ResourceGroupName spartan01 -ZoneName "example.com" -RecordType CNAME -Name 'cdnverify.cdn01' -Ttl 3600 -DnsRecords (New-AzureRmDnsRecordConfig -Cname "cdnverify.mycdnendpoint.azureedge.net")

NS Records

Once your DNS is set up and ready for production, you need to tell your registrar about it.

You can't guess what these records are. You just ask Azure about them:

    (Get-AzureRmDnsZone -ResourceGroupName spartan01 -Name "example.com").NameServers

Here's a preview of something from namecheap.com (use whatever you want... as long as it's not GoDaddy)


Allowing access to your Azure VM

You created your VM, you installed and configured all services, and your firewalld/iptables is set correctly. Your nmap tests are even working between systems.

But you can't access your services from outside Azure?

You probably didn't enable access in Azure. You need to allow specific ports in your Azure Network Security Group.

In terms of your Azure objects, your VM uses a NIC, your NIC uses an NSG.

Using PowerShell

Using PowerShell, you can do something like this:

    $rg = 'hosting01'
    $nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName $rg -Name "$rg-nsg-alpha"
    $maximum = ($nsg.SecurityRules | measure -Property priority -Maximum).Maximum
    $httpRule = New-AzureRmNetworkSecurityRuleConfig -Name "http" -Protocol Tcp -SourceAddressPrefix * -DestinationAddressPrefix * -SourcePortRange * -DestinationPortRange 80 -Priority ($maximum + 100) -Description "HTTP" -Direction Inbound -Access Allow
    $httpsRule = New-AzureRmNetworkSecurityRuleConfig -Name "https" -Protocol Tcp -SourceAddressPrefix * -DestinationAddressPrefix * -SourcePortRange * -DestinationPortRange 443 -Priority ($maximum + 200) -Description "SSL" -Direction Inbound -Access Allow
    $nsg.SecurityRules.Add($httpRule)
    $nsg.SecurityRules.Add($httpsRule)
    Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg

Use an ARM Template

Or you can just fix your initial ARM template by adding the resource:

See https://linux.azure.david.betz.space/_/python-uwsgi-nginx on https://linux.azure.david.betz.space for a fuller example.

    {
        "comments": "",
        "type": "Microsoft.Network/networkSecurityGroups",
        "name": "nsg-alpha",
        "apiVersion": "2017-03-01",
        "location": "[resourceGroup().location]",
        "properties": {
            "securityRules": [
                {
                    "name": "default-allow-ssh",
                    "properties": {
                        "protocol": "Tcp",
                        "sourcePortRange": "*",
                        "destinationPortRange": "22",
                        "sourceAddressPrefix": "*",
                        "destinationAddressPrefix": "*",
                        "access": "Allow",
                        "priority": 1000,
                        "direction": "Inbound"
                    }
                },
                {
                    "name": "http",
                    "properties": {
                        "protocol": "Tcp",
                        "sourcePortRange": "*",
                        "destinationPortRange": "80",
                        "sourceAddressPrefix": "*",
                        "destinationAddressPrefix": "*",
                        "access": "Allow",
                        "priority": 1100,
                        "direction": "Inbound"
                    }
                },
                {
                    "name": "https",
                    "properties": {
                        "protocol": "Tcp",
                        "sourcePortRange": "*",
                        "destinationPortRange": "443",
                        "sourceAddressPrefix": "*",
                        "destinationAddressPrefix": "*",
                        "access": "Allow",
                        "priority": 1200,
                        "direction": "Inbound"
                    }
                }
            ]
        },
        "resources": [],
        "dependsOn": []
    }

Add this to your NIC's (Microsoft.Network/networkInterfaces) properties:

    {
        "type": "Microsoft.Network/networkInterfaces",
        "properties": {
            "networkSecurityGroup": {
                "id": "[resourceId('Microsoft.Network/networkSecurityGroups', concat(variables('nsg-prefix'), variables('names')[0]))]"
            }
        }
    }

...and dependsOn section:

    "dependsOn": [
        "[resourceId('Microsoft.Network/networkSecurityGroups', concat(variables('nsg-prefix'), variables('names')[0]))]"
    ]

Developing Azure Modular ARM Templates

Cloud architectures are nearly ubiquitous. Managers are letting go of their FUD and embracing a secure model that can extend their reach globally. IT guys, who don't lose any sleep over the fact that their company's finance data is on the same physical wire as their public data, because the data is separated by VLANs, are realizing that VNets on Azure function on the same principle. Developers are embracing a cross-platform utopia where Python and .NET can live together as citizens in a harmonious cloud solution where everyone realizes that stacks don't actually exist. OK... maybe I'm dreaming about that last one, but the cloud is widely used.

With Azure 2.0 (aka Azure ARM), we finally have a declarative model for managing our resources (database, storage account, network card, VM, load balancer, etc.): we throw nouns at Azure and let it verb them into existence.

JSON templates give us a beautiful 100% GUI-free environment to restore the sanity stolen from us by years of dreadfully clicking buttons. Yet, there's gotta be a better way of dealing with our ARM templates than scrolling up and down all the time. Well, there is... what follows is my proposal for a modular ARM template architecture.

Below is a link to a template that defines all kinds of awesome:

Baseline ARM Template

Take this magical spell and throw it at Azure and you'll get a full infrastructure of many Elasticsearch nodes, all talking to each other, each with its own endpoint, and a traffic manager to unify the endpoints to make sure everyone in the US gets a fast search connection. There are also multiple VNets, a mesh VPN, an administrative VM, and all that stuff.

Yet, this isn't even remotely how I work with my templates. This is:

ARM Components


Before moving on, note that there are a lot of related concepts going on here. It's important that I give you a quick synopsis of what follows:

  • Modularly splitting ARM templates into manageable, mergeable, reusable JSON files
  • Deploying ARM templates in phases
  • Proposal for symlinking for reusable architectures
  • Recording production deployments
  • Managing deployment arguments
  • Automating support files

Let's dive in...

Modular Resources

Notice that the above screenshot does not show a monolith. Instead, I manage individual resources, not the entire template at once. This lets me find, add, remove, enable, disable, and merge things quickly.

Note that each folder represents "resource provider/resource type/resource.json". The root is where you would put the optional sections variables.json, parameters.json, and outputs.json. In this example, I have a PS1 file there just because it supports this particular template.

My deployment PowerShell script combines the appropriate JSON files together to create the final azuredeploy-generated.json file.
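My actual merge script is PowerShell (it's at the end of this article), but the combining step itself is trivial to sketch in a few lines of shell; the file names here are made up, and the real script also injects the parameters/variables sections:

```shell
mkdir -p /tmp/arm-demo/resources
cd /tmp/arm-demo
echo '{ "name": "nsg-alpha" }'  > resources/nsg.json
echo '{ "name": "vnet-alpha" }' > resources/vnet.json
echo '{ "name": "old-thing" }'  > resources/_old.json   # underscore prefix = disabled

{
  echo '['
  first=1
  for f in resources/*.json; do
    case "$(basename "$f")" in _*) continue ;; esac   # skip disabled resources
    [ $first -eq 0 ] && echo ','
    cat "$f"
    first=0
  done
  echo ']'
} > azuredeploy-generated.json

grep -c name azuredeploy-generated.json   # 2 -- the _old.json resource is gone
```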

I originally started with grunt to handle the merging. grunt-contrib-concat + grunt-json-format worked for a while, but my Gruntfile.js became rather long, and the entire process was wildly unreliable anyway. Besides, it was just one extra moving part that I didn't need. I was already deploying with PowerShell. So, might as well just do that...

You can get my PowerShell Azure modular JSON magical script at the end of this article.

There's a lot to discuss here, but let's review some core benefits...

Core Benefits

Aside from the obvious benefit of modularity to help you sleep at night, there are at least two other core benefits:

First is the ability to add and remove resources via files, but a much greater benefit is the ability to enable or disable resources. In my merge script, I exclude any file that starts with an underscore. This acts as a simple way to comment out a resource.

Second is the ability to version and merge individual resources in Git (I'm assuming you're living in 2016 or beyond and are therefore using Git, not that one old subversive version control thing or Terrible Foundation Server). The ability to diff and merge individual resources, not entire JSON monoliths, is great.

Phased Deployment

When something is refactored, fringe benefits often naturally appear. In this case, modular JSON resources allow for programmatically enabling and disabling resources. More specifically, I'd like to mention a concept I integrate into my deployment model: phased deployment.

When deploying a series of VMs and VNets, it's important to make sure your dependencies are set up correctly. That's fairly simple: just make sure dependsOn is set up right in each resource. Azure will take that information into account to decide what to deploy in parallel.

That's epic, but I don't really want to wait around forever if part of my dependency tree is a network gateway. Those things take forever to deploy. Not only that, but I have some phases that are simply done in PowerShell.

Go back and look at the screenshot we started with. Notice that some of the resources start with 1., 2., etc. Starting a JSON resource with "#." states at what phase that resource will deploy. In my deployment script I'll state what phase I'm currently deploying. I might specify that I only want to deploy phase 1. This will deploy everything up through phase 1. If I like what I see, I'll deploy phase 2.
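The phase filter itself is just a numeric-prefix comparison. Here's a hedged shell sketch of the idea (my real script does this in PowerShell, and the file names are illustrative):

```shell
PHASE=2
for f in 1.vnet.json 1.nsg.json 2.gateway.json 3.vm.json plain.json; do
  case "$f" in
    [0-9].*) p=${f%%.*} ;;   # leading "N." is the phase number
    *)       p=1 ;;          # unprefixed resources go in the first phase
  esac
  [ "$p" -le "$PHASE" ] && echo "deploy $f"
done
```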

In my example, phase 2 is my network gateway phase. After I've aged a bit, I'll come back to run some PowerShell to create a VPN mesh (not something I'd try to declare in JSON). Then, I'll deploy phase 3 to setup my VMs.

Crazy SymLink Idea

This section acts more as an extended sidebar than part of the main idea.

Most benefits of this modular approach are obvious. What might not be obvious is the following:

You can symlink for reuse. For any local Hyper-V Windows VM I spin up, I usually have a Linux VM to go along with it. For my day-to-day stuff, I have a Linux VM that I use for general development, which I never turn off. I keep all my templates/Git repos on it.

On any *nix-based system, you can create symbolic links to expose the same file under multiple file names (similar to how, in Git, myriad filenames can point to the same blob via a common SHA1 hash).

Don't drift off simply because you think it's some crazy fringe idea.

For this discussion, this can mean the following:


These resources could be some epic, pristine awesomeness that you want to reuse somewhere. To wire them up, use the following Bash script:


if [ -z "$1" ]; then
    echo "usage: link_common.sh type"
    exit 1
fi

TYPE=$1

mkdir -p `pwd`/$TYPE/template/resources/storage/storageAccounts
mkdir -p `pwd`/$TYPE/template/resources/network/{publicIPAddresses,networkInterfaces,networkSecurityGroups,virtualNetworks}

ln -sf `pwd`/_common/storage/storageAccounts/storage-copyIndex.json `pwd`/$TYPE/template/resources/storage/storageAccounts/storage-copyIndex.json
ln -sf `pwd`/_common/network/publicIPAddresses/pip-copyIndex.json `pwd`/$TYPE/template/resources/network/publicIPAddresses/pip-copyIndex.json
ln -sf `pwd`/_common/network/networkInterfaces/nic-copyIndex.json `pwd`/$TYPE/template/resources/network/networkInterfaces/nic-copyIndex.json
ln -sf `pwd`/_common/network/networkSecurityGroups/nsg-copyIndex.json `pwd`/$TYPE/template/resources/network/networkSecurityGroups/nsg-copyIndex.json
ln -sf `pwd`/_common/network/virtualNetworks/vnet-copyIndex.json `pwd`/$TYPE/template/resources/network/virtualNetworks/vnet-copyIndex.json

Run this:

chmod +x ./link_common.sh
./link_common.sh myimpressivearchitecture

This won't create duplicate files, but it will create files that point to the same content. Change one => change all.
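If the symlink behavior feels too magical, prove it to yourself in a scratch directory:

```shell
mkdir -p /tmp/sym-demo && cd /tmp/sym-demo
echo '{ "version": 1 }' > source.json
ln -sf "$(pwd)/source.json" linked.json
echo '{ "version": 2 }' > source.json   # change one...
cat linked.json                         # ...and the "other" file changed too
```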

Doing this, you might want to make the source-of-truth files read-only. There are a few ways to do this, but the simplest is to give root ownership of the common stuff, then give yourself file-read and directory-list rights.

sudo chown -R root:$USER _common
sudo chmod -R 755 _common 

LINUX NOTE: directory-list rights are set with the directory execute bit

If you need to edit something, you'll have to do it as root (e.g. sudo). This will protect you from doing stupid stuff.

Linux symlinks look like normal files and folders to Windows. There's nothing to worry about there.

This symlinking concept will help you link to already established architectures. You can add/remove symlinks as you need to add/remove resources. This is an established practice in the Linux world. It's very common to have folders like ./sites-available and ./sites-enabled: you never delete from ./sites-available, you simply create or remove links in ./sites-enabled to enable or disable.

Hmm, OK, yes, that is a crazy fringe idea. I don't even do it. Just something you can try on Linux, or on Windows with some sysinternals tools.


When you're watching an introductory video or following a hello world example of ARM templates, throwing variables at a template is great, but I'd never do this in production.

In production, you're going to archive each script that is thrown at the server. You might even have a Git repo for each and every server. You're going to stamp everything with files and archive everything you did together. Because this is how you work anyway, it's best to keep that as an axiom and let everything else mold to it.

To jump to the punchline, after I deploy a template twice (perhaps once with gateways disabled, and once with them enabled, to verify in phases), here's what my ./deploy folder looks like:


Each deployment archives the generated files with the timestamp. Not a whole lot to talk about there.

Let's back up a little bit and talk about dealing with arguments and that arguments-generated.json listed above.

If I'm doing phased deployment, the phase will be suffixed to the deploy folder name (e.g. 09242016-051529.1).

Deployment Arguments

Instead of setting up parameters in the traditional ARM manner, I opt to generate an arguments file. So, my model is to not only generate the "azuredeploy.json", but also the "azuredeploy-parameters.json". Once these are generated, they can be stamped with a timestamp, then archived with the status.

Sure, zip them and throw them on a blob store if you want. Meh. I find it a bit overkill and old school. If anything, I'll throw my templates at my Elasticsearch cluster so I can view the archives that way.

While my azuredeploy-generated.json is generated from myriad JSON files, my arguments-generated.json is generated from my ./template/arguments.json file.

Here's my ./template/arguments.json file:

    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "",
    "parameters": {
        "admin-username": {
            "value": "{{admin-username}}"
        "script-base": {
            "value": "{{blobpath}}/"
        "ssh-public-key": {
            "value": "{{ssh-public-key}}"

My deployment script will add in the variables to generate the final arguments file.

$arguments = @{
    "blobpath" = $blobPath
    "admin-username" = "dbetz"
    "ssh-public-key" = (cat $sshPublicKeyPath -raw)

Aside from the benefits of automating the public key creation for Linux, there's that blobpath argument. That's important. In fact, dynamic arguments like this might not even make sense until you see my support file model.

Support Files

If you are going to upload assets/scripts/whatever to your server during deployment, you need to get them to a place they are accessible. One way to do this is to commit to Git every 12 seconds. Another way is to simply use blob storage.

Here's the idea:

You have the following folder structure:


You saw ./template in VS Code above; in this example, ./support looks like this:


These are files that I need to get on the server. Use Git if you want, but Azure can handle this directly:

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $deploymentrg -Name $deploymentaccount)[0].value
$ctx = New-AzureStorageContext -StorageAccountName $deploymentaccount -StorageAccountKey $key
$blobPath = Join-Path $templatename $ts
$supportPath = (Join-Path $projectFolder "support")
(ls -File -Recurse $supportPath).foreach({
    $relativePath = $_.fullname.substring($supportPath.length + 1)
    $blob = Join-Path $blobPath $relativePath
    Write-Host "Uploading $blob"
    Set-AzureStorageBlobContent -File $_.fullname -Container 'support' -Blob $blob -BlobType Block -Context $ctx -Force > $null
})

This PowerShell code walks my ./support folder and replicates its structure to blob storage.

You ask: "what blob storage?"

Response: I keep a resource group named deploy01 around with a storage account named file (with 8 random letters to make it unique). I reuse this account for all my Azure deployments. You might duplicate this per client. Upon deployment, blobs are loaded with the fully qualified file path including the template that I'm using and my deployment timestamp.

The result is that by the time the ARM template is thrown at Azure, the following URL has been generated and the files are in place to be used:


For each deployment, I'm going to have a different set of files in blob storage.

In this case, the following blobs were uploaded:


SECURITY NOTE: For anything sensitive, disable public access, create a SAS token policy, and use that policy to generate a SAS token URL. Give this a few hours to live so your entire template can successfully complete. Remember, gateways take a while to create. Once again: this is why I do phased deployments.

When the arguments-generated.json is used, the script-base parameter is populated like this:

"setup-script": {
    "value": "https://files0c0a8f6c.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446"

You can then use this parameter to do things like this in your VM extensions:

"fileUris": [
    "[concat(parameters('script-base'), '/install.sh')]"
"commandToExecute": "[concat('sh install.sh ', length(variables('locations')), ' ''', parameters('script-base'), ''' ', variables('names')[copyindex()])]"

Notice that https://files0908bf7n.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446/install.sh is the script to be called, but https://files0908bf7n.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446 is also sent in as a parameter. This tells the script itself where to pull the other files. Actually, in this case, that endpoint is passed a few levels deep.
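Here's a sketch of how such an install.sh might consume those arguments; the argument order is assumed from the concat above, and the helper file name is hypothetical:

```shell
# sketch of what install.sh does with the commandToExecute arguments shown above
# (argument order assumed: node count, script-base URL, node name)
install_node() {
    node_count="$1"; script_base="$2"; node_name="$3"
    echo "configuring $node_name as one of $node_count nodes"
    # the real script would now pull its siblings from the same blob path, e.g.:
    # curl -fsSL "$script_base/setup-helper.sh" -o /tmp/setup-helper.sh
    echo "support files live under $script_base"
}

install_node 3 "https://files0c0a8f6c.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446" node0
```

Because the blob path carries the template name and timestamp, each deployment's scripts only ever see that deployment's files.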

In my script, when I'm doing phased deployment, I can set uploadSupportFilesAtPhase to whatever phase I want to upload support files at. I generally don't do this at phase 1 because, for me, that phase is everything up to the VM or gateway. The support files are for the VMs, so there's no need to play around with them while doing idempotent updates to phase 1.

Visual Studio Code

I have a lot of different editors that I use. Yeah, sure, there's Visual Studio, whatever. For me, it's .NET-only. It's far too bulky for most anything else. For ARM templates, it's absolutely terrible. I feel like I'm playing with VB6 with its GUI-driven resource seeking.

While I use EditPlus or Notepad2 (Scintilla) for most everything, this specific scenario calls for Visual Studio Code (Electron-based, like Atom). It allows you to open a folder directly without the need for pointless SLN files and lets you view the entire hierarchy at once. It also lets you quickly CTRL-C/CTRL-V a JSON file to create a new one (File->New can die). F2 also works for rename. Not much else you need in life.

Splitting a Monolith

Going from an existing monolithic template is simple. Just write a quick tool to open the JSON and dump it into various files. Below is a subpar script I wrote in PowerShell to make this happen:

$templateBase = '\\\dbetz\azure\armtemplates'
$template = 'python-uwsgi-nginx'
$templateFile = Join-Path $templateBase "$template\azuredeploy.json"
$json = cat $templateFile -raw
$partFolder = 'E:\Drive\Code\Azure\Templates\_parts'
$counters = @{ "type"=0 }

((ConvertFrom-Json $json).resources).foreach({
    $index = $_.type.indexof('/')
    $resourceProvider = $_.type.substring(0, $index).split('.')[1].tolower()
    $resourceType = $_.type.substring($index + 1, $_.type.length - $index - 1)
    $folder = Join-Path $partFolder $resourceProvider
    if(!(Test-Path $folder)) {
        mkdir $folder > $null
    }
    $netResourceType = $resourceType
    while($resourceType.contains('/')) {
        $index = $resourceType.indexof('/')
        $parentResourceType = $resourceType.substring(0, $index)
        $resourceType = $resourceType.substring($index + 1, $resourceType.length - $index - 1)
        $netResourceType = $resourceType
        $folder = Join-Path $folder $parentResourceType
        if(!(Test-Path $folder)) {
            mkdir $folder > $null
        }
    }
    $folder = Join-Path $folder $netResourceType
    if(!(Test-Path $folder)) {
        mkdir $folder > $null
    }
    $counters[$_.type] = $counters[$_.type] + 1
    $file = $folder + "\" + $netResourceType + $counters[$_.type] + '.json'
    Write-Host "saving to $file"
    (ConvertTo-Json -Depth 100 $_ -Verbose).Replace('\u0027', '''') | sc $file
})

Here's a Python tool I wrote that does the same thing, but the JSON formatting is much better: https://jampadcdn01.azureedge.net/netfx/2016/09/modulararm/armtemplatesplit.py

This is compatible with Python 3 and legacy Python (2.7+).

Deploy script

Here's my current armdeploy.ps1 deploy script:

Deploy ARM Template (ps1)

PowerShelling Azure DNS Management

DNS is one of those topics that every developer has to deal with. It's also one of those topics some of us developers get to deal with. I love DNS. It allows all manner of flexible addressing, failover, and load-balancing techniques. Understanding DNS takes you a long way in a whole lot of scenarios.

However, the difference between the fun architecture and possibilities of DNS and the implementation has often been the difference between a great sounding class syllabus and the horrible reality of a boatload of homework. Windows DNS, for example, was historically cryptic GUI gibberish. BIND was OK and has a very simple config format, but it scared most people off (you simply cannot win with people who hate GUIs and command lines).

Azure DNS lets you config via the GUI (noooooob; jk, the GUI is pretty nice), ARM templates, Azure CLI (=Linux), and PowerShell.

But, HOW do you manage the config? What format? How do you deploy?

One way is to do everything in ARM templates. Check out this example:

  "resources": [
      "type": "Microsoft.Network/dnszones",
      "name": "example.org",
      "apiVersion": "2016-04-01",
      "location": "global",
      "properties": { }
      "type": "Microsoft.Network/dnszones/a",
      "name": "example.org/mysubdomain01",
      "apiVersion": "2016-04-01",
      "location": "global",
      "dependsOn": [
      "properties": {
        "TTL": 3600,
        "ARecords": [{
            "ipv4Address": ""

There's nothing wrong with that. If you want to have a billion lines of JSON for the sake of a single DNS record, you have fun with that. Ugh. Fine. Yes, it will be idempotent, but it's severe overkill for quick updates.

Because PowerShell is effectively the .NET REPL, you can write a simple PowerShell (=.NET) tool to handle any custom format you want.

The following is one way of formatting your DNS updates:

            Name= "example2"
            Name= "mail"

I dunno. I find that pretty simple. I'll use that when I set up something new.

Here's my function (with call) to make that work:

    function deployDns { param([Parameter(Mandatory=$true)]$rg, [Parameter(Mandatory=$true)]$zonename, [Parameter(Mandatory=$true)]$ttl, [Parameter(Mandatory=$true)]$records, [bool]$testOnly)
        $records.foreach({
            $name = $_.Name
            Write-Host "Name: $name"
            $dnsrecords = @()
            $_.Records.foreach({
                $config = $_
                $type = $config.Type
                switch($type) {
                    "CNAME" {
                        Write-Host ("`tCNAME: {0}" -f $config.Value)
                        $dnsrecords += New-AzureRmDnsRecordConfig -Cname $config.Value
                    }
                    "MX" {
                        Write-Host ("`tPreference: {0}" -f $config.Preference)
                        Write-Host ("`tExchange: {0}" -f $config.Exchange)
                        $dnsrecords += New-AzureRmDnsRecordConfig -Preference $config.Preference -Exchange $config.Exchange
                    }
                    "A" {
                        Write-Host ("`tIpv4Address: {0}" -f $config.Value)
                        $dnsrecords += New-AzureRmDnsRecordConfig -Ipv4Address $config.Value
                    }
                    "AAAA" {
                        Write-Host ("`tIpv6Address: {0}" -f $config.Value)
                        $dnsrecords += New-AzureRmDnsRecordConfig -Ipv6Address $config.Value
                    }
                    "NS" {
                        Write-Host ("`tNsdname: {0}" -f $config.Value)
                        $dnsrecords += New-AzureRmDnsRecordConfig -Nsdname $config.Value
                    }
                    "PTR" {
                        Write-Host ("`tPtrdname: {0}" -f $config.Value)
                        $dnsrecords += New-AzureRmDnsRecordConfig -Ptrdname $config.Value
                    }
                    "TXT" {
                        Write-Host ("`tValue: {0}" -f $config.Value)
                        $dnsrecords += New-AzureRmDnsRecordConfig -Value $config.Value
                    }
                }
            })
            Write-Host "Records:"
            Write-Host $dnsrecords
            if(!$testOnly) {
                New-AzureRmDnsRecordSet -ResourceGroupName $rg -ZoneName $zonename -RecordType $type -Name $name -Ttl $ttl -DnsRecords $dnsrecords
            }
        })
    }
    deployDns -testOnly $true -rg 'davidbetz01' -zonename "davidbetz.net" -ttl 3600 -records @(
        @{
            Name = "example2"  # illustrative record values
            Records = @(
                @{ Type = "CNAME"; Value = "example.org" }
            )
        }
        @{
            Name = "mail"
            Records = @(
                @{ Type = "MX"; Preference = 10; Exchange = "mail.example.org" }
            )
        }
    )

Is this insane? Probably. That's what I'm known for. ARM templates might be smarter given their idempotent nature, and I've found myself using the GUI now and again.

For now, just keep in mind that PowerShell lets you be ultra flexible with your Azure configuration, not just Azure DNS.

It's a NoSQL database

The following exchange happened 10 minutes after I did a webinar explaining how we need to be specific about database storage and never use the unbelievably unhelpful and pointless term "NoSQL".

What I asked: What can you tell me about that type of database?

Response: It's a NoSQL database.

What I meant: What kind of structure does this database use? Relational? Hierarchical? Document? Graph? Key-value? Column-storage? Wide table? What kind of internal storage does this database use? B-tree? BW-tree? Memory mapping? Hash tables with linked lists? Is the data per collection/table or is that an abstraction for underlying partitions? Does the data format lend itself to easy horizontal partitioning (e.g. sharding)? If it's a sharded system, how does it deal with the scatter-gather problem and multi-node write confirmations? Is there a data router? Are replicas readable? Are there capabilities for edge nodes? How is data actually tracked (e.g. bitmaps, statistics)? Are records/documents mutable or immutable with delta structures? Are records deleted immediately or internally marked for deletion? What is this system's memory management strategy? What kind of aggregations are possible with this database? Are there silly 100MB aggregation limits or is it willing to actually use the 128GB of RAM we just paid for specifically for the purpose of preventing our massive aggregates from needing to write to disk? Is there a transaction/operations log? Is there support for transactions? How does this relate to rollbacks and commits? Are there various data isolation levels? Is locking pessimistic, optimistic, or just configurable? What can you tell me about available local partitioning strategies? Can you partition indexes to various local disks to optimize performance? What mechanisms allow us to optimize reads and writes separately? Is there native encryption at the database, table, field, or column level? Are there variable storage engines for this database? At what various levels does compression take place?

What I was told: You don't access it with SQL.


Deploying Docker Static Applications

When throwing together a basic UI, lately I've been using React.

It's fun for smaller projects, but it's entirely useless for major projects. Given that the HTML is inside the JS (JSX), your artists/designers who write the HTML are pretty much sidelined for all HTML designs after the initial one. All subsequent changes are made by engineers, who should never need to know the difference between aqua and cyan and should never care about box dimensions. That's why you hired an HTML artist. UX engineer is an oxymoron.

In a different part of the client-side world is Angular, which forces you to deal with TypeScript. While it's one of the only programming languages, along with Go and to some extent Python, to get interfaces right, that one good thing isn't enough to make me ever want to go back to dealing with types. 16 years of C# is enough, thank you. Types lead to false negatives. You don't care that something is an integer; you care that it's between 2 and 12. Tests always outrank types.

Regardless of the poison you drink, you have to strip out something to make it work on the web. In the case of React, JSX must be removed. In the case of Angular, TypeScript must be removed. In both cases, the concept of components must be flattened. Thus, you always end up with a build process for client-side applications.

Raw ES5 + flux pattern is raw legit power. No frameworks. Check it out.

Furthermore, there's always more than mere files. You always have to think about how those files will get to the end user. Static files contain no inherent execution mechanism. Something must serve them. Of course, this is what a web server is for.

To summarize: to get your application deployed, you need a way to build the application and you need a way to serve it. How did you get these files? Build process. How do you deliver these files? You need a web server.

There's a very simple single bullet for this solution: Docker.


Examine the following single Dockerfile for building a React application:

#+ this staging area is thrown out, so no need to optimize too much
FROM node:8.11-alpine as staging

WORKDIR /var/app

RUN npm install -g create-react-app

#+ nginx.conf
RUN echo c2VydmVyIHsKICAgIGxpc3RlbiA4MDsKCiAgICBsb2NhdGlvbiAvIHsKICAgICAgICByb290IC92YXIvYXBwOwogICAgICAgIHRyeV9maWxlcyAkdXJpIC9pbmRleC5odG1sOwogICAgfQp9Cg== | base64 -d > /etc/nginx.conf

WORKDIR /var/app

COPY package.json /var/app

RUN npm install

COPY . /var/app

RUN npm run build

FROM nginx:1.13.9-alpine

COPY --from=staging /var/app/build /var/app/
COPY --from=staging /etc/nginx.conf /etc/nginx/conf.d/default.conf


ENTRYPOINT ["nginx", "-g", "daemon off;"]

There are two parts: staging and your application.

The staging area starts with a Node binary, sets up the React environment by installing create-react-app (Facebook is horrible at naming things), then it does some magical voodoo (we'll come back to that), then it builds the application.

The second stage starts with an Nginx binary, copies over your application and a config file, then runs Nginx.

In the end, Docker will create a binary of your application that will run Nginx, which will serve your files.

That's literally everything you need.

You just build and run:

docker build . -t registry.gitlab.com/your_gitlab_name/example:prod-latest
docker run -p 80:80 registry.gitlab.com/your_gitlab_name/example:prod-latest

Your application is working and is production ready.


About that magic voodoo...

When using Docker, you don't always need to mess with files. If you can avoid adding files to your application, you should do so. Because Docker lets you run pipes and redirect stdout, you can do much of this inline.

The staging area contained the following line:

RUN echo c2VydmVyIHsKICAgIGxpc3RlbiA4MDsKCiAgICBsb2NhdGlvbiAvIHsKICAgICAgICByb290IC92YXIvYXBwOwogICAgICAgIHRyeV9maWxlcyAkdXJpIC9pbmRleC5odG1sOwogICAgfQp9Cg== | base64 -d > /etc/nginx.conf

When you run the command without the redirect in a shell, you get the following:

[dbetz@ganymede ~]$ echo c2VydmVyIHsKICAgIGxpc3RlbiA4MDsKCiAgICBsb2NhdGlvbiAvIHsKICAgICAgICByb290IC92YXIvYXBwOwogICAgICAgIHRyeV9maWxlcyAkdXJpIC9pbmRleC5odG1sOwogICAgfQp9Cg== | base64 -d
server {
    listen 80;

    location / {
        root /var/app;
        try_files $uri /index.html;
    }
}

It's the nginx.conf file.
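If you want to produce an inline, file-free RUN instruction like that for your own config, here's a sketch:

```shell
# sketch: encode a small config file into an inline Dockerfile RUN instruction
cat > nginx.conf <<'EOF'
server {
    listen 80;

    location / {
        root /var/app;
        try_files $uri /index.html;
    }
}
EOF

# encode it as a single base64 line (tr strips the wrapping newlines)
b64=$(base64 nginx.conf | tr -d '\n')

# emit the Dockerfile instruction that decodes it back at build time
echo "RUN echo $b64 | base64 -d > /etc/nginx.conf"
```

Paste the emitted line into your Dockerfile and the config never exists as a file in your repo.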

Now you can see why the second stage (FROM nginx:1.13.9-alpine), the one you're putting in production, is nginx. This is literally the web server that's serving up the production-ready files.

Run it on your server and you're done.


Nothing is complete without SSL security, but I don't recommend doing that with your Docker binaries.

Your binaries represent the application and only the application. SSL is an infrastructure add-on to your application.

Do your TLS on your host machine. This will give you more flexibility too since a single nginx surface will listen on all addresses at once and you can simply use server_name to match, thus enabling you to use a single IP address for an army of FQDNs.
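A sketch of what that host-level nginx surface might look like (the certificate paths and the published port 8080 are illustrative):

```nginx
# sketch: TLS terminated on the host, one server block per FQDN
server {
    listen 443 ssl;
    server_name app1.example.org;

    ssl_certificate     /etc/letsencrypt/live/app1.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app1.example.org/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the port your Docker app publishes
    }
}
```

Add another server block with a different server_name for each application, all behind the same IP address.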

An Advanced, Practical Introduction to Docker

This is an advanced, practical introduction to Docker. It's mainly for people who have used Docker, but want a deeper understanding. It's similar to, after finishing Calculus I, II, III, and DiffEq, taking Advanced Calculus to go back over limits, derivatives, and integration at a deeper level.

There's no such thing as an "operating system" in a Docker container. There's no such thing as being "in" a Docker container. What you call a Docker "container" is just an abstraction.

Docker "containers" aren't like virtual machines; you're not creating a general purpose environment with its own kernel. What you're doing is creating a runnable, deliverable Docker binary that contains the minimum that's needed to run a single application. You're delivering an application, not an environment. When you hear "container", think "application". When you hear "image", think "binary".

Docker hides so many of the underlying mechanics that you get the impression you're dealing with lightweight virtual machines. We should be grateful for the fact that we're victims of Docker's success.


The basic idea behind Docker is that Linux already has the capabilities for creating isolation, they just needed to be harnessed in a user-friendly manner. Docker is largely a front-end that abstracts already existing Linux cleverness, including namespaces (isolation) and cgroups (resource utilization).

The session "What Have Namespaces Done for You Lately?" by Liz Rice helps to demonstrate this concept; she effectively builds her own Docker-like tool from the ground up using Go (which is what Docker is written in!)

When you're running what is colloquially known as a Docker "container", you're running a process just like any other process, but with a different namespace ID. This namespace concept is the same concept you already know from C# and C++: it separates entities so they don't conflict. Thus, in Linux, you can have process ID 1 in one namespace and process ID 1 in another. They aren't in different environments like virtual machines. They're isolated, but not entirely separate.

Namespaces also let you have /some/random/file in one namespace and a different /some/random/file in another namespace: think super-chrooting. You can even have something listening on port 80 in one namespace and something entirely different listening on port 80 in a different namespace. No conflicts.

There's just a lot of namespace magic to give the illusion of various "micro-machines". In reality, there are no "micro-machines"; everything is running in the same space, but with a simple label separating them.
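You can see these labels directly on any Linux machine: every process's namespace memberships are exposed under /proc, and two processes that print the same ID share that namespace.

```shell
# inspect the namespace IDs of the current process; any process printing the
# same ID shares that namespace -- this is the entire "container" boundary
readlink /proc/self/ns/pid
readlink /proc/self/ns/net
readlink /proc/self/ns/mnt
```

A process started via docker run simply prints different IDs here than your shell does.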

The term "container" and the preposition "in" lead to extreme confusion. There's nothing "in" a container, but the terminology is pretty much baked into the industry at this point. Note also, you never run something "in" Docker, but you can run something "using" Docker.

One way to prove to yourself that there's no voodoo subsystem is to look at how ps works on your machine: you see the processes across each namespace. You may be running Elasticsearch and MongoDB as separate Docker "containers", but both of them will show up in the same ps output on your host machine.

See example below:

[dbetz@ganymede src]$ docker run -d mongo:3.7

[dbetz@ganymede src]$ docker run -d docker.elastic.co/elasticsearch/elasticsearch:6.2.4

[dbetz@ganymede src]$ ps aux | grep -E "mongo|elastic"
polkitd   43993  1.1  0.7 985104 56008 ?        Ssl  15:11   0:01 mongod --bind_ip_all
dbetz     45304 74.0 14.3 4040776 1145040 ?     Ssl  15:13   0:02 /usr/lib/jvm/jre-1.8.0-openjdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.uHJg0AmQ -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.cgroups.hierarchy.override=/ -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/usr/share/elasticsearch/config -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch

A solid grasp of namespaces is critical to understanding Docker. Once you understand namespace concepts, you can move to understanding how namespaces can interact with each other. That's the larger world of Docker that extends deep into the design and deployment of orchestration.

To further review and reframe Docker concepts, let's recognize some of the resource types Docker uses. For the purpose of this discussion, let's use Azure's provider categories. This should keep the concepts general enough for reuse and specific enough to be practical.

The different resource types are:

  • compute (e.g. processes)
  • storage
  • networking

There are others as well, but they're usually very similar to the others (e.g. IPC is similar to networking).

When you spin something up using Docker (e.g. docker run), it will have everything in its own namespaces: the process, storage, and networking. You manage the mapping between namespaces yourself, per resource type.

Let's review with an example.

Run Elasticsearch ("ES"):

docker run -d docker.elastic.co/elasticsearch/elasticsearch:6.2.4

ES will run in its own process (PID) namespace. It will listen on port 9200 in its own network namespace. It will store data at /usr/share/elasticsearch/data in its own mount namespace. It's entirely sandboxed.

To make ES practical, you need to map 9200 to something that can touch your network card and /usr/share/elasticsearch/data to something in a less ephemeral location.

Here's our new command:

docker run -d -p 9200:9200 -v /srv/elasticsearch6:/usr/share/elasticsearch/data docker.elastic.co/elasticsearch/elasticsearch:6.2.4

The point of reviewing these fundamental concepts is to further train your intuition in terms of namespaces. It's important to have this intuitive training before going too deep into more Docker concepts like volumes or networks. Without training your intuitions to work in terms of namespaces, you'll inevitably end up with confused analogies with virtual machines, inefficient images, overly complex deployments, and unbelievably confused discussions.

On the other hand, with this understanding, it should be easy to understand that Docker represents applications, not operating systems. There are no kernels in Docker binaries or in "containers" just as there are no network drivers in your application's tarballs or zip files.

Namespaces are clever and very helpful. If you were to write a plug-in model for an application, you could instantiate each plug-in into a different namespace, then share an IPC namespace for communication. Supposedly, Google Chrome on Linux does something similar. Namespaces give you an easy, built-in way to do jailing/sandboxing.

Consider also this: because Docker spins up processes just like any other process, each process has the same direct hardware access. Once you do a few device mappings to let the process in that namespace know where the real-world hardware is, you're solid. So, you don't have to put too much thought into how to get GPU access set up. Consider this in the context of this very confused SO discussion where people continue to cause confusion by talking about something "inside Docker containers". There is no "inside"; it's a process like any other.

Run man unshare on Linux to see the details for a native tool that creates namespaces.


Docker "containers" are created from binaries called images. Docker images are merely Docker binaries of your application just like any other deliverable binary format (e.g. binary tarball).

These images are nothing more than file-system layouts with some metadata. The blueprint that provides instructions on how to build the file-system layout for an image is a Dockerfile.

This file-system will contain the application binary you want Docker to run. When your application runs, it may reach for various files (e.g. /lib64/libstdc++.so.6); these files just need to be where the application would expect them.

A Dockerfile also provides metadata that describes the resulting binary, including an instruction for how to start your application (e.g. CMD, ENTRYPOINT).

The most important concept reframe here is this: the resulting Docker image is your complete deliverable application binary. It does not represent a system, just a single application.

Take care to avoid large multi-level image inheritance for the sake of "standardization". Standardization is the exact opposite of what you want with Docker. Tailor the deliverable to your specific application's needs.

Image Starting Points

Your application will run like any other application on your system. As such, it will follow the same rules of dependencies as any other application: if your application needs a file in order to run, you need to make sure it's within grasp. A solid understanding that these file-systems exist in different namespaces instead of different subsystems enables flexible ways of satisfying dependency requirements.

For example, if your machine already has a fairly large file (e.g. /lib64/liblarge.so.7), instead of putting it in each image, keep it on the host and map it at run time (-v /lib64/liblarge.so.7:/lib64/liblarge.so.7). When Docker sees the running application ask for /lib64/liblarge.so.7, it will get it from the host machine. This concept, similar to symlinking, is at the heart of some important techniques discussed later.

When creating images, one option you have is to create a file-system from scratch. This entails adding each and every file to the proper location in the image. Much of what follows a bit later will pursue this method and explain how to effectively create such lightweight images.

Another option you have is to build your file-system on an existing file-system template. This is the traditional approach most applications use. It maximizes portability, but the resulting images are larger, containing a huge number of unused files.

When not careful, this second approach leads to atrocious misunderstandings.

Consider the following Dockerfile:

FROM ubuntu

RUN groupadd user01 \
  && useradd --gid user01 user01

RUN apt-get update && apt-get install -y sometools

CMD [ "sometool" ]

This file could very well lead many to think that there's an Ubuntu operating system "base image" that you're using and extending. This is entirely wrong.

As mentioned previously, Docker is primarily a front-end for existing Linux functionality. There is no concept of a hypervisor subsystem or the like. Applications run as they have always run. There are no kernels in images, therefore there are no operating systems in images. Docker does not work with operating systems, it works with applications. There is no OS "base image". There is no place for sysadmins to do any work with Docker at all. Your CMD/ENTRYPOINT instruction does not start init nor systemd; it starts your application.

Ubuntu is not in your image, only a file-system that looks like an Ubuntu file-system is in your image. FROM ubuntu merely states that the Dockerfile will start with a file-system template that looks like Ubuntu. You use it when you don't care about the size of your image and really need your application to work in an Ubuntu-like file-system. If your host system is RHEL, your binaries still run on RHEL -- Docker does not deal with operating systems.

For the most part, using Linux OS file-system templates is a very poor practice. They are largely not optimized for Docker. However, there is one OS file-system template that is optimized for Docker: Docker Alpine.

Docker Alpine provides a very small OS file-system template that maximizes application portability while minimizing binary size.

The previous Dockerfile would transform to Docker Alpine like this:

FROM alpine

RUN addgroup -g 1000 user01 && \
    adduser -D -u 1000 -G user01 user01

RUN apk add --no-cache sometools

CMD [ "sometool" ]

The resulting binary would be much smaller. Yet, keep in mind that Alpine is not in your Docker image. Docker does not put operating systems into images. Your image is merely built on a file-system template that looks like an Alpine Linux file-system.

Do not confuse Docker Alpine with Alpine Linux. The former is a file-system template that looks like an Alpine file-system, while the latter is an operating system for routers, tiny Linux deployments, and the Raspberry Pi.

When creating portable images without extensive binary optimization, Docker Alpine is the only viable option. Do not use FROM centos or FROM ubuntu in any environment. These lead to extremely large images and cause severe confusion.

This bears repeating: the entire point of Docker is to run your application. To do this, you just need to make sure your application has what it needs to run. The question is not "What do I build my application on?", the question is "What specific files does my application require?" Your application most likely doesn't need 90% of the files that a Linux file-system template provides; it probably just needs a few libraries. It may not even need the full XYZ library, but just file Y.

Docker lets you optimize your application like this. If you can identify the dependencies of your application, you'll be able to build your file-system FROM scratch.
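If you're not sure what those dependencies are, ldd will list them. A quick sketch, using /bin/ls as a stand-in for your own binary:

```shell
# List the shared libraries a binary needs; each line of output is a file
# you could COPY into a scratch image. /bin/ls stands in for your app here.
ldd /bin/ls
```

Run it against your actual application binary; everything it prints is a candidate for your scratch image.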


At runtime, you're working with namespaces. At build time, you're working with images. Your ability to create optimal images is directly proportional to your understanding of namespaces and your application.

Let's jump right to an example of creating a tiny, usable Docker image...

First let's look at the hello.asm file we want to run (taken from http://cs.lmu.edu/~ray/notes/x86assembly/):

        global  _start

        section .text
_start:
        ; write(1, message, 13)
        mov     rax, 1                  ; system call 1 is write
        mov     rdi, 1                  ; file handle 1 is stdout
        mov     rsi, message            ; address of string to output
        mov     rdx, 13                 ; number of bytes
        syscall                         ; invoke operating system to do the write

        ; exit(0)
        mov     eax, 60                 ; system call 60 is exit
        xor     rdi, rdi                ; exit code 0
        syscall                         ; invoke operating system to exit

        section .data
message:
        db      "Hello, World", 10      ; note the newline at the end

Our goal is to create a tiny, deliverable Docker binary that writes-out "Hello, World".

Here's how we'll do it:

FROM alpine as asm

WORKDIR /elephant

COPY hello.asm .

RUN apk add --no-cache binutils nasm && \
    nasm -f elf64 -o hello.o hello.asm && \
    ld -o hello hello.o

FROM scratch

COPY --from=asm /elephant/hello /

ENTRYPOINT ["./hello"]

This Dockerfile uses two stages: a build stage and a run stage. The first stage installs NASM, assembles the code, then links the application; the second carefully places the application into the deliverable Docker binary. The second stage contains a single file: /elephant/hello. It does not contain NASM, the source code, nor any intermediate files.

You can use as many stages as you want: sometimes you'll need a CI-setup stage (setup tools), then a back-end build stage (setup Node, run npm install), then a front-end build stage (build Angular), then a final stage to carefully place files from previous stages (copy /node_modules/ and Angular dist/ files to the Node application). Only the final stage is deployed; everything else is thrown out.

This results in the following:

[dbetz@ganymede tiny-image]$ docker build . -t local/tiny-image
Sending build context to Docker daemon  4.096kB
Step 1/7 : FROM alpine as asm
 ---> 3fd9065eaf02
Step 2/7 : WORKDIR /elephant
Removing intermediate container da8e9f72ebd2
 ---> 29896ad4bb3c
Step 3/7 : COPY hello.asm .
 ---> 9ccc8ab38794
Step 4/7 : RUN apk add --no-cache binutils nasm &&     nasm -f elf64 -o hello.o hello.asm &&     ld -o hello hello.o
 ---> Running in f99cbecc309d
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/3) Installing binutils-libs (2.30-r1)
(2/3) Installing binutils (2.30-r1)
(3/3) Installing nasm (2.13.01-r0)
Executing busybox-1.27.2-r7.trigger
OK: 17 MiB in 14 packages
Removing intermediate container f99cbecc309d
 ---> d840e66bfbdb
Step 5/7 : FROM scratch
Step 6/7 : COPY --from=asm /elephant/hello /
 ---> fd85715eaf85
Step 7/7 : ENTRYPOINT ["./hello"]
 ---> Running in e163f47f4d8a
Removing intermediate container e163f47f4d8a
 ---> 1f30c749e8b9
Successfully built 1f30c749e8b9
Successfully tagged local/tiny-image:latest

[dbetz@ganymede tiny-image]$ docker run local/tiny-image
Hello, World

[dbetz@ganymede tiny-image]$ docker image ls | grep "local/tiny-image"
local/tiny-image                                    latest                                     1f30c749e8b9        15 seconds ago      848B

It builds and it runs, and the entire binary is 848 bytes.

The more functionality you add to your binary, the larger it grows. Your image size should remain somewhat proportional to your functionality. That's what you'd expect from a tarball; that's how you should think with Docker.

This means that you should be careful with what files go into your resulting binary. This means being careful with how you satisfy your application's dependency needs.

Would you really throw an entire Linux operating system into your tarball?

Practical scratch

In the previous assembler example, we had an application with zero dependencies. When this is your situation, your Docker image size will be very near your application size. You want them to be as close as possible.

One popular way to satisfy this need is to use Go: it can output statically linked binaries that require zero dependencies. Go has many places where it fits nicely. You can see, for example, my recursivecall for Docker project. Docker itself is also written in Go.

On the other hand, Go doesn't have deep support for dynamic types. This means you won't have the JavaScript/Python dynamic object concept. Instead, you'll have to refresh yourself on those data-structures we all forgot decades ago.

Regardless, while Go is beautiful for many uses, you already have applications. Let's focus on deploying those applications via Docker, not rewriting them in Go.

For this next section, let's assume that our application /var/app/runner requires /usr/lib64/libc.so.6. Our application will crash if it doesn't find /usr/lib64/libc.so.6.

In this situation, we have two options based on your understanding of Docker namespaces:

  1. copy /usr/lib64/libc.so.6 into the image with /var/app/runner

  2. link /usr/lib64/libc.so.6 from the host machine to your running application's namespace

The first option can be accomplished with a multi-stage build with a simple COPY from the first stage:

FROM ubuntu as os

FROM scratch

COPY ./runner /var/app/runner

COPY --from=os /usr/lib64/libc.so.6 /lib64/

CMD ["/var/app/runner"]

Remember, the Ubuntu stage will be thrown out, but, yeah, you should still try to use Alpine where possible.

This will create a portable binary; everything the application needs will be within reach.

As your portability increases, so does your binary size. When you need more than just a few files and you must maintain portability (e.g. posting to Docker Hub), it's time to use the Docker Alpine file-system template.

However, in the case where you control your environment, and thus don't require portability, the second approach may be better.

It allows you to provide a much simpler Dockerfile:

FROM scratch

COPY ./runner /var/app/runner

CMD ["/var/app/runner"]

Instead of copying the dependency into the image, you tell Docker at run-time to use a different namespace to satisfy the dependency.

Your build and run would look like this:

docker build . -t local/runner
docker run -v /usr/lib64/libc.so.6:/lib64/libc.so.6 local/runner

If you don't want to play around with each and every file, just map the entire /lib64/ folder.

docker run -v /lib64/:/lib64/ local/runner

Since most libraries are loaded from /lib64/, this technique will account for a large percentage of your scenarios.

Practical scratch with Node

Let's make this more real-world by manually building a Docker Node binary which our deliverable Docker application binaries will use.

Here's our Dockerfile:

FROM alpine

RUN apk add --no-cache curl && \
    mkdir -p /tmp/node && \
    mkdir -p /tmp/etc && \
    curl -s https://nodejs.org/dist/v8.11.2/node-v8.11.2-linux-x64.tar.xz | tar -Jx -C /tmp/node/

RUN addgroup -g 500 -S nodeuser && \
    adduser -u 500 -S nodeuser -G nodeuser

RUN grep nodeuser /etc/passwd > /tmp/etc/passwd && \
    grep nodeuser /etc/group > /tmp/etc/group

FROM scratch

COPY --from=0 /bin/sh /tmp/node/node-v8.11.2-linux-x64/bin/node /bin/
COPY --from=0 /usr/bin/env /usr/bin/
COPY --from=0 /tmp/etc/passwd /tmp/etc/group /etc/

CMD  ["/bin/node"]

The resulting deliverable Docker binary will contain the node binary, passwd/group, and env (as an example of copying something you may need in Node development).

The first stage downloads Node, creates a user and group, then simplifies /etc/passwd and /etc/group. Only the final stage represents the deliverable binary.

Build and run:

docker build . -t local/node8
docker run -it local/node8

Building and running results in the following error:

standard_init_linux.go:195: exec user process caused "no such file or directory"

Run it again with the mapping:

docker run -it -v /lib64/:/lib64/ local/node8

It works.

[dbetz@ganymede node8]$ docker run -it -v /lib64/:/lib64/ local/node8 node

Let's check the application version:

[dbetz@ganymede node8]$ docker run -it -v /lib64/:/lib64/ local/node8 node -v

What's our size?

[dbetz@ganymede ~]$ docker image ls | grep "local/node8"
local/node8                                           latest                                     901c2740deb9        12 seconds ago        36.4MB

It's 36.4MB. Pretty small.

Your Docker application binary will only contain 36.4MB of overhead when you ship your product.

Using our binary

With the Docker Node image built, we can build our deliverable application binary.

FROM node:8.11.2-alpine as swap-space

WORKDIR /var/app

COPY package.json /var/app/

RUN npm install

COPY . /var/app

FROM local/node8

WORKDIR /var/app

COPY --from=swap-space /var/app/ /var/app/


USER nodeuser:nodeuser

ENTRYPOINT ["node", "server.js"]

The first stage will use an official Docker Node binary to prepare our application. The second stage merely copies the application in. NPM isn't needed for your application to run. It only needs the application folder consisting of your code and node_modules/.

Build it and push it out (real-world example):

TAG=`date +%F_%H-%M-%S`
docker build . -t local/docker-sample-project:$TAG -t registry.gitlab.com/davidbetz/docker-sample-project:$TAG

docker push registry.gitlab.com/davidbetz/docker-sample-project:$TAG

/etc/passwd and /etc/group

The addition of /etc/passwd and /etc/group is an artifact of how most Linux tools work: they want a name, not a UID or GID. You create a user and group just to name them. You can't simply specify UID 500.

Because /etc/passwd and /etc/group are part of a file-system in a specific namespace, tools use the files within the file-system they're looking at to do the ID to name lookup.
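Because they're just text files in whatever file-system the tool sees, you can even hand-craft minimal versions for a scratch image. A sketch (the nodeuser name and UID/GID 500 are illustrative):

```shell
# A one-line /etc/passwd and /etc/group is enough to give UID/GID 500 a name.
# These files would then be COPY'd into the scratch image's /etc/.
mkdir -p /tmp/etc
echo 'nodeuser:x:500:500::/home/nodeuser:/sbin/nologin' > /tmp/etc/passwd
echo 'nodeuser:x:500:' > /tmp/etc/group
```

This is the same effect the grep trick in the earlier Dockerfile achieves: strip /etc/passwd and /etc/group down to only the entries your image needs.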

This gives us an opportunity to do an experiment...

Let's run MongoDB in the background:

[dbetz@ganymede ~]$ docker run -d mongo

Let's execute sh in that namespace:

Remember, you aren't going into a container; there is no container. But it's still phenomenological language, like "the sun rises". You'll end up talking about "containers", but remember they're merely abstractions.

[dbetz@ganymede ~]$ docker exec -it 8e4b sh

Let's see the processes from the perspective of that namespace:

# ps aux
mongodb       1  7.0  0.6 984072 55524 ?        Ssl  18:43   0:00 mongod --bind_

OK, so the user for mongod is mongodb. Let's get the UID and GID for mongodb:

# grep mongodb /etc/passwd
# grep mongodb /etc/group

It's 999.

Now exit sh and look for mongod in your processes on your host machine:

[dbetz@ganymede ~]$ ps aux | grep mongod
polkitd   40754  0.6  0.7 990396 58628 ?        Ssl  13:43   0:02 mongod --bind_ip_all

The user is polkitd, not mongodb.

Why polkitd?

Well, look for user 999 in your /etc/passwd file:

[dbetz@ganymede ~]$ grep 999 /etc/passwd
polkitd:x:999:997:User for polkitd:/:/sbin/nologin

ps saw 999 and used the /etc/passwd within reach to do the lookup, thus interpreting it as polkitd.
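You can emulate exactly the lookup ps performed with a one-liner; here against UID 0, which exists everywhere:

```shell
# Map a UID to a name using whatever /etc/passwd is within reach --
# the same lookup ps did when it printed "polkitd" for UID 999.
awk -F: '$3 == 0 { print $1 }' /etc/passwd
```

On virtually every system this prints root; run it with 999 on the host above and you'd get polkitd.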

Key Takeaways

  • Docker uses existing Linux functionality.
  • There is no "subsystem" or hypervisor.
  • Images do not contain operating systems.
  • Docker images are merely Docker binaries of your application.
  • Your Docker images should not contain anything other than your application and its dependencies.
  • Images are either from scratch or based on a file-system template like Docker Alpine.
  • You can map files already on your system's file-system to minimize the size of images.

Using Azure App Services with Node.js

We often hear how Azure is "Microsoft" whereas other cloud providers aren't. In the most obvious sense, they're right -- Microsoft owns it. However, when you look closer, what they actually mean is that Azure is Microsoft-only and Google/AWS are open to other programming models.

This is ridiculous and only said by people gulping down electrolyte-loaded propaganda by the water cooler. In reality, there's nothing proprietary or Microsoft-only about Azure as a whole. It's a nonsensical bias to say otherwise. Azure is platform agnostic. You don't need a VM (=IAAS) to do your Node (or Python) development. Absolutely zero hacks are required to put your Node application in Azure App Services (=PAAS).

It reminds me of when HTML5 became popular: there were still Flash-zealots pushing their "LOL browsers can't do animation nonsense LOL". They forgot to leave their echo chamber before attempting to enter reality.

Software engineers steeped in Microsoft technologies for over a decade understand that you must make a distinction between Ballmer and non-Ballmer Microsoft. To give an extreme contrast: the former is VB6, the latter is Linux in Windows. You can see the transition from Ballmer-nonsense to non-Ballmer-sanity since around .NET 4 (especially in the adoption of ASP.NET MVC as an open-source tool). Ballmer is Silverlight, non-Ballmer is adoption-of-HTML5. You can go on down the line.

Yes, Microsoft still has some war crime-level trash software, .NET Core being one of the brightest shining examples, but viewed in the light of the fact that just about everything Oracle touches is trash -- they're doing pretty well in the brave new world.

In the end: Ballmer-Microsoft is Microsoft-as-evil-empire. Today's Microsoft is fully amiable toward Linux; they also rely on Github for many SDKs and for just about all Azure documentation. It's a different beast.

You need to examine Azure through this Ballmer / non-Ballmer paradigm. Put concretely: Windows Azure (and Azure ASM) was Ballmer whereas Microsoft Azure (and Azure ARM) is non-Ballmer. Much of the "LOL Azure is Microsoft-only LOL" nonsense comes from confusion about the transition between Azure "versions". Much of this is also Microsoft's fault: just about all the books out there are completely obsolete! The official book for the 70-532 exam will absolutely guarantee that you fail the exam.

For the topic at hand, we shouldn't look at Azure in an ad hoc manner, but in the context of its intimately related technologies. Specifically, we need to look at the development of IIS as it passed from a Ballmer to a non-Ballmer implementation.

Working with IIS

My life with IIS started around the IIS3 era. I still remember taking the IIS3 exam as an elective (with the TCP/IP exam) for my NT4 MCSE. Thus, I've seen the various large upgrades and incremental updates over a good stretch of time.

The upgrade from IIS6 to IIS7 was easily the largest IIS upgrade; it laid the groundwork for eventually stripping out the last vestige of Ballmerisms via its flexible APIs. Until IIS7, the biggest upgrade was a silly configuration system update (=IIS4 metabase). The IIS7 upgrade consisted of a systematic, paradigmatic shift. It was the "classical" to the "integrated" pipeline upgrade. The upgrade was so deep that you literally had to update your applications to add IIS7 support. After a while, all development was IIS7-first with IIS6 backwards compatibility added subsequently.

In practice, this classical -> integrated upgrade meant three things: First, instead of relying on the external ASP.NET ISAPI IIS plug-in, ASP.NET processing was integrated into IIS. No more interop. This made ASP.NET development more natural. It also gave .NET access to core extensibility functionality in IIS. You didn't need to whip out C++ for server extensibility. Second, if you had existing C++ functionality, you had easier access to IIS functionality with the new native IIS API. This second point is critical, because we see that the IIS7 upgrade wasn't just about .NET. Third, web.config was no longer about ASP.NET, but about IIS itself. This point is huge: the web.config format controls all of IIS7+, as seen in the global applicationHost.config file.

IIS6 used the ridiculous ISAPI nonsense to do just about everything, including calling ASP.NET. The .aspx extension was simply mapped to aspnet_isapi.dll. This wasn't removed from IIS7; it was just separated out and called "classic" mode.

In this IIS7 world, this meant that you literally had to add handler/module support for both IIS6 and IIS7 (more accurately, the classical and integrated models).

Furthermore, the low-level ASP.NET pipeline APIs were also affected. For my deeply low-level Themelia framework, I had to make checks between completely different pipelines. See the following snippet from my CoreModule (a typical module implementing the System.Web.IHttpModule interface):

View Themelia at Themelia Pro. View Themelia source at Themelia on Gitlab


    if (HttpRuntime.UsingIntegratedPipeline)
    {
        httpApplication.PostResolveRequestCache += OnProcessRoute;
        httpApplication.PostMapRequestHandler += OnSetHandler;
    }
    else
    {
        httpApplication.PostMapRequestHandler += OnProcessRoute;
        httpApplication.PostMapRequestHandler += OnSetHandler;
    }

Reference: CoreModule.cs

The installation was also different between IIS6 ("classical") and IIS7 ("integrated"):

For IIS6, I would add the module to system.web:

            <add name="Themelia" type="Themelia.Web.CoreModule, Themelia.Web"/>

For IIS7, I would add the module to system.webServer:

            <remove name="Session"/>
            <add name="Session" type="System.Web.SessionState.SessionStateModule" preCondition=""/>
            <add name="Themelia" type="Themelia.Web.CoreModule, Themelia.Web"/>

There is also the much more popular concept of a handler. My framework was meant to be a full IIS6-era platform takeover, so I used a more greedy module, but if you're only doing specific framework development, handlers are your choice. ASP.NET MVC, for example, uses handlers.

ASP.NET MVC is actually an excellent example. Assuming ASP.NET was properly installed, for IIS7, it was able to take advantage of the fact that IIS7 processed everything (e.g. /contact) as .NET (though you still needed runAllManagedModulesForAllRequests enabled to disable that slight perf boost). For IIS6, because it had to know when to call the ASP.NET ISAPI filter, you had to add a wildcard mapping to get the ISAPI filter to handle extensionless paths (again, e.g. /contact).

            <add verb="*" path="*.png" name="WatermarkHandler" type="WatermarkHandler"/>

Handlers and modules are still the standard way of tapping into the stream of raw power.

.NET is powerful. C# even has an unmanaged mode where you can crack open the covers (via unsafe mode) to do direct *pointer &manipulation. That said, the upgrade to IIS7 wasn't just about .NET; the upgrade provided a native IIS API as well.

Thus we enter the realm of C/C++ modules: Develop a Native C\C++ Module for IIS 7.0

By removing the ISAPI barrier and providing a clean, native IIS API, C++ developers could more easily connect existing C++ functionality to IIS. It also made IIS C++ code more expressive; familiar web terms like HttpContext, IHttpResponse, and BeginRequest (and other events) are all over IIS C++ code. No more DWORD WINAPI HttpExtensionProc(EXTENSION_CONTROL_BLOCK *pECB) nonsense.

Seriously. Review the C++ ISAPI docs. They're insane. 1990s Microsoft C++ was the worst code ever written. It's just plain satanic.

Consider the following IIS7-esque native C++ method:

    HRESULT __stdcall RegisterModule(
        DWORD                           dwServerVersion,
        IHttpModuleRegistrationInfo *   pModuleInfo,
        IHttpServer *                   pHttpServer
    );

That's exactly how you register your native modules in IIS7. That's not too terribly evil. You can see that it's registering a module, and bringing in pointers to core IIS entities.

This is also exactly how IIS handles Node hosting in Azure; it uses the iisnode module. You can see RegisterModule in main.cpp in iisnode:


If you review the following code from CProtocolBridge.cpp in iisnode, you'll see familiar things like IHttpContext and IHttpResponse:


It's clean interface programming.

Using iisnode

IIS handles most of its config with your application's web.config. While there are a few global config files, you get tremendous control with your own config.

Hosting a Node application in Azure is as simple as deploying an Azure Web App with a properly configured web.config.

You can follow along with the following activities by deploying the following repo -> https://gitlab.com/davidbetz/template-azure-node-api.

Per the previous explanation of IIS modules, you can see from the following web.config that iisnode is installed just as we would install our own handlers and modules. There are no hacks whatsoever.

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <system.webServer>
        <!-- leave false, you enable support in Azure -->
        <webSocket enabled="false" />
        <handlers>
          <add name="iisnode" path="server.js" verb="*" modules="iisnode"/>
        </handlers>
        <rewrite>
          <rules>
            <rule name="StaticContent">
              <action type="Rewrite" url="content{REQUEST_URI}"/>
            </rule>
            <rule name="DynamicContent">
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
              </conditions>
              <action type="Rewrite" url="server.js"/>
            </rule>
            <rule name="Redirect to https" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="Permanent" appendQueryString="false" />
            </rule>
          </rules>
        </rewrite>
        <security>
          <requestFiltering>
            <hiddenSegments>
              <remove segment="bin"/>
            </hiddenSegments>
          </requestFiltering>
        </security>
        <httpErrors existingResponse="PassThrough" />
      </system.webServer>
    </configuration>

The following section tells IIS to send every request for server.js, on any verb, to iisnode:

        <add name="iisnode" path="server.js" verb="*" modules="iisnode"/>

The following rewrite rule sends all traffic to server.js:

    <rule name="DynamicContent">
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
      </conditions>
      <action type="Rewrite" url="server.js"/>
    </rule>

The following doesn't have anything directly to do with iisnode; it excludes the content folder from iisnode processing:

    <rule name="StaticContent">
      <action type="Rewrite" url="content{REQUEST_URI}"/>
    </rule>

I find putting static files on your web server to be naive, but if you really don't want to use the Azure CDN, that is how you host static content.

The following merely redirects HTTP to HTTPS:

    <rule name="HTTP to HTTPS redirect" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTPS}" pattern="off" ignoreCase="true" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{REQUEST_URI}" redirectType="Permanent" />
    </rule>

Breaking iisnode

To prove that Node hosting is actually this basic, let's break it, then fix it.

First, let's see this work:


it works

That's the expected output from the application.

Now, let's go to web.config and break it:

    <rule name="DynamicContent">
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True"/>
      </conditions>
      <action type="Rewrite" url="server.js"/>
    </rule>

Change that Rewrite from server.js to server2.js:

Save. Refresh browser.



Go to Kudu. This is the .scm. URL. In this case it's the following:



Rename server.js to server2.js:

rename server.js to server2.js

Refresh again.


mmmk. Raw output.

The rewrite is telling everything to go to server.js, but nothing is processing it, so it just sends the file back.

This is exactly like accessing an old .aspx page and getting the raw ASP.NET webform code, because you forgot to install ASP.NET (and somehow managed to allow access to .aspx).

Now, let's fix this by telling our IIS module to process server2.js:

      <add name="iisnode" path="server2.js" verb="*" modules="iisnode"/>

Refresh and it's all well again:

it works

App Services and App Service Plan Mechanics

An explanation of Azure web apps using any web platform isn't complete without reviewing the mechanics of Azure web apps.

To begin, let's clarify a few Azure terms:

Azure App Service Plans are effectively managed VMs. You can scale these up and out. That is, you can turn an S1 into an S2 to double the RAM or you can turn a single instance into four. Because of this latter ability, App Service Plans are also known as server farms. In fact, when developing ARM templates, the type you use to deploy an App Service Plan is Microsoft.Web/serverfarms.

You do not deploy a series of plans to create a farm. Plans are farms. A plan with a size of 1 is just a farm with one instance. You are always dealing with herds; you are never dealing with pets. You scale your farm out; you scale all those instances up.

Azure Web Apps are also known as Web Sites and App Services. You deploy these, you back these up, and you add SSL to these. These are similar to IIS virtual applications. When developing ARM templates, the type is Microsoft.Web/sites.

You do need to remember the various synonyms for each; you will see them all.

Given this distinction and given the fact that a VM can have multiple IIS applications, you can imagine that you can host multiple Azure Web Apps on a single App Service Plan. This is true. You do NOT deploy a plan every time you deploy a site. You plan your CPU/RAM usage capacity ahead of time and deploy a full solution at once.

To visualize the App Service / App Service Plan distinction, review the following image.

Here I've provided information for three services over two service plans. The first two services share a service plan; the third service is on a different plan.

Notice that the services with the same service plan have the same machine ID and instance ID, but their IIS specifics are different. The third service plan has a different machine ID altogether.

What's so special about the types of Web Apps?

If this is all just the same IIS, what's with the various Node-specific web app types?

web app types

The answer is simple: they exist solely to confuse you.

Fine. Whatever. The different types are hello-world templates, but you're going to overwrite them via deployment anyway.

You can literally deploy a Node.js web app, then deploy an ASP.NET site on it. It's just IIS. The website deployment will overwrite the web.config with its own.

Given the previous explanations of IIS handler/modules, iisnode-as-module, and the service/plan distinction, you can see that there's no magic. There's nothing Microsoft-only about any of this.

You can always use the normal "Web App" one and be done with it.

Single App Solutions

My websites are generally ASP.NET or Python/Django, but my APIs are always Node (ever since someone at Microsoft with the IQ of a Pennsylvania speed limit decided to deprecate Web API and rebuild it as "MVC" in ASP.NET Core). There was a time when my APIs and my website required separate... just about everything. Nowadays I use nginx as a single point of contact to serve traffic from various internal sources: one source to handle the website as a whole (either a port or a Linux socket) and another to handle /api. This lets me use a single domain (therefore a single SSL cert) for my solution.
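As a sketch of that nginx single-point-of-contact setup (the domain, port, and socket path are hypothetical):

```shell
# Write a single-point-of-contact nginx config: /api goes to the Node
# process, everything else goes to the main site over a Linux socket.
# (Illustrative only; real paths and ports will differ.)
cat > /tmp/single-point.conf <<'EOF'
server {
    listen 443 ssl;
    server_name example.com;

    location /api {
        proxy_pass http://127.0.0.1:3000;
    }

    location / {
        proxy_pass http://unix:/run/site.sock;
    }
}
EOF
```

One domain, one cert, two back-ends.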

In Azure, this functionality is provided by Application Gateway.

Think back through all the mechanics we've covered so far: IIS can handle .NET and supports modules. iisnode is a module. IIS uses rewriting to send everything to server.js. iisnode handles all traffic sent to server.js.

Let's mix this up: instead of rewriting everything to server.js, let's only rewrite the /api branch of our URL.

To make this example a bit spicier, let's deploy an ASP.NET MVC application to our App Service, then send /api to Node.

To do this, go to the App Service (not the App Service Plan!), then select Deployment Options on the left.


In Choose source, select External Repository and put in the following:



A few minutes later, load the application normally. You'll see the "ASP.NET is a free web framework for..." propaganda.

Now go back into Kudu and the PowerShell Debug Console (explained earlier).

We need to do three things:

  • add our server.js
  • install express
  • tell web.config about server.js

To add server.js, go to site/wwwroot and type the following:

touch server.js

This will create the file. Edit the file and paste in the server code from the following:
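The repo's actual server.js isn't reproduced here, but as a hypothetical sketch, a minimal Express server answering only the /api branch might look like this (the /api/samples route and port handling are assumptions; creatable straight from the Kudu console):

```shell
# Hypothetical minimal server.js for the /api branch; the real one
# comes from the repo referenced above. Written to /tmp here for illustration.
cat > /tmp/server.js <<'EOF'
const express = require('express');
const app = express();

// iisnode provides the listen address via process.env.PORT (a named pipe on IIS)
const port = process.env.PORT || 3000;

app.get('/api/samples', (req, res) => {
    res.json([{ id: 1, name: 'sample' }]);
});

app.listen(port);
EOF
```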


Next we need to install express to handle the API processing.

For the sake of a demo, type the following:

npm install express


For the sake of your long-term sanity, create package.json (same way you created server.js), edit in contents, save, then run:

npm install

See sample package.json at:


Finally, edit web.config.

You need to splice the following config into the system.webServer section:

        <rule name="DynamicContent">
          <match url="^api/(.*)" />
          <action type="Rewrite" url="server.js"/>
        </rule>

      <add name="iisnode" path="server.js" verb="*" modules="iisnode"/>

Upon saving, access the web app root and /api/samples. Click around the web app to prove to yourself that it's not just a static page.

asp.net and node together

You have ASP.NET and Node.js in the same Azure web app. According to a lot of the FUD out there, this shouldn't be possible.

In addition to hosting your SPA and your APIs in the same place, you also don't need to play with CORS nonsense. You also don't need an Application Gateway (=Microsoft's nginx) to do the same thing.