Category Archives: Tutorials and guides

How to ask for a certificate the right way: CSR via Windows or Keytool with Subject Alternative Names (SANs)

Sooo you’re working in an enterprise and have to maintain an internal server. The security audit asks you to ensure all HTTP communications are encrypted, so you need to change to HTTPS. Boy is this SO not obvious. You’d think this should be quite easy by now, but there are A LOT of pitfalls in your way.

If you want the TL;DR version, to skip the explanation and go directly to the instructions, scroll down to the Mandalorian below. No hard feelings, honest 😊

Mistake #1: Use a self-signed certificate

Many, many, MANY tutorials you’ll find online are written with a developer in mind, leaving the maintainer/admin as an afterthought -if that. So what they care about is having some certificate, any certificate, as long as it works on the developer’s PC.

But what this certificate says is basically “I’m Jim because I say so”.

Do I need to say that it won’t work for other PCs? Yes? Well surprise, it won’t.

Mistake #2: Get a certificate from your PC’s certificate authority

Some people don’t seem to understand that this, while being a bit more complex, is basically the same as #1. What this certificate says is “I’m Jim because someone else who is also Jim says so”.

Yeah, no, it won’t work.

Mistake #3: Get a certificate from a trusted certificate authority using only a server name (or an alias).

Now we’re getting more serious.

Getting a certificate from a trusted certificate authority (CA for short) is the right thing to do. The certificate you get then says “I’m Jim because someone else who you already trust says so”.

So if you get a certificate that verifies you are, say, your server name or an alias, that’s good enough. Right?



If you run a public website and want your HTTPS URL to work without giving a certificate warning, that’s fine. You don’t need to do anything else. That’s why most tutorials that avoid the self-signed certificate mine stop here.

But remember, our scenario is that we’re working for an enterprise (a big company) and we’re maintaining an internal server. What that usually -not always, but a lot of the time- means is that communication to our server happens using different hostnames.

Let me give you my own example:

  • I run a service called Joint Information Module or JIM for short -that’s a totally real service name [1].
  • The server name is ch-zh-jim-01.mycompany.local.
  • The users use the web interface of the service by navigating to an alias URL -say, https://jim.mycompany.com.
  • Another application uses the REST API of the service using the server name (ch-zh-jim-01) without the domain name (mycompany.local).
  • The service uses a queuing software that is installed on the same server. We want to use the same certificate for this as well. The JIM service accesses the queues via https://localhost (and a port number).

Now, if the certificate you got says “ch-zh-jim-01.mycompany.local” and you try to access the server via https://ch-zh-jim-01, https://localhost, an alias like https://jim.mycompany.com or an IP address like, you’ll get a certificate error much like the following:

[Image: certificate error in Chrome]

Also, the REST API won’t work. The caller will throw an exception, e.g. javax.net.ssl.SSLHandshakeException in Java or System.Security.Authentication.AuthenticationException in DotNet. You can avoid this by forcing your code to not care about invalid certificates but this is a) lazy b) bad c) reaaaaaaaaaaly bad, seriously man, don’t do this unless the API you’re connecting to is completely out of your control (e.g. it belongs to a government).
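By the way, a quick way to see which names a server’s certificate actually covers is to connect to it and print the Subject Alternative Name extension. Here’s a small Powershell sketch that does just that (the hostname is from the example below, so replace it with yours):

$hostname = 'ch-zh-jim-01.mycompany.local'
$tcp = New-Object System.Net.Sockets.TcpClient($hostname, 443)
# accept any certificate; we only want to look at it, not validate it
$callback = [System.Net.Security.RemoteCertificateValidationCallback]{ param($s, $cert, $chain, $errors) $true }
$ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false, $callback)
$ssl.AuthenticateAsClient($hostname)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
# 2.5.29.17 is the OID of the Subject Alternative Name extension
$cert.Extensions | Where-Object { $_.Oid.Value -eq '2.5.29.17' } | ForEach-Object { $_.Format($true) }
$ssl.Dispose()
$tcp.Close()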

The correct way

So you need a certificate that is trusted and valid for all the names that will be used to communicate with your server. How do you do that? SIMPLEZ!

  1. Generate a CSR (a certificate signing request, which is a small file you send to the CA) with the alternative names (SANs) you need. That’s what I’ll cover here.
  2. Send it to a trusted CA
    1. either the one your own company operates or
    2. a commercial one (which you have to pay for), say Digicert.
  3. Get the signed certificate and install it on your software.

Important note: the CA you send the CSR to must support SANs. Not every CA supports this, for their own reasons. Make sure you read their FAQ or ask their helpdesk. Let’s Encrypt, a free and very popular CA, supports them.

Here I’ll show how you can generate a CSR, both in the “Microsoft World” (i.e. on a Windows machine) and in the “Java World” (i.e. on any machine that has Java installed).

A. Using Windows

Note that this is the GUI way to do this. There’s also a command line tool for this, certreq. I won’t cover it here as this post is already quite long, but you can read a nice guide here and Microsoft’s reference here. One thing to note though is that it’s a bit cumbersome to include SANs with this method.
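Still, for the sake of completeness, here’s a rough sketch of what a certreq INF file with SANs could look like -all values follow the fictional example used below, so adjust everything to your needs:

; request.inf
[Version]
Signature = "$Windows NT$"

[NewRequest]
Subject = "CN=ch-zh-jim-01.mycompany.local, OU=IT, O=mycompany, L=Zurich, ST=ZH, C=CH"
KeyLength = 2048
Exportable = TRUE
MachineKeySet = TRUE
RequestType = PKCS10

[Extensions]
; 2.5.29.17 is the OID of the Subject Alternative Name extension
2.5.29.17 = "{text}"
_continue_ = "dns=ch-zh-jim-01.mycompany.local&"
_continue_ = "dns=ch-zh-jim-01&"
_continue_ = "dns=localhost"

[EnhancedKeyUsageExtension]
OID= ; Server Authentication
OID= ; Client Authentication

You’d then generate the CSR with certreq -new request.inf request.csr.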

  1. Open C:\windows\System32\certlm.msc (“Local Computer Certificates”).
  2. Expand “Personal” and right click on “Certificates”. Select “All tasks” > “Advanced Operations” > “Create Custom Request”.
  3. In the “Before you begin” page, click Next.
  4. In the “Select Certificate Enrollment Policy” page, click “Proceed without enrollment policy” and then Next.
  5. In the “Custom Request” page, leave the defaults (CNG key / PKCS #10) and click Next.
  6. In the “Certificate Information” page, click on Details, then on Properties.
  7. In the “General” tab:
    1. In the “Friendly Name” field write a short name for your certificate (that has nothing to do with the server). E.g. cert-jim-05-2021.
    2. In the “Description” field, write a description, duh 😊
  8. In the “Subject” tab:
    1. Under “Subject Name” make sure the “Type” is set to “Full DN” and in the Value field paste the following (without the quotes): “CN=ch-zh-jim-01.mycompany.local, OU=IT, O=mycompany, L=Zurich, ST=ZH, C=CH” and click “Add”. Here:
      • Instead of “ch-zh-jim-01.mycompany.local” enter your full server name, complete with domain name. You can get it by typing ipconfig /all in a command prompt (combine Host Name and Primary Dns Suffix).
      • Instead of “IT” and “mycompany” enter your department and company name respectively.
      • Instead of “Zurich”, “ZH” and “CH” enter the city, state (or Kanton or Bundesland or region or whatever) and country respectively.
    2. Under “Alternative Name”:
      1. Change the type to “IP Address (v4)” and in the Value field type the IP address used to reach the server, e.g. “” for local (loopback) access. Click “Add”.
      2. Change the type to “DNS” and in the Value field type the following, clicking “Add” every time:
        • localhost
        • ch-zh-jim-01 (i.e. the server name without the default domain)
        • jim.mycompany.com (i.e. the alias that will normally be used)
        • (add as many names as needed)

Important note: all names you enter there must be resolvable (i.e. there’s a DNS entry for the name) by the CA that will generate your certificate. Otherwise there’s no way they can confirm you’re telling the truth and the request will most likely be rejected.
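A quick way to check that the names at least resolve from your machine is Resolve-DnsName (a sketch; use your own list of names -and remember, it’s the CA that ultimately needs to resolve them, not just you):

'ch-zh-jim-01.mycompany.local', 'ch-zh-jim-01', 'jim.mycompany.com' | ForEach-Object {
    # SilentlyContinue so one missing DNS entry doesn't stop the loop
    Resolve-DnsName -Name $_ -ErrorAction SilentlyContinue | Select-Object Name, Type, IPAddress
}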

It should end up looking like this:

  9. In the “Extensions” tab, expand “Extended Key Usage (application policies)”. Select “Server Authentication” and “Client Authentication” and click “Add”.
  10. In the “Private Key” tab, expand “Key Options”.
    1. Set the “Key Size” to 2048 (recommended) or higher.
    2. Check the “Mark private key exportable” check box.
    3. (optional, but HIGHLY recommended) Check the “Strong private key protection” check box. This will make the process ask for a certificate password. Avoid only if your software doesn’t support this (although if it doesn’t, you really should question whether you should be using it!).

At the end, click OK, then Next. Provide a password (make sure you keep it somewhere safe NOT ON A TEXT FILE ON YOUR DESKTOP, YOU KNOW THAT RIGHT???) and save the CSR file. That’s what you have to send to your CA, according to their instructions.

B. Using Java

Here the process is sooo much simpler:

  1. Open a command prompt (I’m assuming your Java/bin is in the system path; if not, cd to the bin directory of your Java installation). You should have enough permissions to write to your Java security dir; in Windows, that means that you need an administrative command prompt.
  2. Create the certificate. Type the following, in one line, but given here split for clarity. Replace as explained below.

keytool -genkeypair 
-alias cert-jim-05-2021 
-dname "CN=ch-zh-jim-01.mycompany.local, OU=IT, O=mycompany, L=Zurich, ST=ZH, C=CH" 
-keyalg RSA
-keysize 2048
-storepass changeit
-keypass MYSUPERSECRETPASSWORD
  3. Create the certificate signing request (CSR). Type the following, in one line, but given here split for clarity. Replace as explained below.

keytool -certreq 
-file c:\temp\cert-jim-05-2021.csr 
-alias cert-jim-05-2021 
-dname "CN=ch-zh-jim-01.mycompany.local, OU=IT, O=mycompany, L=Zurich, ST=ZH, C=CH" 
-ext "SAN=IP:,DNS:localhost,DNS:ch-zh-jim-01,DNS:jim.mycompany.com" 
-ext "EKU=serverAuth,clientAuth"
-storepass changeit 
-keypass MYSUPERSECRETPASSWORD

In the steps above, you need to replace:

  • “cert-jim-05-2021”, both in the filename and the alias, with your certificate name (which is the short name for your certificate; this has nothing to do with the server itself).
  • “ch-zh-jim-01.mycompany.local” with the full DNS name of your server.
  • “IT” and “mycompany” with your department and company name respectively.
  • “Zurich”, “ZH” and “CH” with your city, state (or Kanton or Bundesland or region or whatever) and country respectively.
  • “ch-zh-jim-01” with your server name (without the domain name).
  • “jim.mycompany.com” with the DNS alias you’re using (this name is just an example). You can add as many as needed, e.g. “DNS:jim.mycompany.com,DNS:jim2.mycompany.com,DNS:jim3.mycompany.com”.
  • “” with the IP address used to reach the server, if any (otherwise drop the IP: part).

Important note: all names you enter there must be resolvable (i.e. there’s a DNS entry for the name) by the CA that will generate your certificate. Otherwise there’s no way they can confirm you’re telling the truth and the request will most likely be rejected.

  • “changeit” is the default password of the Java certificate store (JAVA_HOME/jre/lib/security/cacerts). It should be replaced by the actual password of the certificate store you’re using. But 99.999% of all java installations never get this changed 😊 so if you don’t know otherwise, leave it as it is.
  • “MYSUPERSECRETPASSWORD” is a password for the certificate. Make sure you keep it somewhere safe NOT ON A TEXT FILE ON YOUR DESKTOP, YOU KNOW THAT RIGHT???

That’s it. The CSR is saved in the path you specified (in the “-file” option). That’s what you have to send to your CA, according to their instructions.
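One last tip: before sending the CSR, you can double-check that the SANs actually made it in by printing the request with keytool:

keytool -printcertreq -file c:\temp\cert-jim-05-2021.csr

Look for the SubjectAlternativeName extension in the output.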


[1] no it’s not, c’mon

RabbitMQ: How to move configuration, data and log directories on Windows

A good part of my job has to do with enterprise messaging. When a piece of data -a message- needs to be sent from, say, an invoicing system to an accounting system and then to a customer relationship system and then to the customer portal… it has to navigate treacherous waters.

Avast ye bilge-sucking scurvy dogs! A JSON message from accounting says they hornswaggled 1000 doubloons! Aarrr!!!

So we need to make sure that whatever happens, say if a system is overloaded while receiving the message, the message will not be lost.

A key component in this is message queues (MQ), like RabbitMQ. An MQ plays the middleman; it receives a message from a system and stores it reliably until the next system has confirmed that it picked it up.

My daily duties include setting up, configuring and maintaining a few RabbitMQ instances. It works great! Honestly, so far -for loads up to a couple of hundred messages per second- I haven’t even had the need to do any serious tuning.

But one thing that annoys me on Windows is that, after installation, the location of everything except the binaries -configuration, data, logs- is under the profile dir of the user (C:\Users\USERNAME\AppData\Roaming\RabbitMQ) that did the installation, even if the service runs as LocalSystem. Not very good, is it?

Therefore I’ve created this script to help me. The easiest way to use it is run it before you install RabbitMQ. Change the directories in this part and run it from an admin powershell:

# ========== Customize here ==========
$BaseLocation = "C:\mqroot\conf"
$DbLocation = "C:\mqroot\db"
$LogLocation = "C:\mqroot\log"
# ====================================

Then just reboot and run the installation normally; when it starts, RabbitMQ will use the directories you specified.

You can also do it after installation, if you have a running instance and want to move it. In this case do the following (you can find these steps also in the script):

  1. Stop the RabbitMQ service.
  2. From Task Manager, kill the epmd.exe process if present.
  3. Go to the existing base dir (usually C:\Users\USERNAME\AppData\Roaming\RabbitMQ)
    and move it somewhere else (say, C:\temp).
  4. Run this script (don’t forget to change the paths).
  5. Reboot the machine
  6. Run the “RabbitMQ Service (re)install” (from Start Menu).
  7. Copy the contents of the old log dir to $LogLocation.
  8. Copy the contents of the old db dir to $DbLocation.
  9. Copy the files on the root of the old base dir (e.g. advanced.config, enabled_plugins) to $BaseLocation.
  10. Start the RabbitMQ service.

Here’s the script. Have fun 🙂

# Source: DotJim blog (https://dandraka.com)
# Jim Andrakakis, March 2021

# What this script does is:
#   1. Creates the directories where the configuration, queue data and logs will be stored.
#   2. Downloads a sample configuration file (it's necessary to have one).
#   3. Sets the necessary environment variables.

# If you're doing this before installation: 
# Just run it, reboot and then install RabbitMQ.

# If you're doing this after installation, i.e. if you have a 
# running service and want to move its files:
#   1. Stop the RabbitMQ service
#   2. From Task Manager, kill the epmd.exe process if present
#   3. Go to the existing base dir (usually C:\Users\USERNAME\AppData\Roaming\RabbitMQ)
#      and move it somewhere else (say, C:\temp).
#   4. Run this script.
#   5. Reboot the machine
#   6. Run the "RabbitMQ Service (re)install" (from Start Menu)
#   7. Copy the contents of the old log dir to $LogLocation.
#   8. Copy the contents of the old db dir to $DbLocation.
#   9. Copy the files on the root of the old base dir (e.g. advanced.config, enabled_plugins) 
#      to $BaseLocation.
#   10. Start the RabbitMQ service.

# ========== Customize here ==========

$BaseLocation = "C:\mqroot\conf"
$DbLocation = "C:\mqroot\db"
$LogLocation = "C:\mqroot\log"

# ====================================

$exampleConfUrl = ""

$ErrorActionPreference = "Stop"

$dirList = @($BaseLocation, $DbLocation, $LogLocation)
foreach($dir in $dirList) {
    if (-not (Test-Path -Path $dir)) {
        New-Item -ItemType Directory -Path $dir
    }
}

# If this fails (e.g. because there's a firewall) you have to download the file 
# from $exampleConfUrl manually and copy it to $BaseLocation\rabbitmq.conf
try {
    Invoke-WebRequest -Uri $exampleConfUrl -OutFile ([System.IO.Path]::Combine($BaseLocation, "rabbitmq.conf"))
}
catch {
    Write-Host "(!) Download of conf file failed. Please download the file manually and copy it to $BaseLocation\rabbitmq.conf"
    Write-Host "(!) Url: $exampleConfUrl"
}

&setx /M RABBITMQ_BASE $BaseLocation
&setx /M RABBITMQ_CONFIG_FILE "$BaseLocation\rabbitmq"
# queue data (the mnesia db) goes to its own dir
&setx /M RABBITMQ_MNESIA_BASE $DbLocation
&setx /M RABBITMQ_LOG_BASE $LogLocation

Write-Host "Finished. Now you can install RabbitMQ."

SQL Server: How to shrink your DB Logs (without putting your job at risk)

This post is mostly a reminder for myself 🙂

When your SQL Server DB log files are growing and your disk is close to being full (or, as it happened this morning, fill up completely thus preventing any DB operation whatsoever, bringing the affected system down!) you need to shrink them.

What this means, basically, is that you create a backup (do NOT skip that!) and then you delete information that allows you to recover the database to any point in time before the backup. That’s what SET RECOVERY SIMPLE & DBCC SHRINKFILE do. And since you kept a backup, you no longer need this information. You do need it for operations after the backup though; that’s why we go back to full recovery mode with SET RECOVERY FULL at the end.

So what you need is to login to your SQL Server with admin rights and:

USE DatabaseName;
GO
BACKUP DATABASE DatabaseName
TO DISK = 'C:\dbbackup\DatabaseName.bak'
   WITH FORMAT,
      MEDIANAME = 'DatabaseNameBackups',
      NAME = 'Full Backup of DatabaseName';
GO
ALTER DATABASE DatabaseName SET RECOVERY SIMPLE;
GO
DBCC SHRINKFILE ('DatabaseName_Log', 10);
GO
ALTER DATABASE DatabaseName SET RECOVERY FULL;
GO

Notice the 10 there -that’s the size, in MB, that the DB Log file will shrink to. You probably need to change that to match your DB needs. Also, the DatabaseName_Log is the logical name of your DB Log. You can find it in the DB properties. You probably also need to change the backup path from the example C:\dbbackup\DatabaseName.bak.
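By the way, if you prefer a query over clicking through the GUI, something like this lists the logical names and current sizes of your DB files:

-- size is reported in 8 KB pages, hence the conversion to MB
USE DatabaseName;
SELECT name AS logical_name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;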

Powershell & Microsoft Dynamics CRM: get results and update records with paging

I’ve written before an example on how to use Powershell and FetchXml to get records from a Dynamics CRM instance. But there’s a limit, by default 5000 records, on how many records CRM returns in a single batch -and for good reason. There are many blog posts out there on how to increase the limit or even turn it off completely but this is missing the point: you really really really don’t want tens or hundreds of thousands -or, god forbid, millions- of records being returned in a single operation. That would probably fail for a number of reasons, not to mention it would slow the whole system to a crawl for a very long time!

So we really should do it the right way, which is to use paging. It’s not even hard! It’s basically almost the same thing, you just need to add a loop.

That’s the code I wrote to update all active records (the filter is in the FetchXml, so you can just create yours and the code doesn’t change). I added a progress indicator so that I get a sense of performance.

# Source: DotJim blog (https://dandraka.com)
# Jim Andrakakis, June 2020
# Prerequisites:
# 1. Install PS modules
#    Run the following in a powershell with admin permissions:
#       Install-Module -Name Microsoft.Xrm.Tooling.CrmConnector.PowerShell
#       Install-Module -Name Microsoft.Xrm.Data.PowerShell -AllowClobber
# 2. Write password file
#    Run the following and enter your user's password when prompted:
#      Read-Host -assecurestring | convertfrom-securestring | out-file C:\usr\crm\crmcred.pwd
# ============ Constants to change ============
$pwdFile = "C:\usr\crm\crmcred.pwd"
$username = ""
$serverurl = ""
$fetchxml = "C:\usr\crm\all_active.xml"
# =============================================

$ErrorActionPreference = "Stop"

# ============ Login to MS CRM ============
$password = get-content $pwdFile | convertto-securestring
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $username,$password
    $connection = Connect-CRMOnline -Credential $cred -ServerUrl $serverurl
    Write-Host $_.Exception.Message 
if($connection.IsReady -ne $True)
    $errorDescr = $connection.LastCrmError
    Write-Host "Connection not established: $errorDescr"
    Write-Host "Connection to $($connection.ConnectedOrgFriendlyName) successful"

# ============ Fetch data ============
[string]$fetchXmlStr = Get-Content -Path $fetchxml

$list = New-Object System.Collections.ArrayList
# Be careful, NOT zero!
$pageNumber = 1
$pageCookie = ''
$nextPage = $true

while ($nextPage)
{
    $StartDate1 = (Get-Date)

    if ($pageNumber -eq 1) {
        $result = Get-CrmRecordsByFetch -conn $connection -Fetch $fetchXmlStr 
    }
    else {
        $result = Get-CrmRecordsByFetch -conn $connection -Fetch $fetchXmlStr -PageNumber $pageNumber -PageCookie $pageCookie
    }

    $EndDate1 = (Get-Date)
    $ts1 = New-TimeSpan -Start $StartDate1 -End $EndDate1

    $list.AddRange($result.CrmRecords)

    Write-Host "Fetched $($list.Count) records in $($ts1.TotalSeconds) sec"    

    $pageNumber = $pageNumber + 1
    $pageCookie = $result.PagingCookie
    $nextPage = $result.NextPage
}

# ============ Update records ============

$StartDate2 = (Get-Date)

$i = 0
foreach($rec in $list) {
    $crmId = $rec.accountid
    $entity = New-Object Microsoft.Xrm.Sdk.Entity("account")
    $entity.Id = [Guid]::Parse($crmId)
    $entity.Attributes["somestringfieldname"] = "somevalue"
    $entity.Attributes["somedatefieldname"] = [datetime]([DateTime]::Now.ToString("u"))
    $connection.Update($entity)
    $i = $i+1
    # this shows progress and time every 1000 records
    if (($i % 1000) -eq 0) {
        $EndDate2 = (Get-Date)
        $ts2 = New-TimeSpan -Start $StartDate2 -End $EndDate2
        Write-Host "Updating $i / $($list.Count) in $($ts2.TotalSeconds) sec"
    }
}

$EndDate2 = (Get-Date)
$ts2 = New-TimeSpan -Start $StartDate2 -End $EndDate2

Write-Host "Updated $($list.Count) records in $($ts2.TotalSeconds) sec"

For my purposes I used the following FetchXml. You can customize it or use CRM’s advanced filter to create yours:

<fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
  <entity name="account">
    <attribute name="accountid" />
    <order attribute="accountid" descending="false" />
    <filter type="and">
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>

Something to keep in mind here is to minimize the amount of data being queried from CRM’s database and then downloaded. Since we’re talking about a lot of records, it’s wise to check your FetchXml and eliminate all fields that are not needed.


How to upgrade Ubuntu from an unsupported version

Some time ago, a friend of mine (the one of “How I fought off a Facebook hacker” fame) had problems with his Windows laptop, basically the machine became next to useless. Sadly, while I generally like Windows (there are exceptions) this is something that happens all too often. So I solved it by installing Ubuntu, and even though he’s not technically proficient he’s very happy -the machine isn’t exactly lightning fast, but it works and it’s stable.

But a small mistake I made was installing the latest-greatest Ubuntu version available at the time, 19.04. Now for those who don’t know, Ubuntu has some releases that are supported for a long time, called LTS for Long Term Support, and the ones in between that are… not. Full list here.

So as of January 2020, 19.04 went into End-Of-Life status, meaning you can’t download and install updates the normal way (apt upgrade) any more. And without updates, you can’t upgrade to a newer release (do-release-upgrade) either. The first symptom is that, while trying to install updates, he was getting errors similar to the following:

E: Unable to locate package XXX

An additional problem is that we’re in different countries, so I couldn’t just do the usual routine backup-format-reinstall everything 🙂

But as usual, Google is your friend! That’s how I solved it from the command line:

sudo sed -i -re 's/([a-z]{2}\.)?archive.ubuntu.com|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
sudo apt update
sudo apt upgrade
# ...wait for like 30min, then restart...
sudo do-release-upgrade
# ...wait for a couple of hours, restart

What does this do? Well everything except the first line is the standard procedure to upgrade: update (i.e. refresh info for) the software repositories, upgrade (i.e. download and install the updates), restart and then do-release-upgrade which upgrades the complete Ubuntu system -always to the latest LTS release.

But the “magic” is in the first line (and let’s give credit where it’s due). This changes the list that keeps the repositories location (/etc/apt/sources.list) from the normal locations (under archive.ubuntu.com or security.ubuntu.com) to the “historic” servers, old-releases.ubuntu.com. For more info, see “Update sources.list” here.

So after that is done, apt upgrade can now install whatever updates are available and then do-release-upgrade can do its job.
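If you’re curious (or careful), you can double-check what the sed did before upgrading:

# which release are we on?
lsb_release -a
# right after the sed, all repository lines should point to old-releases.ubuntu.com
grep -v '^#' /etc/apt/sources.list | grep ubuntu.com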

Design by contract Tutorial, part 6/6: [Swagger] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Sound check time: did it really work?

We already saw that the health page works. But it’s time to check if our objective was met.

Remember, the goal is to let developers outside our network use the mock service to help them with their implementation. To see if this works as intended, we can use the swagger file we created and the online Swagger editor.

So open the CustomerTrust.yaml file with a text editor, copy all its contents, navigate your browser to editor.swagger.io, delete the default content and paste ours. You’ll get this:

Select the mock service from the drop down, click on one of the services, click “Try it out” and then “Execute”. After a few seconds you… will get an error, something like “NetworkError when attempting to fetch resource”.

Why? Well it’s the browser preventing us from doing so. If you press F12 and watch the Console, you’ll see something like “Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at xxxxxx”. More info here, but in short, it’s a security measure. You can either disable it in your browser’s settings (WHICH IS A BAD IDEA) or use the curl utility, which you can download here for your operating system.

[EDIT] or I could not be lazy, go back to the wiremock config and set the CORS-related HTTP response headers properly, as explained here, here and specifically for wiremock here.
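For reference, the response part of a wiremock mapping with CORS headers could look roughly like this (the header values and the body file name are just a sketch; see the links above for the details, and lock Access-Control-Allow-Origin down to a specific origin if you can):

"response": {
    "status": 200,
    "headers": {
        "Content-Type": "application/json",
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        "Access-Control-Allow-Headers": "Authorization, Content-Type"
    },
    "bodyFileName": "trustlevel.json"
}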

So after you install curl, you can get the command line from the Swagger Editor:

For GET, the command should look similar to this (the hostname is ours from part 4; the path comes from your swagger file):

curl -X GET "https://graubfinancemock.azurewebsites.net/api/v1/customertrust/CHE-123.456.789" -H "accept: application/json"

Whichever method you pick -GET or POST- we’ll add a -v at the end (for more info). So run it and at the end you’ll get this:

< HTTP/1.1 401 Unauthorized
* Connection #0 to host left intact

Makes sense right? The mock service expects an authorization token which we haven’t provided. Let’s add this:

curl -X GET "https://graubfinancemock.azurewebsites.net/api/v1/customertrust/CHE-123.456.789" -H "accept: application/json" -H "Authorization: Bearer 1234" -v

And now you’ll get the json:

  "name": "GlarusAdvertising AG",
  "taxid": "CHE-123.456.789",
  "trustlevel": "OK"

Likewise, let’s try the POST:

curl -X POST "https://graubfinancemock.azurewebsites.net/api/v1/customertrust" -H "accept: */*" -H "Content-Type: application/json" -d "{\"reportid\":\"2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e\",\"reporttaxid\":\"CHE-123.456.789\",\"taxid\":\"CHE-123.456.789\",\"trustlevel\":\"OK\"}" -H "Authorization: Bearer 1234" -v

And you should see the id of the request in the json response:

    "reportid": "2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e",
    "status": "OK"

A small note on Windows: if you try this in Powershell, it seems that the json escaping is acting funny. If you try it through cmd, it works just fine.
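Alternatively, if you’d rather stay in Powershell, Invoke-RestMethod sidesteps the escaping issue altogether. A sketch, using the same assumed URL as the curl example above:

$body = @{
    reportid    = '2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e'
    reporttaxid = 'CHE-123.456.789'
    taxid       = 'CHE-123.456.789'
    trustlevel  = 'OK'
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri 'https://graubfinancemock.azurewebsites.net/api/v1/customertrust' `
    -Headers @{ Authorization = 'Bearer 1234' } `
    -ContentType 'application/json' `
    -Body $body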

That’s all folks

So now our kind-of-fictional-but-actually-quite-real developers can access the service and test their code against it. And whenever we make a change and push it, the service is updated automatically. Not bad, is it? 🙂

That concludes this guide and its introductory journey in the world of Devops (or, as a friend of mine more accurately calls the field, SRE -short for “Site Reliability Engineering”).

I hope you enjoyed it as much as I did writing it -I really did. I’m sure you’ll have many, many questions which I can try to answer -but no promises 🙂 You can ask here in the comments (better) or in my twitter profile @JimAndrakakis.


I’ve put all the code in a github repository, here; the only change is that I moved the pipeline yaml in the devops folder and removed my name. You can also find the docker image in docker hub, here.

Have fun coding!

Design by contract Tutorial, part 5/6: [Azure Devops] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Put a little magic in your life: create the auto-deploy pipeline

We’re close to the end of our journey.

So far we’ve basically done everything we need. In this last step we’ll also make it happen automagically: we want to be able to do changes to our code (which, in our scenario, is the wiremock service configuration) and have them get deployed on Azure without us having to do anything.

We’ll use Azure Devops -formerly called Visual Studio Team Services (VSTS) or, even earlier, Team Foundation Server (TFS) Online- for this. There are other services we could use as well, like Github or Bitbucket, and they’re equally good.

But whatever your service, in general this process is called CI/CD, short for Continuous Integration / Continuous Delivery. Simply put, CI means that your code is built and tested as soon as you push changes to source control. If the build or any test is not successful, the code changes are rolled back, guaranteeing (well, as far as your tests are concerned) that the code in the repo is correct. CD is the next step, taking the build and deploying it, usually to a test server first, then to a staging one and then to production.

So as a first step, create a free account in Azure Devops. You can use the same Microsoft account you used in Azure or a different one. Once you’ve logged in, create a new project. Let’s call it GraubFinanceMockService.

By default we get a Git repository with the same name as the project. Let’s clone it on our development PC (I’m using C:\src\test, but feel free to use whatever you like).

Make sure you have git installed (or download it from here), then open a command prompt and type (replace the URL with your details):

cd c:\src\test
git clone https://YOURUSERNAME@dev.azure.com/YOURUSERNAME/GraubFinanceMockService/_git/GraubFinanceMockService

You’ll be asked for credentials of course (you might want to cache them). After that you’ll get a folder named GraubFinanceMockService. Move in there the folders we created during our previous steps: openapi, wiremock and devops.

Additionally, to avoid committing unwanted files in the repository, create an empty text file on the top folder named .gitignore, open it with a text editor and paste the following:

# terraform work files
.terraform/
*.plan
*.tfstate
*.tfstate.backup
Now we’re ready to commit for the first time. Type the following in the command line:

cd c:\src\test\GraubFinanceMockService
git add .
git commit -m 'initial commit'
git push

And our code is there:

Now we’ll start setting up our build. “But wait”, you might reasonably ask, “we don’t really have any code to build, that’s no C# or Java or whatever project, why do we need a build?”.

Well, we do need to build our docker image, and push it to Docker Hub. This way, when we change anything in our wiremock config, we’ll get a new image to reflect that.

But before we continue, remember that we have some variables in our tfvars files that we need to replace? Now it’s time to do that. Under Pipelines go to Library, then (+) Variable Group. Name the variable group azureconnectioncredentials, then add four variables (click the lock to set them as secret!):

subscription_id
tenant_id
client_id
client_secret
Be sure to check that “Allow access from all pipelines” is enabled.

But how do you get these values? From Azure CLI. The process is described by Microsoft here, but in short, open a command prompt (remember that from the previous step, we are logged in with Azure CLI already) and write:

az account show
# note the id, that's the subscription id, and the tenant id
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SUBSCRIPTIONID"

You’ll get something like the following, which you need to keep secret (the recommended way is to use a password manager):

  "appId": "XXXXXXXXXX",
  "displayName": "azure-cli-YYYYYY",
  "name": "http://azure-cli-2019-YYYYYY",
  "password": "ZZZZZZZZZZZZZZZZ",
  "tenant": "TTTTTTTTTTTT"

So paste these values to the respective variables in Azure Devops. You got the subscription id and tenant id from the first command (az account show). From the second (az ad sp create-for-rbac) get the appId and put it in the client_id variable, and get the password and put it in the client_secret variable. At the end, click Save.

You did set the variables to secret right? 🙂
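By the way, you can take the service principal for a test drive before the pipeline uses it, e.g. (replace the placeholders with the values from above):

az login --service-principal -u XXXXXXXXXX -p ZZZZZZZZZZZZZZZZ --tenant TTTTTTTTTTTT
az account show
az logout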

We need one more variable group for the not-secret stuff. Create a new variable group, name it azurenames and add the following variables (here with sample values):

azurelocation = westeurope
basename = graubfinancemock
dockerimage = YOURUSERNAME/graubfinancemock
dockerimageversion = latest
envtype = test
SKUsize = B1
SKUtier = Basic

Also here we need “Allow access from all pipelines” to be enabled.

Now we’re ready to create a new pipeline. In Azure Devops go to Pipelines > Builds > New Pipeline. You can click “Use the classic editor” if you’re not comfortable with YAML, but here I’ll use Azure Repos Git (YAML) as I can copy paste the result here. Select your code repository and then, to see how it works step by step, Starter Pipeline.

Our new build will get the sources in a directory on the build server, but nothing more than that. Let’s start telling the build server what to do.

First, we need to tell it to use our variable groups. Delete whatever default code is there and paste the following:


trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: azureconnectioncredentials
- group: azurenames

We don’t really need distinct stages, we’ll just set up two jobs, build and deploy.

Now let’s get it to create the docker image.

jobs:
  - job: Build
    displayName: Build docker image
    steps:

Now click on Show Assistant on the right, search for Docker and pick Docker (description: Build or push Docker images etc etc). Connect your container registry as follows:

In Container repository you enter the full name of the docker hub repository (YOURUSERNAME/graubfinancemock in our example) but even better, we can use our variable (same for the desired version). So enter $(dockerimage), then change the tags to:

$(dockerimageversion)
Leave everything else to default values, click Add. Under steps you should have the following:

    - task: Docker@2
      enabled: true
      inputs:
        containerRegistry: 'dockerhub-graubfinancemock'
        repository: '$(dockerimage)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(dockerimageversion)
Now click Save and Run. Et voila:

Having built our service, let’s deploy it. Paste the following at the end of the YAML file:

  - job: Deploy
    displayName: Deploy to Azure
    steps:

Now we need to run our cleanup script, then replace the variables in the tfvars files, then run terraform. Search for task Azure CLI, then configure the Azure subscription. Script type is Powershell Core, script location is Script Path, script path is $(Build.SourcesDirectory)/devops/cleanup.ps1 and script arguments is “-rgName '$(baseName)'” (without the double quotes, but note the single quotes). But remember, this is not on the root of our code repository. Click on Advanced and in working directory enter “$(Build.SourcesDirectory)/devops” (without the double quotes). You should end up with the following:

      - task: AzureCLI@2
        inputs:
          azureSubscription: 'Free Trial (XXXXXXXX)'
          scriptType: 'pscore'
          scriptLocation: 'scriptPath'
          scriptPath: '$(Build.SourcesDirectory)/devops/cleanup.ps1'
          arguments: '-rgName ''$(baseName)'''
          workingDirectory: '$(Build.SourcesDirectory)/devops/'

Time to replace the variable values. Add another task named Replace Tokens. Change the target files to **/*.tfvars, uncheck the BOM (it creates problems sometimes). Done.

      - task: replacetokens@3
        inputs:
          targetFiles: '**/*.tfvars'
          encoding: 'auto'
          writeBOM: false
          actionOnMissing: 'warn'
          keepToken: false
          tokenPrefix: '#{'
          tokenSuffix: '}#'

Next up, terraform. We have the batch file ready, but we need terraform.exe to be available. So add a task named Terraform tool installer. Change the version to the latest (find it here, at the time of writing it’s 0.12.15).

      - task: TerraformInstaller@0
        inputs:
          terraformVersion: '0.12.15'

Everything’s ready to run our batch script. As we need Azure CLI to be available for terraform to work the way we want to, add another Azure CLI task. Pick the Azure subscription from the drop down (you don’t have to configure it again). Script type is Powershell Core, script location is Script Path, script path is $(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1 (it’s the one that uses the replaced .tfvars file). Click on Advanced and in working directory enter “$(Build.SourcesDirectory)/devops” (without the double quotes). At the end it should look like this:

      - task: AzureCLI@2
        inputs:
          azureSubscription: 'Free Trial (XXXXXXXX)'
          scriptType: 'pscore'
          scriptLocation: 'scriptPath'
          scriptPath: '$(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1'
          workingDirectory: '$(Build.SourcesDirectory)/devops'

We’re ready. The build definition, now complete, should look like this:

name: GraubFinanceMockServiceAutoDeploy

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: azureconnectioncredentials
- group: azurenames

jobs:
  - job: Build
    displayName: Build docker image
    steps:
    - task: Docker@2
      enabled: true
      inputs:
        containerRegistry: 'dockerhub-graubfinancemock'
        repository: '$(dockerimage)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(dockerimageversion)
  - job: Deploy
    displayName: Deploy to Azure
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'Free Trial (XXXXXXX)'
        scriptType: 'pscore'
        scriptLocation: 'scriptPath'
        scriptPath: '$(Build.SourcesDirectory)/devops/cleanup.ps1'
        arguments: '-rgName ''$(baseName)'''
        workingDirectory: '$(Build.SourcesDirectory)/devops/'
    - task: replacetokens@3
      inputs:
        targetFiles: '**/*.tfvars'
        encoding: 'auto'
        writeBOM: false
        actionOnMissing: 'warn'
        keepToken: false
        tokenPrefix: '#{'
        tokenSuffix: '}#'
    - task: TerraformInstaller@0
      inputs:
        terraformVersion: '0.12.15'
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'Free Trial (XXXXXXX)'
        scriptType: 'pscore'
        scriptLocation: 'scriptPath'
        scriptPath: '$(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1'
        workingDirectory: '$(Build.SourcesDirectory)/devops'

Did it work? Navigate your browser to https://graubfinancemock.azurewebsites.net/servicehealth and:

Ta da!

We’re basically done. Let’s see how helpful our service is to our developers.

Design by contract Tutorial, part 4/6: [Terraform] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Infrastructure as Code: time to ship these containers

So we built our mock service and we dockerized it. Next up, run the container on the cloud.

Remember that in our scenario -and in my everyday work life- the mock service has to be accessible from people outside our local network. Of course, one way to do this would be to run it in-house and open a hole in your firewall.

…if you didn’t scream “BAD IDEA!” when you read the last sentence, now it would be the right time to do so ๐Ÿ™‚

So, cloud to the rescue. We’ll use Azure here; we’ll create a subscription and then deploy with a terraform infrastructure-as-code (IaC) configuration. So our steps will be:

  1. Create the Azure subscription (manual step)
  2. Create the terraform config that creates a resource group, an app service plan and a web app for containers.
  3. Deploy to azure
  4. Test that it works by calling the /servicehealth path of the mock service.

If you’re deploying an actual application (say, a REST API that connects to a database) on the cloud you probably need more. For example, you might need a firewall, a virtual LAN so that different servers talk to each other but are isolated from the world, an API gateway, a cloud sql database and maybe more. But for our mock service, which has no data that need protection, we can keep it really simple.

  1. Open the azure portal and either create a new subscription or login if you have one already. For new subscriptions, Microsoft gives $200 of usage for free so you can experiment a bit. Running this tutorial has taken me less than $1 out of this amount, so no money actually left my pocket 🙂

After you created the subscription, you need to download the Azure Command-Line Interface (CLI), which is basically a command line tool. If you’re running on Linux -as I am at home- you also need Powershell Core (get it here). After installing, open a powershell prompt (you can also do it from ye olde command prompt) and run:

az login

Follow the instructions and you’re done.

2. Create a devops folder and create an empty text file inside. Name it main.tf (any name works, as long as it has the .tf extension) and paste the following:

# Configure the Azure provider
provider "azurerm" {
  # for production deployments it's wise to fix the provider version
  #version = "~>1.32.0"

  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
}

# Create a new resource group
resource "azurerm_resource_group" "rg" {
    name     = var.basename
    location = var.azurelocation
    tags = {
        environment = var.envtype
    }
}

# Create an App Service Plan with Linux
resource "azurerm_app_service_plan" "appserviceplan" {
  name                = "${var.basename}-APPPLAN"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  # Define Linux as Host OS
  kind = "Linux"
  reserved = true # Mandatory for Linux plans

  # Choose size
  sku {
    tier = var.SKUtier
    size = var.SKUsize
  }
}

# Create an Azure Web App for Containers in that App Service Plan
resource "azurerm_app_service" "appsvc" {
  name                = var.basename
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.appserviceplan.id

  # Do not attach Storage by default
  app_settings = {
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false

    # Settings for private Container Registries
    #DOCKER_REGISTRY_SERVER_URL      = ""
    #DOCKER_REGISTRY_SERVER_USERNAME = ""
    #DOCKER_REGISTRY_SERVER_PASSWORD = ""
  }

  # Configure Docker Image to load on start
  site_config {
    linux_fx_version = "DOCKER|${var.DockerImage}"
    #always_on        = "false"
    #ftps_state       = "FtpsOnly"
  }

  logs {
    http_logs {
      file_system {
        retention_in_days = var.logdays
        retention_in_mb   = var.logsizemb
      }
    }
  }

  identity {
    type = "SystemAssigned"
  }
}

output "DockerUrl" {
    value = azurerm_app_service.appsvc.default_site_hostname
}

Inside this configuration you may have noticed that we used a few variables, like var.basename. In terraform, we define variables and their values in separate files so that we can use the same base configuration with different details. A common scenario is the same configuration for testing, staging and production environments but with different names (think graubfinance-test for testing, graubfinance-staging for preprod and graubfinance for prod), different service levels etc.

Following best practice, these variables should be defined. Create another empty file called variables.tf and paste the following:

variable "basename" {
  type    = string

variable "azurelocation" {
  type    = string

variable "subscription_id" {
  type    = string

variable "client_id" {
  type    = string

variable "client_secret" {
  type    = string

variable "tenant_id" {
  type    = string

variable "envtype" {
    type    = string

variable "SKUsize" {
    type    = string

variable "SKUtier" {
    type    = string

variable "DockerImage" {
    type    = string

variable "logdays" {
    type    = number

variable "logsizemb" {
    type    = number

Now we need one or more “variable values” (.tfvars) files to define the values for our intended environment. Create yet another file, name it service-varvalues-dev.tfvars and paste the following:

basename = "graubfinancemock"

# when logging in as a user via Azure CLI, these values must be null
subscription_id = null
client_id       = null
client_secret   = null
tenant_id       = null

envtype = "test"

# this can change depending on your preferences
# you can get location codes using
# az account list-locations
# e.g. try "eastus" or "centralindia"
azurelocation = "westeurope"

# Using the free tier generates an error.
# Seems that Microsoft does not want people to
# use their resources *completely* free?
# Who knew! 🙂
#SKUtier = "Free"
#SKUsize = "F1"

# This is still very cheap though
SKUtier = "Basic"
SKUsize = "B1"

DockerImage = "dandraka/graubfinancemock:latest"

logdays = 30
logsizemb = 30

We’ll use this when testing locally but for later (when we deploy via Azure Devops) we’ll need the same but with placeholders for the deployment process to change. So copy-paste this file as service-varvalues-pipeline.tfvars and change it to look like this:

basename = "#{basename}#"

# when logging in as a service, these must NOT be null
subscription_id = "#{subscription_id}#"
client_id       = "#{client_id}#"
client_secret   = "#{client_secret}#"
tenant_id       = "#{tenant_id}#" 

envtype = "#{envtype}#"
azurelocation = "#{azurelocation}#"

SKUtier = "#{SKUtier}#"
SKUsize = "#{SKUsize}#"

DockerImage = "#{dockerimage}#"

logdays = 30
logsizemb = 30

Obviously the parts between #{…}# are placeholders. We’ll talk about these when we create the pipeline.

3. Now we’ll use terraform to deploy this configuration. Install terraform (instructions here, but basically it’s just an exe that you put in your path), then create a text file in your devops dir, name it terraformdeploy-dev.ps1 and paste the following:

terraform init
# here you need to see stuff happening and then
# "Terraform has been successfully initialized!"
terraform plan -out="out.plan" -var-file="service-varvalues-dev.tfvars"
# if everything went well, apply
terraform apply "out.plan"

Run it. If everything went well, you should get the following (or similar) output at the end:


DockerUrl = graubfinancemock.azurewebsites.net

In order to prepare ourselves for the automated deployment again, copy-paste this small script, name it terraformdeploy-pipeline.ps1 and just change the tfvars name. So the new file will look like this (I’ve stripped the comments here):

terraform init
terraform plan -out="out.plan" -var-file="service-varvalues-pipeline.tfvars"
terraform apply "out.plan"

4. Let’s see if it works

Navigate your browser to https://graubfinancemock.azurewebsites.net/servicehealth (or similar if you made any changes). That’s what you should see:

Hurray! 🙂

Notice also how we got https for free -we didn’t install any certificate or configure anything. Azure took care of it.

Out of curiosity, let’s head over to the Azure portal to see what happened. Once there, click on “resource groups” and then “graubfinancemock” (or whatever you named it). You’ll see something like this:

Did it cost much? Click “Cost analysis” on the left, for scope select your subscription (by default named “Free trial”) and you see what you paid for our experiment:

It didn’t break the bank, did it? 🙂

To be fair, we didn’t really do much. Most of the CPU usage we were charged for went into getting the system -our linux container running wiremock- up and running. Just out of curiosity, how much does it cost if we use it a little more?

You can try the following experiment: have it answer 1000 (or whatever) requests and see what it costs. Try this powershell script:

cd $env:TEMP
mkdir testrequests
cd testrequests
for ($i=1;$i -le 1000;$i++) { Invoke-WebRequest -Uri "https://graubfinancemock.azurewebsites.net/servicehealth" -OutFile "out-$i.txt"; $i }

After it finishes, click refresh and let’s see the cost analysis again:

No joke: after 1000 requests, it didn’t change a cent. You can see why companies love the cloud! Though again, we didn’t use our CPU heavily -and that’s what Azure charges mostly for.

We’re close to finishing. The last thing to do is to automate the process via Azure Devops (a.k.a. VSTS, a.k.a. TFS Online). Just one last thing: since we’ll be doing the terraform deploy automatically, let’s delete everything we’ve done. Create a file named cleanup.ps1 inside our devops dir and paste the following:

param ([string]$rgName)

[bool]$rgExists = ((az group exists -n $rgName) -eq 'true')

if ($rgExists) 
{
    az group delete -n $rgName -y
}
else
{
    Write-Host "Resource group $rgName does not exist, nothing to do"
}

Now in the command prompt, run:

 ./cleanup.ps1 -rgName graubfinancemock

A couple of minutes later, everything’s gone.

[EDIT] Just to be clear, this means that every time we deploy, we first delete everything and then we redo it from scratch.

This is fine for our scenario, the mock service, and in general it’s ok when both of these conditions are true:

1. Our Azure components have no state to lose (no databases etc) and
2. The down time doesn’t hurt.

For more complex scenarios, where you have real productive services, state, data etc this approach is not possible. In such cases you need to keep somewhere your plan and state files. This, and the best practice to do so, is explained here by the Terraform team and here by Microsoft.
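Just to give you an idea, a remote state backend on Azure could look roughly like this (a minimal sketch; the names are made up and the storage account and container have to exist beforehand):

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "graubfinancemock.tfstate"
  }
}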

Having created the auto deployment process, let’s add the missing sparkle and create the auto deployment pipeline.

Design by contract Tutorial, part 3/6: [Docker] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Let’s put the “Build once, Run anywhere” promise to the test: build the container for the mock service

Though there are many, many, many ways to run a service you’ve built in different environments, most of them require extensive reconfiguration, are problem prone and break easily. Just ask any developer that has built apps for IIS.

Docker is very popular because it solves this problem neatly. It builds a container -a box- within which your application lives. This stays the same everywhere, be it the dev PC, your local staging and production environment or the cloud. You still need, of course, to know how to communicate to other services or how the world reaches you, but this is reduced to a few configuration files.

And, while usually I’m suspicious against overhyped products and technologies, Docker really is easy to use.

How easy? Well, that’s what we need to do for our mock service:

  1. Install docker for your OS and create an account in docker hub
  2. Inside the wiremock folder, create an empty text file named Dockerfile (no extension)
  3. Open it with a text editor and paste the following (I’ll explain below):
FROM rodolpheche/wiremock
LABEL maintainer="Your Name <>"

ADD mappings/*.json /home/wiremock/mappings/
ADD __files/*.* /home/wiremock/__files/

CMD ["java", "-cp", "/var/wiremock/lib/*:/var/wiremock/extensions/*", "com.github.tomakehurst.wiremock.standalone.WireMockServerRunner", "--global-response-templating", "--verbose"]

4. Open a command window in the wiremock folder and enter the following command:

docker image build -t graubfinancemock:1.0 .

We’ve built our container image, now let’s run it:

docker container run --publish 80:8080 --detach --name graubfinancemock graubfinancemock:1.0

To test if it works, open a browser and navigate to http://localhost/servicehealth .

[Image: docker container running]

Ta da!

The last step is to publish it so that it’s available for others (like our cloud instance which we’ll create next) to use. In the command window enter the following commands (use the login details you created in Docker Hub, step 1):

docker login
docker image tag graubfinancemock:1.0 YOURUSERNAME/graubfinancemock:1.0
docker image tag graubfinancemock:1.0 YOURUSERNAME/graubfinancemock:latest
docker image push YOURUSERNAME/graubfinancemock:1.0
docker image push YOURUSERNAME/graubfinancemock:latest

That’s it. Seriously, we’re done. But let’s take a moment and explain what we did.

First of all, the dockerfile. It contains all the info for your container and, in our case, states the following:

  1. “FROM rodolpheche/wiremock”: don’t begin from an empty environment; instead, use the image named “wiremock” from account “rodolpheche”, who has already created and published a suitable docker configuration (thanks!)
  2. The two “ADD” lines tell it to add (duh) files into the filesystem of the container
  3. The “CMD” tells the container what to do when it starts. In our case, it runs the java package of wiremock, passing a few command line options, like --global-response-templating

Now the docker commands.

  1. The “docker image build” builds the image, i.e. creates the docker file system and stores the configuration. It gives it a name (graubfinancemock) and a version (1.0). A version is just a string; it could also be, say, 1.0-alpha, 2.1-RC2, 4.2.1 and so on.
  2. The “docker container run”, obviously, runs the image. The important thing here is the “--publish 80:8080”. By default, the wiremock server listens to port 8080. So here we instruct docker to map port 80 (visible from the world) to port 8080 (inside the docker container). That’s why we can use the url http://localhost/servicehealth and not http://localhost:8080/servicehealth.
  3. The last thing is to publish the image. You need to login, obviously, and then you have to tag the image. You can assign as many tags as you want, so you can e.g. publish to many repositories. The format is REPOSITORY/IMAGE:VERSION. In docker hub the repo name is your username, but it can be different in private repositories. After tagging, you push the tag, which uploads the image.

Note that apart from the normal version (graubfinancemock:1.0) we also tag the image as latest (graubfinancemock:latest). This way when using the image we won’t need to update the version every time we upload a new one; we’ll just say “get the latest”.

But be careful here: if you build a service -forget our mock for a minute, let’s say we’re building an actual service- and people are using your image as :latest, they might unwillingly jump to an incompatible version with breaking changes (say, from 1.x to 2.0). So it’s a much safer strategy to tag your images as :1.0-latest, :2.0-latest etc instead of just :latest. This way, consumers are fairly certain that they only get non-breaking changes.
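To illustrate, publishing a new 1.x version with that strategy could look like this (a sketch, with the same placeholder username as above):

docker image tag graubfinancemock:1.1 YOURUSERNAME/graubfinancemock:1.1
docker image tag graubfinancemock:1.1 YOURUSERNAME/graubfinancemock:1.0-latest
docker image push YOURUSERNAME/graubfinancemock:1.1
docker image push YOURUSERNAME/graubfinancemock:1.0-latest

Consumers pinned to :1.0-latest get the update; consumers of :2.0-latest are not affected.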

Private repositories are out of scope for this post, but in an enterprise setting you’ll probably need them; you usually don’t want the whole world to be able to use your container images. Creating a private repo is not hard, but it’s not free anymore. The easiest way to do that is in Dockerhub itself or in Azure, where it’s called a container registry. Using it, though, is exactly the same as a public one. If it’s not on docker hub, you just have to prefix the tag with the repo address (see “Now the new feature!” in this post).

So now we have our docker configuration. Ready to ship?