Tag Archives: dotnet

New version of XMLSlurper: 2.0

I just published a new version of my open source C# Dandraka.Slurper library in Github and Nuget.org.

The new version, 2.0, offers the existing XML functionality and, additionally, the same for Json. And because it’s not just about XML anymore, I had to rename it from Dandraka.XmlUtilities (which, if I’m being honest, sounded bad anyway) to Dandraka.Slurper. It also targets .Net Standard 2.1.

Here’s a quick usage example:

using Dandraka.Slurper;

public void PrintJsonContents1_Simple()
{
	string json = @"{
  'id': 'bk101',
  'isbn': '123456789',
  'author': 'Gambardella, Matthew',
  'title': 'XML Developer Guide'
}".Replace("'", "\"");
	var book = JsonSlurper.ParseText(json);

	// that's it, now we have everything
	Console.WriteLine("id = " + book.id);
	Console.WriteLine("isbn = " + book.isbn);
	Console.WriteLine("author = " + book.author);
	Console.WriteLine("title = " + book.title);
}

Separately, there are a couple of changes that don’t impact the users of the library:

  1. A Github Codespace configuration was added to the repository (in case I want to fix a bug on my iPad while traveling 😊).
  2. The test project was migrated from DotNet Core 3.1 to 6.0.

The new version is backwards compatible with all previous versions. So if you use it, updating your projects is effortless and strongly recommended.

Git: how to avoid checking in secrets (using a Powershell pre-commit hook)

Who among us hasn’t found him- or herself in this very awkward position: you commit a config or code file with secrets (such as passwords or API keys) and then google, in semi-panic, how to delete it from source control.

Been there and let me tell you the easiest way to delete it: copy all the code on disk, delete the repository completely and then re-create it.

(if this is not an option, well, there’s still a way but with much more work and risk, so do keep that code backup around!)

But you know what’s even better? That’s right: avoiding this in the first place! That’s why Git hooks are so useful: they work without you needing to remember to check your config files every time.

So here’s my solution to this:

  1. In the repository, go to .git/hooks and rename pre-commit.sample to pre-commit (i.e. remove the extension)
  2. Open pre-commit with a text editor and replace its contents with the following:
C:/Windows/System32/WindowsPowerShell/v1.0/powershell.exe -ExecutionPolicy Bypass -Command '.\hooks\pre-commit.ps1'
  3. Add a new directory on the root of the repository named hooks.
  4. Inside this, add a text file named pre-commit.ps1 with the following code:
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, July 2022

# ===== Change here =====
$listOfExtensions = @('*.xml', '*.config')
$listOfSecretNodes = @('username', 'password', 'clientid', 'secret', 'connectionstring')
$acceptableString = 'lalala'
# ===== Change here =====

$codePath = (Get-Item -Path $PSScriptRoot).Parent.Parent.FullName

$errorList = New-Object -TypeName 'System.Collections.ArrayList'

foreach($ext in $listOfExtensions) {
    $list = Get-ChildItem -Path $codePath -Recurse -Filter $ext

    foreach($file in $list) {
        $fileName = $file.FullName
        # skip build output folders
        if ($fileName.Contains('\bin\')) { continue }
        Write-Host "Checking $fileName for secrets"
        [xml]$xml = [xml]((Get-Content -Path $fileName).ToLowerInvariant())
        foreach($secretName in $listOfSecretNodes) {
            $nodes = $xml.SelectNodes("//*[contains(local-name(), '$secretName')]")
            foreach($node in $nodes) {
                if ($node.InnerText.ToLowerInvariant() -ne $acceptableString) {
                    $str = "[$fileName] $($node.Name) contains text other than '$acceptableString', please replace this with $acceptableString before committing."
                    $errorList.Add($str) | Out-Null
                    Write-Warning $str
                }
            }
        }
    }
}

if ($errorList.Count -gt 0) {
    Write-Error 'Commit cancelled, please correct before committing.'
    exit 1
}

So there you have it. I now get automatically stopped every time I try to commit an .xml or .config file that contains a node whose name contains username, password, clientid, secret or connectionstring, whenever its value is not ‘lalala’.

Obviously the extensions, node names and acceptable string can be changed at the top of the script. You can also change this quite easily to check JSON files as well.
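To sketch what the JSON variant would look like -in Python here, just to illustrate the logic, not as part of the hook itself- you would walk the object tree and flag any key whose name contains one of the secret words and whose value isn’t the placeholder:

```python
import json

SECRET_NAMES = ["username", "password", "clientid", "secret", "connectionstring"]
PLACEHOLDER = "lalala"

def find_secrets(json_text: str) -> list:
    """Return 'path = value' strings for suspicious keys in a JSON document."""
    findings = []

    def walk(node, path):
        if isinstance(node, dict):
            for key, value in node.items():
                child = f"{path}.{key}" if path else key
                if (isinstance(value, str)
                        and any(s in key.lower() for s in SECRET_NAMES)
                        and value.lower() != PLACEHOLDER):
                    findings.append(f"{child} = {value}")
                walk(value, child)
        elif isinstance(node, list):
            for i, item in enumerate(node):
                walk(item, f"{path}[{i}]")

    walk(json.loads(json_text), "")
    return findings
```

For example, `find_secrets('{"Db": {"Password": "hunter2"}}')` flags `Db.Password`, while a value of `lalala` passes.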

Also note that this works on Windows (because of the Powershell path in the pre-commit hook) but with a minor change in the pre-commit bash script, you should be able to make it work cross-platform with Powershell core. I haven’t tested it but it should be:

#!/usr/bin/env pwsh
./hooks/pre-commit.ps1

Have fun coding!

New version of Zoro: 2.0

I just published a new version of my open source C# Zoro library in Github and Nuget.org.

Zoro is a data masking/anonymization utility. It fetches data from a database or a CSV file, masks (i.e. anonymizes) them according to the configuration provided and uses the masked data to create a CSV file or run SQL statements such as INSERTs or UPDATEs.

The new version, 2.0, has been converted to DotNet Standard 2.1 to take advantage of some useful DotNet features. The command line utility and the test project are written with DotNet Core 5.0.

The issue from 1.0.2, where the Nuget package did not contain the executables, has been corrected. The package now contains both a Win64 and a Linux64 executable. Since they are self-contained programs, no prior installation of DotNet is needed.

But the most important new feature is a new MaskType, “Query”. With this, the library can retrieve values from a database and pick a random one. In previous versions this was only possible with lists that were fixed in the XML (MaskType=List).

For example, let’s say you are masking the following data:

Table “customers”

In the database you might also have a table with cities and countries:

Table “cities”

In order to anonymize the above data, your config could look like this:

<?xml version="1.0"?>
<MaskConfig xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- ... field mask definitions elided; the city/country fields
       use the new Query mask type: -->
      Query="SELECT cityname, countrycode FROM cities" />
  <!-- ... -->
  SELECT * FROM customers
  INSERT INTO customers_anonymous
  (ID, Name, City, Country)
  VALUES
  ($ID, $Name, $City, $Country)
</MaskConfig>

This will result in a table looking like this:

Table “customers_anonymous”
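Under the hood the idea is simple. Here’s a sketch of the replacement logic in Python (my own illustration, not Zoro’s actual code; I’m also assuming that identical originals map to identical replacements, so the masked data stays consistent):

```python
import random

def mask_column(values, replacement_rows, seed=None):
    """Replace each value with a random pick from replacement_rows.
    Identical input values get identical replacements (an assumption
    made for this sketch; check the project's docs for exact behaviour)."""
    rng = random.Random(seed)
    mapping = {}
    masked = []
    for v in values:
        if v not in mapping:
            mapping[v] = rng.choice(replacement_rows)
        masked.append(mapping[v])
    return masked

# with MaskType=Query, the replacement rows would come from e.g.
# "SELECT cityname, countrycode FROM cities"
cities = [("Zurich", "CH"), ("Bern", "CH"), ("Geneva", "CH")]
masked = mask_column(["Glarus", "Chur", "Glarus"], cities, seed=42)
```

With MaskType=List the `replacement_rows` come from the XML config; with the new MaskType=Query they are fetched from the database first.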

If you have any questions, please write in the comments.


Please don’t write logs inside Program Files (here’s how to do it right)

So the other day I’m troubleshooting a Windows Service that keeps failing on a server, part of a product we’re using in the company. Long story short, that’s what the problem was:

Access to the path 'C:\Program Files\whatever\whatever.log' is denied

I mean, dear programmer, look. You want to write your application’s logs as simple text files. I get it. Text files are simple, reliable (if the file system doesn’t work, you have bigger problems than logging) and they’re shown in virtually every coding tutorial in every programming language. Depending on the case, there might be better ways to do that such as syslog, eventlog and others.

But sure, let’s go with text files. Take the following example somewhere in the middle of a Python tutorial. Look at line 3:

import logging

logging.basicConfig(filename='app.log', filemode='w', format='%(name)s - %(levelname)s - %(message)s')
logging.warning('This will get logged to a file')

Did you notice? This code writes the log in the same place as the binary. It’s not explicitly mentioned and usually you wouldn’t give it a second thought, right?

To be clear, I don’t want to be hard on the writers of this or any other tutorial; it’s just a basic tutorial, and as such it should highlight the core concept. A professional developer writing an enterprise product should know a bit more!

But the thing is, these examples are everywhere. Take another Java tutorial and look at line 16:

package com.javacodegeeks.snippets.core;

import java.util.logging.Logger;
import java.util.logging.FileHandler;
import java.util.logging.SimpleFormatter;
import java.io.IOException;

public class SequencedLogFile {

    public static final int FILE_SIZE = 1024;
    public static void main(String[] args) {

        Logger logger = Logger.getLogger(SequencedLogFile.class.getName());
        try {
            // Create an instance of FileHandler with 5 logging files sequences.
            FileHandler handler = new FileHandler("sample.log", FILE_SIZE, 5, true);
            handler.setFormatter(new SimpleFormatter());
            logger.addHandler(handler);
        } catch (IOException e) {
            logger.warning("Failed to initialize logger handler.");
        }
        logger.info("Logging info message.");
        logger.warning("Logging warn message.");
    }
}

Or this Dot Net tutorial, which explains how to set up Log4Net (which is great!) and gives this configuration example. Let’s see if you can spot this one. Which line is the problem?

<log4net>
  <root>
    <level value="ALL" />
    <appender-ref ref="LogFileAppender" />
  </root>
  <appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="proper.log" />
    <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maxSizeRollBackups value="2" />
    <maximumFileSize value="1MB" />
    <staticLogFileName value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%d [%t] %-5p %c %m%n" />
    </layout>
  </appender>
</log4net>

If you answered “7”, congrats, you’re starting to get it. Not using a path -this should be obvious, I know, but it’s easy to forget nevertheless- means writing in the current path, which by default is wherever the binary is.

So this works fine while you’re developing. It works fine when you do your unit tests. It probably works when your testers do the user acceptance testing or whatever QA process you have.

But when your customers install the software, the exe usually goes to C:\Program Files (that’s in Windows; in Linux there are different possibilities as explained here, but let’s say /usr/bin). Normal users do not have permission to write there; an administrator can grant this, but they really really really shouldn’t. You’re not supposed to tamper with the executables! Unless you’re doing some maintenance or an upgrade of course.

So how do you do this correctly?

First of all, it’s a good idea to not reinvent the wheel. There are many, many, MANY libraries to choose from, some of them very mature, like log4net for Dot Net or log4j for Java.

But if you want to keep it very simple, fine. There are basically two ways to do it.

If it’s a UI-based software, that your users will use interactively:

Create a directory under %localappdata% (by default C:\Users\SOMEUSER\AppData\Local) with the brand name of your company and/or product, and write in there.

You can get the localappdata path using the following line in Dot Net:

string localAppDataPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);

Take for example the screen-capturing software called Greenshot. These guys do it right:

If it’s a non-interactive software, like a Windows Service:

You can do the same as above, but instead of Environment.SpecialFolder.LocalApplicationData use Environment.SpecialFolder.CommonApplicationData, which by default is C:\ProgramData. So your logs will be in C:\ProgramData\MyAmazingCompany\myamazingproduct.log.

Or -not recommended, but not as horrible as writing in Program Files- you can create something custom like C:\MyAmazingCompany\logs. I’ll be honest with you, it’s ugly, but it works.

But in any case, be careful to consider your environment. Is your software supposed to run on Windows, Linux, Mac, everything? A good place to start is here, for Dot Net, but the concept is the same in every language.
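To make the “where do I write” decision concrete, here’s a cross-platform sketch in Python (the company and product names are the article’s examples; the XDG state-directory fallback for Linux/Mac is my assumption):

```python
import os
import sys

def get_log_dir(company: str, product: str) -> str:
    """Pick a per-OS, user-writable base directory for logs."""
    if sys.platform.startswith("win"):
        # C:\ProgramData for services; use LOCALAPPDATA instead for interactive apps
        base = os.environ.get("PROGRAMDATA", r"C:\ProgramData")
    else:
        # Linux/Mac: per-user state directory (XDG convention)
        base = os.environ.get(
            "XDG_STATE_HOME",
            os.path.join(os.path.expanduser("~"), ".local", "state"))
    return os.path.join(base, company, product)
```

On Windows, `get_log_dir("MyAmazingCompany", "myamazingproduct")` resolves to something under C:\ProgramData, matching the example above.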

And, also important, make your logging configurable! The location should be changeable via a config file. Different systems have different requirements. Someone will need the logs somewhere special for their own reasons.


New version of Zoro: 1.0.2

I just published a new version of my open source C# Zoro(*) library in Github and Nuget.org.

Zoro is a data masking/anonymization utility. It fetches data from a database or a CSV file, and creates a CSV file with masked data.

The new version, 1.0.2, has been converted to DotNet Standard 2.0. The command line utility and the test project have been converted to Dotnet Core 5.0.

There is a known issue, not with the code but with the Nuget package. The description claims, as was intended, that the package contains not only the library but also the exe, which can be used as a standalone command line utility. But due to some wrong path in the Github Action, it doesn’t.

I’ll try to get that fixed in the next weeks. Until then, if you need the exe, please check out the code and build it with Visual Studio or Visual Studio Code.


New version of XMLSlurper: 1.3.0

I just published a new version of my open source C# XmlSlurper library in Github and Nuget.org.

The new version, 1.3.0, contains two major bug fixes:

  1. In previous versions, when the xml contained CDATA nodes, an error was thrown (“Type System.Xml.XmlCDataSection is not supported”). This has been fixed, so now the following works:
    <Title><![CDATA[DOCUMENTO N. 1234-9876]]></Title>

This xml can be used as follows:

var cdata = XmlSlurper.ParseText(getFile("CData.xml"));
// cdata.Title now contains 'DOCUMENTO N. 1234-9876'
  2. In previous versions, when the xml contained xml comments, an error was thrown (“Type System.Xml.XmlComment is not supported”). This has been fixed; the xml comments are now ignored.

Separately, there are a few more changes that don’t impact the users of the library:

  1. A Github action was added that, when the package version changes, automatically builds and tests the project, creates a Nuget package and publishes it to Nuget.org. That will save me quite some time come next version 🙂
  2. The test project was migrated from DotNet Core 2.2 to 3.1.
  3. The tests were migrated from MSTest to xUnit, to make the project able to be developed both in Windows and in Linux -my personal laptop runs Ubuntu.

The new version is backwards compatible with all previous versions. So if you use it, updating your projects is effortless and strongly recommended.

Design by contract Tutorial, part 6/6: [Swagger] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Sound check time: did it really work?

We already saw that the health page works. But it’s time to check if our objective was met.

Remember, the goal is to let developers outside our network use the mock service to help them with their implementation. To see if this works as intended, we can use the swagger file we created and the online Swagger editor.

So open the CustomerTrust.yaml file with a text editor, copy all its contents, navigate your browser to https://editor.swagger.io, delete the default content and paste ours. You’ll get this:

Select the mock service from the drop down, click on one of the services, click “Try it out” and then “Execute”. After a few seconds you… will get an error, something like “NetworkError when attempting to fetch resource”.

Why? Well, it’s the browser preventing us from doing so. If you press F12 and watch the Console, you’ll see something like “Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at xxxxxx”. More info here, but in short, it’s a security measure. You can either disable it in your browser’s settings (WHICH IS A BAD IDEA) or use the curl utility, which you can download here for your operating system.

[EDIT] or I could not be lazy, go back to the wiremock config and set the CORS-related HTTP response headers properly, as explained here, here and specifically for wiremock here.
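For the record, the wiremock side of that fix is just extra response headers in the stub mapping, something along these lines (a sketch; the header names are the standard CORS ones, and you should tighten the origin instead of using * where possible):

```json
{
  "response": {
    "status": 200,
    "headers": {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, Authorization"
    }
  }
}
```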

So after you install curl, you can get the command line from the Swagger Editor:

For GET, the command should be:

curl -X GET "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" -H "accept: application/json"

Whichever method you pick -GET or POST- we’ll add a -v at the end (for verbose output). So run it and at the end you’ll get this:

< HTTP/1.1 401 Unauthorized
...
* Connection #0 to host graubfinancemock.azurewebsites.net left intact

Makes sense right? The mock service expects an authorization token which we haven’t provided. Let’s add this:

curl -X GET "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" -H "accept: application/json" -H "Authorization: Bearer 1234" -v

And now you’ll get the json:

{
  "name": "GlarusAdvertising AG",
  "taxid": "CHE-123.456.789",
  "trustlevel": "OK"
}

Likewise, let’s try the POST:

curl -X POST "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" -H  "accept: */*" -H  "Content-Type: application/json" -d "{\"reportid\":\"2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e\",\"reporttaxid\":\"CHE-123.456.789\",\"taxid\":\"CHE-123.456.789\",\"trustlevel\":\"OK\"}" -H "Authorization: Bearer 1234" -v

And you should see the id of the request in the json response:

{
    "reportid": "2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e",
    "status": "OK"
}

A small note on Windows: if you try this in Powershell, it seems that the json escaping is acting funny. If you try it through cmd, it works just fine.

That’s all folks

So now our kind-of-fictional-but-actually-quite-real developers can access the service and test their code against it. And whenever we make a change and push it, the service is updated automatically. Not bad, is it? 🙂

That concludes this guide and its introductory journey in the world of Devops (or, as a friend of mine more accurately calls the field, SRE -short for “Site Reliability Engineering”).

I hope you enjoyed it as much as I did writing it -I really did. I’m sure you’ll have many, many questions which I can try to answer -but no promises 🙂 You can ask here in the comments (better) or in my twitter profile @JimAndrakakis.


I’ve put all the code in a github repository, here; the only change is that I moved the pipeline yaml in the devops folder and removed my name. You can also find the docker image in docker hub, here.

Have fun coding!

Design by contract Tutorial, part 5/6: [Azure Devops] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Put a little magic in your life: create the auto-deploy pipeline

We’re close to the end of our journey.

So far we’ve basically done everything we need. In this last step we’ll also make it happen automagically: we want to be able to make changes to our code (which, in our scenario, is the wiremock service configuration) and have them deployed to Azure without us having to do anything.

We’ll use Azure Devops -formerly called Visual Studio Team System (VSTS) or, even earlier, Team Foundation Server (TFS) Online- for this. There are other services we could use as well, like Github or Bitbucket, and they’re equally good.

But whatever your service, in general this process is called CI/CD, short for Continuous Integration / Continuous Delivery. Simply put, CI means that your code is built and tested as soon as you push changes to source control. If the build or any test is not successful, the changes are rejected, guaranteeing (well, as far as your tests are concerned) that the code in the repo is correct. CD is the next step: taking the build and deploying it, usually to a test server first, then to staging and then to production.

So as a first step, create a free account in Azure Devops. You can use the same Microsoft account you used in Azure or a different one. Once you’ve logged in, create a new project. Let’s call it GraubFinanceMockService.

By default we get a Git repository with the same name as the project. Let’s clone it on our development PC (I’m using C:\src\test, but feel free to use whatever you like).

Make sure you have git installed (or download it from here), then open a command prompt and type (replace the URL with your details):

cd c:\src\test
git clone https://dev.azure.com/YOURUSERNAME/GraubFinanceMockService/_git/GraubFinanceMockService

You’ll be asked for credentials of course (you might want to cache them). After that you’ll get a folder named GraubFinanceMockService. Move in there the folders we created during our previous steps: openapi, wiremock and devops.

Additionally, to avoid committing unwanted files in the repository, create an empty text file on the top folder named .gitignore, open it with a text editor and paste the following:
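A reasonable ignore list for this repository -my suggestion; the point is to keep Terraform’s local state, caches and logs out of git- would be:

```text
# Terraform local state, caches and logs
.terraform/
*.tfstate
*.tfstate.backup
crash.log
```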


Now we’re ready to commit for the first time. Type the following in the command line:

cd c:\src\test\GraubFinanceMockService
git add .
git commit -m 'initial commit'
git push

And our code is there:

Now we’ll start setting up our build. “But wait”, you might reasonably ask, “we don’t really have any code to build, that’s no C# or Java or whatever project, why do we need a build?”.

Well, we do need to build our docker image, and push it in Docker Hub. This way when we change anything in our wiremock config, we’ll get a new image to reflect that.

But before we continue, remember that we have some variables in our tfvars files that we need to replace? Now it’s time to do that. Under Pipelines go to Library, then (+) Variable Group. Name the variable group azureconnectioncredentials, then add four variables -subscription_id, client_id, client_secret and tenant_id- and click the lock to set them as secret!


Be sure to check that “Allow access from all pipelines” is enabled.

But how do you get these values? From Azure CLI. The process is described by Microsoft here, but in short, open a command prompt (remember that from the previous step, we are logged in with Azure CLI already) and write:

az account show
# note the id, that's the subscription id, and the tenant id
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SUBSCRIPTIONID"

You’ll get something like the following, which you need to keep secret (the recommended way is to use a password manager):

{
  "appId": "XXXXXXXXXX",
  "displayName": "azure-cli-YYYYYY",
  "name": "http://azure-cli-2019-YYYYYY",
  "password": "ZZZZZZZZZZZZZZZZ",
  "tenant": "TTTTTTTTTTTT"
}

So paste these values to the respective variables in Azure Devops. You got the subscription id and tenant id from the first command (az account show). From the second (az ad sp create-for-rbac) get the appId and put it in the client_id variable, and get the password and put it in the client_secret variable. At the end, click Save.

You did set the variables to secret right? 🙂

We need one more variable group for the not-secret stuff. Create a new variable group, name it azurenames and add the following variables (here with sample values):

azurelocation = westeurope
basename = graubfinancemock
dockerimage = YOURUSERNAME/graubfinancemock
dockerimageversion = latest
envtype = test
SKUsize = B1
SKUtier = Basic

Also here we need “Allow access from all pipelines” to be enabled.

Now we’re ready to create a new pipeline. In Azure Devops go to Pipelines > Builds > New Pipeline. You can click “Use the classic editor” if you’re not comfortable with YAML, but here I’ll use Azure Repos Git (YAML) as I can copy paste the result here. Select your code repository and then, to see how it works step by step, Starter Pipeline.

Our new build will get the sources in a directory on the build server, but nothing more than that. Let’s start telling the build server what to do.

First, we need to tell it to use our variable groups. Delete whatever default code is there and paste the following:


trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: azureconnectioncredentials
- group: azurenames

We don’t really need distinct stages, we’ll just set up two jobs, build and deploy.

Now let’s get it to create the docker image.

jobs:
  - job: Build
    displayName: Build docker image
    steps:

Now click on Show Assistant on the right, search for Docker and pick Docker (description: Build or push Docker images etc etc). Connect your container registry as follows:

In Container repository you enter the full name of the docker hub repository (YOURUSERNAME/graubfinancemock in our example) but even better, we can use our variable (same for the desired version). So enter $(dockerimage), then change the tags to:

$(dockerimageversion)
Leave everything else to default values, click Add. Under steps you should have the following:

    - task: Docker@2
      enabled: true
      inputs:
        containerRegistry: 'dockerhub-graubfinancemock'
        repository: '$(dockerimage)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(dockerimageversion)
Now click Save and Run. Et voila:

Having built our service, let’s deploy it. Paste the following at the end of the YAML file:

  - job: Deploy
    displayName: Deploy to Azure
    dependsOn: Build # deploy only after the image is built and pushed
    steps:

Now we need to run our cleanup script, then replace the variables in the tfvars files, then run terraform. Search for the Azure CLI task, then configure the Azure subscription. Script type is Powershell Core, script location is Script Path, script path is $(Build.SourcesDirectory)/devops/cleanup.ps1 and script arguments is “-rgName '$(baseName)'” (without the double quotes, but note the single quotes). But remember, this is not on the root of our code repository. Click on Advanced and in working directory enter “$(Build.SourcesDirectory)/devops” (without the double quotes). You should end up with the following:

      - task: AzureCLI@2
        inputs:
          azureSubscription: 'Free Trial (XXXXXXXX)'
          scriptType: 'pscore'
          scriptLocation: 'scriptPath'
          scriptPath: '$(Build.SourcesDirectory)/devops/cleanup.ps1'
          arguments: '-rgName ''$(baseName)'''
          workingDirectory: '$(Build.SourcesDirectory)/devops/'

Time to replace the variable values. Add another task named Replace Tokens. Change the target files to **/*.tfvars and uncheck the BOM (it creates problems sometimes). Done.

      - task: replacetokens@3
        inputs:
          targetFiles: '**/*.tfvars'
          encoding: 'auto'
          writeBOM: false
          actionOnMissing: 'warn'
          keepToken: false
          tokenPrefix: '#{'
          tokenSuffix: '}#'
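To picture what this task does: the committed .tfvars file holds tokens instead of real values, and the task swaps in the pipeline variables of the same name. For instance (a hypothetical line of service.tfvars, before and after):

```text
# before, as committed in git:
subscription_id = "#{subscription_id}#"

# after the Replace Tokens task ran on the build server:
subscription_id = "00000000-0000-0000-0000-000000000000"
```

This is why the secret values can stay out of source control entirely.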

Next up, terraform. We have the batch file ready, but we need terraform.exe to be available. So add a task named Terraform tool installer. Change the version to the latest (find it here, at the time of writing it’s 0.12.15).

      - task: TerraformInstaller@0
        inputs:
          terraformVersion: '0.12.15'

Everything’s ready to run our batch script. As we need Azure CLI to be available for terraform to work the way we want to, add another Azure CLI task. Pick the Azure subscription from the drop down (you don’t have to configure it again). Script type is Powershell Core, script location is Script Path, script path is $(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1 (it’s the one that uses the replaced .tfvars file). Click on Advanced and in working directory enter “$(Build.SourcesDirectory)/devops” (without the double quotes). At the end it should look like this:

      - task: AzureCLI@2
        inputs:
          azureSubscription: 'Free Trial (XXXXXXXX)'
          scriptType: 'pscore'
          scriptLocation: 'scriptPath'
          scriptPath: '$(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1'
          workingDirectory: '$(Build.SourcesDirectory)/devops'

We’re ready. The build definition, now complete, should look like this:

name: GraubFinanceMockServiceAutoDeploy

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: azureconnectioncredentials
- group: azurenames

jobs:
  - job: Build
    displayName: Build docker image
    steps:
    - task: Docker@2
      enabled: true
      inputs:
        containerRegistry: 'dockerhub-graubfinancemock'
        repository: '$(dockerimage)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(dockerimageversion)
  - job: Deploy
    displayName: Deploy to Azure
    dependsOn: Build # deploy only after the image is built and pushed
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'Free Trial (XXXXXXX)'
        scriptType: 'pscore'
        scriptLocation: 'scriptPath'
        scriptPath: '$(Build.SourcesDirectory)/devops/cleanup.ps1'
        arguments: '-rgName ''$(baseName)'''
        workingDirectory: '$(Build.SourcesDirectory)/devops/'
    - task: replacetokens@3
      inputs:
        targetFiles: '**/*.tfvars'
        encoding: 'auto'
        writeBOM: false
        actionOnMissing: 'warn'
        keepToken: false
        tokenPrefix: '#{'
        tokenSuffix: '}#'
    - task: TerraformInstaller@0
      inputs:
        terraformVersion: '0.12.15'
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'Free Trial (XXXXXXX)'
        scriptType: 'pscore'
        scriptLocation: 'scriptPath'
        scriptPath: '$(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1'
        workingDirectory: '$(Build.SourcesDirectory)/devops'

Did it work? Navigate your browser to https://graubfinancemock.azurewebsites.net/servicehealth and:

Ta da!

We’re basically done. Let’s see how helpful our service is to our developers.

Design by contract Tutorial, part 4/6: [Terraform] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Infrastructure as Code: time to ship these containers

So we built our mock service and we dockerized it. Next up, run the container on the cloud.

Remember that in our scenario -and in my everyday work life- the mock service has to be accessible from people outside our local network. Of course, one way to do this would be to run it in-house and open a hole in your firewall.

…if you didn’t scream “BAD IDEA!” when you read the last sentence, now it would be the right time to do so 🙂

So, cloud to the rescue. We’ll use Azure here; we’ll create a subscription and then deploy with a terraform infrastructure-as-code (IaC) configuration. So our steps will be:

  1. Create the Azure subscription (manual step)
  2. Create the terraform config that creates a resource group, an app service plan and a web app for containers.
  3. Deploy to azure
  4. Test that it works by calling the /servicehealth path of the mock service.

If you’re deploying an actual application (say, a REST API that connects to a database) on the cloud you probably need more. For example, you might need a firewall, a virtual LAN so that different servers talk to each other but are isolated from the world, an API gateway, a cloud sql database and maybe more. But for our mock service, which has no data that need protection, we can keep it really simple.

  1. Open the azure portal and either create a new subscription or log in if you have one already. For new subscriptions, Microsoft gives $200 of usage for free so you can experiment a bit. Running this tutorial has taken me less than $1 out of this amount, so no money actually left my pocket 🙂

After you created the subscription, you need to download the Azure Command-Line Interface (CLI). If you’re running on Linux -as I am at home- you also need Powershell Core for the scripts that follow (get it here). After installing, open a powershell prompt (you can also do it from ye olde command prompt) and run:

az login

Follow the instructions and you’re done.

2. Create a devops folder and an empty text file inside. Name it service.tf and paste the following:

# Configure the Azure provider
provider "azurerm" {
  # for production deployments it's wise to fix the provider version
  #version = "~>1.32.0"

  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
}

# Create a new resource group
resource "azurerm_resource_group" "rg" {
  name     = var.basename
  location = var.azurelocation
  tags = {
    environment = var.envtype
  }
}

# Create an App Service Plan with Linux
resource "azurerm_app_service_plan" "appserviceplan" {
  name                = "${azurerm_resource_group.rg.name}-APPPLAN"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  # Define Linux as Host OS
  kind     = "Linux"
  reserved = true # Mandatory for Linux plans

  # Choose size
  # https://azure.microsoft.com/en-us/pricing/details/app-service/linux/
  sku {
    tier = var.SKUtier
    size = var.SKUsize
  }
}

# Create an Azure Web App for Containers in that App Service Plan
resource "azurerm_app_service" "appsvc" {
  name                = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.appserviceplan.id

  # Do not attach Storage by default
  app_settings = {
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false

    # Settings for private Container Registries would also go here
  }

  # Configure Docker Image to load on start
  site_config {
    linux_fx_version = "DOCKER|${var.DockerImage}"
    #always_on        = "false"
    #ftps_state       = "FtpsOnly"
  }

  logs {
    http_logs {
      file_system {
        retention_in_days = var.logdays
        retention_in_mb   = var.logsizemb
      }
    }
  }

  identity {
    type = "SystemAssigned"
  }
}

output "DockerUrl" {
  value = azurerm_app_service.appsvc.default_site_hostname
}

Inside this configuration you may have noticed that we used a few variables, like var.basename. In terraform, we define variables and their values in separate files so that we can use the same base configuration with different details. A common scenario is the same configuration for testing, staging and production environments but with different names (think graubfinance-test for testing, graubfinance-staging for preprod and graubfinance for prod), different service levels etc.

Following best practice, these variables need to be declared in their own file. Create another empty file called service-vars.tf and paste the following:

variable "basename" {
  type = string
}

variable "azurelocation" {
  type = string
}

variable "subscription_id" {
  type = string
}

variable "client_id" {
  type = string
}

variable "client_secret" {
  type = string
}

variable "tenant_id" {
  type = string
}

variable "envtype" {
  type = string
}

variable "SKUsize" {
  type = string
}

variable "SKUtier" {
  type = string
}

variable "DockerImage" {
  type = string
}

variable "logdays" {
  type = number
}

variable "logsizemb" {
  type = number
}
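As an aside (and not something the tutorial's files need), a terraform variable declaration can also carry a default value and a description. A sketch, shown here for logdays; if you try this, it replaces the plain declaration rather than being added next to it:

```terraform
variable "logdays" {
  type        = number
  default     = 30
  description = "How many days of HTTP logs to retain"
}
```

With a default in place, the corresponding entry in the .tfvars file becomes optional.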

Now we need one or more “variable values” (.tfvars) files to define the values for our intended environment. Create yet another file, name it service-varvalues-dev.tfvars and paste the following:

basename = "graubfinancemock"

# when logging in as a user via Azure CLI, these values must be null
subscription_id = null
client_id       = null
client_secret   = null
tenant_id       = null

envtype = "test"

# this can change depending on your preferences
# you can get location codes using
# az account list-locations
# e.g. try "eastus" or "centralindia"
azurelocation = "westeurope"

# Using the free tier generates an error.
# Seems that Microsoft does not want people to
# use their resources *completely* free?
# Who knew!
#SKUtier = "Free"
#SKUsize = "F1"

# This is still very cheap though
SKUtier = "Basic"
SKUsize = "B1"

DockerImage = "dandraka/graubfinancemock:latest"

logdays = 30
logsizemb = 30

We’ll use this file when testing locally. Later, when we deploy via Azure Devops, we’ll need the same values but with placeholders for the deployment process to replace. So copy-paste this file as service-varvalues-pipeline.tfvars and change it to look like this:

basename = "#{basename}#"

# when logging in as a service, these must NOT be null
subscription_id = "#{subscription_id}#"
client_id       = "#{client_id}#"
client_secret   = "#{client_secret}#"
tenant_id       = "#{tenant_id}#" 

envtype = "#{envtype}#"
azurelocation = "#{azurelocation}#"

SKUtier = "#{SKUtier}#"
SKUsize = "#{SKUsize}#"

DockerImage = "#{dockerimage}#"

logdays = 30
logsizemb = 30

Obviously the parts between #{…}# are placeholders. We’ll talk about these when we create the pipeline.
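To illustrate the idea (this is not the pipeline itself, just a sketch of what its token-replacement step effectively does): every #{name}# placeholder gets substituted with the corresponding pipeline variable. Running one sample line through sed:

```shell
# Substitute the #{basename}# token with a concrete value,
# just as the pipeline's token-replacement task will do for the whole file.
echo 'basename = "#{basename}#"' | sed 's/#{basename}#/graubfinancemock/'
# prints: basename = "graubfinancemock"
```

In the pipeline, the same substitution runs over the whole service-varvalues-pipeline.tfvars file, once per placeholder.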

3. Now we’ll use terraform to deploy this configuration. Install terraform (instructions here, but basically it’s just an exe that you put in your path), then create a text file in your devops dir, name it terraformdeploy-dev.ps1 and paste the following:

terraform init
# here you need to see stuff happening and then
# "Terraform has been successfully initialized!"
terraform plan -out="out.plan" -var-file="service-varvalues-dev.tfvars"
# if everything went well, apply
terraform apply "out.plan"

Run it. If everything went well, you should get the following (or similar) output at the end:


DockerUrl = graubfinancemock.azurewebsites.net

In order to prepare ourselves for the automated deployment again, copy-paste this small script, name it terraformdeploy-pipeline.ps1 and just change the tfvars name. So the new file will look like this (I’ve stripped the comments here):

terraform init
terraform plan -out="out.plan" -var-file="service-varvalues-pipeline.tfvars"
terraform apply "out.plan"

4. Let’s see if it works

Navigate your browser to https://graubfinancemock.azurewebsites.net/servicehealth (or similar if you made any changes). That’s what you should see:

Hurray! 🙂

Notice also how we got https for free -we didn’t install a certificate or configure anything. Azure took care of it.

Out of curiosity, let’s head over to portal.azure.com to see what happened. Once there, click on “resource groups” and then “graubfinancemock” (or whatever you named it). You’ll see something like this:

Did it cost much? Click “Cost analysis” on the left, for scope select your subscription (by default named “Free trial”) and you’ll see what you paid for our experiment:

It didn’t break the bank, did it? 🙂

To be fair, we didn’t really do much. Most of the CPU usage we were charged for went into getting the system -our linux container running wiremock- up and running. Just out of curiosity, how much does it cost if we use it a little more?

You can try the following experiment: have it answer 1000 (or whatever) requests and see what it costs. Try this powershell script:

cd $env:TEMP
mkdir testrequests
cd testrequests
for ($i=1;$i -le 1000;$i++) { Invoke-WebRequest -Uri "http://graubfinancemock.azurewebsites.net/servicehealth" -OutFile "out-$i.txt"; $i }

After it finishes, click refresh and let’s see the cost analysis again:

No joke: after 1000 requests, it didn’t change a cent. You can see why companies love the cloud! Though again, we didn’t use our CPU heavily -and that’s what Azure charges mostly for.

We’re close to finishing. The only thing left is to automate the process via Azure Devops (a.k.a. VSTS, a.k.a. TFS Online). But before that, since we’ll be doing the terraform deploy automatically, let’s delete everything we’ve done. Create a file named cleanup.ps1 inside our devops dir and paste the following:

param ([string]$rgName)

[bool]$rgExists = ((az group exists -n $rgName) -eq 'true')

if ($rgExists) {
    az group delete -n $rgName -y
}
else {
    Write-Host "Resource group $rgName does not exist, nothing to do"
}

Now in the command prompt, run:

 ./cleanup.ps1 -rgName graubfinancemock

A couple of minutes later, everything’s gone.

[EDIT] Just to be clear, this means that every time we deploy, we first delete everything and then we redo it from scratch.

This is fine for our scenario, the mock service, and in general it’s ok when both of these conditions are true:

1. Our Azure components have no state to lose (no databases etc) and
2. The down time doesn’t hurt.

For more complex scenarios, where you have real productive services, state, data etc., this approach is not possible. In such cases you need to keep your plan and state files somewhere safe. This, and the best practice to do so, is explained here by the Terraform team and here by Microsoft.
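For reference, keeping the terraform state in an Azure storage account boils down to a backend block like the following sketch. All names here are hypothetical, and the resource group, storage account and container must exist before terraform init is run:

```terraform
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "graubfinancemock.tfstate"
  }
}
```

With a remote backend configured, terraform compares every plan against the stored state instead of starting from scratch, so it only changes what actually differs.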

Having created the deployment process, let’s add the missing sparkle and create the automated deployment pipeline.

Design by contract Tutorial, part 3/6: [Docker] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Let’s put the “Build once, Run anywhere” promise to the test: build the container for the mock service

Though there are many, many, many ways to run a service you’ve built in different environments, most of them require extensive reconfiguration, are problem prone and break easily. Just ask any developer who has built apps for IIS.

Docker is very popular because it solves this problem neatly. It builds a container -a box- within which your application lives. This stays the same everywhere, be it the dev PC, your local staging and production environment or the cloud. You still need, of course, to know how to communicate with other services or how the world reaches you, but this is reduced to a few configuration files.

And, while usually I’m suspicious of overhyped products and technologies, Docker really is easy to use.

How easy? Well, that’s what we need to do for our mock service:

  1. Install Docker for your OS and create an account on Docker Hub
  2. Inside the wiremock folder, create an empty text file named Dockerfile (no extension)
  3. Open it with a text editor and paste the following (I’ll explain below):
FROM rodolpheche/wiremock
LABEL maintainer="Your Name <youremail@somewhere.com>"

ADD mappings/*.json /home/wiremock/mappings/
ADD __files/*.* /home/wiremock/__files/

CMD ["java", "-cp", "/var/wiremock/lib/*:/var/wiremock/extensions/*", "com.github.tomakehurst.wiremock.standalone.WireMockServerRunner", "--global-response-templating", "--verbose"]

4. Open a command window in the wiremock folder and enter the following command:

docker image build -t graubfinancemock:1.0 .

We’ve built our container image; now let’s run it:

docker container run --publish 80:8080 --detach --name graubfinancemock graubfinancemock:1.0

To test if it works, open a browser and navigate to http://localhost/servicehealth .

docker running

Ta da!

The last step is to publish it so that it’s available for others (like our cloud instance which we’ll create next) to use. In the command window enter the following commands (use the login details you created in Docker Hub, step 1):

docker login
docker image tag graubfinancemock:1.0 YOURUSERNAME/graubfinancemock:1.0
docker image tag graubfinancemock:1.0 YOURUSERNAME/graubfinancemock:latest
docker image push YOURUSERNAME/graubfinancemock:1.0
docker image push YOURUSERNAME/graubfinancemock:latest

That’s it. Seriously, we’re done. But let’s take a moment and explain what we did.

First of all, the dockerfile. It contains all the info for your container and, in our case, states the following:

  1. “FROM rodolpheche/wiremock”: don’t begin from an empty environment; instead, use the image named “wiremock” from account “rodolpheche”, who has already created and published a suitable docker configuration (thanks!)
  2. The two “ADD” lines tell it to add (duh) files into the filesystem of the container
  3. The “CMD” tells the container what to do when it starts. In our case, it runs the java package of wiremock, passing a few command line options, like --global-response-templating

Now the docker commands.

  1. The “docker image build” builds the image, i.e. creates the docker file system and stores the configuration. It gives it a name (graubfinancemock) and a version (1.0). A version is just a string; it could also be, say, 1.0-alpha, 2.1-RC2, 4.2.1 and so on.
  2. The “docker container run”, obviously, runs the image. The important thing here is the “–publish 80:8080”. By default, the wiremock server listens to port 8080. So here we instruct docker to map port 80 (visible from the world) to port 8080 (inside the docker container). That’s why we can use the url http://localhost/servicehealth and not http://localhost:8080/servicehealth .
  3. The last step is to publish the image. You need to log in, obviously, and then you have to tag the image. You can assign as many tags as you want, so you can e.g. publish to many repositories. The format is REPOSITORY/IMAGE:VERSION. In Docker Hub the repo name is your username, but it can be different in private repositories. After tagging, you push the tag, which uploads the image.

Note that apart from the normal version (graubfinancemock:1.0) we also tag the image as latest (graubfinancemock:latest). This way when using the image we won’t need to update the version every time we upload a new one; we’ll just say “get the latest”.

But be careful here: if you build a service -forget our mock for a minute, let’s say we’re building an actual service- and people are using your image as :latest, they might unwittingly jump to an incompatible version with breaking changes (say, from 1.x to 2.0). So it’s a much safer strategy to tag your images as :1.0-latest, :2.0-latest etc instead of just :latest. This way, consumers are fairly certain that they only get non-breaking changes.

Private repositories are out of scope for this post, but in an enterprise setting you’ll probably need them; you usually don’t want the whole world to be able to use your container images. Creating a private repo is not hard, but it’s not free anymore. The easiest ways to do that are in Docker Hub itself or in Azure, where it’s called a container registry. Using one, though, is exactly the same as using a public one. If it’s not on Docker Hub, you just have to prefix the tag with the repo address (see “Now the new feature!” in this post).

So now we have our docker configuration. Ready to ship?