Tag Archives: dotnet

Design by contract Tutorial, part 6/6: [Swagger] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Sound check time: did it really work?

We already saw that the health page works. But it’s time to check if our objective was met.

Remember, the goal is to let developers outside our network use the mock service to help them with their implementation. To see if this works as intended, we can use the swagger file we created and the online Swagger editor.

So open the CustomerTrust.yaml file with a text editor, copy all its contents, navigate your browser to https://editor.swagger.io, delete the default content and paste ours. You’ll get this:

Select the mock service from the drop down, click on one of the services, click “Try it out” and then “Execute”. After a few seconds you… will get an error, something like “NetworkError when attempting to fetch resource”.

Why? Well, it’s the browser preventing us from doing so. If you press F12 and watch the Console, you’ll see something like “Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at xxxxxx”. More info here, but in short, it’s a security measure. You can either disable it in your browser’s settings (WHICH IS A BAD IDEA) or use the curl utility, which you can download here for your operating system.

[EDIT] Or I could stop being lazy, go back to the wiremock config and set the CORS-related HTTP response headers properly, as explained here, here and, specifically for wiremock, here.
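For reference, a wiremock stub that answers the browser’s CORS preflight (OPTIONS) request might look like the following sketch; the URL pattern and header values are assumptions that would need to match your own mappings:

```json
{
  "request": {
    "method": "OPTIONS",
    "urlPattern": "/api/1.0/.*"
  },
  "response": {
    "status": 200,
    "headers": {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, Authorization"
    }
  }
}
```

The Access-Control-Allow-Origin header would also need to be present on the normal GET/POST responses for the browser to accept them.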

So after you install curl, you can get the command line from the Swagger Editor:

For GET, the command should be:

curl -X GET "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" -H "accept: application/json"

Whichever method you pick -GET or POST- we’ll add -v at the end (for verbose output). So run it and at the end you’ll get this:

< HTTP/1.1 401 Unauthorized
* Connection #0 to host graubfinancemock.azurewebsites.net left intact

Makes sense right? The mock service expects an authorization token which we haven’t provided. Let’s add this:

curl -X GET "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" -H "accept: application/json" -H "Authorization: Bearer 1234" -v

And now you’ll get the json:

{
  "name": "GlarusAdvertising AG",
  "taxid": "CHE-123.456.789",
  "trustlevel": "OK"
}

Likewise, let’s try the POST:

curl -X POST "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" -H  "accept: */*" -H  "Content-Type: application/json" -d "{\"reportid\":\"2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e\",\"reporttaxid\":\"CHE-123.456.789\",\"taxid\":\"CHE-123.456.789\",\"trustlevel\":\"OK\"}" -H "Authorization: Bearer 1234" -v

And you should see the id of the request in the json response:

{
    "reportid": "2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e",
    "status": "OK"
}

A small note on Windows: if you try this in Powershell, it seems the json escaping acts up. If you try it through cmd, it works just fine.
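One way to sidestep the escaping altogether, on any OS, is to put the JSON body in a file and have curl read it with @filename (the URL and token below are the ones from the POST example above):

```shell
# Write the payload once; inside a file, no shell-specific escaping is needed
cat > payload.json <<'EOF'
{"reportid":"2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e","reporttaxid":"CHE-123.456.789","taxid":"CHE-123.456.789","trustlevel":"OK"}
EOF

# Then let curl read the body from the file with -d @payload.json:
# curl -X POST "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" \
#   -H "accept: */*" -H "Content-Type: application/json" \
#   -H "Authorization: Bearer 1234" -d @payload.json
```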

That’s all folks

So now our kind-of-fictional-but-actually-quite-real developers can access the service and test their code against it. And whenever we make a change and push it, the service is updated automatically. Not bad, is it? 🙂

That concludes this guide and our introductory journey into the world of Devops (or, as a friend of mine more accurately calls the field, SRE, short for “Site Reliability Engineering”).

I hope you enjoyed it as much as I did writing it -I really did. I’m sure you’ll have many, many questions, which I can try to answer -but no promises 🙂 You can ask here in the comments (better) or on my twitter profile @JimAndrakakis.

Resources

I’ve put all the code in a github repository, here; the only change is that I moved the pipeline yaml into the devops folder and removed my name. You can also find the docker image on docker hub, here.

Have fun coding!

Design by contract Tutorial, part 5/6: [Azure Devops] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Put a little magic in your life: create the auto-deploy pipeline

We’re close to the end of our journey.

So far we’ve basically done everything we need. In this last step we’ll also make it happen automagically: we want to be able to make changes to our code (which, in our scenario, is the wiremock service configuration) and have them deployed on Azure without us having to do anything.

We’ll use Azure Devops -formerly called Visual Studio Team Services (VSTS) or, even earlier, Team Foundation Server (TFS) Online- for this. There are other services we could use as well, like Github or Bitbucket, and they’re equally good.

But whatever your service, in general this process is called CI/CD, short for Continuous Integration / Continuous Delivery. Simply put, CI means that your code is built and tested as soon as you push changes to source control. If the build or any test is not successful, the code changes are rolled back, guaranteeing (well, as far as your tests are concerned) that the code in the repo is correct. CD is the next step: taking the build and deploying it, usually to a test server first, then to a staging one and then to production.

So as a first step, create a free account in Azure Devops. You can use the same Microsoft account you used in Azure or a different one. Once you’ve logged in, create a new project. Let’s call it GraubFinanceMockService.

By default we get a Git repository with the same name as the project. Let’s clone it on our development PC (I’m using C:\src\test, but feel free to use whatever you like).

Make sure you have git installed (or download it from here), then open a command prompt and type (replace the URL with your details):

cd c:\src\test
git clone https://dev.azure.com/YOURUSERNAME/GraubFinanceMockService/_git/GraubFinanceMockService

You’ll be asked for credentials of course (you might want to cache them). After that you’ll get a folder named GraubFinanceMockService. Move in there the folders we created during our previous steps: openapi, wiremock and devops.

Additionally, to avoid committing unwanted files in the repository, create an empty text file on the top folder named .gitignore, open it with a text editor and paste the following:

*.jar
lock.json
**/.terraform/*
*.plan
*.tfstate
*.info
*.backup
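If you want to check that the patterns actually work, git check-ignore tells you whether a given path would be ignored. A quick throwaway demonstration, using just two of the patterns above:

```shell
# Build a scratch repo with a minimal .gitignore and some test files
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '*.jar\n*.tfstate\n' > .gitignore
touch wiremock.jar terraform.tfstate readme.md

# Prints the paths that the ignore rules match (readme.md is not listed)
git check-ignore wiremock.jar terraform.tfstate
```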

Now we’re ready to commit for the first time. Type the following in the command line:

cd c:\src\test\GraubFinanceMockService
git add .
git commit -m 'initial commit'
git push

And our code is there:

Now we’ll start setting up our build. “But wait”, you might reasonably ask, “we don’t really have any code to build, there’s no C# or Java or whatever project, so why do we need a build?”.

Well, we do need to build our docker image, and push it in Docker Hub. This way when we change anything in our wiremock config, we’ll get a new image to reflect that.

But before we continue, remember that we have some variables in our tfvars files that we need to replace? Now it’s time to do that. Under Pipelines go to Library, then (+) Variable Group. Name the variable group azureconnectioncredentials, then add four variables (click the lock to set them as secret!):

subscription_id
tenant_id
client_id
client_secret

Be sure to check that “Allow access from all pipelines” is enabled.

But how do you get these values? From Azure CLI. The process is described by Microsoft here, but in short, open a command prompt (remember that from the previous step, we are logged in with Azure CLI already) and write:

az account show
# note the id, that's the subscription id, and the tenant id
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SUBSCRIPTIONID"

You’ll get something like the following, which you need to keep secret (the recommended way is to use a password manager):

{
  "appId": "XXXXXXXXXX",
  "displayName": "azure-cli-YYYYYY",
  "name": "http://azure-cli-2019-YYYYYY",
  "password": "ZZZZZZZZZZZZZZZZ",
  "tenant": "TTTTTTTTTTTT"
}

So paste these values into the respective variables in Azure Devops. You got the subscription id and tenant id from the first command (az account show). From the second (az ad sp create-for-rbac), take the appId and put it in the client_id variable, then take the password and put it in the client_secret variable. At the end, click Save.
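If you’d rather not copy the values by hand, note that az can also print individual fields via its --query (JMESPath) option, e.g. az account show --query id -o tsv for the subscription id. The snippet below just demonstrates the field-to-variable mapping on a sample JSON like the one above (python3 is assumed to be available; the values are placeholders):

```shell
# Sample of the JSON printed by az ad sp create-for-rbac (placeholder values)
SP_JSON='{"appId":"XXXXXXXXXX","password":"ZZZZZZZZZZZZZZZZ","tenant":"TTTTTTTTTTTT"}'

# appId -> client_id, password -> client_secret, tenant -> tenant_id
client_id=$(printf '%s' "$SP_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["appId"])')
client_secret=$(printf '%s' "$SP_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["password"])')
tenant_id=$(printf '%s' "$SP_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["tenant"])')
echo "client_id=$client_id tenant_id=$tenant_id"
```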

You did set the variables to secret right? 🙂

We need one more variable group for the not-secret stuff. Create a new variable group, name it azurenames and add the following variables (here with sample values):

azurelocation = westeurope
basename = graubfinancemock
dockerimage = YOURUSERNAME/graubfinancemock
dockerimageversion = latest
envtype = test
SKUsize = B1
SKUtier = Basic

Also here we need “Allow access from all pipelines” to be enabled.

Now we’re ready to create a new pipeline. In Azure Devops go to Pipelines > Builds > New Pipeline. You can click “Use the classic editor” if you’re not comfortable with YAML, but here I’ll use Azure Repos Git (YAML), as I can copy-paste the result here. Select your code repository and then, to see how it works step by step, Starter Pipeline.

Our new build will get the sources in a directory on the build server, but nothing more than that. Let’s start telling the build server what to do.

First, we need to tell it to use our variable groups. Delete whatever default code is there and paste the following:

name: WHATEVERWORKSFORYOU

trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
variables:
- group: azureconnectioncredentials
- group: azurenames

We don’t really need distinct stages, we’ll just set up two jobs, build and deploy.

Now let’s get it to create the docker image.

jobs:
  - job: Build
    displayName: Build docker image
    steps:     

Now click on Show Assistant on the right, search for Docker and pick Docker (description: Build or push Docker images etc etc). Connect your container registry as follows:

In Container repository you enter the full name of the docker hub repository (YOURUSERNAME/graubfinancemock in our example) but even better, we can use our variable (same for the desired version). So enter $(dockerimage), then change the tags to:

$(dockerimageversion)
1.0.$(Build.BuildId)

Leave everything else to default values, click Add. Under steps you should have the following:

    - task: Docker@2
      enabled: true
      inputs:
        containerRegistry: 'dockerhub-graubfinancemock'
        repository: '$(dockerimage)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(dockerimageversion)
          1.0.$(Build.BuildId)
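To make the two tags concrete: for a build whose Build.BuildId is, say, 123 (and with dockerimageversion = latest from our variable group), the image would be pushed as:

```
YOURUSERNAME/graubfinancemock:latest
YOURUSERNAME/graubfinancemock:1.0.123
```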

Now click Save and Run. Et voila:

Having built our service, let’s deploy it. Paste the following at the end of the YAML file:

  - job: Deploy
    displayName: Deploy to Azure
    steps:

Now we need to run our cleanup script, then replace the variables in the tfvars files, then run terraform. Search for the Azure CLI task, then configure the Azure subscription. Script type is Powershell Core, script location is Script Path, script path is $(Build.SourcesDirectory)/devops/cleanup.ps1 and script arguments is “-rgName '$(baseName)'” (without the double quotes, but note the single quotes). But remember, this is not on the root of our code repository. Click on Advanced and in working directory enter “$(Build.SourcesDirectory)/devops” (without the double quotes). You should end up with the following:

      - task: AzureCLI@2
        inputs:
          azureSubscription: 'Free Trial (XXXXXXXX)'
          scriptType: 'pscore'
          scriptLocation: 'scriptPath'
          scriptPath: '$(Build.SourcesDirectory)/devops/cleanup.ps1'
          arguments: '-rgName ''$(baseName)'''
          workingDirectory: '$(Build.SourcesDirectory)/devops/'

Time to replace the variable values. Add another task named Replace Tokens. Change the target files to **/*.tfvars and uncheck the BOM option (it creates problems sometimes). Done.

      - task: replacetokens@3
        inputs:
          targetFiles: '**/*.tfvars'
          encoding: 'auto'
          writeBOM: false
          actionOnMissing: 'warn'
          keepToken: false
          tokenPrefix: '#{'
          tokenSuffix: '}#'
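To make the token format concrete, here’s what the task does to a line of the pipeline tfvars file (the value comes from the basename variable in the azurenames group):

```
# before (in service-varvalues-pipeline.tfvars)
basename = "#{basename}#"

# after the Replace Tokens task has run
basename = "graubfinancemock"
```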

Next up, terraform. We have the batch file ready, but we need the terraform executable to be available. So add a task named Terraform tool installer. Change the version to the latest (find it here; at the time of writing it’s 0.12.15).

      - task: TerraformInstaller@0
        inputs:
          terraformVersion: '0.12.15'

Everything’s ready to run our batch script. As we need Azure CLI to be available for terraform to work the way we want, add another Azure CLI task. Pick the Azure subscription from the drop down (you don’t have to configure it again). Script type is Powershell Core, script location is Script Path, script path is $(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1 (it’s the one that uses the replaced .tfvars file). Click on Advanced and in working directory enter “$(Build.SourcesDirectory)/devops” (without the double quotes). At the end it should look like this:

      - task: AzureCLI@2
        inputs:
          azureSubscription: 'Free Trial (XXXXXXXX)'
          scriptType: 'pscore'
          scriptLocation: 'scriptPath'
          scriptPath: '$(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1'
          workingDirectory: '$(Build.SourcesDirectory)/devops'

We’re ready. The build definition, now complete, should look like this:

name: GraubFinanceMockServiceAutoDeploy

trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
variables:
- group: azureconnectioncredentials
- group: azurenames

jobs:
  - job: Build
    displayName: Build docker image
    steps:     
    - task: Docker@2
      enabled: true
      inputs:
        containerRegistry: 'dockerhub-graubfinancemock'
        repository: '$(dockerimage)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(dockerimageversion)
          1.0.$(Build.BuildId)
  - job: Deploy
    displayName: Deploy to Azure
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'Free Trial (XXXXXXX)'
        scriptType: 'pscore'
        scriptLocation: 'scriptPath'
        scriptPath: '$(Build.SourcesDirectory)/devops/cleanup.ps1'
        arguments: '-rgName ''$(baseName)'''
        workingDirectory: '$(Build.SourcesDirectory)/devops/'
    - task: replacetokens@3
      inputs:
        targetFiles: '**/*.tfvars'
        encoding: 'auto'
        writeBOM: false
        actionOnMissing: 'warn'
        keepToken: false
        tokenPrefix: '#{'
        tokenSuffix: '}#'
    - task: TerraformInstaller@0
      inputs:
        terraformVersion: '0.12.15'
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'Free Trial (XXXXXXX)'
        scriptType: 'pscore'
        scriptLocation: 'scriptPath'
        scriptPath: '$(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1'
        workingDirectory: '$(Build.SourcesDirectory)/devops'

Did it work? Navigate your browser to https://graubfinancemock.azurewebsites.net/servicehealth and:

Ta da!

We’re basically done. Let’s see how helpful our service is to our developers.

Design by contract Tutorial, part 4/6: [Terraform] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Infrastructure as Code: time to ship these containers

So we built our mock service and we dockerized it. Next up, run the container on the cloud.

Remember that in our scenario -and in my everyday work life- the mock service has to be accessible from people outside our local network. Of course, one way to do this would be to run it in-house and open a hole in your firewall.

…if you didn’t scream “BAD IDEA!” when you read the last sentence, now it would be the right time to do so 🙂

So, cloud to the rescue. We’ll use Azure here; we’ll create a subscription and then deploy with a terraform infrastructure-as-code (IaC) configuration. So our steps will be:

  1. Create the Azure subscription (manual step)
  2. Create the terraform config that creates a resource group, an app service plan and a web app for containers.
  3. Deploy to azure
  4. Test that it works by calling the /servicehealth path of the mock service.

If you’re deploying an actual application (say, a REST API that connects to a database) on the cloud, you’ll probably need more. For example, you might need a firewall, a virtual LAN so that different servers talk to each other but are isolated from the world, an API gateway, a cloud SQL database and maybe more. But for our mock service, which has no data that needs protection, we can keep it really simple.

  1. Open the azure portal and either create a new subscription or log in if you have one already. For new subscriptions, Microsoft gives $200 of usage for free so you can experiment a bit. Running this tutorial has taken me less than $1 out of this amount, so no money actually left my pocket 🙂

After you’ve created the subscription, you need to install the Azure Command-Line Interface (CLI), a cross-platform command-line tool. If you’re running on Linux -as I am at home- you also need Powershell Core (get it here) for the scripts we’ll write later. After installing, open a powershell prompt (you can also do it from ye olde command prompt) and run:

az login

Follow the instructions and you’re done.

2. Create a devops folder and an empty text file inside. Name it service.tf and paste the following:

# Configure the Azure provider
provider "azurerm" {
  # for production deployments it's wise to fix the provider version
  #version = "~>1.32.0"

  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id   
}

# Create a new resource group
resource "azurerm_resource_group" "rg" {
    name     = var.basename
    location = var.azurelocation
	
    tags = {
        environment = var.envtype
    }
}

# Create an App Service Plan with Linux
resource "azurerm_app_service_plan" "appserviceplan" {
  name                = "${azurerm_resource_group.rg.name}-APPPLAN"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  # Define Linux as Host OS
  kind = "Linux"
  reserved = true # Mandatory for Linux plans

  # Choose size
  # https://azure.microsoft.com/en-us/pricing/details/app-service/linux/
  sku {
    tier = var.SKUtier
    size = var.SKUsize
  }
}

# Create an Azure Web App for Containers in that App Service Plan
resource "azurerm_app_service" "appsvc" {
  name                = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.appserviceplan.id

  # Do not attach Storage by default
  app_settings = {
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false

    /*
    # Settings for private Container Registries
    DOCKER_REGISTRY_SERVER_URL      = ""
    DOCKER_REGISTRY_SERVER_USERNAME = ""
    DOCKER_REGISTRY_SERVER_PASSWORD = ""
    */
  }

  # Configure Docker Image to load on start
  site_config {
    linux_fx_version = "DOCKER|${var.DockerImage}"
    #always_on        = "false"
    #ftps_state       = "FtpsOnly"
  }

  logs {
    http_logs {
      file_system {
        retention_in_days = var.logdays
        retention_in_mb   = var.logsizemb
      }
    }
  }

  identity {
    type = "SystemAssigned"
  }
}

output "DockerUrl" {
    value = azurerm_app_service.appsvc.default_site_hostname
}

Inside this configuration you may have noticed that we used a few variables, like var.basename. In terraform, we define variables and their values in separate files so that we can use the same base configuration with different details. A common scenario is the same configuration for testing, staging and production environments but with different names (think graubfinance-test for testing, graubfinance-staging for preprod and graubfinance for prod), different service levels etc.

Following best practice, these variables should be declared in their own file. Create another empty file called service-vars.tf and paste the following:

variable "basename" {
  type    = string
}

variable "azurelocation" {
  type    = string
}

variable "subscription_id" {
  type    = string
}

variable "client_id" {
  type    = string
}

variable "client_secret" {
  type    = string
}

variable "tenant_id" {
  type    = string
}

variable "envtype" {
    type    = string
}

variable "SKUsize" {
    type    = string
}

variable "SKUtier" {
    type    = string
}

variable "DockerImage" {
    type    = string
}

variable "logdays" {
    type    = number
}

variable "logsizemb" {
    type    = number
}

Now we need one or more “variable values” (.tfvars) files to define the values for our intended environment. Create yet another file, name it service-varvalues-dev.tfvars and paste the following:

basename = "graubfinancemock"

# when logging in as a user via Azure CLI, these values must be null
subscription_id = null
client_id       = null
client_secret   = null
tenant_id       = null

envtype = "test"

# this can change depending on your preferences
# you can get location codes using
# az account list-locations
# e.g. try "eastus" or "centralindia"
azurelocation = "westeurope"

# Using the free tier generates an error.
# Seems that Microsoft does not want people to
# use their resources *completely* free?
# Who knew! 🙂
#SKUtier = "Free"
#SKUsize = "F1"

# This is still very cheap though
SKUtier = "Basic"
SKUsize = "B1"

DockerImage = "dandraka/graubfinancemock:latest"

logdays = 30
logsizemb = 30

We’ll use this when testing locally, but for later (when we deploy via Azure Devops) we’ll need the same file, just with placeholders for the deployment process to replace. So copy-paste this file as service-varvalues-pipeline.tfvars and change it to look like this:

basename = "#{basename}#"

# when logging in as a service, these must NOT be null
subscription_id = "#{subscription_id}#"
client_id       = "#{client_id}#"
client_secret   = "#{client_secret}#"
tenant_id       = "#{tenant_id}#" 

envtype = "#{envtype}#"
azurelocation = "#{azurelocation}#"

SKUtier = "#{SKUtier}#"
SKUsize = "#{SKUsize}#"

DockerImage = "#{dockerimage}#"

logdays = 30
logsizemb = 30

Obviously the parts between #{…}# are placeholders. We’ll talk about these when we create the pipeline.

3. Now we’ll use terraform to deploy this configuration. Install terraform (instructions here, but basically it’s a single executable that you put in your path), then create a text file in your devops dir, name it terraformdeploy-dev.ps1 and paste the following:

terraform init
# here you need to see stuff happening and then
# "Terraform has been successfully initialized!"
terraform plan -out="out.plan" -var-file="service-varvalues-dev.tfvars"
# if everything went well, apply
terraform apply "out.plan"

Run it. If everything went well, you should get the following (or similar) output at the end:

Outputs:

DockerUrl = graubfinancemock.azurewebsites.net

In order to prepare for the automated deployment, copy-paste this small script, name it terraformdeploy-pipeline.ps1 and just change the tfvars file name. The new file will look like this (I’ve stripped the comments here):

terraform init
terraform plan -out="out.plan" -var-file="service-varvalues-pipeline.tfvars"
terraform apply "out.plan"

4. Let’s see if it works

Navigate your browser to https://graubfinancemock.azurewebsites.net/servicehealth (or similar if you made any changes). That’s what you should see:

Hurray! 🙂

Notice also how we got https for free -we didn’t install any certificate or configured anything. Azure took care of it.

Out of curiosity, let’s head over to portal.azure.com to see what happened. Once there, click on “resource groups” and then “graubfinancemock” (or whatever you named it). You’ll see something like this:

Did it cost much? Click “Cost analysis” on the left, for scope select your subscription (by default named “Free trial”) and you’ll see what you paid for our experiment:

It didn’t break the bank, did it? 🙂

To be fair, we didn’t really do much. Most of the CPU usage we were charged for went into getting the system -our linux container running wiremock- up and running. Just out of curiosity, how much does it cost if we use it a little more?

You can try the following experiment: have it answer 1000 (or whatever) requests and see what it costs. Try this powershell script:

cd $env:TEMP
mkdir testrequests
cd testrequests
for ($i=1;$i -le 1000;$i++) { Invoke-WebRequest -Uri "http://graubfinancemock.azurewebsites.net/servicehealth" -OutFile "out-$i.txt"; $i }

After it finishes, click refresh and let’s see the cost analysis again:

No joke: after 1000 requests, it didn’t change a cent. You can see why companies love the cloud! Though again, we didn’t use our CPU heavily -and that’s what Azure charges mostly for.

We’re close to finishing. What remains is to automate the process via Azure Devops (a.k.a. VSTS, a.k.a. TFS Online). Just one last thing before that: since we’ll be doing the terraform deploy automatically, let’s delete everything we’ve done. Create a file named cleanup.ps1 inside our devops dir and paste the following:

param ([string]$rgName)

[bool]$rgExists = ((az group exists -n $rgName) -eq 'true')

if ($rgExists) 
{ 
        az group delete -n $rgName -y  
} 
else 
{ 
        Write-Host "Resource group $rgName does not exist, nothing to do"
}

Now in the command prompt, run:

 ./cleanup.ps1 -rgName graubfinancemock

A couple of minutes later, everything’s gone.

[EDIT] Just to be clear, this means that every time we deploy, we first delete everything and then we redo it from scratch.

This is fine for our scenario, the mock service, and in general it’s ok when both of these conditions are true:

1. Our Azure components have no state to lose (no databases etc) and
2. The down time doesn’t hurt.

For more complex scenarios, where you have real productive services, state, data etc, this approach is not possible. In such cases you need to keep your plan and state files somewhere safe. This, and the best practice for doing so, is explained here by the Terraform team and here by Microsoft.
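For the record, pointing terraform at remote state boils down to adding a backend block to the configuration. A sketch for Azure storage follows; all the names here are made-up placeholders, and the storage account has to exist beforehand:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"       # hypothetical
    storage_account_name = "tfstatestorage"   # hypothetical, must be globally unique
    container_name       = "tfstate"
    key                  = "graubfinancemock.tfstate"
  }
}
```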

Having created the auto deployment process, let’s add the missing sparkle and create the auto deployment pipeline.

Design by contract Tutorial, part 3/6: [Docker] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Let’s put the “Build once, Run anywhere” promise to the test: build the container for the mock service

Though there are many, many, many ways to run a service you’ve built in different environments, most of them require extensive reconfiguration, are problem-prone and break easily. Just ask any developer who has built apps for IIS.

Docker is very popular because it solves this problem neatly. It builds a container -a box- within which your application lives. This stays the same everywhere, be it the dev PC, your local staging and production environment or the cloud. You still need, of course, to know how to communicate to other services or how the world reaches you, but this is reduced to a few configuration files.

And, while I’m usually suspicious of overhyped products and technologies, Docker really is easy to use.

How easy? Well, that’s what we need to do for our mock service:

  1. Install docker for your OS and create an account on Docker Hub
  2. Inside the wiremock folder, create an empty text file named Dockerfile (no extension)
  3. Open it with a text editor and paste the following (I’ll explain below):
FROM rodolpheche/wiremock
LABEL maintainer="Your Name <youremail@somewhere.com>"

ADD mappings/*.json /home/wiremock/mappings/
ADD __files/*.* /home/wiremock/__files/

CMD ["java", "-cp", "/var/wiremock/lib/*:/var/wiremock/extensions/*", "com.github.tomakehurst.wiremock.standalone.WireMockServerRunner", "--global-response-templating", "--verbose"]

4. Open a command window in the wiremock folder and enter the following command:

docker image build -t graubfinancemock:1.0 .

We’ve built our container image, now let’s run it:

docker container run --publish 80:8080 --detach --name graubfinancemock graubfinancemock:1.0

To test if it works, open a browser and navigate to http://localhost/servicehealth .

docker running

Ta da!

The last step is to publish it so that it’s available for others (like our cloud instance which we’ll create next) to use. In the command window enter the following commands (use the login details you created on Docker Hub, step 1):

docker login
docker image tag graubfinancemock:1.0 YOURUSERNAME/graubfinancemock:1.0
docker image tag graubfinancemock:1.0 YOURUSERNAME/graubfinancemock:latest
docker image push YOURUSERNAME/graubfinancemock:1.0
docker image push YOURUSERNAME/graubfinancemock:latest

That’s it. Seriously, we’re done. But let’s take a moment and explain what we did.

First of all, the dockerfile. It contains all the info for your container and, in our case, states the following:

  1. “FROM rodolpheche/wiremock”: don’t begin from an empty environment; instead, use the image named “wiremock” from account “rodolpheche”, who has already created and published a suitable docker configuration (thanks!)
  2. The two “ADD” lines tell it to add (duh) files into the filesystem of the container
  3. The “CMD” tells the container what to do when it starts. In our case, it runs the java package of wiremock, passing a few command line options, like –global-response-templating

Now the docker commands.

  1. The “docker image build” builds the image, i.e. creates the docker file system and stores the configuration. It gives it a name (graubfinancemock) and a version (1.0). A version is just a string; it could also be, say, 1.0-alpha, 2.1-RC2, 4.2.1 and so on.
  2. The “docker container run”, obviously, runs the image. The important thing here is the “–publish 80:8080”. By default, the wiremock server listens to port 8080. So here we instruct docker to map port 80 (visible from the world) to port 8080 (inside the docker container). That’s why we can use the url http://localhost/servicehealth and not http://localhost:8080/servicehealth .
  3. The last thing is to publish the image. You need to login, obviously, and then you have to tag the image. You can assign as many tags as you want, so you can e.g. publish to many repositories. The format is REPOSITORY/IMAGE:VERSION. In docker hub the repo name is your username, but it can be different in private repositories. After tagging, you push the tag, which uploads the image.

Note that apart from the normal version (graubfinancemock:1.0) we also tag the image as latest (graubfinancemock:latest). This way when using the image we won’t need to update the version every time we upload a new one; we’ll just say “get the latest”.

But be careful here: if you build a service -forget our mock for a minute, let’s say we’re building an actual service- and people are using your image as :latest, they might unwittingly jump to an incompatible version with breaking changes (say, from 1.x to 2.0). So it’s a much safer strategy to tag your images as :1.0-latest, :2.0-latest etc instead of just :latest. This way, consumers are fairly certain that they only get non-breaking changes.

Private repositories are out of scope for this post, but in an enterprise setting you’ll probably need them; you usually don’t want the whole world to be able to use your container images. Creating a private repo is not hard, but it’s not free anymore. The easiest ways to do that are on Docker Hub itself or in Azure, where it’s called a container registry. Using one, though, is exactly the same as using a public one. If it’s not on docker hub, you just have to prefix the tag with the repo address (see “Now the new feature!” in this post).

So now we have our docker configuration. Ready to ship?

Design by contract Tutorial, part 1/6: [OpenAPI] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Think before you speak write code: Define the specs and create the Swagger file

The first question we need to answer before we start development is always one and the same:

What problem am I trying to solve?

For our scenario, let’s say we are GraubLaser, a factory in Graubünden, Switzerland that makes expensive laser engraving machines. Why? 1) because lasers are cool and 2) because at some point I was interviewed by this company -GraubLaser is not its real name- and while I decided not to join them, as it’s around 80km from where I live, the people were super nice and the machines really cool.

Let’s imagine that we are extending our invoicing system with the ability to:

  1. Ask our financial data provider, GraubFinance, if a customer is creditworthy, so that we allow them to order an expensive item with less risk of not getting paid.
  2. Tell GraubFinance if a customer is creditworthy or not, following a transaction we had with them.

In REST terms one way (there are many!) to implement this would be for GraubFinance to create a service on their system for us to call. The spec sheet could look like this:

Service URLs:
http://api-test.graubfinance.ch/api/1.0/CustomerTrust
http://api.graubfinance.ch/api/1.0/CustomerTrust

GET : Reads info about a certain customer

Parameters (in path):
- taxid : the customer's tax id, string, e.g. CHE-123.456.789

Returns (json in message body):
- A CustomerTrustInfo object with the following properties:
  - name : the company name, string, e.g. GlarusAdvertising AG
  - taxid : the customer's tax id, string, e.g. CHE-123.456.789 
  - trustlevel : the creditworthiness level, enum as string, e.g. OK
    - valid values : 
      - OK (no problems reported) 
      - WARN (minor problems reported, e.g. delays in payments but not arrears)
      - BAD (arrears reported)

POST: Sends info about a certain customer following a transaction we had with them

Parameters (in path):
- taxid : the customer's tax id, string, e.g. CHE-123.456.789
Content (json in message body):
- A CustomerTrustReport object
  - reportid : a unique id from our reporting system, string, e.g. 2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e
  - reporttaxid : our own tax id, string, e.g. CHE-123.456.789 (indicating which of our subsidiaries had the transaction)
  - taxid : the customer's tax id, string, e.g. CHE-123.456.789 
  - trustlevel : the creditworthiness level, enum as string, e.g. OK
    - valid values : 
      - OK (no problems encountered) 
      - WARN (minor problems encountered, e.g. delays in payments but not arrears)
      - BAD (arrears encountered)
Returns (json in message body):
- A ReportStatus object
  - reportid : the id from the CustomerTrustReport
  - status : indicates if the report could be processed and saved, enum as string, e.g. OK
    - valid values : 
      - OK (no problems encountered) 
      - ERROR (an exception occurred)
  - details : (only in case of error) gives problem details, string, e.g. Error: InsufficientDiskSpace exception

Authentication: 
Standard OAuth 2.0 bearer JWT token required which must contain the scope "CustomerTrust". If the token is missing or invalid, the service must return HTTP 401. If the token is valid but does not contain the correct scope, the service must return HTTP 403.

So basically the GET operation would give us info about a certain customer, identified by a tax id. The POST would give the provider info about our experience with them. As authentication is a big subject by itself, we won’t talk a lot about it here.
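The token handling described in the spec boils down to a simple decision: no valid token means HTTP 401, a valid token without the "CustomerTrust" scope means HTTP 403. A minimal Python sketch of that rule (a hypothetical helper for illustration, not part of any actual service):

```python
def authorize(token_scopes):
    """Decide the HTTP status for a request, given the scopes found in its
    JWT token. None means the token is missing or failed validation."""
    if token_scopes is None:
        return 401  # no token, or token invalid
    if "CustomerTrust" not in token_scopes:
        return 403  # authenticated, but lacking the required scope
    return 200  # authorized

# The three cases from the spec:
print(authorize(None))                # missing/invalid token
print(authorize(["SomeOtherScope"]))  # wrong scope
print(authorize(["CustomerTrust"]))   # all good
```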

So the Swagger file could look like this (notice that for convenience we added the mock and the local dev machine URLs):

openapi: "3.0.2"
info:
  title: GraubFinance Customer Trust service
  version: "1.0"
servers:
  - url: http://localhost:8888/api/1.0/
    description: local dev machine
  - url: https://graubfinancemock.azurewebsites.net/api/1.0/
    description: mocking service
  - url: http://api-test.graubfinance.ch/api/1.0/    
    description: staging
  - url: http://api.graubfinance.ch/api/1.0/  
    description: production
paths:
  /CustomerTrust/{taxid}:
    get:
      operationId: GetCustomerTrustInfo
      summary: Reads info about a certain customer
      parameters:
        - name: taxid
          in: path
          description: Customer's tax id
          required: true
          schema:
            type: string
          example: "CHE-123.456.789"
      responses:
        '200':
          description: CustomerTrustInfo
          content:
            application/json:    
              schema:
                type: object
                properties:
                  name:
                    type: string
                    example: "GlarusAdvertising AG"
                  taxid:
                    type: string
                    example: "CHE-123.456.789"
                  trustlevel:
                    type: string                
                    enum: [OK, WARN, BAD]
                    example: "OK"
        '401':
          description: Unauthorized, JWT token not present or invalid
        '403':
          description: JWT token valid but does not contain necessary scope 
        '404':
          description: Customer tax id not found
    post:
      operationId: PostCustomerTrustReport
      summary: Sends info about a certain customer following a financial transaction 
      parameters:
        - name: taxid
          in: path
          description: Customer's tax id
          required: true
          schema:
            type: string
          example: "CHE-123.456.789"   
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                reportid:
                  type: string
                  example: "2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e"
                reporttaxid:
                  type: string
                  example: "CHE-123.456.789"
                taxid:
                  type: string
                  example: "CHE-123.456.789"
                trustlevel:
                  type: string                
                  enum: [OK, WARN, BAD]
                  example: "OK"
      responses:
        '200':
          description: Success
        '401':
          description: Unauthorized, JWT token not present or invalid
        '403':
          description: JWT token valid but does not contain necessary scope 
        '404':
          description: Customer tax id not found                
components:
  securitySchemes:
    OAuth2:          
      type: http
      scheme: bearer
      bearerFormat: JWT            
security: 
  - OAuth2: [CustomerTrust]

You could certainly improve the API spec -I would- but I don’t want to focus on this.

We should keep this somewhere. Create a directory for our project, let’s call it GraubFinanceMockService. Inside this create another one called, say, openapi, and save the swagger file inside as CustomerTrust.yaml. We’ll use the folder structure in the next steps and ultimately add it to source control.

The real purpose of this tutorial begins now. So let’s create the fake service!

Design by contract Tutorial, part 2/6: [Wiremock] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

When the blue pill is good enough: create the mock service

Wiremock is an HTTP server that is designed to be easily configurable so that it acts as a mock server. I love it. In fact I started writing my own (Lefkogeia) and stopped when I found Wiremock since it was so obviously exactly what I needed.

It’s very easy to run standalone (they also offer a paid cloud service, well worth its money). Follow these steps:

  1. Install Java (the JRE is enough) if you don’t have it already
  2. Inside our base directory GraubFinanceMockService, create an empty folder, say wiremock.
  3. Download the jar from this page using the link “Once you have downloaded the standalone JAR“. Save it in the wiremock folder.
  4. With a text editor, create a file named start-wiremock.cmd (or whatever script extension your OS has) and paste the following (assuming java is somewhere in the path):
java -jar wiremock-standalone-2.25.1.jar --global-response-templating --port 8888

Check the jar file name to match whatever you downloaded. You can also change the port to whatever you like.

To test that everything works, run the script, then open a browser and navigate to http://localhost:8888 . In the command window you should see the Wiremock logo and a few details, and in the browser you should get “HTTP ERROR 403 Problem accessing /__files/. Reason: Forbidden”. If so, everything works correctly. If not, check the error message in the command window. To stop the service, just close the window.

Now let’s configure it.

In the wiremock folder, create two subfolders “mappings” and “__files” (if you ran it, they will have been created already). In the mappings folder you will put the service definitions and in the __files folder any necessary attachments like image, xml or json files.

So create a text file in mappings, name it “base.json” and paste the following:

{
    "request": {
        "method": "GET",
        "url": "/servicehealth"
    },
    "response": {
        "status": 200,
        "body": "Service is up and running!",
        "headers": {
            "Content-Type": "text/plain"
        }
    }
}

Now run the script and navigate your browser to http://localhost:8888/servicehealth . You should see the “Service is up and running!” message.

Now that we got the basics working, let’s create the mock service. We’ll create a new json file in the mappings folder -you can have as many as you want, they’re combined. Let’s call the new file CustomerTrust.json.

Remember we need to respond to GET and POST for the url http://HOSTNAME/api/1.0/CustomerTrust/ID right? A first try for the GET could be like this:

{
    "request": {
        "method": "GET",
		"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,})"
    },
    "response": {
        "status": 200,
        "bodyFileName": "CustomerTrustInfo.json"
    }
}

Notice that with urlPattern, the accepted path is now a regular expression -which gives you a lot of flexibility.
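You can check what the pattern accepts with any regex tool. A quick Python sketch (Python’s re behaves like the Java engine Wiremock uses for this pattern; note that urlPattern must match the whole path, hence fullmatch):

```python
import re

# The same pattern used in the mapping's urlPattern
pattern = re.compile(r"/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,})")

print(bool(pattern.fullmatch("/api/1.0/CustomerTrust/CHE-123.456.789")))  # True
print(bool(pattern.fullmatch("/api/1.0/CustomerTrust/")))                 # False: empty tax id
```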

In the __files folder, create a text file named CustomerTrustInfo.json with the following content (remember we just convert stuff from our swagger file):

{
  "name": "GlarusAdvertising AG",
  "taxid": "CHE-123.456.789",
  "trustlevel": "OK"
}

Run the script again and navigate your browser to http://localhost:8888/api/1.0/CustomerTrust/123 . You should get the contents of CustomerTrustInfo.json.

Now, obviously, the tax id we gave in the url (123) doesn’t match the taxid in the json (CHE-123.456.789). But Wiremock can take data from the request and use it in the response: it’s called response templating. Remember the --global-response-templating in the script? It enables exactly this behavior.

So to get the tax id from the request path, change CustomerTrustInfo.json as follows:

{
  "name": "GlarusAdvertising AG",
  "taxid": "{{request.requestLine.pathSegments.[3]}}",
  "trustlevel": "OK"
}

The double brackets {{ … }} tell wiremock that this part should be substituted. The expression request.requestLine.pathSegments.[3] tells it to get the 4th part of the url’s path (it counts from zero). The path part of the url is /api/1.0/CustomerTrust/(taxid), so the tax id is in the 4th place.
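You can reproduce what the template does with a few lines of Python (a sketch of the substitution, not Wiremock’s actual code):

```python
path = "/api/1.0/CustomerTrust/CHE-12345"

# Wiremock's request.requestLine.pathSegments splits the path on "/";
# the leading slash would give an empty first element, so drop it.
segments = path.lstrip("/").split("/")
# segments == ["api", "1.0", "CustomerTrust", "CHE-12345"]

# The same value {{request.requestLine.pathSegments.[3]}} yields
taxid = segments[3]
print(taxid)  # CHE-12345
```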

So start the script and navigate to http://localhost:8888/api/1.0/CustomerTrust/CHE-12345. You should get:

{
  "name": "GlarusAdvertising AG",
  "taxid": "CHE-12345",
  "trustlevel": "OK"
}

OK, but what about authentication? As OAuth 2.0 dictates, our request has to have a valid JWT token in a header named “Authorization” with content “Bearer (token)”. E.g.

Authorization: Bearer 0b79bab50daca910b000d4f1a2b675d604257e42

Now for our example let’s keep it relatively simple. We won’t validate anything except that the header is there. If not, we’ll return HTTP 401. Let’s add a 404 “not found” for good measure as well. So change the mapping file CustomerTrust.json (not the body file CustomerTrustInfo.json) as follows:

{
	"mappings": [	
		{
			"priority": 10,
			"request": {
				"method": "GET",
				"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,100})",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 200,
				"bodyFileName": "CustomerTrustInfo.json"
			}
		},
		{
			"priority": 90,
			"request": {
				"method": "ANY",
				"urlPattern": "/api(.*)",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 404,
				"body": "Server path {{request.path}} not found",
				"headers": {
					"Content-Type": "text/plain"
				}					
			}
		},			
		{
			"priority": 99,
			"request": {
				"method": "ANY",
				"urlPattern": "/api(.*)"
			},
			"response": {
				"status": 401,
				"body": "401 Unauthorized",
				"headers": {
					"Content-Type": "text/plain"
				}				
			}
		}
	]
}

Notice the priorities. If the request matches the first mapping, priority 10 (the correct path and the Authorization header present), the json is returned. If the request matches priority 90 (under /api, with the Authorization header), 404 is returned. If the request matches priority 99 (under /api but without the Authorization header), 401 is returned.
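The dispatch logic can be sketched like this: order the mappings by priority (lowest number wins) and return the first one whose request criteria all match. A simplified Python illustration (not Wiremock’s actual algorithm):

```python
import re

# (priority, method, url regex, requires Authorization header, response status)
mappings = [
    (10, "GET", r"/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,})", True, 200),
    (90, "ANY", r"/api(.*)", True, 404),
    (99, "ANY", r"/api(.*)", False, 401),
]

def dispatch(method, path, has_auth_header):
    """Return the response status of the first matching mapping."""
    for prio, m, pattern, needs_auth, status in sorted(mappings):
        if m not in ("ANY", method):
            continue  # wrong HTTP method
        if needs_auth and not has_auth_header:
            continue  # mapping requires the Authorization header
        if re.fullmatch(pattern, path):
            return status
    return None  # no mapping matched

print(dispatch("GET", "/api/1.0/CustomerTrust/CHE-12345", True))    # 200
print(dispatch("GET", "/api/1.0/Nonexistent", True))                # 404
print(dispatch("POST", "/api/1.0/CustomerTrust/CHE-12345", False))  # 401
```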

With this done, let’s also create the POST part. It could look like this (we’ll put it in the mappings section with the rest):

		{
			"priority": 20,
			"request": {
				"method": "POST",
				"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,})",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 200,
				"bodyFileName": "ReportStatus.json"
			}
		}

In the __files folder, create a text file named ReportStatus.json with the following content:

{
  "reportid": "{{jsonPath request.body '$.reportid'}}",
  "status": "OK"
}
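The {{jsonPath request.body '$.reportid'}} helper plucks a value out of the request body. You can mimic what it does in Python (a sketch; the real substitution happens inside Wiremock):

```python
import json

# A request body like the one we'll POST with curl
request_body = '{"reportid": "edbf8395-ac62-4c8c-95c3-d59a488dee7e", "trustlevel": "WARN"}'

# For a top-level attribute, the jsonPath expression '$.reportid'
# is just a key lookup on the parsed body
reportid = json.loads(request_body)["reportid"]

response = {"reportid": reportid, "status": "OK"}
print(json.dumps(response))
```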

Let’s test it. As this is a POST, we can’t test it with a browser. We’ll need curl for that, so install it if you don’t have it already. In the wiremock folder, create a file named CustomerTrustReport.json, which we’ll use to test the service:

{
    "reportid": "edbf8395-ac62-4c8c-95c3-d59a488dee7e",
    "reporttaxid": "CHE-123.456.789",
    "taxid": "CHE-555.456.789",
    "trustlevel": "WARN"
}

Then in a command window enter:

curl -X POST "http://localhost:8888/api/1.0/CustomerTrust/CHE-123.456.789" -H  "accept: */*" -H "Authorization: Bearer 1234" -d @CustomerTrustReport.json 

You should get the following:

{
    "reportid": "edbf8395-ac62-4c8c-95c3-d59a488dee7e",
    "status": "OK"
}

Of course, if you omit the Authorization header in curl (the -H “Authorization: Bearer 1234” part) you’ll get the 401 error.

So we have a working mock service! This is what the complete mapping file CustomerTrust.json looks like:

{
	"mappings": [	
		{
			"priority": 10,
			"request": {
				"method": "GET",
				"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,})",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 200,
				"bodyFileName": "CustomerTrustInfo.json"
			}
		},
		{
			"priority": 20,
			"request": {
				"method": "POST",
				"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,100})",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 200,
				"bodyFileName": "ReportStatus.json"
			}
		},		
		{
			"priority": 90,
			"request": {
				"method": "ANY",
				"urlPattern": "/api(.*)",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 404,
				"body": "Server path {{request.path}} not found",
				"headers": {
					"Content-Type": "text/plain"
				}					
			}
		},			
		{
			"priority": 99,
			"request": {
				"method": "ANY",
				"urlPattern": "/api(.*)"
			},
			"response": {
				"status": 401,
				"body": "401 Unauthorized",
				"headers": {
					"Content-Type": "text/plain"
				}				
			}
		}
	]
}

Of course we can -and in reality should- add a lot more. Especially useful would be to verify that the json provided by the POST has the correct attributes in place. One way to do that would be to check the json via further request matching on the POST, and then have a second POST section with lower priority (say, 50) that does not verify the json. This second POST should then return HTTP 400 (Bad Request) and a message like “Invalid JSON”.
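The validation itself amounts to checking that the body parses and contains the required attributes. A Python sketch of that logic (in Wiremock you’d express it with bodyPatterns request matching instead; the helper and the attribute set below mirror our CustomerTrustReport object):

```python
import json

REQUIRED = {"reportid", "reporttaxid", "taxid", "trustlevel"}

def validate_report(body):
    """Return (status, message): 400 if the body isn't valid JSON
    or required attributes are missing, 200 otherwise."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return 400, "Invalid JSON"
    if not isinstance(data, dict):
        return 400, "Invalid JSON"
    missing = REQUIRED - data.keys()
    if missing:
        return 400, "Invalid JSON: missing " + ", ".join(sorted(missing))
    return 200, "OK"

print(validate_report('{"reportid": "1", "reporttaxid": "x", "taxid": "y", "trustlevel": "OK"}'))
print(validate_report('{"reportid": "1"}'))
print(validate_report('not json at all'))
```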

But let’s stick with this for now (this post is arguably too long already) and start working on deploying the mock service. It’s docker time!

Design by contract Tutorial: Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Fake it till you make it: Introduction

As with many topics, this one came up due to a real need.

You see, at work we have in-house as well as outsourced development. The outsourcers’ locations literally span continents as well as time zones. And yes, the software each one develops must play nice with everyone else’s.

So in our situation, design-by-contract is imperative. It simply doesn’t work any other way. We need to let each one know what is expected of them and then give them relative freedom to develop using their tools and methods -as long as the input and output is defined.

Since we mostly use REST services, the way we usually do it is by giving them Swagger files. This does a decent job of explaining what they need to build, be it the provider (a REST API) or the consumer (the caller of the REST API). But still, there are many cases where there are gaps and they need to test with “the real thing”, or at least something that’s close to being real.

So what I usually do is build and deploy a mock -a fake service that looks a lot like the real thing. That enables all of us to get most of the work done; any differences that arise during the integration testing will be (usually!) minor and (usually!) easily fixable.

My work (and this guide, walk-through, tutorial, nameitwhatyouwant) has the following steps:

  1. Write the Swagger, or OpenAPI, file. This is versioned; whenever there are changes, which is normal in every project, I issue a new version and notify everyone involved. The swagger files are kept in a git repository (we use Azure Devops, a.k.a. VSTS) so the changes are traceable.
  2. Create a mock (fake) service using Wiremock.
  3. Create a docker image for the mock service using Rodolphe Chaigneau’s wiremock-docker image.
  4. Using Terraform, I build an Azure resource group and host the docker image in an app service.
  5. In Azure Devops, I build a deployment pipeline that deploys all changes, be it in the Docker container or the Azure configuration, whenever a change is pushed in the git repository.
  6. Then everyone involved can test the service using the swagger editor, curl or whatever tool they like -SoapUI, Postman, Postwoman, younameit.

I could, of course, not bother with most of it and just run Wiremock locally. But remember, it’s not just for me. It has to be useful for many people, most of them outside my company’s network. They will use it to test the client they’re developing or verify the service they’re building.

Note that all of the steps explained in this guide are cross-platform. I’ve tested them with both Windows 10 and Ubuntu 19.04. In both OSes I’ve used just the simplest tools -a text editor and git command line- but normally I use VS Code (and occasionally vi for old time’s sake 😊). Whatever little scripting there is, it’s done in Powershell Core, also cross-platform.

Let’s start, shall we?

Step 1 – [OpenAPI] Define the specs and create the Swagger file

Step 2 – [Wiremock] Create the mock service

Step 3 – [Docker] Build the container for the mock service

Step 4 – [Terraform] Ship these containers

Step 5 – [Azure Devops] Create the auto-deploy pipeline

Step 6 – [Swagger] Test via Swagger UI and curl