Category Archives: Tutorials and guides

Design by contract Tutorial, part 1/6: [OpenAPI] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Think before you speak write code: Define the specs and create the Swagger file

The first question we need to answer before we start development is always one and the same:

What problem am I trying to solve?

For our scenario, let's say we are GraubLaser, a factory in Graubünden, Switzerland that makes expensive laser engraving machines. Why? 1) because lasers are cool and also 2) because at some point I was interviewed by this company (GraubLaser is not its real name) and, while I decided not to join them as it's around 80 km from where I live, the people were super nice and the machines really cool.

Let’s imagine that we are extending our invoicing system with the ability to:

  1. Ask our financial data provider, GraubFinance, if a customer is creditworthy, so that we allow them to order an expensive item with less risk of not getting paid.
  2. Tell GraubFinance if a customer is creditworthy or not, following a transaction we had with them.

In REST terms one way (there are many!) to implement this would be for GraubFinance to create a service on their system for us to call. The spec sheet could look like this:

Service URLs:
http://api-test.graubfinance.ch/api/1.0/CustomerTrust
http://api.graubfinance.ch/api/1.0/CustomerTrust

GET : Reads info about a certain customer

Parameters (in path):
- taxid : the customer's tax id, string, e.g. CHE-123.456.789

Returns (json in message body):
- A CustomerTrustInfo object with the following properties:
  - name : the company name, string, e.g. GlarusAdvertising AG
  - taxid : the customer's tax id, string, e.g. CHE-123.456.789 
  - trustlevel : the creditworthiness level, enum as string, e.g. OK
    - valid values : 
      - OK (no problems reported) 
      - WARN (minor problems reported, e.g. delays in payments but not arrears)
      - BAD (arrears reported)

POST: Sends info about a certain customer following a transaction we had with them

Parameters (in path):
- taxid : the customer's tax id, string, e.g. CHE-123.456.789
Content (json in message body):
- A CustomerTrustReport object
  - reportid : a unique id from our reporting system, string, e.g. 2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e
  - reporttaxid : our own tax id, string, e.g. CHE-123.456.789 (indicating which of our subsidiaries had the transaction)
  - taxid : the customer's tax id, string, e.g. CHE-123.456.789 
  - trustlevel : the creditworthiness level, enum as string, e.g. OK
    - valid values : 
      - OK (no problems encountered) 
      - WARN (minor problems encountered, e.g. delays in payments but not arrears)
      - BAD (arrears encountered)
Returns (json in message body):
- A ReportStatus object
  - reportid : the id from the CustomerTrustReport
  - status : indicates if the report could be processed and saved, enum as string, e.g. OK
    - valid values : 
      - OK (no problems encountered) 
      - ERROR (an exception occurred)
  - details : (only in case of error) gives problem details, string, e.g. Error: InsufficientDiskSpace exception

Authentication: 
Standard OAuth 2.0 bearer JWT token required which must contain the scope "CustomerTrust". If the token is missing or invalid, the service must return HTTP 401. If the token is valid but does not contain the correct scope, the service must return HTTP 403.

So basically the GET operation would give us info about a certain customer, identified by a tax id. The POST would give the provider info about our experience with them. As authentication is a big subject by itself, we won’t talk a lot about it here.
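To make this a bit more concrete, here's roughly what a call to the test URL could look like according to the spec (the tax id and the response values are just the examples from above, and <token> is a placeholder):

GET http://api-test.graubfinance.ch/api/1.0/CustomerTrust/CHE-123.456.789
Authorization: Bearer <token>

HTTP/1.1 200 OK
Content-Type: application/json

{
  "name": "GlarusAdvertising AG",
  "taxid": "CHE-123.456.789",
  "trustlevel": "OK"
}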

So the Swagger file could look like this (notice that for convenience we added the mock and the local dev machine URLs):

openapi: "3.0.2"
info:
  title: GraubFinance Customer Trust service
  version: "1.0"
servers:
  - url: http://localhost:8888/api/1.0/
    description: local dev machine
  - url: https://graubfinancemock.azurewebsites.net/api/1.0/
    description: mocking service
  - url: http://api-test.graubfinance.ch/api/1.0/    
    description: staging
  - url: http://api.graubfinance.ch/api/1.0/  
    description: production
paths:
  /CustomerTrust/{taxid}:
    get:
      operationId: GetCustomerTrustInfo
      summary: Reads info about a certain customer
      parameters:
        - name: taxid
          in: path
          description: Customer's tax id
          required: true
          schema:
            type: string
          example: "CHE-123.456.789"
      responses:
        '200':
          description: CustomerTrustInfo
          content:
            application/json:    
              schema:
                type: object
                properties:
                  name:
                    type: string
                    example: "GlarusAdvertising AG"
                  taxid:
                    type: string
                    example: "CHE-123.456.789"
                  trustlevel:
                    type: string                
                    enum: [OK, WARN, BAD]
                    example: "OK"
        '401':
          description: Unauthorized, JWT token not present or invalid
        '403':
          description: JWT token valid but does not contain necessary scope 
        '404':
          description: Customer tax id not found
    post:
      operationId: PostCustomerTrustReport
      summary: Sends info about a certain customer following a financial transaction 
      parameters:
        - name: taxid
          in: path
          description: Customer's tax id
          required: true
          schema:
            type: string
          example: "CHE-123.456.789"   
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                reportid:
                  type: string
                  example: "2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e"
                reporttaxid:
                  type: string
                  example: "CHE-123.456.789"
                taxid:
                  type: string
                  example: "CHE-123.456.789"
                trustlevel:
                  type: string                
                  enum: [OK, WARN, BAD]
                  example: "OK"
      responses:
        '200':
          description: Success
        '401':
          description: Unauthorized, JWT token not present or invalid
        '403':
          description: JWT token valid but does not contain necessary scope 
        '404':
          description: Customer tax id not found                
components:
  securitySchemes:
    OAuth2:          
      type: http
      scheme: bearer
      bearerFormat: JWT            
security: 
  - OAuth2: [CustomerTrust]

You could certainly improve the API spec (I would) but I don't want to focus on this.

We should keep this somewhere. Create a directory for our project, let’s call it GraubFinanceMockService. Inside this create another one called, say, openapi, and save the swagger file inside as CustomerTrust.yaml. We’ll use the folder structure in the next steps and ultimately add it to source control.
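If you like scripting even this housekeeping step, a couple of PowerShell lines could do it (the names are just the ones we agreed on above):

# create the project structure; then paste the yaml above into the new file
New-Item -ItemType Directory -Path ".\GraubFinanceMockService\openapi" -Force | Out-Null
New-Item -ItemType File -Path ".\GraubFinanceMockService\openapi\CustomerTrust.yaml" -Force | Out-Null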

The real purpose of this tutorial begins now. So let’s create the fake service!

Design by contract Tutorial, part 2/6: [Wiremock] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

When the blue pill is good enough: create the mock service

Wiremock is an HTTP server that is designed to be easily configurable so that it acts as a mock server. I love it. In fact I started writing my own (Lefkogeia) and stopped when I found Wiremock since it was so obviously exactly what I needed.

It's very easy to run standalone (they also have a paid cloud service, well worth its money). Follow these steps:

  1. Install Java (the JRE is enough) if you don’t have it already
  2. Inside our base directory GraubFinanceMockService, create an empty folder, say wiremock.
  3. Download the jar from this page using the link “Once you have downloaded the standalone JAR“. Save it in the wiremock folder.
  4. With a text editor, create a file named start-wiremock.cmd (or whatever script extension your OS has) and paste the following (assuming java is somewhere in the path):
java -jar wiremock-standalone-2.25.1.jar --global-response-templating --port 8888

Check that the jar file name matches whatever you downloaded. You can also change the port to whatever you like.

To test that everything works, run the script and then open a browser and navigate to http://localhost:8888 . In the command window you should see the Wiremock logo and a few details, and in the browser you should get “HTTP ERROR 403 Problem accessing /__files/. Reason: Forbidden”. If so, it all works correctly. If not, well, check the error message in the command window. To stop the service just close the window.

Now let’s configure it.

In the wiremock folder, create two subfolders “mappings” and “__files” (if you ran it, they will have been created already). In the mappings folder you will put the service definitions and in the __files folder any necessary attachments like image, xml or json files.

So create a text file in mappings, name it “base.json” and paste the following:

{
    "request": {
        "method": "GET",
        "url": "/servicehealth"
    },
    "response": {
        "status": 200,
        "body": "Service is up and running!",
        "headers": {
            "Content-Type": "text/plain"
        }
    }
}

Now run the script and navigate your browser to http://localhost:8888/servicehealth . You should see the “Service is up and running!” message.
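By the way, if you prefer a terminal over a browser, the same check could be done with a PowerShell one-liner (assuming the port 8888 from our start script):

# should print: Service is up and running!
(Invoke-WebRequest -Uri "http://localhost:8888/servicehealth" -UseBasicParsing).Content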

Now that we've got the basics working, let's create the mock service. We'll create a new json file in the mappings folder (you can have as many as you want; they're all combined). Let's call the new file CustomerTrust.json.

Remember we need to respond to GET and POST for the url http://HOSTNAME/api/1.0/CustomerTrust/ID right? A first try for the GET could be like this:

{
    "request": {
        "method": "GET",
		"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,})"
    },
    "response": {
        "status": 200,
        "bodyFileName": "CustomerTrustInfo.json"
    }
}

Notice that with urlPattern, the accepted path is now a regular expression, which gives you a lot of flexibility.

In the __files folder, create a text file named CustomerTrustInfo.json with the following content (remember we just convert stuff from our swagger file):

{
  "name": "GlarusAdvertising AG",
  "taxid": "CHE-123.456.789",
  "trustlevel": "OK"
}

Run the script again and navigate your browser to http://localhost:8888/api/1.0/CustomerTrust/123 . You should get the contents of CustomerTrustInfo.json.

Now, obviously, the tax id we gave in the url (123) doesn't match the taxid in the json (CHE-123.456.789). But Wiremock can take data from the request and use it in the response: it's called response templating. Remember the --global-response-templating flag in the script? It enables exactly this behavior.

So to get the tax id from the request path, change CustomerTrustInfo.json as follows:

{
  "name": "GlarusAdvertising AG",
  "taxid": "{{request.requestLine.pathSegments.[3]}}",
  "trustlevel": "OK"
}

The double brackets {{ … }} tell Wiremock that this part should be substituted. The expression request.requestLine.pathSegments.[3] tells it to get the 4th part (it counts from zero) of the URL's path. The path part of the URL is /api/1.0/CustomerTrust/(taxid), so the tax id is in the 4th place.

So start the script and navigate to http://localhost:8888/api/1.0/CustomerTrust/CHE-12345. You should get:

{
  "name": "GlarusAdvertising AG",
  "taxid": "CHE-12345",
  "trustlevel": "OK"
}

Ok, but what about authentication? As OAuth 2.0 dictates, our request has to have a valid JWT token in a header named “Authorization” with content “Bearer (token)”. E.g.:

Authorization: Bearer 0b79bab50daca910b000d4f1a2b675d604257e42

Now for our example let's keep it relatively simple. We won't validate anything except that the header is there. If not, we'll return HTTP 401. Let's add a 404 “not found” for good measure as well. So change the mapping file CustomerTrust.json as follows:

{
	"mappings": [	
		{
			"priority": 10,
			"request": {
				"method": "GET",
				"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,100})",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 200,
				"bodyFileName": "CustomerTrustInfo.json"
			}
		},
		{
			"priority": 90,
			"request": {
				"method": "ANY",
				"urlPattern": "/api(.*)",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 404,
				"body": "Server path {{request.path}} not found",
				"headers": {
					"Content-Type": "text/plain"
				}					
			}
		},			
		{
			"priority": 99,
			"request": {
				"method": "ANY",
				"urlPattern": "/api(.*)"
			},
			"response": {
				"status": 401,
				"body": "401 Unauthorized",
				"headers": {
					"Content-Type": "text/plain"
				}				
			}
		}
	]
}

Notice the priorities. If the request matches the priority 10 mapping (the correct path plus the Authorization header), the json is returned. If it matches priority 90 (any path under /api, with the Authorization header), 404 is returned. And if it only matches priority 99 (under /api but without the Authorization header), 401 is returned.

This done, let's also create the POST part. It could look like this (we'll put it in the mappings section along with the rest):

		{
			"priority": 20,
			"request": {
				"method": "POST",
				"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,})",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 200,
				"bodyFileName": "ReportStatus.json"
			}
		}

In the __files folder, create a text file named ReportStatus.json with the following content:

{
  "reportid": "{{jsonPath request.body '$.reportid'}}",
  "status": "OK"
}

Let's test it. As this is a POST, we can't test it with a browser. We'll need curl for that, so install it if you don't have it already. In the wiremock folder, create a file named CustomerTrustReport.json, which we'll use to test the service:

{
    "reportid": "edbf8395-ac62-4c8c-95c3-d59a488dee7e",
    "reporttaxid": "CHE-123.456.789",
    "taxid": "CHE-555.456.789",
    "trustlevel": "WARN"
}

Then in a command window enter:

curl -X POST "http://localhost:8888/api/1.0/CustomerTrust/CHE-123.456.789" -H  "accept: */*" -H "Authorization: Bearer 1234" -d @CustomerTrustReport.json 

You should get the following:

{
    "reportid": "edbf8395-ac62-4c8c-95c3-d59a488dee7e",
    "status": "OK"
}

Of course, if you omit the Authorization header in curl (the -H “Authorization: Bearer 1234” part) you’ll get the 401 error.
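If you'd rather stay in PowerShell than use curl, a quick check of the GET part, with and without the header, could look roughly like this:

# with the Authorization header: returns the templated CustomerTrustInfo json
Invoke-RestMethod -Method Get `
    -Uri "http://localhost:8888/api/1.0/CustomerTrust/CHE-123.456.789" `
    -Headers @{ Authorization = "Bearer 1234" }

# without the header: the call fails and the response status is 401 Unauthorized
Invoke-RestMethod -Method Get `
    -Uri "http://localhost:8888/api/1.0/CustomerTrust/CHE-123.456.789"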

So we have a working mock service! This is what the complete mapping file CustomerTrust.json looks like:

{
	"mappings": [	
		{
			"priority": 10,
			"request": {
				"method": "GET",
				"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,})",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 200,
				"bodyFileName": "CustomerTrustInfo.json"
			}
		},
		{
			"priority": 20,
			"request": {
				"method": "POST",
				"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,100})",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 200,
				"bodyFileName": "ReportStatus.json"
			}
		},		
		{
			"priority": 90,
			"request": {
				"method": "ANY",
				"urlPattern": "/api(.*)",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				}
			},
			"response": {
				"status": 404,
				"body": "Server path {{request.path}} not found",
				"headers": {
					"Content-Type": "text/plain"
				}					
			}
		},			
		{
			"priority": 99,
			"request": {
				"method": "ANY",
				"urlPattern": "/api(.*)"
			},
			"response": {
				"status": 401,
				"body": "401 Unauthorized",
				"headers": {
					"Content-Type": "text/plain"
				}				
			}
		}
	]
}

Of course we can -and in reality should- add a lot more. Especially useful would be to verify that the json provided by the POST has the correct attributes in place. One way to do that would be to check the json via further request matching on the POST, and then have a second POST section with lower priority (say, 50) that does not verify the json. This second POST then should return HTTP 400 (Bad Request) and a message like “Invalid JSON”.
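As a sketch of what that stricter POST matching could look like, Wiremock's bodyPatterns with matchesJsonPath can require the fields to simply be present (the exact checks are up to you):

		{
			"priority": 20,
			"request": {
				"method": "POST",
				"urlPattern": "/api/1.0/CustomerTrust/([a-zA-Z0-9-.]{1,})",
				"headers": {
					"Authorization": {
						"contains": "Bearer"
					}
				},
				"bodyPatterns": [
					{ "matchesJsonPath": "$.reportid" },
					{ "matchesJsonPath": "$.reporttaxid" },
					{ "matchesJsonPath": "$.taxid" },
					{ "matchesJsonPath": "$.trustlevel" }
				]
			},
			"response": {
				"status": 200,
				"bodyFileName": "ReportStatus.json"
			}
		}

The priority 50 mapping would then look like the POST above but without the bodyPatterns, returning status 400 and a body like “Invalid JSON”.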

But let’s stick with this for now (this post is arguably too long already) and start working on deploying the mock service. It’s docker time!

Design by contract Tutorial: Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Fake it till you make it: Introduction

As with many topics, this one came up due to a real need.

You see, at work we have in-house as well as outsourced development. The outsourcers’ locations literally span continents as well as time zones. And yes, the software each one develops must play nice with everyone else’s.

So in our situation, design-by-contract is imperative. It simply doesn’t work any other way. We need to let each one know what is expected of them and then give them relative freedom to develop using their tools and methods -as long as the input and output is defined.

Since we mostly use REST services, the way we usually do it is by giving them Swagger files. This does a decent job of explaining what they need to build, be it the provider (a REST API) or the consumer (the caller of the REST API). But still, there are many cases where there are gaps and they need to test with “the real thing”, or at least something that's close to being real.

So what I usually do is build and deploy a mock: a fake service that looks a lot like the real thing. That enables all of us to get most of the work done; any differences that arise during integration testing will be (usually!) minor and (usually!) easily fixable.

My work (and this guide, walk-through, tutorial, nameitwhatyouwant) has the following steps:

  1. Write the Swagger, or OpenAPI, file. This is versioned; whenever there are changes, which is normal in every project, I issue a new version and notify everyone involved. The swagger files are kept in a git repository (we use Azure Devops, a.k.a. VSTS) so the changes are traceable.
  2. Create a mock (fake) service using Wiremock.
  3. Create a docker image for the mock service using Rodolphe Chaigneau’s wiremock-docker image.
  4. Using Terraform, I build an Azure resource group and host the docker image in an app service.
  5. In Azure Devops, I build a deployment pipeline that deploys all changes, be it in the Docker container or the Azure configuration, whenever a change is pushed in the git repository.
  6. Then everyone involved can test the service using the swagger editor, curl or whatever tool they like -SoapUI, Postman, Postwoman, younameit.

I could, of course, not bother with most of it and just run Wiremock locally. But remember, it’s not just for me. It has to be useful for many people, most of them outside my company’s network. They will use it to test the client they’re developing or verify the service they’re building.

Note that all of the steps explained in this guide are cross-platform. I've tested them with both Windows 10 and Ubuntu 19.04. In both OSes I've used just the simplest tools (a text editor and the git command line) but normally I use VS Code (and occasionally vi for old time's sake 😊). Whatever little scripting there is, it's done in Powershell Core, also cross-platform.

Let’s start, shall we?

Step 1 – [OpenAPI] Define the specs and create the Swagger file

Step 2 – [Wiremock] Create the mock service

Step 3 – [Docker] Build the container for the mock service

Step 4 – [Terraform] Ship these containers

Step 5 – [Azure Devops] Create the auto-deploy pipeline

Step 6 – [Swagger] Test via Swagger UI and curl

Powershell – file system operations within a transaction

Anyone who's ever developed anything connected to a database knows about transactions. Using a transaction we can group data operations so that they happen “all or nothing”, i.e. either all of them succeed or none does. One example is a transfer from one bank account to another: the complete transaction requires subtracting the amount to be transferred from one account and adding that same amount to the other. We wouldn't want one to happen without the other, would we?

(yes, I know that’s not a complete description of what transactions are or do; that’s not my purpose here)

But what happens when we need the same from our filesystem?

Let's take this scenario (which is a simplified version of a true story): we are receiving CSV files from solar panels (via SFTP) and we want to do some preprocessing and then store the data in a database. When processing them we have to generate a lot of intermediate files. After that, we want to move the CSV files to a different dir. But if something happens, say the database is down, we want the whole thing to be cancelled so that, when we retry, we can start over.

Obviously a simple solution would be as follows:

try {
  # do a lot of hard work
  # store the data in the db
  # clean up intermediate files
  # move the CSV file to an "archive" dir
}
catch {
  # clean up intermediate files, potentially clean up any db records etc
}

That's… well, it can work, but it's not great. For example, the cleanup process itself (inside the catch block) might fail.

A much better process would be like this:

try {   
  # start a transaction
    # do a lot of hard work   
    # store the data in the db   
    # clean up intermediate files   
    # move the CSV file to an "archive" dir 
  # commit the transaction
} 
catch {   
  # rollback everything: files, db changes, the whole thing
}

That’s much cleaner! But is it possible to group db changes and filesystem changes (file create, move, delete & append, dir create & delete etc) all in one transaction?

Unix geeks are allowed to feel smug here: some flavors like HP-UX (though not Linux as far as I know) have this baked in from the get go. Your code doesn’t have to do anything special; the OS takes care of this on user request.

But as a matter of fact yes, it is also available on Windows, and it has been for some time now. The requirement is that you’re working on a file system that supports this, like NTFS.

But there's a problem for the average .NET/Powershell coder: the standard methods, the ones inside System.IO, do not support any of this. So you have to drop to a lower level and use the Windows API directly. Which, for .NET coders, there's no other way to put this, sucks. That's also the reason why the Powershell implementation of file transactions (e.g. New-Item -ItemType File -UseTransaction) doesn't work; it relies on System.IO.

I'm pretty sure that this is what crossed the minds of the developers who wrote AlphaFS, which is just wonderful. It's exactly what you'd expect but never got from Microsoft: a .NET implementation of most of the System.IO classes that supports NTFS's advanced features, the ones not available in, say, ye olde FAT32. Chief among them is support for file system transactions.

So the example below shows how to do exactly that. I tried to keep it as simple as possible to highlight the interesting bits, but of course a real world example would be much more complicated, e.g. there would be a combined file system and database transaction, which would commit (or rollback) everything at the same time.

Note that there’s no need for an explicit rollback. As soon as the transaction scope object is disposed without calling Complete(), all changes are rolled back.

#
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, July 2019
#
# Prerequisite: 
#   1) internet connectivity
#   2) nuget command line must be installed 
#      (see https://www.nuget.org/downloads).
# If nuget is not in %path%, you need to change 
#   the installation (see below) to call it with 
#   its full path.
 
# Stop on error
$ErrorActionPreference = "Stop"

if ($psISE)
{
    $binPath = Split-Path -Path $PSISE.CurrentFile.FullPath        
}
else
{
    $binPath = $PSScriptRoot
}
$alphaFSver = "2.2.6"
$libPath = "$binPath\AlphaFS.$alphaFSver\lib\net40\AlphaFS.dll"
$basePath = "$binPath\..\alphatest"

# ====== installation ======
if (-not [System.IO.File]::Exists($libPath)) {
    Out-File -FilePath "$binPath\packages.config" `
        -Force `
        -Encoding utf8 `
        -InputObject ("<?xml version=`"1.0`" encoding=`"utf-8`"?><packages>" + `
          "<package id=`"AlphaFS`" version=`"$alphaFSver`" targetFramework=`"net46`" />" + `
          "</packages>")
    cd $binPath
    & nuget.exe restore -PackagesDirectory "$binPath"
}
# ==========================
 
# Make sure the path matches the version from step 2
Import-Module -Name $libPath
 
if (-not (Test-Path $basePath)) {
    New-Item -ItemType Directory -Path $basePath
}
 
# Check if the filesystem we're writing to supports transactions.
# On a FAT32 drive you're out of luck.
$driveRoot = [System.IO.Path]::GetPathRoot($basePath)
$driveInfo = [Alphaleonis.Win32.Filesystem.DriveInfo]($driveRoot)
if (-not $driveInfo.VolumeInfo.SupportsTransactions) {
    Write-Error ("Your $driveRoot volume $($driveInfo.DosDeviceName) " + `
      "[$($driveInfo.VolumeLabel)] does not support transactions, exiting")
}
 
# That's some example data to play with.
# In reality you'll probably get data from a DB, a REST service etc.
$list = @{1="Jim"; 2="Stef"; 3="Elena"; 4="Eva"}
 
try {
    # Transaction starts here
    $transactionScope = [System.Transactions.TransactionScope]::new([System.Transactions.TransactionScopeOption]::RequiresNew)
    $fileTransaction = [Alphaleonis.Win32.Filesystem.KernelTransaction]::new([System.Transactions.Transaction]::Current)
 
    # Here we're doing random stuff with our files and dirs, 
    #   just to show how this works.
    # The important thing to remember is that for the transaction 
    #   to work correctly, ALL methods you use have to be -transacted.
    # I.e. you must not use AppendText() but AppendTextTransacted(), 
    #   not CreateDirectory() but CreateDirectoryTransacted() etc etc.
    $logfileStream = [Alphaleonis.Win32.Filesystem.File]::AppendTextTransacted($fileTransaction, "$basePath\list.txt")
    foreach($key in $list.Keys) {
        $value = $list.$key
        $filename = "$([guid]::NewGuid()).txt"
        $dir = "$basePath\$key"
 
        Write-Host "Processing item $key $value"
 
        if (-not [Alphaleonis.Win32.Filesystem.Directory]::ExistsTransacted($fileTransaction, $dir)) {
            [Alphaleonis.Win32.Filesystem.Directory]::CreateDirectoryTransacted($fileTransaction, $dir)
        }
        [Alphaleonis.Win32.Filesystem.File]::WriteAllTextTransacted($fileTransaction, "$basePath\$key\$filename", $value)        
        $logfileStream.WriteLine("$filename;$key;$value")
    }
    $logfileStream.Close()
     
    # to simulate an error and subsequent rollback:
    # Write-Error "Something not great, not terrible happened"
     
    # Commit transaction
    $transactionScope.Complete()
    Write-Host "Transaction committed, all modifications written to disk"
}
catch {
    Write-Host "An error occured and the transaction was rolled back: '$($_.Exception.Message)'"
    throw $_.Exception
}
finally {
    if ($null -ne $logfileStream -and $logfileStream -is [System.IDisposable]) {
        $logfileStream.Dispose()
        $logfileStream = $null
    }    
    if ($null -ne $transactionScope -and $transactionScope -is [System.IDisposable]) {
        $transactionScope.Dispose()
        $transactionScope = $null
    }    
}

Have fun coding!

Bulk modify jobs in JAMS Scheduler

As I've mentioned before, at work we're migrating all our scheduled tasks to JAMS. Now JAMS has a lot of flexibility to notify, send emails etc but… you have to tell it to 🙂

And you can imagine that having to click-click-type-click in order to change, say, the email address in a few tens of jobs is not the creative work a developer craves. Writing a powershell script to do that, though, is!

So here’s the script I wrote to change the email address for Warnings and Critical conditions, in bulk. Of course you can easily modify it to do whatever change you want (enable/disable a lot of jobs at once is a good example).

param(
    [string]$jamsServer = "myJamsServer", 
    [string]$jamsPath = "\somePath\someOtherPath"
)

# This script loops through all enabled JAMS jobs under a certain folder
# recursively, and changes the email address except for successes.

Import-Module Jams
$ErrorActionPreference = "Stop"
cls

try
{
    if ($null -eq (Get-PSDrive JD))
    {
        New-PSDrive JD JAMS $jamsServer -scope Local
    }
}
catch
{
    New-PSDrive JD JAMS $jamsServer -scope Local
}

$folders = New-Object System.Collections.ArrayList
$rootFolder = (Get-Item "JAMS::$($jamsServer)$($jamsPath)").Name
$folders.Add($rootFolder) | Out-Null
$childFolders = Get-ChildItem "JAMS::$($jamsServer)$($jamsPath)\*" -objecttype Folder -IgnorePredefined 
$childFolders | foreach { $folders.Add($_.Name) | Out-Null }

$rootJobs = New-Object System.Collections.ArrayList

foreach($f in $folders)
{
    Write-Host "Folder: $f"
    if ($f -eq $rootFolder)
    {
        $jobs = Get-ChildItem "JAMS::$($jamsServer)$($jamsPath)\*" -objecttype Job -IgnorePredefined -FullObject 
        $jobs | foreach { $rootJobs.Add($_.Name) | Out-Null }
    }
    else
    {
        $jobs = Get-ChildItem "JAMS::$($jamsServer)$($jamsPath)\$f\*" -objecttype Job -IgnorePredefined -FullObject 
    }

    # for test
    #$jobs | Format-Table -AutoSize

    foreach($job in $jobs)
    {
        #Write-Host "$($job.Name) : $($job.Properties["Enabled"])"
        #if you need a name filter as well, you can do:
        #if (($job.Name -notlike "*SomeString*") -or ($job.Properties["Enabled"].Value -eq $false))
        if ($job.Properties["Enabled"].Value -eq $false)
        {
            continue
        }

        $jobElements = $job.Elements
        $doUpdate = $false

        foreach($jobElement in $jobElements)
        {
            #Write-Host "$($job.Name) / $($jobElement.ElementTypeName) / $($jobElement.Description) / $($jobElement.ToString())"
            if (($jobElement.ElementTypeName -eq "SendEMail") -and ($jobElement.EntrySuccess -eq $false))
            {
                #Write-Host "$($job.Name) / $($jobElement.ElementTypeName) / $($jobElement.Description) / $($jobElement.FromAddress) / $($jobElement.ToAddress)"
                if ([string]::IsNullOrWhiteSpace($jobElement.ToAddress))
                {
                    $jobElement.FromAddress = "admin@superduperincrediblesoftware.com"
                    $jobElement.ToAddress = "someone@superduperincrediblesoftware.com;andhisdog@superduperincrediblesoftware.com"
                    $jobElement.MessageBody = "Uh, Houston, we've had a problem"      
                    $doUpdate = $true              
                }
            }
        }

        if ($doUpdate -eq $true)
        {
            $job.Update()
            Write-Host "Job $($job.Name) is updated"
        }
    }    
}
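Assuming you saved the script as, say, Update-JamsEmails.ps1 (the file name is hypothetical, pick your own), you'd call it like this:

.\Update-JamsEmails.ps1 -jamsServer "myJamsServer" -jamsPath "\somePath\someOtherPath"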

Have fun coding 🙂

Running Groovy scripts in JAMS Scheduler

Here at work, we’re working on a migration project, from Jenkins (which we’ve been using as a scheduler) to JAMS Scheduler. In Jenkins we have a lot of Groovy scripts, and we have them in source control. So, to make the migration as effortless as possible, we wanted to use them “as-is”, right out of source control.

The solution I found was:

  1. On the JAMS agent, install the subversion command line client
  2. Also on the JAMS agent, install groovy
  3. Create a job that gets (“checks out”) the latest scripts every evening from source control in a specific directory; let’s call it c:\jobs
  4. Create a JAMS Execution Method called Groovy (see below)
  5. Create the Jenkins jobs in JAMS, one by one. In the source box, only write the full path of the groovy script, e.g. c:\jobs\TransferOrders.groovy

#4 is where the magic happens. The execution method is defined as a Powershell method. In the template, there's code that (surprise) calls groovy. The powershell code is the following (see if you can spot a couple of tricks):

#
# Source: DotJim blog (https://dandraka.com)
# Jim Andrakakis, December 2018
#
Import-Module JAMS

# the job's source is supposed to contain ONLY 
# the full path to the groovy script, without quotes
$groovy = "C:\app\groovy-2.5.4\bin\groovy.bat"
$groovyScript="<<JAMS.Current.Source>>"

Write-Host "[JAMS-GROOVY] Running script $groovyScript via $groovy"
if ((Test-Path -Path $groovy) -ne $true)
{
	Write-Error "[JAMS-GROOVY] Groovy executable $groovy not found, is Groovy installed?"
}
if ((Test-Path -Path $groovyScript) -ne $true)
{
	Write-Error "[JAMS-GROOVY] Source file $groovyScript not found"
}

$currentJob = Get-JAMSEntry {JAMS.JAMSEntry} 
$currentJobParams = $currentJob.Parameters
$currentJobParamNames = $currentJobParams.Keys

foreach($n in $currentJobParamNames)
{
	[string]$v = $currentJobParams[$n].Value
	
	# look for replacement tokens
	# in the form of <<ParamName>>
	foreach($r in $currentJobParamNames)
	{
		if ($v.Contains("<<$r>>"))
        {
            [string]$replVal = $currentJobParams[$r].Value
            $v = $v.Replace("<<$r>>", $replVal)
        }
	}
	
	Write-Host "[JAMS-GROOVY] Setting parameter $n = $v"
	[Environment]::SetEnvironmentVariable($n, $v, "Process")
}

# execute the script in groovy
& $groovy $groovyScript

Write-Host "[JAMS-GROOVY] script finished"

Two tricks to note here:

  • Almost all our groovy scripts have parameters; Jenkins inserts the parameters as environment variables so the scripts can do:
myVar = System.getenv()['myVar']

The first powershell loop does exactly that; it maps all the job’s parameters, defined or inherited, as environment variables, so the scripts can continue to work happily, no change needed.

  • The second trick is actually an enhancement. As the scripts get promoted through our environments (development > test > integration test > production) some parts of the parameters change, but not all of them.

For example, let’s say there’s a parameter for an inputDirectory.
In the development server, it has the value c:\documents\dev\input. In test, it’s c:\documents\test\input, in integration test it’s c:\documents\intg\input and in production c:\documents\prod\input.

What we can do now is have a folder-level parameter, defined on the JAMS folder where our job definitions are, which is not transferred from environment to environment. And we can have job-defined parameters that, using the familiar JAMS <<param>> notation, get their values substituted.

So, for example, let’s say I define a folder parameter named “SERVERLEVEL”, which will have the value of “dev” in development, “test” in test etc. In the job, I define another parameter called inputDirectory. This will have the value c:\documents\<<SERVERLEVEL>>\input.

Et voilà! Now we can promote the jobs from environment to environment, completely unchanged. In Jenkins we couldn’t do that; we had to define different values for parameters in dev, in test etc.

Here’s the export xml of the execution method:

<?xml version="1.0" encoding="utf-8"?>
<JAMSObjects>
  <method
    name="Groovy"
    type="Routine">
    <description><![CDATA[Run a pre-fetched groovy script. The job's source should contain the full path to the groovy script.

Note: in the "Bad regex pattern", the execution methon looks for "Caught:" to try to undertand whether 
groovy encountered an exception or not. Here's an example of the groovy output of a script where
an unhandled exception occured:

Hello, world!
Caught: java.lang.NullPointerException: Cannot invoke method test() on null object
java.lang.NullPointerException: Cannot invoke method test() on null object
        at test1.run(test1.groovy:4)]]></description>
    <template><![CDATA[Import-Module JAMS

# the job's source is supposed to contain ONLY 
# the full path to the groovy script, without quotes
$groovy = "C:\app\groovy-2.5.4\bin\groovy.bat"
$groovyScript="<<JAMS.Current.Source>>"

Write-Host "[JAMS-GROOVY] Running script $groovyScript via $groovy"
if ((Test-Path -Path $groovy) -ne $true)
{
	Write-Error "[JAMS-GROOVY] Groovy executable $groovy not found, is Groovy installed?"
}
if ((Test-Path -Path $groovyScript) -ne $true)
{
	Write-Error "[JAMS-GROOVY] Source file $groovyScript not found"
}

$currentJob = Get-JAMSEntry {JAMS.JAMSEntry} 
$currentJobParams = $currentJob.Parameters
$currentJobParamNames = $currentJobParams.Keys

foreach($n in $currentJobParamNames)
{
	[string]$v = $currentJobParams[$n].Value
	
	# look for replacement tokens
	# in the form of <<ParamName>>
	foreach($r in $currentJobParamNames)
	{
		if ($v.Contains("<<$r>>"))
        {
            [string]$replVal = $currentJobParams[$r].Value
            $v = $v.Replace("<<$r>>", $replVal)
        }
	}
	
	Write-Host "[JAMS-GROOVY] Setting parameter $n = $v"
	[Environment]::SetEnvironmentVariable($n, $v, "Process")
}

# execute the script in groovy
& $groovy $groovyScript

Write-Host "[JAMS-GROOVY] script finished"]]></template>
    <properties>
      <property
        name="HostAssemblyName"
        typename="System.String"
        value="JAMSPSHost" />
      <property
        name="HostClassName"
        typename="System.String"
        value="MVPSI.JAMS.Host.PowerShell.JAMSPSHost" />
      <property
        name="StartAssemblyName"
        typename="System.String"
        value="" />
      <property
        name="StartClassName"
        typename="System.String"
        value="" />
      <property
        name="EditAssemblyName"
        typename="System.String"
        value="" />
      <property
        name="EditClassName"
        typename="System.String"
        value="" />
      <property
        name="ViewAssemblyName"
        typename="System.String"
        value="" />
      <property
        name="ViewClassName"
        typename="System.String"
        value="" />
      <property
        name="BadPattern"
        typename="System.String"
        value="^Caught\:" />
      <property
        name="ExitCodeHandling"
        typename="MVPSI.JAMS.ExitCodeHandling"
        value="ZeroIsGood" />
      <property
        name="GoodPattern"
        typename="System.String"
        value="" />
      <property
        name="SpecificInformational"
        typename="System.String"
        value="" />
      <property
        name="SpecificValues"
        typename="System.String"
        value="" />
      <property
        name="SpecificWarning"
        typename="System.String"
        value="" />
      <property
        name="Force32Bit"
        typename="System.Boolean"
        value="false" />
      <property
        name="ForceV2"
        typename="System.Boolean"
        value="false" />
      <property
        name="HostLocally"
        typename="System.Boolean"
        value="false" />
      <property
        name="Interactive"
        typename="System.Boolean"
        value="false" />
      <property
        name="NoBOM"
        typename="System.Boolean"
        value="false" />
      <property
        name="SourceFormat"
        typename="MVPSI.JAMS.SourceFormat"
        value="Text" />
      <property
        name="EditAfterStart"
        typename="System.Boolean"
        value="false" />
      <property
        name="EditSource"
        typename="System.Boolean"
        value="false" />
      <property
        name="Extension"
        typename="System.String"
        value="ps1" />
      <property
        name="JobModule"
        typename="System.String"
        value="" />
      <property
        name="SnapshotSource"
        typename="System.Boolean"
        value="false" />
      <property
        name="Redirect"
        typename="MVPSI.JAMS.Redirect"
        value="All" />
      <property
        name="HostSubDirectory"
        typename="System.String"
        value="" />
      <property
        name="HostExecutable"
        typename="System.String"
        value="JAMSHost.exe" />
    </properties>
  </method>
</JAMSObjects>

Powershell: How do you add inline C#?

Powershell is great for admin tasks. Stuff like iterating through files and folders, copying and transforming files is very, very easily done. But inevitably there will always be stuff that is easier to do via a “normal” language such as C#.

Trying to solve a problem I had at work, I needed to transform a CSV file by changing the fields -which is easily done via powershell- and, at the same time, do a “get only the highest record of every group”. This is done with LINQ, which you can use in powershell but it’s cumbersome and will result in many, many lines of code.

So I wanted to do this in a more clean way, in C#. The general template to include C# inside a powershell script is the following:

#
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, November 2018
#
# Here goes the C# code:
Add-Type -Language CSharp @"
using System; 
namespace DotJim.Powershell 
{
    public static class Magician 
    {
        private static string spell = ""; 
        public static void DoMagic(string magicSpell) 
        {
            spell = magicSpell; 
        }
        public static string GetMagicSpells() 
        {
            return "Wingardium Leviosa\r\n" + spell; 
        }
    }
}
"@;

# And here's how to call it:
[DotJim.Powershell.Magician]::DoMagic("Expelliarmus")
$spell = [DotJim.Powershell.Magician]::GetMagicSpells()

Write-Host $spell

Note here that the C# classes don’t have to be static; but if they are, they’re easier to call (no instantiation needed). Of course this only works if all you need to do is provide an input and get a manipulated output. If you need more complex stuff then yes, you can use non-static classes or whatever C# functionality solves your problems. Here’s the previous example, but with a non-static class:

#
# Source: DotJim blog (https://dandraka.com)
# Jim Andrakakis, November 2018
#
# Here goes the C# code:
Add-Type -Language CSharp @"
using System; 
namespace DotJim.Powershell 
{
    public class Magician 
    {
        private string spell = ""; 
        public void DoMagic(string magicSpell) 
        {
            spell = magicSpell; 
        }
        public string GetMagicSpells() 
        {
            return "Wingardium Leviosa\r\n" + spell; 
        }
    }
}
"@;

# Here's how to create an instance:
$houdini = New-Object -TypeName DotJim.Powershell.Magician
# And here's how to call it:
$houdini.DoMagic("Expelliarmus")
$spell = $houdini.GetMagicSpells()

Write-Host $spell

The main advantage of having C# inside the powershell script (and not in a separate dll file) is that it can be deployed very easily with various Devops tools. Otherwise you need to deploy the dll alongside which can, sometimes, be the source of trouble.

So here’s my complete working code, which worked quite nicely:

#
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, November 2018
#
# The purpose of this script is to read a CSV file with bank data
# and transform it into a different CSV.
#
# 1. The Bank class is a POCO to hold the data which I need
#    from every line of the CSV file.
# 2. The Add() method of the BankAggregator class adds the
#    record to the list after checking the data for correctness.
# 3. The Get() method of the BankAggregator class does a
#    LINQ query to get the 1st (max BankNr) bank record
#    from every group of records with the same Country/BIC.
#    It then returns a list of strings, formatted the way
#    I want for the new (transformed) CSV file.
#
# Here is where I inline the C# code:
Add-Type -Language CSharp @"
using System;
using System.Collections.Generic;
using System.Linq;
namespace DotJim.Powershell {
 public class Bank {
  public int BankNr;
  public string Country;
  public string BIC;
 }
 public static class BankAggregator {
  private static List list = new List();
  public static void Add(string country, string bic, string bankNr) {
   //For debugging
   //Console.WriteLine(string.Format("{0}{3}{1}{3}{3}{2}", country, bic, bankNr, ";"));
   int mBankNr;
   // Check data for correctness, discard if not ok
   if (string.IsNullOrWhiteSpace(country) ||
    country.Length != 2 ||
    string.IsNullOrWhiteSpace(bic) ||
    string.IsNullOrWhiteSpace(bankNr) ||
    !int.TryParse(bankNr, out mBankNr) ||
    mBankNr & gt; = 0) {
    return;
   }
   list.Add(new Bank() {
    BankNr = mBankNr, Country = country, BIC = bic
   });
  }
  public static List Get(string delimiter) {
   // For every record with the same Country & BIC, keep only
   // the record with the highest BankNr
   var bankList = from b in list
   group b by new {
    b.Country, b.BIC
   }
   into bankGrp
   let maxBankNr = bankGrp.Max(x = & gt; x.BankNr)
   select new Bank {
    Country = bankGrp.Key.Country,
     BIC = bankGrp.Key.BIC,
     BankNr = maxBankNr
   };
   // Format the list the way I want the new CSV file to look
   return bankList.Select(x = & amp; amp; amp; amp; amp; amp; amp; amp; amp; amp; amp; amp; amp; gt; string.Format("{0}{3}{1}{3}{3}{2}",
    x.Country, x.BIC, x.BankNr, delimiter)).ToList();
  }
 }
}
"@;

# Read one or more files with bank data from the same dir
# where the script is located ($PSScriptRoot)
$srcSearchStr = "source_bankdata*.csv"
$SourcePath = $PSScriptRoot
$destPath = $SourcePath

$fields = @("Country","BIC","EmptyField","BankId")

$filesList = Get-ChildItem -Path $SourcePath -Filter $srcSearchStr

foreach ($file in $filesList)
{
    Write-Host "Processing" $file.FullName

    # Fields in the source CSV:
    # BANKNUMMER  = BankNr
    # BANKLAND    = Country
    # BANKSWIFT   = BIC
    $data = Import-Csv -Path $file.FullName -Delimiter ";"

    foreach ($item in $data)
    {
        # Call the C# code to add the CSV lines to the list
        [DotJim.Powershell.BankAggregator]::Add($item.BANKLAND,$item.BANKSWIFT,$item.BANKNUMMER)
    }

    # Call the C# code to get the transformed data
    $list = [DotJim.Powershell.BankAggregator]::Get(";")

    Write-Host "Found" $list.Count "valid rows"

    # Now that we have the list, write it in the new CSV
    Out-File -FilePath "$destPath\transformed_bankdata_$(New-Guid).csv" -Encoding UTF8 -InputObject $list
}
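For reference, a (completely made-up) input file like source_bankdata_test.csv could look like this; the script only cares about the BANKNUMMER, BANKLAND and BANKSWIFT columns:

BANKNUMMER;BANKLAND;BANKSWIFT;BANKNAME
1;CH;AAAACHZZ;Some Bank AG
2;CH;AAAACHZZ;Some Bank AG
5;DE;BBBBDEFF;Eine Bank GmbH

With this input, the transformed CSV would contain the lines CH;AAAACHZZ;;2 and DE;BBBBDEFF;;5, i.e. only the highest BankNr per Country/BIC combination.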

Have fun coding!

My bread recipes

[UPDATE 03.2019] added a Brioche recipe.

I recently bought a bread machine, an Unold 8695 Onyx, and I’m very, very happy with it. Simple machine, nothing fancy (whenever I hear of appliances that are “connected”, “internet enabled” or, god forbid, “on the blockchain” I run away) but great value for money and gets the job done, very well.

The manual is excellent, with detailed timing tables and recipes which I fully recommend. That said, I did take the recipes that I liked most (the humble white bread and the farmer's bread) and customized them a bit.

These are the ingredients, in the order which I put them in the bowl:

Brioche

Ingredient For 600 gr bread
White flour (Zopfmehl, type 405) 390 ml
Salt 3/4 teasp. (4 gr)
Sugar 2 tblsp. (40 gr)
Vanille sugar 1 pkg (8 gr)
Whole egg 1
Egg yolk 1
Yeast, fresh 1/2 cube
Milk 160 ml
Butter 80 gr

Important note: put everything in the bread maker bowl, in that order, except the milk and the butter. Then heat the milk and the butter just slightly (do not boil!) until the butter is almost melted. Then pour the milk-butter mix in the bowl over the other ingredients.

Use the Sweet (“Hefekuchen”) or Quick (“Schnell”) program, size 1 (“Stufe 1”) and light crust setting.

White bread

Ingredient For 500 gr bread For 800 gr bread
Water 230 ml 300 ml
Salt 3/4 teasp. (4 gr) 1 teasp. (6 gr)
Honey 2 tblsp. (40 gr) 2.5 tblsp. (52 gr)
Wheat semolina (or Corn polenta) 100 gr 126 gr
Whole wheat flour (Ruchmehl) or light whole wheat flour (Halbweissmehl) 20 gr 30 gr
White flour (Weissmehl, type 550, preferably with vitamins) 280 gr 356 gr
Yeast (if fresh yeast is used, use 1/2 a cube in both cases) 5 gr 7 gr (1 package)

Farmer’s bread

Ingredient For 800 gr bread
Water 320 ml
Leaven (Sauerteig; in CH, I can only find leaven powder in Coop) 10 gr (1 package)
Salt 1 teasp. (6 gr)
Butter or margarine 20 gr
Honey 2.5 tblsp. (52 gr)
Light whole wheat flour (Halbweissmehl) 400 gr
White flour (Weissmehl, type 550, preferably with vitamins) 100 gr
Yeast, fresh 1/2 cube

For both of them, I then use the “Quick” (“Schnell”) program, with light or medium crust. 1h 40min later, it’s ready.

Enjoy!

Citrix on Ubuntu 18.04

I recently changed from Win10 to Ubuntu 18.04 as my main OS at home. I still have Windows in a few VMs, as I need to do the occasional development with Visual Studio.

But a problem I had was that I needed to connect to the office when working from home.

Now, at work we have Citrix Netscaler Gateway. And there’s a Linux client available. It worked, but not as smoothly as I hoped 🙂

Here’s what I did:

From Ubuntu’s Software Center, I installed Citrix Receiver.

Then it asked for the server and tried to connect, but I was getting an error: “An SSL connection to the server could not be established because the server’s certificate could not be trusted.”

So I opened a terminal and gave the following commands (source):

sudo ln -s /usr/share/ca-certificates/mozilla/* /opt/Citrix/ICAClient/keystore/cacerts/

sudo c_rehash /opt/Citrix/ICAClient/keystore/cacerts/

After that it connected, but it was still giving an error: “A protocol error occured while communicating with the Authentication Service”

So after some sleuthing, I opened my browser (Chrome) and connected to my company's Citrix server address (https://server). When I clicked the apps there, it worked.

Powershell & Microsoft Dynamics CRM: how to get results using a FetchXml

[Update June 2020] There's a newer post that does the same as this and is more complete; it includes paging and updating records. You might want to check it out here.

If you’ve used Microsoft CRM as a power user (on-premise or online), chances are you’ve come across the standard way of querying CRM data, FetchXml.

You can run this by hand but of course the real power of it is using it to automate tasks. And another great way to automate tasks in Windows is, naturally, powershell.

So here’s a script I’m using to run a fetch xml and export the results to a csv file:

#
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, May 2018
#
# Prerequisites:
# 1. Install PS modules
#    Run the following in a powershell with admin permissions:
#       Install-Module -Name Microsoft.Xrm.Tooling.CrmConnector.PowerShell
#       Install-Module -Name Microsoft.Xrm.Data.PowerShell -AllowClobber
#
# 2. Write password file
#    Run the following and enter your user's password when prompted:
#      Read-Host -assecurestring | convertfrom-securestring | out-file C:\temp\crmcred.pwd
#
# ============ Constants to change ============
$pwdFile = "C:\temp\crmcred.pwd"
$username = "myusername@mycompany.com"
$serverurl = "https://my-crm-instance.crm4.dynamics.com"
$fetchXmlFile = "c:\temp\fetch.xml"
$exportfile = "C:\temp\crm_export.csv"
$exportdelimiter = ";"
# =============================================
# ============ Login to MS CRM ============
$password = get-content $pwdFile | convertto-securestring
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $username,$password
try
{
    $connection = Connect-CRMOnline -Credential $cred -ServerUrl $serverurl
    # for on-prem use :
    #   $connection = Connect-CrmOnPremDiscovery -Credential $cred -ServerUrl $serverurl
    # you can also use interactive mode if you get e.g. problems with multi-factor authentication
    #   $connection = Connect-CrmOnlineDiscovery -InteractiveMode -Credential $cred
    # or you can use a connection string if you want to use e.g. OAuth or a Client Secret
    # but then the password must be plaintext which is kind of a security no-no
    #   $connString = "AuthType=ClientSecret;url=$serverurl;ClientId=$username;ClientSecret=$password"
    #   $connection = Get-CrmConnection -ConnectionString $connString
}
catch
{
    Write-Host $_.Exception.Message
    exit
}
if($connection.IsReady -ne $True)
{
    $errorDescr = $connection.LastCrmError
    Write-Host "Connection not established: $errorDescr"
    exit
}
else
{
    Write-Host "Connection to $($connection.ConnectedOrgFriendlyName) successful"
}
# ============ Fetch data ============
$fetchXml = [xml](Get-Content $fetchXmlFile)
$result = Get-CrmRecordsByFetch -conn $connection -Fetch $fetchXml.OuterXml
# ============ Write to file ============
# Obviously here, instead of writing to csv directly, you can loop and do whatever suits your needs, e.g. run a db query, call a web service etc etc
$result.CrmRecords | Select -Property lastname, firstname | Export-Csv -Encoding UTF8 -Path $exportfile -NoTypeInformation -Delimiter $exportdelimiter

When you use your own FetchXml, do remember to change the properties in the last line (lastname, firstname).

For a quick test, the example FetchXml I’m using is the following:

<fetch mapping="logical" version="1.0">
    <entity name="account">
        <attribute name="customertypecode" alias="customertypecode"/>
        <attribute name="name" alias="company_name"/>
        <attribute name="emailaddress1" alias="company_emailaddress1"/>
        <link-entity name="contact" from="accountid" to="accountid" link-type="inner">
            <attribute name="lastname" alias="lastname"/>
            <attribute name="firstname" alias="firstname"/>
        </link-entity>
    </entity>
</fetch>

Have fun coding!