Category Archives: Tutorials and guides

Powershell – file system operations within a transaction

Anyone who’s ever developed anything connected to a database knows about transactions. Using a transaction we can group data operations so that they happen “all or nothing”, i.e. either all of them succeed or none of them does. One example is a transfer from one bank account to another: the complete transaction requires subtracting the amount to be transferred from one account and adding that same amount to the other. We wouldn’t want one to happen without the other, would we?

(yes, I know that’s not a complete description of what transactions are or do; that’s not my purpose here)

But what happens when we need the same from our filesystem?

Let’s take this scenario (which is a simplified version of a true story): we are receiving CSV files from solar panels (via SFTP) and we want to do some preprocessing and then store the data to a database. When processing them we have to generate a lot of intermediate files. After that, we want to move them to a different dir. But if something happens, say if the database is down, we want the whole thing to be cancelled so that, when we retry, we can start over.

Obviously a simple solution would be as follows:

try {
  # do a lot of hard work
  # store the data in the db
  # clean up intermediate files
  # move the CSV file to an "archive" dir
}
catch {
  # clean up intermediate files, potentially clean up any db records etc
}

That’s… well, it can work, but it’s not great. For example, the cleanup process itself, inside the catch block, might fail.

A much better process would look like this:

try {   
  # start a transaction
    # do a lot of hard work   
    # store the data in the db   
    # clean up intermediate files   
    # move the CSV file to an "archive" dir 
  # commit the transaction
} 
catch {   
  # rollback everything: files, db changes, the whole thing
}

That’s much cleaner! But is it possible to group db changes and filesystem changes (file create, move, delete & append, dir create & delete etc) all in one transaction?

Unix geeks are allowed to feel smug here: some flavors like HP-UX (though not Linux, as far as I know) have this baked in from the get-go. Your code doesn’t have to do anything special; the OS takes care of it on user request.

But as a matter of fact yes, it is also available on Windows, and it has been for some time now. The requirement is that you’re working on a file system that supports this, like NTFS.

But there’s a problem for the average .NET/Powershell coder: the standard methods, the ones inside System.IO, do not support any of this. So you have to go to a lower level and use the Windows API directly. Which, for .NET coders, there’s no other way to put this, sucks. That’s also the reason why the Powershell implementation of file transactions (e.g. New-Item -ItemType File -UseTransaction) doesn’t work: it relies on System.IO.

I’m pretty sure that this is what crossed the minds of the developers who wrote AlphaFS, which is just wonderful. It’s exactly what you’d expect but never got from Microsoft: a .NET implementation of most of the System.IO classes that supports NTFS’s advanced features, the ones not available in, say, ye olde FAT32. Chief among them is support for file system transactions.

So the example below shows how to do exactly that. I tried to keep it as simple as possible to highlight the interesting bits, but of course a real-world example would be more complicated; for instance, there would be a combined file system and database transaction, committing (or rolling back) everything at the same time. A sketch of that combination follows after the example.

Note that there’s no need for an explicit rollback. As soon as the transaction scope object is disposed without calling Complete(), all changes are rolled back.

# To install:
# Prerequisite: nuget command line must be installed (see https://www.nuget.org/downloads).
#   If it's not in %path%, you need to change step 3 with its full path.
# 1. Create a directory to work into
# 2. Create a text file named "packages.config" with the following content (note the version which will change in the future):
#      <?xml version="1.0" encoding="utf-8"?>
#      <packages>
#        <package id="AlphaFS" version="2.2.6" targetFramework="net46" />
#      </packages>
# 3. Create a text file named "Install_AlphaFS.cmd" with the following content:
#      cd <your work dir>
#      nuget restore -PackagesDirectory .
# 4. Run Install_AlphaFS.cmd

$basePath = "$PSScriptRoot\..\alphatest"

# Stop on error
$ErrorActionPreference = "Stop"

# Make sure the path matches the version from step 2
Import-Module -Name "$PSScriptRoot\AlphaFS.2.2.6\lib\net40\AlphaFS.dll"

if (-not (Test-Path $basePath)) {
    New-Item -ItemType Directory -Path $basePath
}

# Check if the filesystem we're writing to supports transactions.
# On a FAT32 drive you're out of luck.
$driveRoot = [System.IO.Path]::GetPathRoot($basePath)
$driveInfo = [Alphaleonis.Win32.Filesystem.DriveInfo]($driveRoot)
if (-not $driveInfo.VolumeInfo.SupportsTransactions) {
    Write-Error "Your $driveRoot volume $($driveInfo.DosDeviceName) [$($driveInfo.VolumeLabel)] does not support transactions, exiting"
}

# That's some example data to play with.
# In reality you'll probably get data from a DB, a REST service etc.
$list = @{1="Jim"; 2="Stef"; 3="Elena"; 4="Eva"}

try {
    # Transaction starts here
    $transactionScope = [System.Transactions.TransactionScope]::new([System.Transactions.TransactionScopeOption]::RequiresNew)
    $fileTransaction = [Alphaleonis.Win32.Filesystem.KernelTransaction]::new([System.Transactions.Transaction]::Current)

    # Here we're doing random stuff with our files and dirs, just to show how this works.
    # The important thing to remember is that for the transaction to work correctly, ALL methods you use have to be -transacted.
    # I.e. you must not use AppendText() but AppendTextTransacted(), not CreateDirectory() but CreateDirectoryTransacted() etc etc.
    $logfileStream = [Alphaleonis.Win32.Filesystem.File]::AppendTextTransacted($fileTransaction, "$basePath\list.txt")
    foreach($key in $list.Keys) {
        $value = $list.$key
        $filename = "$([guid]::NewGuid()).txt"
        $dir = "$basePath\$key"

        Write-Host "Processing item $key $value"

        if (-not [Alphaleonis.Win32.Filesystem.Directory]::ExistsTransacted($fileTransaction, $dir)) {
            [Alphaleonis.Win32.Filesystem.Directory]::CreateDirectoryTransacted($fileTransaction, $dir)
        }
        [Alphaleonis.Win32.Filesystem.File]::WriteAllTextTransacted($fileTransaction, "$basePath\$key\$filename", $value)        
        $logfileStream.WriteLine("$filename;$key;$value")
    }
    $logfileStream.Close()
    
    # To simulate an error and the subsequent rollback (remove this line to let the transaction commit):
    Write-Error "Something not great, not terrible happened"
    
    # Commit transaction
    $transactionScope.Complete()
    Write-Host "Transaction committed, all modifications written to disk"
}
catch {
    Write-Host "An error occured and the transaction was rolled back: '$($_.Exception.Message)'"
    throw $_.Exception
}
finally {
    if ($null -ne $logfileStream -and $logfileStream -is [System.IDisposable]) {
        $logfileStream.Dispose()
        $logfileStream = $null
    }    
    if ($null -ne $transactionScope -and $transactionScope -is [System.IDisposable]) {
        $transactionScope.Dispose()
        $transactionScope = $null
    }    
}
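
As promised, here’s a rough sketch of how the combined file system and database transaction could look. This is not a complete script, just the skeleton: it assumes a SQL Server database (the connection string, table and column names are made up for illustration) and relies on the fact that a SqlConnection opened inside a TransactionScope automatically enlists in the ambient System.Transactions transaction.

try {
    # One scope for everything: both the file system (via the kernel transaction)
    # and SQL Server (via auto-enlistment) take part in the same transaction.
    $transactionScope = [System.Transactions.TransactionScope]::new([System.Transactions.TransactionScopeOption]::RequiresNew)
    $fileTransaction = [Alphaleonis.Win32.Filesystem.KernelTransaction]::new([System.Transactions.Transaction]::Current)

    # Hypothetical connection string, adjust to your environment
    $dbConnection = [System.Data.SqlClient.SqlConnection]::new("Server=myserver;Database=mydb;Integrated Security=true")
    $dbConnection.Open()   # opening it inside the scope enlists it in the transaction

    # File work, always with the -Transacted methods
    [Alphaleonis.Win32.Filesystem.File]::WriteAllTextTransacted($fileTransaction, "$basePath\out.txt", "hello")

    # Database work (made-up table and column)
    $cmd = $dbConnection.CreateCommand()
    $cmd.CommandText = "INSERT INTO dbo.MeasurementLog (FileName) VALUES ('out.txt')"
    $cmd.ExecuteNonQuery() | Out-Null

    # Commit both, or neither
    $transactionScope.Complete()
}
catch {
    Write-Host "Rolled back both the file and the db changes: '$($_.Exception.Message)'"
}
finally {
    if ($null -ne $dbConnection) { $dbConnection.Dispose() }
    if ($null -ne $transactionScope) { $transactionScope.Dispose() }
}

Note that with two resource managers involved, the transaction will typically get escalated to MSDTC, so the Distributed Transaction Coordinator service has to be running.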

Have fun coding!

Bulk modify jobs in JAMS Scheduler

As I’ve mentioned before, at work we’re migrating all our scheduled tasks to JAMS. Now JAMS has a lot of flexibility to notify, send emails etc but… you have to tell it to 🙂

And you can imagine that having to click-click-type-click in order to change, say, the email address in a few tens of jobs is not the creative work a developer craves. Writing a powershell script to do that, though, is!

So here’s the script I wrote to change the email address for Warnings and Critical conditions, in bulk. Of course you can easily modify it to do whatever change you want; enabling or disabling a lot of jobs at once is a good example (see the short sketch after the script).

param(
    [string]$jamsServer = "myJamsServer", 
    [string]$jamsPath = "\somePath\someOtherPath"
)

# This script loops through all enabled JAMS jobs under a certain folder
# recursively, and changes the email address except for successes.

Import-Module Jams
$ErrorActionPreference = "Stop"
cls

try
{
    if ($null -eq (Get-PSDrive JD))
    {
        New-PSDrive JD JAMS $jamsServer -scope Local
    }
}
catch
{
    New-PSDrive JD JAMS $jamsServer -scope Local
}

$folders = New-Object System.Collections.ArrayList
$rootFolder = (Get-Item "JAMS::$($jamsServer)$($jamsPath)").Name
$folders.Add($rootFolder) | Out-Null
$childFolders = Get-ChildItem "JAMS::$($jamsServer)$($jamsPath)\*" -objecttype Folder -IgnorePredefined 
$childFolders | foreach { $folders.Add($_.Name) | Out-Null }

$rootJobs = New-Object System.Collections.ArrayList

foreach($f in $folders)
{
    Write-Host "Folder: $f"
    if ($f -eq $rootFolder)
    {
        $jobs = Get-ChildItem "JAMS::$($jamsServer)$($jamsPath)\*" -objecttype Job -IgnorePredefined -FullObject 
        $jobs | foreach { $rootJobs.Add($_.Name) | Out-Null }
    }
    else
    {
        $jobs = Get-ChildItem "JAMS::$($jamsServer)$($jamsPath)\$f\*" -objecttype Job -IgnorePredefined -FullObject 
    }

    # for test
    #$jobs | Format-Table -AutoSize

    foreach($job in $jobs)
    {
        #Write-Host "$($job.Name) : $($job.Properties["Enabled"])"
        #if you need a name filter as well, you can do:
        #if (($job.Name -notlike "*SomeString*") -or ($job.Properties["Enabled"].Value -eq $false))
        if ($job.Properties["Enabled"].Value -eq $false)
        {
            continue
        }

        $jobElements = $job.Elements
        $doUpdate = $false

        foreach($jobElement in $jobElements)
        {
            #Write-Host "$($job.Name) / $($jobElement.ElementTypeName) / $($jobElement.Description) / $($jobElement.ToString())"
            if (($jobElement.ElementTypeName -eq "SendEMail") -and ($jobElement.EntrySuccess -eq $false))
            {
                #Write-Host "$($job.Name) / $($jobElement.ElementTypeName) / $($jobElement.Description) / $($jobElement.FromAddress) / $($jobElement.ToAddress)"
                if ([string]::IsNullOrWhiteSpace($jobElement.ToAddress))
                {
                    $jobElement.FromAddress = "admin@superduperincrediblesoftware.com"
                    $jobElement.ToAddress = "someone@superduperincrediblesoftware.com;andhisdog@superduperincrediblesoftware.com"
                    $jobElement.MessageBody = "Uh, Houston, we've had a problem"      
                    $doUpdate = $true              
                }
            }
        }

        if ($doUpdate -eq $true)
        {
            $job.Update()
            Write-Host "Job $($job.Name) is updated"
        }
    }    
}
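
For instance, the enable/disable variation mentioned above could replace the inner job loop with something like the following. This is just a sketch: it assumes the Enabled property value can be set the same way it is read above, which you should verify against your JAMS version.

# Hedged sketch: disable every job that matches a name filter.
foreach($job in $jobs)
{
    if ($job.Name -notlike "*SomeString*")
    {
        continue
    }

    # Assumption: the Enabled property value is writable, just like it is readable above.
    $job.Properties["Enabled"].Value = $false   # use $true to enable instead
    $job.Update()
    Write-Host "Job $($job.Name) is now disabled"
}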

Have fun coding 🙂

Running Groovy scripts in JAMS Scheduler

Here at work, we’re working on a migration project, from Jenkins (which we’ve been using as a scheduler) to JAMS Scheduler. In Jenkins we have a lot of Groovy scripts, and we have them in source control. So, to make the migration as effortless as possible, we wanted to use them “as-is”, right out of source control.

The solution I found was:

  1. On the JAMS agent, install the subversion command line client
  2. Also on the JAMS agent, install groovy
  3. Create a job that gets (“checks out”) the latest scripts every evening from source control in a specific directory; let’s call it c:\jobs
  4. Create a JAMS Execution Method called Groovy (see below)
  5. Create the Jenkins jobs in JAMS, one by one. In the source box, only write the full path of the groovy script, e.g. c:\jobs\TransferOrders.groovy

#4 is where the magic happens. The execution method is defined as a Powershell method. In the template, there’s code that (surprise) calls groovy. The powershell code is the following (see if you can spot a couple of tricks):

#
# Source: DotJim blog (https://dandraka.com)
# Jim Andrakakis, December 2018
#
Import-Module JAMS

# the job's source is supposed to contain ONLY 
# the full path to the groovy script, without quotes
$groovy = "C:\app\groovy-2.5.4\bin\groovy.bat"
$groovyScript="<<JAMS.Current.Source>>"

Write-Host "[JAMS-GROOVY] Running script $groovyScript via $groovy"
if ((Test-Path -Path $groovy) -ne $true)
{
	Write-Error "[JAMS-GROOVY] Groovy executable $groovy not found, is Groovy installed?"
}
if ((Test-Path -Path $groovyScript) -ne $true)
{
	Write-Error "[JAMS-GROOVY] Source file $groovyScript not found"
}

$currentJob = Get-JAMSEntry {JAMS.JAMSEntry} 
$currentJobParams = $currentJob.Parameters
$currentJobParamNames = $currentJobParams.Keys

foreach($n in $currentJobParamNames)
{
	[string]$v = $currentJobParams[$n].Value
	
	# look for replacement tokens
	# in the form of <<ParamName>>
	foreach($r in $currentJobParamNames)
	{
		if ($v.Contains("<<$r>>"))
        {
            [string]$replVal = $currentJobParams[$r].Value
            $v = $v.Replace("<<$r>>", $replVal)
        }
	}
	
	Write-Host "[JAMS-GROOVY] Setting parameter $n = $v"
	[Environment]::SetEnvironmentVariable($n, $v, "Process")
}

# execute the script in groovy
& $groovy $groovyScript

Write-Host "[JAMS-GROOVY] script finished"

Two tricks to note here:

  • Almost all our groovy scripts have parameters; Jenkins inserts the parameters as environment variables so the scripts can do:
myVar = System.getenv()['myVar']

The first powershell loop does exactly that; it maps all the job’s parameters, defined or inherited, as environment variables, so the scripts can continue to work happily, no change needed.

  • The second trick is actually an enhancement. As the scripts get promoted through our environments (development > test > integration test > production) some parts of the parameters change, but not all of them.

For example, let’s say there’s a parameter for an inputDirectory.
In the development server, it has the value c:\documents\dev\input. In test, it’s c:\documents\test\input, in integration test it’s c:\documents\intg\input and in production c:\documents\prod\input.

What we can do now is have a folder-level parameter, defined on the JAMS folder where our job definitions are, which is not transferred from environment to environment. And we can have job-defined parameters that, using the familiar JAMS <<param>> notation, get their values substituted.

So, for example, let’s say I define a folder parameter named “SERVERLEVEL”, which will have the value of “dev” in development, “test” in test etc. In the job, I define another parameter called inputDirectory. This will have the value c:\documents\<<SERVERLEVEL>>\input.

Et voilà! Now we can promote the jobs from environment to environment, completely unchanged. In Jenkins we couldn’t do that; we had to define different values for parameters in dev, in test etc.
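
To make the substitution concrete, here’s a tiny standalone sketch of the same token replacement, outside JAMS, using a plain hashtable instead of the job’s parameter collection (the parameter names and values are made up for illustration):

# Standalone illustration of the <<param>> substitution done by the execution method.
$params = @{
    "SERVERLEVEL"    = "dev"
    "inputDirectory" = "c:\documents\<<SERVERLEVEL>>\input"
}

# Same logic as in the execution method: replace <<name>> tokens with parameter
# values, then expose everything as process-level environment variables.
foreach ($name in @($params.Keys)) {
    $value = [string]$params[$name]
    foreach ($token in $params.Keys) {
        if ($value.Contains("<<$token>>")) {
            $value = $value.Replace("<<$token>>", [string]$params[$token])
        }
    }
    Write-Host "Setting $name = $value"
    [Environment]::SetEnvironmentVariable($name, $value, "Process")
}

# A Groovy script started from this process would now see:
#   System.getenv()['inputDirectory']  ==>  c:\documents\dev\input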

Here’s the export xml of the execution method:

<?xml version="1.0" encoding="utf-8"?>
<JAMSObjects>
  <method
    name="Groovy"
    type="Routine">
    <description><![CDATA[Run a pre-fetched groovy script. The job's source should contain the full path to the groovy script.

Note: in the "Bad regex pattern", the execution methon looks for "Caught:" to try to undertand whether 
groovy encountered an exception or not. Here's an example of the groovy output of a script where
an unhandled exception occured:

Hello, world!
Caught: java.lang.NullPointerException: Cannot invoke method test() on null object
java.lang.NullPointerException: Cannot invoke method test() on null object
        at test1.run(test1.groovy:4)]]></description>
    <template><![CDATA[Import-Module JAMS

# the job's source is supposed to contain ONLY 
# the full path to the groovy script, without quotes
$groovy = "C:\app\groovy-2.5.4\bin\groovy.bat"
$groovyScript="<<JAMS.Current.Source>>"

Write-Host "[JAMS-GROOVY] Running script $groovyScript via $groovy"
if ((Test-Path -Path $groovy) -ne $true)
{
	Write-Error "[JAMS-GROOVY] Groovy executable $groovy not found, is Groovy installed?"
}
if ((Test-Path -Path $groovyScript) -ne $true)
{
	Write-Error "[JAMS-GROOVY] Source file $groovyScript not found"
}

$currentJob = Get-JAMSEntry {JAMS.JAMSEntry} 
$currentJobParams = $currentJob.Parameters
$currentJobParamNames = $currentJobParams.Keys

foreach($n in $currentJobParamNames)
{
	[string]$v = $currentJobParams[$n].Value
	
	# look for replacement tokens
	# in the form of <<ParamName>>
	foreach($r in $currentJobParamNames)
	{
		if ($v.Contains("<<$r>>"))
        {
            [string]$replVal = $currentJobParams[$r].Value
            $v = $v.Replace("<<$r>>", $replVal)
        }
	}
	
	Write-Host "[JAMS-GROOVY] Setting parameter $n = $v"
	[Environment]::SetEnvironmentVariable($n, $v, "Process")
}

# execute the script in groovy
& $groovy $groovyScript

Write-Host "[JAMS-GROOVY] script finished"]]></template>
    <properties>
      <property
        name="HostAssemblyName"
        typename="System.String"
        value="JAMSPSHost" />
      <property
        name="HostClassName"
        typename="System.String"
        value="MVPSI.JAMS.Host.PowerShell.JAMSPSHost" />
      <property
        name="StartAssemblyName"
        typename="System.String"
        value="" />
      <property
        name="StartClassName"
        typename="System.String"
        value="" />
      <property
        name="EditAssemblyName"
        typename="System.String"
        value="" />
      <property
        name="EditClassName"
        typename="System.String"
        value="" />
      <property
        name="ViewAssemblyName"
        typename="System.String"
        value="" />
      <property
        name="ViewClassName"
        typename="System.String"
        value="" />
      <property
        name="BadPattern"
        typename="System.String"
        value="^Caught\:" />
      <property
        name="ExitCodeHandling"
        typename="MVPSI.JAMS.ExitCodeHandling"
        value="ZeroIsGood" />
      <property
        name="GoodPattern"
        typename="System.String"
        value="" />
      <property
        name="SpecificInformational"
        typename="System.String"
        value="" />
      <property
        name="SpecificValues"
        typename="System.String"
        value="" />
      <property
        name="SpecificWarning"
        typename="System.String"
        value="" />
      <property
        name="Force32Bit"
        typename="System.Boolean"
        value="false" />
      <property
        name="ForceV2"
        typename="System.Boolean"
        value="false" />
      <property
        name="HostLocally"
        typename="System.Boolean"
        value="false" />
      <property
        name="Interactive"
        typename="System.Boolean"
        value="false" />
      <property
        name="NoBOM"
        typename="System.Boolean"
        value="false" />
      <property
        name="SourceFormat"
        typename="MVPSI.JAMS.SourceFormat"
        value="Text" />
      <property
        name="EditAfterStart"
        typename="System.Boolean"
        value="false" />
      <property
        name="EditSource"
        typename="System.Boolean"
        value="false" />
      <property
        name="Extension"
        typename="System.String"
        value="ps1" />
      <property
        name="JobModule"
        typename="System.String"
        value="" />
      <property
        name="SnapshotSource"
        typename="System.Boolean"
        value="false" />
      <property
        name="Redirect"
        typename="MVPSI.JAMS.Redirect"
        value="All" />
      <property
        name="HostSubDirectory"
        typename="System.String"
        value="" />
      <property
        name="HostExecutable"
        typename="System.String"
        value="JAMSHost.exe" />
    </properties>
  </method>
</JAMSObjects>

Powershell: How do you add inline C#?

Powershell is great for admin tasks. Stuff like iterating through files and folders, copying and transforming files is very, very easily done. But inevitably there will always be stuff that is easier to do via a “normal” language such as C#.

Trying to solve a problem I had at work, I needed to transform a CSV file by changing the fields, which is easily done via powershell, and, at the same time, do a “get only the highest record of every group”. The latter is done with LINQ, which you can use in powershell, but it’s cumbersome and will result in many, many lines of code.

So I wanted to do this in a more clean way, in C#. The general template to include C# inside a powershell script is the following:

#
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, November 2018
#
# Here goes the C# code:
Add-Type -Language CSharp @"
using System; 
namespace DotJim.Powershell 
{
    public static class Magician 
    {
        private static string spell = ""; 
        public static void DoMagic(string magicSpell) 
        {
            spell = magicSpell; 
        }
        public static string GetMagicSpells() 
        {
            return "Wingardium Leviosa\r\n" + spell; 
        }
    }
}
"@;

# And here's how to call it:
[DotJim.Powershell.Magician]::DoMagic("Expelliarmus")
$spell = [DotJim.Powershell.Magician]::GetMagicSpells()

Write-Host $spell

Note here that the C# classes don’t have to be static; but if they are, they’re easier to call (no instantiation needed). Of course this only works if all you need to do is provide an input and get a manipulated output. If you need more complex stuff then yes, you can use non-static classes or whatever C# functionality solves your problems. Here’s the previous example, but with a non-static class:

#
# Source: DotJim blog (https://dandraka.com)
# Jim Andrakakis, November 2018
#
# Here goes the C# code:
Add-Type -Language CSharp @"
using System; 
namespace DotJim.Powershell 
{
    public class Magician 
    {
        private string spell = ""; 
        public void DoMagic(string magicSpell) 
        {
            spell = magicSpell; 
        }
        public string GetMagicSpells() 
        {
            return "Wingardium Leviosa\r\n" + spell; 
        }
    }
}
"@;

# Here's how to create an instance:
$houdini = New-Object -TypeName DotJim.Powershell.Magician
# And here's how to call it:
$houdini.DoMagic("Expelliarmus")
$spell = $houdini.GetMagicSpells()

Write-Host $spell

The main advantage of having the C# inside the powershell script (and not in a separate dll file) is that it can be deployed very easily with various DevOps tools. Otherwise you need to deploy the dll alongside the script, which can sometimes be a source of trouble.
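
For comparison, the dll alternative would look roughly like this. The dll name and path are made up for illustration; the point is that the compiled assembly now has to travel with the script:

# Alternative: the same Magician class compiled into a class library beforehand.
# Add-Type can load the dll directly, but the dll must be deployed next to the script.
Add-Type -Path "$PSScriptRoot\DotJim.Powershell.dll"

[DotJim.Powershell.Magician]::DoMagic("Expelliarmus")
Write-Host ([DotJim.Powershell.Magician]::GetMagicSpells())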

So here’s my complete working code, which worked quite nicely:

#
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, November 2018
#
# The purpose of this script is to read a CSV file with bank data
# and transform it into a different CSV.
#
# 1. The Bank class is a POCO to hold the data which I need
#    from every line of the CSV file.
# 2. The Add() method of the BankAggregator class adds the
#    record to the list after checking the data for correctness.
# 3. The Get() method of the BankAggregator class does a
#    LINQ query to get the 1st (max BankNr) bank record
#    from every record with the same Country/BIC.
#    It then returns a list of strings, formatted the way
#    I want for the new (transformed) CSV file.
#
# Here is where I inline the C# code:
Add-Type -Language CSharp @"
using System;
using System.Collections.Generic;
using System.Linq;

namespace DotJim.Powershell {
    public class Bank {
        public int BankNr;
        public string Country;
        public string BIC;
    }

    public static class BankAggregator {
        private static List<Bank> list = new List<Bank>();

        public static void Add(string country, string bic, string bankNr) {
            // For debugging
            //Console.WriteLine(string.Format("{0}{3}{1}{3}{3}{2}", country, bic, bankNr, ";"));
            int mBankNr;
            // Check data for correctness, discard if not ok
            // (any empty field, a country code that is not 2 chars,
            // or a bank number that is not a positive integer)
            if (string.IsNullOrWhiteSpace(country) ||
                country.Length != 2 ||
                string.IsNullOrWhiteSpace(bic) ||
                string.IsNullOrWhiteSpace(bankNr) ||
                !int.TryParse(bankNr, out mBankNr) ||
                mBankNr <= 0) {
                return;
            }
            list.Add(new Bank() {
                BankNr = mBankNr, Country = country, BIC = bic
            });
        }

        public static List<string> Get(string delimiter) {
            // For every record with the same Country & BIC, keep only
            // the record with the highest BankNr
            var bankList = from b in list
                           group b by new { b.Country, b.BIC } into bankGrp
                           let maxBankNr = bankGrp.Max(x => x.BankNr)
                           select new Bank {
                               Country = bankGrp.Key.Country,
                               BIC = bankGrp.Key.BIC,
                               BankNr = maxBankNr
                           };
            // Format the list the way I want the new CSV file to look
            return bankList.Select(x => string.Format("{0}{3}{1}{3}{3}{2}",
                x.Country, x.BIC, x.BankNr, delimiter)).ToList();
        }
    }
}
"@;

# Read one or more files with bank data from the same dir
# where the script is located ($PSScriptRoot)
$srcSearchStr = "source_bankdata*.csv"
$SourcePath = $PSScriptRoot
$destPath = $SourcePath

$fields = @("Country","BIC","EmptyField","BankId")

$filesList = Get-ChildItem -Path $SourcePath -Filter $srcSearchStr

foreach ($file in $filesList)
{
    Write-Host "Processing" $file.FullName

    # Fields in the source CSV:
    # BANKNUMMER  = BankNr
    # BANKLAND    = Country
    # BANKSWIFT   = BIC
    $data = Import-Csv -Path $file.FullName -Delimiter ";"

    foreach ($item in $data)
    {
        # Call the C# code to add the CSV lines to the list
        [DotJim.Powershell.BankAggregator]::Add($item.BANKLAND,$item.BANKSWIFT,$item.BANKNUMMER)
    }

    # Call the C# code to get the transformed data
    $list = [DotJim.Powershell.BankAggregator]::Get(";")

    Write-Host "Found" $list.Count "valid rows"

    # Now that we have the list, write it in the new CSV
    Out-File -FilePath "$destPath\transformed_bankdata_$(New-Guid).csv" -Encoding UTF8 -InputObject $list
}

Have fun coding!

My bread recipes

[UPDATE 03.2019] added a Brioche recipe.

I recently bought a bread machine, an Unold 8695 Onyx, and I’m very, very happy with it. Simple machine, nothing fancy (whenever I hear of appliances that are “connected”, “internet enabled” or, god forbid, “on the blockchain” I run away) but great value for money and it gets the job done, very well.

The manual is excellent, with detailed timing tables and recipes which I fully recommend. That said, I did get the recipes that I liked most -the humble white bread and the farmer’s bread- and customized them a bit.

These are the ingredients, in the order which I put them in the bowl:

Brioche

Ingredient For 600 gr bread
White flour (Zopfmehl, type 405) 390 ml
Salt 3/4 teasp. (4 gr)
Sugar 2 tblsp. (40 gr)
Vanilla sugar 1 pkg (8 gr)
Whole egg 1
Egg yolk 1
Yeast, fresh 1/2 cube
Milk 160 ml
Butter 80 gr

Important note: put everything in the bread maker bowl, in that order, except the milk and the butter. Then heat the milk and the butter just slightly (do not boil!) until the butter is almost melted. Then pour the milk-butter mix in the bowl over the other ingredients.

Use the Sweet (“Hefekuchen”) or Quick (“Schnell”) program, size 1 (“Stufe 1”) and light crust setting.

White bread

Ingredient For 500 gr bread For 800 gr bread
Water 230 ml 300 ml
Salt 3/4 teasp. (4 gr) 1 teasp. (6 gr)
Honey 2 tblsp. (40 gr) 2.5 tblsp. (52 gr)
Wheat semolina (or Corn polenta) 100 gr 126 gr
Whole wheat flour (Ruchmehl) or light whole wheat flour (Halbweissmehl) 20 gr 30 gr
White flour (Weissmehl, type 550, preferably with vitamins) 280 gr 356 gr
Yeast (if fresh yeast is used, use 1/2 a cube in both cases) 5 gr 7 gr (1 package)

Farmer’s bread

Ingredient For 800 gr bread
Water 320 ml
Leaven (Sauerteig; in CH, I can only find leaven powder in Coop) 10 gr (1 package)
Salt 1 teasp. (6 gr)
Butter or margarine 20 gr
Honey 2.5 tblsp. (52 gr)
Light whole wheat flour (Halbweissmehl) 400 gr
White flour (Weissmehl, type 550, preferably with vitamins) 100 gr
Yeast, fresh 1/2 cube

For both of them, I then use the “Quick” (“Schnell”) program, with light or medium crust. 1h 40min later, it’s ready.

Enjoy!

Powershell & Microsoft Dynamics CRM: how to get results using a FetchXml

If you’ve used Microsoft CRM as a power user (on-premise or online), chances are you’ve come across the standard way of querying CRM data, FetchXml.

You can run this by hand but of course the real power of it is using it to automate tasks. And another great way to automate tasks in Windows is, naturally, powershell.

So here’s a script I’m using to run a fetch xml and export the results to a csv file:

#
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, May 2018
#
# ============ Constants to change ============
# note: create pwd file with the following command:
# read-host -assecurestring | convertfrom-securestring | out-file C:\temp\crmcred.pwd
$pwdFile = "C:\temp\crmcred.pwd"
$username = "myusername@mycompany.com"
$serverurl = "https://my-crm-instance.crm4.dynamics.com"
$fetchXmlFile = "c:\temp\fetch.xml"
$exportfile = "C:\temp\crm_export.csv"
$exportdelimiter = ";"
# =============================================
# ============ Login to MS CRM ============
# Requires the Microsoft.Xrm.Data.PowerShell module, which provides the Connect-Crm* and Get-CrmRecordsByFetch cmdlets
Import-Module Microsoft.Xrm.Data.PowerShell
$password = get-content $pwdFile | convertto-securestring
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $username,$password
try
{
    # for on-prem use :
    # $connection = Connect-CrmOnPremDiscovery -Credential $cred -ServerUrl $serverurl
    $connection = Connect-CRMOnline -Credential $cred -ServerUrl $serverurl
    # you can also use interactive mode if you get e.g. problems with multi-factor authentication
    #$connection = Connect-CrmOnlineDiscovery -InteractiveMode -Credential $cred
}
catch
{
    Write-Host $_.Exception.Message
    exit
}
if($connection.IsReady -ne $True)
{
    $errorDescr = $connection.LastCrmError
    Write-Host "Connection not established: $errorDescr"
    exit
}
# ============ Fetch data ============
$fetchXml = [xml](Get-Content $fetchXmlFile)
$result = Get-CrmRecordsByFetch -conn $connection -Fetch $fetchXml.OuterXml
# ============ Write to file ============
# Obviously here, instead of writing to csv directly, you can loop and do whatever suits your needs, e.g. run a db query, call a web service etc etc
$result.CrmRecords | Select -Property lastname, firstname | Export-Csv -Encoding UTF8 -Path $exportfile -NoTypeInformation -Delimiter $exportdelimiter
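
As the comment in the script says, instead of writing a CSV you can loop over the records and do whatever you need with each one. Here’s a minimal sketch, assuming the example FetchXml below (so each record has the lastname and firstname attributes):

# Minimal sketch: process each fetched record instead of exporting to CSV.
foreach ($record in $result.CrmRecords) {
    # Every attribute returned by the FetchXml is exposed as a property of the record.
    Write-Host "$($record.lastname), $($record.firstname)"
    # ...here you could run a db query, call a web service, etc.
}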

When you use your own FetchXml, do remember to change the properties in the last line (lastname, firstname).

For a quick test, the example FetchXml I’m using is the following:

<fetch mapping="logical" version="1.0">
    <entity name="account">
        <attribute name="customertypecode" alias="customertypecode"/>
        <attribute name="name" alias="company_name"/>
        <attribute name="emailaddress1" alias="company_emailaddress1"/>
        <link-entity name="contact" from="accountid" to="accountid" link-type="inner">
            <attribute name="lastname" alias="lastname"/>
            <attribute name="firstname" alias="firstname"/>
        </link-entity>
    </entity>
</fetch>

Have fun coding!

How to make an espresso / cappuccino freddo

So you went on vacation in Greece or Cyprus or southern Italy and liked the cold coffee they serve there? Or maybe you have a Greek colleague who’s busting your balls non-stop about how great cold coffee is, and you just want him to shut up? You’re at the right place!

Now you’re talking!

This recipe is for both espresso freddo and cappuccino freddo, which are exactly the same thing; you just add cold milk foam on top of the espresso freddo to make the cappuccino version.

Over the years I’ve tried to simplify the recipe a bit. It’s not barista-level good, but anyone who’s tried it tells me it’s pretty decent.

You can see the video here:

To begin with, here’s the equipment you need:

  • A strong coffee mixer. This is an absolute must, you can’t do without it. Outside of Greece they are called “drink mixers” (you can find them on amazon.de, for example).
  • One or more suitable tall glasses. You need them to be around 200-250 ml for espresso freddo and 300-350 ml for cappuccino freddo. The ones from IKEA are fine.
  • Two cocktail shakers, one for the milk and one for the coffee. It’s ok if you don’t have shakers though; you can just use normal glasses. But you can also buy them from amazon.de.

Now let’s see the stuff you need to prepare every time before you make cold coffee.

  • I’m sure you’ll be surprised to learn that you need coffee! Basically you need a double espresso, around 100ml. What I usually do is use the Lungo capsules for my Dolce Gusto machine, and set it to 3 lines instead of 4.
  • You also need straws, medium or thin ones. Don’t get the thick ones, they’re good for smoothies but not cold coffee.
  • You need ice cubes. For every coffee, you need 5-6.
  • If you’re going to make cappuccino (not espresso) freddo, you need milk, and you need it cold. Let me say that again, because it’s really really important: COLD. Ideally it should be 2 degrees. That means that you need to put it at the back of the fridge, not at the door where it’s a bit warmer. I usually put it in the freezer for about 10 min before I start. Either way, keep it refrigerated until the moment you actually need it.

    You also need to experiment a bit with the kind of milk you’ll use. I’ve found that the best one, at least from the ones you find in a regular supermarket, is full fat UHT milk, 3.5%. The one you get from the fridge of the supermarket isn’t as good, no idea why. If you find a “barista milk”, get it; they have more proteins so they froth better.

  • One of the shakers, the one to use for the milk, has to be really, really cold. Put it in the freezer for at least an hour before making the coffee.

The basic idea is that, in order to make the foam milk, the milk has to be cold and stay cold. That’s why you need its container to also be frozen.

Now that we’ve prepared everything, let’s get to work.

  • The first thing you need to do is prepare the coffee. If you also want sugar, you need to add it immediately afterwards, while the coffee is still hot, and stir it a bit with the mixer; that way it will melt nicely and you won’t get the awful crunchy feeling of unmelted sugar.
  • Now we need to get our coffee ice cold. Put 5 or 6 ice cubes in the shaker or glass. Pour the coffee swiftly over the ice cubes. Stir it a bit with the mixer, but not too much, you don’t want it to turn into foam. 5-6 seconds should be enough. Then pour everything (coffee + ice cubes) into the glass.
  • If you want an espresso freddo, you can add a straw and stop here, you’re done. Otherwise you have one more step, to prepare the cold milk foam.
  • Get the milk out of the fridge and the 2nd shaker (or glass) out of the freezer. Fill the shaker just below half full with milk. Stir it with the mixer for some time (at least 30 sec, can be more) until the surface is smooth and free of bubbles. This part is exactly why the shaker has to be cold. If it’s not, it will warm up the milk and it will be impossible to turn it into foam.

Pro (well, sort of) tip: when holding the shaker with the milk and stirring, try to grab it from the top, not the middle or the bottom. That way the heat from your hand will affect the milk as little as possible.

The result should, ideally, look like this:

(picture: a cappuccino freddo, served with a glass of iced water, by the beach)

The water -always with ice cubes!- is mandatory. The beach isn’t, but it’s a very nice addition 😉