
Password Manager For Dummies

Normally I start every post with a small introduction. This one I want to keep as short as possible so I’ll just say this: It’s 2021. You need a password manager.

Let’s start from the very beginning. First, I’ll explain a few things you’ll hear often. A lot of these words can seem daunting but actually are quite simple. Then we get down to the nitty gritty.

I DON’T WANT TO DO THIS WHY DO I NEED TO DO THIS???!??!

Because there are some things that you 1) want to be able to do on the internet but 2) don’t want other people to be able to do (at least not without you knowing).

You don’t want other people to move money from your bank account. Or buy things with your credit card. You get the idea.

But but but I already have a password!

Yes, you do. But there are some problems.

If you’re, well, human, you can remember some things but not many and not very well (read this if you don’t believe me). And it’s 2021, if you don’t live under a rock you have at the very least 10-20 accounts in different services, like your bank, your email etc etc. Try to count them and write in the comments how many you found 😊

The other problem is: criminals steal data from these services. A lot. Like, in the billions. Estee Lauder had a breach in February 2020 where 440 million records -data about people- were stolen. MGM Resorts, which you know from the casino in “Ocean’s 11”, had personal information about more than 10 million guests stolen. And these are just 2 of the around 3000 data breaches that were reported in 2020 in the US alone.

What this means is that your password will get stolen and there’s nothing you can do about it. Well, almost nothing. You can and should do 3 things:

  • Have a unique password per service. This way, when your H&M password is stolen, it cannot be used to pay with your PayPal.
  • Use random passwords. For crying out loud, do not use your phone number. You think that adding a few letters here and there makes it safe. It does not. A computer with a program you can download for free can crack your “safe” password in like an hour. The password must be long and random, something like g5D9C467YxeEfAmqL. You get the idea (if you’re curious, there’s a small example right after this list).
  • Use 2-factor authentication. Since this post is already long, I’ll get to this in a later one.
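
Just to show what “random” means in practice, here’s a tiny Powershell sketch that picks 20 random characters. Your password manager does this for you, and with a cryptographically secure generator at that, so this is purely for illustration:

# Pick 20 random characters from a set of letters and digits.
# Note: Get-Random is good enough to illustrate the idea,
# but NOT cryptographically secure -use your password manager!
$chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
-join (1..20 | ForEach-Object { $chars[(Get-Random -Maximum $chars.Length)] })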

What does “authentication” mean? And what are these “credentials” I keep hearing about?

Credentials just means whatever you need to give to a service, like a web site, so that it checks it’s really you. Some of it is secret, some of it is not. Usually it’s a username and a password but it might be more, like your fingerprint or a code that you receive on your phone.

Authentication is just the process that checks the credentials and lets you in (or not).

What’s a password manager?

It’s a program that stores your credentials and helps you use them. Because your passwords must be long, it’s tedious to type them yourself. So the password manager can, for example, auto-fill them or let you copy-paste them into your e-banking web site.

Ok, ok, I’ll do it, but which one should I use?

There are many good password managers you can use like 1Password, LastPass, Devolutions, NordPass and others. Here I’ll use my favourite one which is Bitwarden, because it’s arguably the best free one and in my humble opinion the easiest to use.

Obviously this is just one way to do it; it works and it’s secure, but of course you can change things, for example use a different program. The main things to consider if you decide to use another one are:

  • It should have both a computer as well as a smartphone application.
  • It should be able to synchronize your credentials between them.
  • It should be as simple to use as possible.

And how much time will it take?

Realistically, assuming you’re an average computer and smartphone user, for 5-10 web sites you’ll need around a couple of hours from start to finish. Obviously if you have dozens it will take more -not proportionally- but it’s also worth more. If you get stuck, write me in the comments and I’ll do my best to help.

UPDATE: some friends suggested that instead of doing all your sites at once, it makes the effort more manageable to do the most important ones first -e-banking, email etc. The rest you can do when you come across them in everyday use.

Now I’ll explain how you do it in your computer and smartphone. Ready, set, go!

Password Manager For Dummies: Store your passwords

Part 1: Introduction
Part 2: Store your passwords
Part 3: Now on your phone

We’ll start from your computer because usually it’s easier to create the account there. Then we’ll continue to your smartphone. But the very first thing you need to do is grab a piece of old fashioned paper.

Step 1: Write a password and a 6-digit code.

Get a piece of paper. Yes, the traditional kind!

Not necessarily a post-it, but this will do as well

Write 20 or more random numbers and letters, both lower and capital. Something like 6xTzHx41jKQ3yg48FeR9sAb. This will be your password.

You don’t need to remember this.

On the same piece of paper write 6 random digits. DO NOT USE ANYTHING REAL OR EVEN CLOSE TO IT LIKE YOUR BIRTHDAY OR YOUR POSTCODE OR YOUR PHONE, NOT EVEN CHANGED. This will be your unlock code.

This code will be the one and only thing you need to learn by heart.

Keep this paper safe in your desk at home but NOT on your computer -don’t take a photo of it or write it in a Word file.

Step 2: Create your Bitwarden account

On your computer, go to bitwarden.com and click “Get started”.

Fill in the form, it’s really simple. Use the password you wrote on the paper.

Step 3: Install the browser extension

Still on your computer, open your favourite browser -Firefox, Chrome, Edge, Opera, whatever- go to the bitwarden extension and install it.

Here it is for Firefox

Here for Chrome

Here for Microsoft Edge (you’re not still using Internet Explorer, are you?)

And here for Opera.

In case you’re using anything else, just google “bitwarden <browser name>” and you’ll find it.

NOTE: As you’ll see, about the only annoying thing with Bitwarden is that if you click outside of it before you save your changes, it closes and loses your input. There’s a solution for this: you can click the “Pop out” button and then it opens as a separate window. The “Pop out” button is this one:

When the extension is installed, you’ll get the Bitwarden shield icon on the top right corner of your browser. Click it and fill in your email and password.

Once you log in you see your list of passwords. This is called your “vault”. For now, it’s obviously empty.

Click “Settings”, then “Unlock with pin”. Enter the 6 numbers you wrote on the paper and uncheck the “lock with master password…” check box.

Step 4: Store your credentials

If you’ve made it this far, great job! Now it’s time to start storing your passwords, one by one.

Click the shield icon of Bitwarden, then the plus icon on the top right corner.

Start with your email. Enter the name, username and password -the ones you have already. Add also the URL you use to access the site. Then click “Save”.

One by one, add all the sites and other services you have. This will probably take some time; my list has more than 400 entries 😊

Step 5: Try it

So all of this is supposed to help you right? Here’s how it helps you login. Say you want to log in to your email for example.

Click the shield icon of Bitwarden, click “My vault” and click the little arrow of the site. You’ll see that it takes you there.

In your email site, click “Sign in” or “Login” or whatever it has. Right click on the username or password box and select Bitwarden > Auto-fill > your site name. Then click Next or Login or whatever it has.

If for whatever reason right click doesn’t find the site, there’s another way that’s not as easy but works every time. From “My vault” click the head icon to copy the username, then paste it in the site, then click the key icon to copy the password, then paste it in the site.

After doing it a few times, you’ll get the hang of it; it will feel very easy very quickly.

Step 6: Change your passwords

Until now you’ve done great, but we’re still using our old passwords. Now it’s time to make them big and hard 😉

The exact process differs slightly for every site, obviously, but not much. In this example, I’ll use a popular e-shop, Zara UK.

Go to your profile and go to change password:

In the Bitwarden “My vault” click the key icon of the site (see above) to copy the existing password. Paste it in the “Current password” box of the web site.

Then go to the Bitwarden “My vault” again and click somewhere in the middle of the site name. This will open the entry. Click Edit on the top right corner.

Click the double arrow next to the password and click “yes” in the “overwrite password” question. Slide the length of the password to something over 17, click “regenerate” and then “select”.

Click “Save” to save the new password.

Now go to “My vault” again, click the key icon to copy the new password, go to the web site and paste it twice. Then click “Update password” or whatever button is there.

The first time you do it will be cumbersome, but after the first 2-3 sites, it will feel really easy.

If you’ve reached this far, congratulations 🥳🎉👏 You’ve done the hard work! The last thing to do is install the app on your smartphone so you can use it there too. Let’s go!

Password Manager for Dummies: Now on your phone

Part 1: Introduction
Part 2: Store your passwords
Part 3: Now on your phone

Here we get to the fun part -well, if not fun, certainly the easiest and most useful. I’ll give screenshots for iPhone, because that’s what I have, but for Android it’s almost the same.

Step 1: Install the Bitwarden App

Go to your App Store (or Play Store for Android), find Bitwarden and install it.

Step 2: Login

Open the app, click Log In and fill in the email and password (the one you wrote on the paper).

Go to Settings and press “Unlock with PIN code”. Enter the 6-digit number you wrote on the paper and select “No”.

We’re ready to use it!

Step 3: Use it to login to sites

Let’s try to use the browser in our smartphone to login to Zara UK. Navigate to the web site and click Login, or My Account or whatever it has:

Now switch to Bitwarden (you might need to unlock it with your 6-digit code), find the site, press the 3 dots and click Copy Username.

Switch to the browser, tap in the username box and paste the username.

Repeat the same steps for the password and click Log In.

Ta da! We’re in!

That’s all folks

This was what you have to do to get started and work with Bitwarden. It’s not an exhaustive guide, mind you, there’s more to it. But it covers the most important part: securely creating, storing and using unique passwords that are impossible to guess.

I hope this works for you. If you have any questions or suggestions, I’ll be more than happy to discuss in the comments!

Have fun 😊

How to ask for a certificate the right way: CSR via Windows or Keytool with Subject Alternative Names (SANs)

Sooo you’re working in an enterprise and have to maintain an internal server. The security audit asks you to ensure all HTTP communications are encrypted, so you need to change to HTTPS. Boy is this SO not obvious. You’d think this should be quite easy by now, but there are A LOT of pitfalls in your way.

If you want the TL;DR version, to skip the explanation and go directly to the instructions, scroll directly to the Mandalorian below. No hard feelings, honest 😊

Mistake #1: Use a self-signed certificate

Many, many, MANY tutorials you’ll find online are written with a developer in mind, leaving the maintainer/admin as an afterthought -if that. So what they care about is having some certificate, any certificate, as long as it works on the developer’s PC.

But what this certificate says is basically “I’m Jim because I say so”.

Do I need to say that it won’t work for other PCs? Yes? Well surprise, it won’t.

Mistake #2: Get a certificate from your PC’s certificate authority

I don’t know how some people don’t understand that this, while a bit more complex, is basically the same as #1. What this certificate says is “I’m Jim because someone else who is also Jim says so”.

Yeah, no, it won’t work.

Mistake #3: Get a certificate from a trusted certificate authority using only a server name (or an alias).

Now we’re getting more serious.

Getting a certificate from a trusted certificate authority (CA for short) is the right thing to do. The certificate you get then says “I’m Jim because someone else who you already trust says so”.

So if you get a certificate that verifies you are, say, server web-ch-zh.xyz123.com or mysite.xyz123.com, that’s good enough. Right?

Ummm…

IT DEPENDS.

If you run a website (e.g. https://www.xyz123.com) and want your HTTPS URL to work without giving a certificate warning, that’s fine. You don’t need to do anything else. That’s why most tutorials that avoid the self-signed certificate pitfall stop here.

But remember, our scenario is that we’re working for an enterprise (a big company) and we’re maintaining an internal server. What that usually -not always, but a lot of the time- means is that communication to our server happens using different hostnames.

Let me give you my own example:

  • I run a service called Joint Information Module or JIM for short -that’s a totally real service name [1].
  • The server name is ch-zh-jim-01.mycompany.local.
  • The users use the web interface of the service by navigating to https://jim.mycompany.com.
  • Another application uses the REST API of the service using the server name (ch-zh-jim-01) without the domain name (mycompany.local).
  • The service uses a queuing software that is installed on the same server. We want to use the same certificate for this as well. The JIM service accesses the queues via https://localhost (and a port number).

Now, if the certificate you got says “ch-zh-jim-01.mycompany.local” and you try to access the server via https://ch-zh-jim-01, https://jim.mycompany.com, https://localhost or https://127.0.0.1, you’ll get a certificate error much like the following:

certificate error chrome

Also, the REST API won’t work. The caller will throw an exception, e.g. java.security.cert.CertPathValidatorException in Java or System.Security.Authentication.AuthenticationException in DotNet. You can avoid this by forcing your code to not care about invalid certificates but this is a) lazy b) bad c) reaaaaaaaaaaly bad, seriously man, don’t do this unless the API you’re connecting to is completely out of your control (e.g. it belongs to a government).
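
By the way, a quick way to see for yourself which names a certificate actually covers is openssl -here assuming you have it available, e.g. via Git Bash on Windows. Look for the Subject and the Subject Alternative Name fields in the output:

# Fetch the server certificate and print its details;
# -servername sets the SNI name, like a browser would.
echo | openssl s_client -connect ch-zh-jim-01.mycompany.local:443 -servername jim.mycompany.com 2>/dev/null | openssl x509 -noout -text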

The correct way

So you need a certificate that is trusted and valid for all the names that will be used to communicate with your server. How do you do that? SIMPLEZ!

  1. Generate a CSR (a certificate signing request, which is a small file you send to the CA) with the alternative names (SANs) you need. That’s what I’ll cover here.
  2. Send it to a trusted CA
    1. either the one your own company operates or
    2. a commercial one (which you have to pay for), say Digicert.
  3. Get the signed certificate and install it on your software.

Important note: the CA you send the CSR to must support SANs. Not every CA supports this, for their own reasons. Make sure you read their FAQ or ask their helpdesk. Let’s Encrypt, a free and very popular CA, supports them.

Here I’ll show how you can generate a CSR, both in the “Microsoft World” (i.e. on a Windows machine) and in the “Java World” (i.e. on any machine that has Java installed).

A. Using Windows

Note that this is the GUI way to do this. There’s also a command line tool for this, certreq. I won’t cover it here as this post is already quite long, but you can read a nice guide here and Microsoft’s reference here. One thing to note though is that it’s a bit cumbersome to include SANs with this method.

  1. Open C:\windows\System32\certlm.msc (“Local Computer Certificates”).
  2. Expand “Personal” and right click on “Certificates”. Select “All tasks” > “Advanced Operations” > “Create Custom Request”.
  3. In the “Before you begin” page, click Next.
  4. In the “Select Certificate Enrollment Policy” page, click “Proceed without enrollment policy” and then Next.
  5. In the “Custom Request” page, leave the defaults (CNG key / PKCS #10) and click Next.
  6. In the “Certificate Information” page, click on Details, then on Properties.
  7. In the “General” tab:
    1. In the “Friendly Name” field write a short name for your certificate (that has nothing to do with the server). E.g. cert-jim-05-2021.
    2. In the “Description” field, write a description, duh 😊
  8. In the “Subject” tab:
    1. Under “Subject Name” make sure the “Type” is set to “Full DN” and in the Value field paste the following (without the quotes): “CN=ch-zh-jim-01.mycompany.local, OU=IT, O=mycompany, L=Zurich, ST=ZH, C=CH” and click “Add”. Here:
      • Instead of “ch-zh-jim-01.mycompany.local” enter your full server name, complete with domain name. You can get it by typing ipconfig /all in a command prompt (combine Host Name and Primary Dns Suffix).
      • Instead of “IT” and “mycompany” enter your department and company name respectively.
      • Instead of “Zurich”, “ZH” and “CH” enter the city, state (or Kanton or Bundesland or region or whatever) and country respectively.
    2. Under “Alternative Name”:
      1. Change the type to “IP Address (v4)” and in the Value field type “127.0.0.1”. Click “Add”.
      2. Change the type to “DNS” and in the Value field type the following, clicking “Add” every time:
        • localhost
        • ch-zh-jim-01 (i.e. the server name without the default domain)
        • jim.mycompany.com (i.e. the alias that will be normally used)
        • (add as many names as needed)

Important note: all names you enter there must be resolvable (i.e. there’s a DNS entry for the name) by the CA that will generate your certificate. Otherwise there’s no way they can confirm you’re telling the truth and the request will most likely be rejected.

It should end up looking like this:

  9. In the “Extensions” tab, expand “Extended Key Usage (application policies)”. Select “Server Authentication” and “Client Authentication” and click “Add”.
  10. In the “Private Key” tab, expand “Key Options”.
    1. Set the “Key Size” to 2048 (recommended) or higher.
    2. Check the “Mark private key exportable” check box.
    3. (optional, but HIGHLY recommended) Check the “Strong private key protection” check box. This will make the process ask for a certificate password. Avoid only if your software doesn’t support this (although if it does, you really should question if you should be using it!).

At the end, click OK, then Next. Provide a password (make sure you keep it somewhere safe NOT ON A TEXT FILE ON YOUR DESKTOP, YOU KNOW THAT RIGHT???) and save the CSR file. That’s what you have to send to your CA, according to their instructions.
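
Before you send it, it’s worth sanity-checking what actually ended up inside the CSR. Windows’ built-in certutil should be able to decode it (replace the filename with whatever you saved); check the Subject and the Subject Alternative Name extension:

certutil -dump cert-jim-05-2021.csr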

B. Using Java

Here the process is sooo much simpler:

  1. Open a command prompt (I’m assuming your Java/bin is in the system path; if not, cd to the bin directory of your Java installation). You should have enough permissions to write to your Java security dir; in Windows, that means that you need an administrative command prompt.
  2. Create the certificate. Type the following, in one line, but split across lines here for clarity. Replace as explained below.
keytool
-genkey
-noprompt
-cacerts
-alias cert-jim-05-2021 
-dname "CN=ch-zh-jim-01.mycompany.local, OU=IT, O=mycompany, L=Zurich, ST=ZH, C=CH" 
-keyalg RSA
-keysize 2048
-storepass changeit
-keypass MYSUPERSECRETPASSWORD
  3. Create the certificate signing request (CSR). Type the following, in one line, but split across lines here for clarity. Replace as explained below.
keytool 
-certreq 
-file c:\temp\cert-jim-05-2021.csr 
-cacerts 
-alias cert-jim-05-2021 
-dname "CN=ch-zh-jim-01.mycompany.local, OU=IT, O=mycompany, L=Zurich, ST=ZH, C=CH" 
-ext "SAN=IP:127.0.0.1,DNS:localhost,DNS:ch-zh-jim-01,DNS:jim.mycompany.com" 
-ext "EKU=serverAuth,clientAuth"
-storepass changeit 
-keypass MYSUPERSECRETPASSWORD

In the steps above, you need to replace:

  • “cert-jim-05-2021”, both in the filename and the alias, with your certificate name (which is the short name for your certificate; this has nothing to do with the server itself).
  • “ch-zh-jim-01.mycompany.local” with the full DNS name of your server.
  • “IT” and “mycompany” with your department and company name respectively.
  • “Zurich”, “ZH” and “CH” with your city, state (or Kanton or Bundesland or region or whatever) and country respectively.
  • “ch-zh-jim-01” with your server name (without the domain name).
  • “jim.mycompany.com” with the DNS alias you’re using. You can add as many as needed, e.g. “DNS:jim.mycompany.com,DNS:jim-server.mycompany.com,DNS:jim.mycompany.gr,DNS:jim.mycompany.ch”

Important note: all names you enter there must be resolvable (i.e. there’s a DNS entry for the name) by the CA that will generate your certificate. Otherwise there’s no way they can confirm you’re telling the truth and the request will most likely be rejected.

  • “changeit” is the default password of the Java certificate store (JAVA_HOME/jre/lib/security/cacerts). It should be replaced by the actual password of the certificate store you’re using. But 99.999% of all java installations never get this changed 😊 so if you don’t know otherwise, leave it as it is.
  • “MYSUPERSECRETPASSWORD” is a password for the certificate. Make sure you keep it somewhere safe NOT ON A TEXT FILE ON YOUR DESKTOP, YOU KNOW THAT RIGHT???

That’s it. The CSR is saved in the path you specified (in the “-file” option). That’s what you have to send to your CA, according to their instructions.
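
Also here, before sending the CSR you can double-check that the SANs actually made it in; keytool can print the request back. Look for the SubjectAlternativeName extension in the output:

keytool -printcertreq -v -file c:\temp\cert-jim-05-2021.csr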

Enjoy!

[1] no it’s not, c’mon

RabbitMQ: How to move configuration, data and log directories on Windows

A good part of my job has to do with enterprise messaging. When a piece of data -a message- needs to be sent from, say, an invoicing system to an accounting system and then to a customer relationship system and then to the customer portal… it has to navigate treacherous waters.

Avast ye bilge-sucking scurvy dogs! A JSON message from accounting says they hornswaggled 1000 doubloons! Aarrr!!!

So we need to make sure that whatever happens, say if a system is overloaded while receiving the message, the message will not be lost.

A key component in this is message queues (MQ), like RabbitMQ. An MQ plays the middleman; it receives a message from a system and stores it reliably until the next system has confirmed that it picked it up.

My daily duties include setting up, configuring and maintaining a few RabbitMQ instances. It works great! Honestly, so far -for loads up to a few hundred messages per second- I haven’t even had the need to do any serious tuning.

But one thing that annoys me on Windows is that, after installation, the location of everything except the binaries -configuration, data, logs- is under the profile dir of the user (C:\Users\USERNAME\AppData\Roaming\RabbitMQ) that did the installation, even if the service runs as LocalSystem. Not very good, is it?

Therefore I’ve created this script to help me. The easiest way to use it is to run it before you install RabbitMQ. Change the directories in this part and run it from an admin powershell:

# ========== Customize here ==========
$BaseLocation = "C:\mqroot\conf"
$DbLocation = "C:\mqroot\db"
$LogLocation = "C:\mqroot\log"
# ====================================

Then just reboot and run the installation normally; when it starts, RabbitMQ will use the directories you specified.
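
If you want to verify that the environment variables really took hold after the reboot, here’s a minimal check you can run from powershell:

# Read back the machine-level environment variables the script has set.
'RABBITMQ_BASE','RABBITMQ_CONFIG_FILE','RABBITMQ_MNESIA_BASE','RABBITMQ_LOG_BASE' | ForEach-Object {
    "$_ = $([System.Environment]::GetEnvironmentVariable($_, 'Machine'))"
}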

You can also do it after installation, if you have a running instance and want to move it. In this case do the following (you can find these steps also in the script):

  1. Stop the RabbitMQ service.
  2. From Task Manager, kill the epmd.exe process if present.
  3. Go to the existing base dir (usually C:\Users\USERNAME\AppData\Roaming\RabbitMQ)
    and move it somewhere else (say, C:\temp).
  4. Run this script (don’t forget to change the paths).
  5. Reboot the machine
  6. Run the “RabbitMQ Service (re)install” (from Start Menu).
  7. Copy the contents of the old log dir to $LogLocation.
  8. Copy the contents of the old db dir to $DbLocation.
  9. Copy the files on the root of the old base dir (e.g. advanced.config, enabled_plugins) to $BaseLocation.
  10. Start the RabbitMQ service.

Here’s the script. Have fun 🙂

#
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, March 2021
#

# What this script does is:
#   1. Creates the directories where the configuration, queue data and logs will be stored.
#   2. Downloads a sample configuration file (it's necessary to have one).
#   3. Sets the necessary environment variables.

# If you're doing this before installation: 
# Just run it, reboot and then install RabbitMQ.

# If you're doing this after installation, i.e. if you have a 
# running service and want to move its files:
#   1. Stop the RabbitMQ service
#   2. From Task Manager, kill the epmd.exe process if present
#   3. Go to the existing base dir (usually C:\Users\USERNAME\AppData\Roaming\RabbitMQ)
#      and move it somewhere else (say, C:\temp).
#   4. Run this script.
#   5. Reboot the machine
#   6. Run the "RabbitMQ Service (re)install" (from Start Menu)
#   7. Copy the contents of the old log dir to $LogLocation.
#   8. Copy the contents of the old db dir to $DbLocation.
#   9. Copy the files on the root of the old base dir (e.g. advanced.config, enabled_plugins) 
#      to $BaseLocation.
#   10. Start the RabbitMQ service.

# ========== Customize here ==========

$BaseLocation = "C:\mqroot\conf"
$DbLocation = "C:\mqroot\db"
$LogLocation = "C:\mqroot\log"

# ====================================

$exampleConfUrl = "https://raw.githubusercontent.com/rabbitmq/rabbitmq-server/master/deps/rabbit/docs/rabbitmq.conf.example"

Clear-Host
$ErrorActionPreference = "Stop"

$dirList = @($BaseLocation, $DbLocation, $LogLocation)
foreach($dir in $dirList) {
    if (-not (Test-Path -Path $dir)) {
        New-Item -ItemType Directory -Path $dir
    }
}

# If this fails (e.g. because there's a firewall) you have to download the file 
# from $exampleConfUrl manually and copy it to $BaseLocation\rabbitmq.conf
try {
    Invoke-WebRequest -Uri $exampleConfUrl -OutFile ([System.IO.Path]::Combine($BaseLocation, "rabbitmq.conf"))
}
catch {
    Write-Host "(!) Download of conf file failed. Please download the file manually and copy it to $BaseLocation\rabbitmq.conf"
    Write-Host "(!) Url: $exampleConfUrl"
}

&setx /M RABBITMQ_BASE $BaseLocation
&setx /M RABBITMQ_CONFIG_FILE "$BaseLocation\rabbitmq"
&setx /M RABBITMQ_MNESIA_BASE $DbLocation
&setx /M RABBITMQ_LOG_BASE $LogLocation

Write-Host "Finished. Now you can install RabbitMQ."

How to upgrade Ubuntu from an unsupported version

Some time ago, a friend of mine (the one of “How I fought off a Facebook hacker” fame) had problems with his Windows laptop; basically, the machine became next to useless. Sadly, while I generally like Windows (there are exceptions) this is something that happens all too often. So I solved it by installing Ubuntu, and even though he’s not technically proficient he’s very happy -the machine isn’t exactly lightning fast, but it works and it’s stable.

But a small mistake I made was installing the latest-greatest Ubuntu version available at the time, 19.04. Now for those who don’t know, Ubuntu has some releases that are supported for a long time, called LTS for Long Term Support, and the ones in between that are… not. Full list here.

So as of January 2020, 19.04 went into End-Of-Life status, meaning you can’t download and install updates the normal way (apt upgrade) any more. And without updates, you can’t upgrade to a newer release (do-release-upgrade) either. The first symptom is that, while trying to install updates, he was getting errors similar to the following:

E: Unable to locate package XXX

An additional problem is that we’re in different countries, so I couldn’t just do the usual routine backup-format-reinstall everything 🙂

But as usual, Google is your friend! That’s how I solved it from the command line:

sudo sed -i -re 's/([a-z]{2}\.)?archive.ubuntu.com|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
sudo apt update
sudo apt upgrade
# ...wait for like 30min, then restart...
sudo do-release-upgrade
# ...wait for a couple of hours, restart

What does this do? Well everything except the first line is the standard procedure to upgrade: update (i.e. refresh info for) the software repositories, upgrade (i.e. download and install the updates), restart and then do-release-upgrade which upgrades the complete Ubuntu system -always to the latest LTS release.

But the “magic” is in the first line (and let’s give credit where it’s due). This changes the list that keeps the repositories location (/etc/apt/sources.list) from the normal locations (under archive.ubuntu.com or security.ubuntu.com) to the “historic” servers, old-releases.ubuntu.com. For more info, see “Update sources.list” here.

So after that is done, apt upgrade can now install whatever updates are available and then do-release-upgrade can do its job.
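
If you want to confirm where you started from and where you landed, you can check the installed release before and after the whole exercise:

# Show the currently installed Ubuntu release
lsb_release -a
# or, if lsb_release is not available:
cat /etc/os-release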

Design by contract Tutorial, part 6/6: [Swagger] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Sound check time: did it really work?

We already saw that the health page works. But it’s time to check if our objective was met.

Remember, the goal is to let developers outside our network use the mock service to help them with their implementation. To see if this works as intended, we can use the swagger file we created and the online Swagger editor.

So open the CustomerTrust.yaml file with a text editor, copy all its contents, navigate your browser to https://editor.swagger.io, delete the default content and paste ours. You’ll get this:

Select the mock service from the drop down, click on one of the services, click “Try it out” and then “Execute”. After a few seconds you… will get an error, something like “NetworkError when attempting to fetch resource”.

Why? Well it’s the browser preventing us from doing so. If you press F12 and watch the Console, you’ll see something like “Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at xxxxxx”. More info here, but in short, it’s a security measure. You can either disable it in your browser’s settings (WHICH IS A BAD IDEA) or use the curl utility, which you can download here for your operating system.

[EDIT] or I could not be lazy, go back to the wiremock config and set the CORS-related HTTP response headers properly, as explained here, here and specifically for wiremock here.
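
For the record, the non-lazy fix would be a wiremock stub mapping that answers the browser’s preflight OPTIONS request with the CORS headers, roughly like the following sketch -the URL pattern and the allowed headers of course depend on your own setup:

{
  "request": {
    "method": "OPTIONS",
    "urlPattern": "/api/.*"
  },
  "response": {
    "status": 200,
    "headers": {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Authorization, Content-Type"
    }
  }
}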

So after you install curl, you can get the command line from the Swagger Editor:

For GET, the command should be:

curl -X GET "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" -H "accept: application/json"

Whichever method you pick -GET or POST- we’ll add a -v at the end (for more verbose output). So run it and at the end you’ll get this:

401 Unauthorized
* Connection #0 to host graubfinancemock.azurewebsites.net left intact

Makes sense right? The mock service expects an authorization token which we haven’t provided. Let’s add this:

curl -X GET "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" -H "accept: application/json" -H "Authorization: Bearer 1234" -v

And now you’ll get the json:

{
  "name": "GlarusAdvertising AG",
  "taxid": "CHE-123.456.789",
  "trustlevel": "OK"
}

Likewise, let’s try the POST:

curl -X POST "https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789" -H  "accept: */*" -H  "Content-Type: application/json" -d "{\"reportid\":\"2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e\",\"reporttaxid\":\"CHE-123.456.789\",\"taxid\":\"CHE-123.456.789\",\"trustlevel\":\"OK\"}" -H "Authorization: Bearer 1234" -v

And you should see the id of the request in the json response:

{
    "reportid": "2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e",
    "status": "OK"
}

A small note on Windows: if you try this in Powershell, it seems that the json escaping is acting funny. If you try it through cmd, it works just fine.
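
If you’d rather stay in Powershell, you can sidestep the escaping altogether by building the json with ConvertTo-Json instead of by hand -a sketch using the same sample data:

# Build the body as a hashtable and let Powershell serialize it to json.
$body = @{
    reportid    = '2dcc02d9-6402-4ce9-bf44-3d2cbe8bcd5e'
    reporttaxid = 'CHE-123.456.789'
    taxid       = 'CHE-123.456.789'
    trustlevel  = 'OK'
} | ConvertTo-Json
Invoke-RestMethod -Method Post `
    -Uri 'https://graubfinancemock.azurewebsites.net/api/1.0/CustomerTrust/CHE-123.456.789' `
    -Headers @{ Authorization = 'Bearer 1234' } `
    -ContentType 'application/json' `
    -Body $body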

That’s all folks

So now our kind-of-fictional-but-actually-quite-real developers can access the service and test their code against it. And whenever we make a change and push it, the service is updated automatically. Not bad, is it? 🙂

That concludes this guide and its introductory journey in the world of Devops (or, as a friend of mine more accurately calls the field, SRE -short for “Site Reliability Engineering”).

I hope you enjoyed it as much as I did writing it -I really did. I’m sure you’ll have many, many questions which I can try to answer -but no promises 🙂 You can ask here in the comments (better) or in my twitter profile @JimAndrakakis.

Resources

I’ve put all the code in a github repository, here; the only change is that I moved the pipeline yaml in the devops folder and removed my name. You can also find the docker image in docker hub, here.

Have fun coding!

Design by contract Tutorial, part 5/6: [Azure Devops] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Put a little magic in your life: create the auto-deploy pipeline

We’re close to the end of our journey.

So far we’ve basically done everything we need. In this last step we’ll also make it happen automagically: we want to be able to do changes to our code (which, in our scenario, is the wiremock service configuration) and have them get deployed on Azure without us having to do anything.

We’ll use Azure Devops -formerly called Visual Studio Team System (VSTS) or, even earlier, Team Foundation Server (TFS) Online- for this. There are other services we could use as well, like Github or Bitbucket, and they’re equally good.

But whatever your service, in general this process is called CI/CD, short for Continuous Integration / Continuous Delivery. Simply put, CI means that your code is built and tested as soon as you push changes in source control. If the build or any test is not successful, the code changes are rolled back, guaranteeing (well, as far as your tests are concerned) that the code in the repo is correct. CD is the next step, taking the build and deploying it, usually in a test server, then in a staging one and then to production.

So as a first step, create a free account in Azure Devops. You can use the same Microsoft account you used in Azure or a different one. Once you’ve logged in, create a new project. Let’s call it GraubFinanceMockService.

By default we get a Git repository with the same name as the project. Let’s clone it on our development PC (I’m using C:\src\test, but feel free to use whatever you like).

Make sure you have git installed (or download it from here), then open a command prompt and type (replace the URL with your details):

cd c:\src\test
git clone https://dev.azure.com/YOURUSERNAME/GraubFinanceMockService/_git/GraubFinanceMockService

You’ll be asked for credentials of course (you might want to cache them). After that you’ll get a folder named GraubFinanceMockService. Move in there the folders we created during our previous steps: openapi, wiremock and devops.

Additionally, to avoid committing unwanted files in the repository, create an empty text file on the top folder named .gitignore, open it with a text editor and paste the following:

*.jar
lock.json
**/.terraform/*
*.plan
*.tfstate
*.info
*.backup

Now we’re ready to commit for the first time. Type the following in the command line:

cd c:\src\test\GraubFinanceMockService
git add .
git commit -m 'initial commit'
git push

And our code is there:

Now we’ll start setting up our build. “But wait”, you might reasonably ask, “we don’t really have any code to build, that’s no C# or Java or whatever project, why do we need a build?”.

Well, we do need to build our docker image, and push it to Docker Hub. This way when we change anything in our wiremock config, we’ll get a new image to reflect that.
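
For reference, what we’re about to configure is roughly the equivalent of running the following two commands from the folder that contains the Dockerfile (with YOURUSERNAME being your Docker Hub username):

# Build the image from the Dockerfile and push it to Docker Hub.
docker build -t YOURUSERNAME/graubfinancemock:latest .
docker push YOURUSERNAME/graubfinancemock:latest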

But before we continue, remember that we have some variables in our tfvars files that we need to replace? Now it’s time to do that. Under Pipelines go to Library, then (+) Variable Group. Name the variable group azureconnectioncredentials, then add four variables (click the lock to set them as secret!):

subscription_id
tenant_id
client_id
client_secret

Be sure to check that “Allow access from all pipelines” is enabled.

But how do you get these values? From Azure CLI. The process is described by Microsoft here, but in short, open a command prompt (remember that from the previous step, we are logged in with Azure CLI already) and write:

az account show
# note the id, that's the subscription id, and the tenant id
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SUBSCRIPTIONID"

You’ll get something like the following, which you need to keep secret (the recommended way is to use a password manager):

{
  "appId": "XXXXXXXXXX",
  "displayName": "azure-cli-YYYYYY",
  "name": "http://azure-cli-2019-YYYYYY",
  "password": "ZZZZZZZZZZZZZZZZ",
  "tenant": "TTTTTTTTTTTT"
}

So paste these values to the respective variables in Azure Devops. You got the subscription id and tenant id from the first command (az account show). From the second (az ad sp create-for-rbac) get the appId and put it in the client_id variable, and get the password and put it in the client_secret variable. At the end, click Save.

You did set the variables to secret, right? 🙂

We need one more variable group for the not-secret stuff. Create a new variable group, name it azurenames and add the following variables (here with sample values):

azurelocation = westeurope
basename = graubfinancemock
dockerimage = YOURUSERNAME/graubfinancemock
dockerimageversion = latest
envtype = test
SKUsize = B1
SKUtier = Basic

Also here we need “Allow access from all pipelines” to be enabled.

Now we’re ready to create a new pipeline. In Azure Devops go to Pipelines > Builds > New Pipeline. You can click “Use the classic editor” if you’re not comfortable with YAML, but here I’ll use Azure Repos Git (YAML) as I can copy paste the result here. Select your code repository and then, to see how it works step by step, Starter Pipeline.

Our new build will get the sources in a directory on the build server, but nothing more than that. Let’s start telling the build server what to do.

First, we need to tell it to use our variable groups. Delete whatever default code is there and paste the following:

name: WHATEVERWORKSFORYOU

trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
variables:
- group: azureconnectioncredentials
- group: azurenames

We don’t really need distinct stages, we’ll just set up two jobs, build and deploy.

Now let’s get it to create the docker image.

jobs:
  - job: Build
    displayName: Build docker image
    steps:     

Now click on Show Assistant on the right, search for Docker and pick Docker (description: Build or push Docker images etc etc). Connect your container registry as follows:

In Container repository you enter the full name of the docker hub repository (YOURUSERNAME/graubfinancemock in our example) but even better, we can use our variable (same for the desired version). So enter $(dockerimage), then change the tags to:

$(dockerimageversion)
1.0.$(Build.BuildId)

Leave everything else to default values, click Add. Under steps you should have the following:

    - task: Docker@2
      enabled: false
      inputs:
        containerRegistry: 'dockerhub-graubfinancemock'
        repository: '$(dockerimage)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(dockerimageversion)
          1.0.$(Build.BuildId)

Now click Save and Run. Et voila:

Having built our service, let’s deploy it. Paste the following at the end of the YAML file:

  - job: Deploy
    displayName: Deploy to Azure
    steps:

Now we need to run our cleanup script, then replace the variables in the tfvars files, then run terraform. Search for task Azure CLI, then configure the Azure subscription. Script type is Powershell Core, script location is Script Path, script path is $(Build.SourcesDirectory)/devops/cleanup.ps1 and script arguments is “-rgName ‘$(baseName)’” (without the double quotes, but note the single quotes). But remember, this is not on the root of our code repository. Click on Advanced and in working directory enter “$(Build.SourcesDirectory)/devops” (without the double quotes). You should end up with the following:

      - task: AzureCLI@2
        inputs:
          azureSubscription: 'Free Trial (XXXXXXXX)'
          scriptType: 'pscore'
          scriptLocation: 'scriptPath'
          scriptPath: '$(Build.SourcesDirectory)/devops/cleanup.ps1'
          arguments: '-rgName ''$(baseName)'''
          workingDirectory: '$(Build.SourcesDirectory)/devops/'

Time to replace the variable values. Add another task named Replace Tokens. Change the target files to **/*.tfvars, uncheck the BOM (it creates problems sometimes). Done.

      - task: replacetokens@3
        inputs:
          targetFiles: '**/*.tfvars'
          encoding: 'auto'
          writeBOM: false
          actionOnMissing: 'warn'
          keepToken: false
          tokenPrefix: '#{'
          tokenSuffix: '}#'

Next up, terraform. We have the batch file ready, but we need terraform.exe to be available. So add a task named Terraform tool installer. Change the version to the latest (find it here, at the time of writing it’s 0.12.15).

      - task: TerraformInstaller@0
        inputs:
          terraformVersion: '0.12.15'

Everything’s ready to run our batch script. As we need Azure CLI to be available for terraform to work the way we want to, add another Azure CLI task. Pick the Azure subscription from the drop down (you don’t have to configure it again). Script type is Powershell Core, script location is Script Path, script path is $(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1 (it’s the one that uses the replaced .tfvars file). Click on Advanced and in working directory enter “$(Build.SourcesDirectory)/devops” (without the double quotes). At the end it should look like this:

      - task: AzureCLI@2
        inputs:
          azureSubscription: 'Free Trial (XXXXXXXX)'
          scriptType: 'pscore'
          scriptLocation: 'scriptPath'
          scriptPath: '$(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1'
          workingDirectory: '$(Build.SourcesDirectory)/devops'

We’re ready. The build definition, now complete, should look like this:

name: GraubFinanceMockServiceAutoDeploy

trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
variables:
- group: azureconnectioncredentials
- group: azurenames

jobs:
  - job: Build
    displayName: Build docker image
    steps:     
    - task: Docker@2
      enabled: true
      inputs:
        containerRegistry: 'dockerhub-graubfinancemock'
        repository: '$(dockerimage)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(dockerimageversion)
          1.0.$(Build.BuildId)
  - job: Deploy
    displayName: Deploy to Azure
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'Free Trial (XXXXXXX)'
        scriptType: 'pscore'
        scriptLocation: 'scriptPath'
        scriptPath: '$(Build.SourcesDirectory)/devops/cleanup.ps1'
        arguments: '-rgName ''$(baseName)'''
        workingDirectory: '$(Build.SourcesDirectory)/devops/'
    - task: replacetokens@3
      inputs:
        targetFiles: '**/*.tfvars'
        encoding: 'auto'
        writeBOM: false
        actionOnMissing: 'warn'
        keepToken: false
        tokenPrefix: '#{'
        tokenSuffix: '}#'
    - task: TerraformInstaller@0
      inputs:
        terraformVersion: '0.12.15'
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'Free Trial (XXXXXXX)'
        scriptType: 'pscore'
        scriptLocation: 'scriptPath'
        scriptPath: '$(Build.SourcesDirectory)/devops/terraformdeploy-pipeline.ps1'
        workingDirectory: '$(Build.SourcesDirectory)/devops'

Did it work? Navigate your browser to https://graubfinancemock.azurewebsites.net/servicehealth and:

Ta da!

We’re basically done. Let’s see how helpful our service is to our developers.

Design by contract Tutorial, part 4/6: [Terraform] Mock your interfaces using Swagger, Wiremock, Docker, Azure Devops, Terraform and Azure

Infrastructure as Code: time to ship these containers

So we built our mock service and we dockerized it. Next up, run the container on the cloud.

Remember that in our scenario -and in my everyday work life- the mock service has to be accessible from people outside our local network. Of course, one way to do this would be to run it in-house and open a hole in your firewall.

…if you didn’t scream “BAD IDEA!” when you read the last sentence, now would be the right time to do so 🙂

So, cloud to the rescue. We’ll use Azure here; we’ll create a subscription and then deploy with a terraform infrastructure-as-code (IaC) configuration. So our steps will be:

  1. Create the Azure subscription (manual step)
  2. Create the terraform config that creates a resource group, an app service plan and a web app for containers.
  3. Deploy to azure
  4. Test that it works by calling the /servicehealth path of the mock service.

If you’re deploying an actual application (say, a REST API that connects to a database) on the cloud you probably need more. For example, you might need a firewall, a virtual LAN so that different servers talk to each other but are isolated from the world, an API gateway, a cloud sql database and maybe more. But for our mock service, which has no data that need protection, we can keep it really simple.

  1. Open the azure portal and either create a new subscription or login if you have one already. For new subscriptions, Microsoft gives $200 of usage for free so you can experiment a bit. Running this tutorial has taken me less than $1 out of this amount, so no money actually left my pocket 🙂

After you created the subscription, you need to download the Azure Command-Line Interface (CLI), which is basically a command line tool. If you’re running on Linux -as I am at home- you also need Powershell Core (get it here). After installing, open a powershell prompt (you can also do it from ye olde command prompt) and run:

az login

Follow the instructions and you’re done.

2. Create a devops folder and create an empty text file inside. Name it service.tf and paste the following:

# Configure the Azure provider
provider "azurerm" {
  # for production deployments it's wise to fix the provider version
  #version = "~>1.32.0"

  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id   
}

# Create a new resource group
resource "azurerm_resource_group" "rg" {
    name     = var.basename
    location = var.azurelocation
	
    tags = {
        environment = var.envtype
    }
}

# Create an App Service Plan with Linux
resource "azurerm_app_service_plan" "appserviceplan" {
  name                = "${azurerm_resource_group.rg.name}-APPPLAN"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  # Define Linux as Host OS
  kind = "Linux"
  reserved = true # Mandatory for Linux plans

  # Choose size
  # https://azure.microsoft.com/en-us/pricing/details/app-service/linux/
  sku {
    tier = var.SKUtier
    size = var.SKUsize
  }
}

# Create an Azure Web App for Containers in that App Service Plan
resource "azurerm_app_service" "appsvc" {
  name                = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.appserviceplan.id

  # Do not attach Storage by default
  app_settings = {
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false

    /*
    # Settings for private Container Registires  
    DOCKER_REGISTRY_SERVER_URL      = ""
    DOCKER_REGISTRY_SERVER_USERNAME = ""
    DOCKER_REGISTRY_SERVER_PASSWORD = ""
    */
  }

  # Configure Docker Image to load on start
  site_config {
    linux_fx_version = "DOCKER|${var.DockerImage}"
    #always_on        = "false"
    #ftps_state       = "FtpsOnly"
  }

  logs {
    http_logs {
      file_system {
        retention_in_days = var.logdays
        retention_in_mb   = var.logsizemb
      }
    }
  }

  identity {
    type = "SystemAssigned"
  }
}

output "DockerUrl" {
    value = azurerm_app_service.appsvc.default_site_hostname
}

Inside this configuration you may have noticed that we used a few variables, like var.basename. In terraform, we define variables and their values in separate files so that we can use the same base configuration with different details. A common scenario is the same configuration for testing, staging and production environments but with different names (think graubfinance-test for testing, graubfinance-staging for preprod and graubfinance for prod), different service levels etc.

Following best practice, these variables should be declared in their own file. Create another empty file called service-vars.tf and paste the following:

variable "basename" {
  type    = string
}

variable "azurelocation" {
  type    = string
}

variable "subscription_id" {
  type    = string
}

variable "client_id" {
  type    = string
}

variable "client_secret" {
  type    = string
}

variable "tenant_id" {
  type    = string
}

variable "envtype" {
    type    = string
}

variable "SKUsize" {
    type    = string
}

variable "SKUtier" {
    type    = string
}

variable "DockerImage" {
    type    = string
}

variable "logdays" {
    type    = number
}

variable "logsizemb" {
    type    = number
}

Now we need one or more “variable values” (.tfvars) files to define the values for our intended environment. Create yet another file, name it service-varvalues-dev.tfvars and paste the following:

basename = "graubfinancemock"

# when logging in as a user via Azure CLI, these values must be null
subscription_id = null
client_id       = null
client_secret   = null
tenant_id       = null

envtype = "test"

# this can change depending on your preferences
# you can get location codes using
# az account list-locations
# e.g. try "eastus" or "centralindia"
azurelocation = "westeurope"

# Using the free tier generates an error.
# Seems that Microsoft does not want people to
# use their resources *completely* free?
# Who knew!
#SKUtier = "Free"
#SKUsize = "F1"

# This is still very cheap though
SKUtier = "Basic"
SKUsize = "B1"

DockerImage = "dandraka/graubfinancemock:latest"

logdays = 30
logsizemb = 30

We’ll use this when testing locally but for later (when we deploy via Azure Devops) we’ll need the same but with placeholders for the deployment process to change. So copy-paste this file as service-varvalues-pipeline.tfvars and change it to look like this:

basename = "#{basename}#"

# when logging in as a service, these must NOT be null
subscription_id = "#{subscription_id}#"
client_id       = "#{client_id}#"
client_secret   = "#{client_secret}#"
tenant_id       = "#{tenant_id}#" 

envtype = "#{envtype}#"
azurelocation = "#{azurelocation}#"

SKUtier = "#{SKUtier}#"
SKUsize = "#{SKUsize}#"

DockerImage = "#{dockerimage}#"

logdays = 30
logsizemb = 30

Obviously the parts between #{…}# are placeholders. We’ll talk about these when we create the pipeline.

3. Now we’ll use terraform to deploy this configuration. Install terraform (instructions here, but basically it’s just an exe that you put in your path), then create a text file in your devops dir, name it terraformdeploy-dev.ps1 and paste the following:

terraform init
# here you need to see stuff happening and then
# "Terraform has been successfully initialized!"
terraform plan -out="out.plan" -var-file="service-varvalues-dev.tfvars"
# if everything went well, apply
terraform apply "out.plan"

Run it. If everything went well, you should get the following (or similar) output at the end:

Outputs:

DockerUrl = graubfinancemock.azurewebsites.net

In order to prepare ourselves for the automated deployment again, copy-paste this small script, name it terraformdeploy-pipeline.ps1 and just change the tfvars name. So the new file will look like this (I’ve stripped the comments here):

terraform init
terraform plan -out="out.plan" -var-file="service-varvalues-pipeline.tfvars"
terraform apply "out.plan"

4. Let’s see if it works

Navigate your browser to https://graubfinancemock.azurewebsites.net/servicehealth (or similar if you made any changes). That’s what you should see:

Hurray! 🙂

Notice also how we got https for free -we didn’t install any certificate or configure anything. Azure took care of it.

Out of curiosity, let’s head over to portal.azure.com to see what happened. Once there, click on “resource groups” and then “graubfinancemock” (or whatever you named it). You’ll see something like this:

Did it cost much? Click “Cost analysis” on the left, for scope select your subscription (by default named “Free trial”) and you see what you paid for our experiment:

It didn’t break the bank, did it? 🙂

To be fair, we didn’t really do much. Most of the CPU usage we were charged for went into getting the system -our linux container running wiremock- up and running. Just out of curiosity, how much does it cost if we use it a little more?

You can try the following experiment: have it answer 1000 (or whatever) requests and see what it costs. Try this powershell script:

cd $env:TEMP
mkdir testrequests
cd testrequests
for ($i=1;$i -le 1000;$i++) { Invoke-WebRequest -Uri "http://graubfinancemock.azurewebsites.net/servicehealth" -OutFile "out-$i.txt"; $i }

After it finishes, click refresh and let’s see the cost analysis again:

No joke: after 1000 requests, it didn’t change a cent. You can see why companies love the cloud! Though again, we didn’t use our CPU heavily -and that’s what Azure charges mostly for.

We’re close to finishing. The last thing to do is to automate the process via Azure Devops (a.k.a. VSTS, a.k.a. TFS Online). Just one last thing: since we’ll be doing the terraform deploy automatically, let’s delete everything we’ve done. Create a file named cleanup.ps1 inside our devops dir and paste the following:

param ([string]$rgName)

[bool]$rgExists = ((az group exists -n $rgName) -eq 'true')

if ($rgExists) 
{ 
        az group delete -n $rgName -y  
} 
else 
{ 
        Write-Host "Resource group $rgName does not exist, nothing to do"
}

Now in the command prompt, run:

 ./cleanup.ps1 -rgName graubfinancemock

A couple of minutes later, everything’s gone.

[EDIT] Just to be clear, this means that every time we deploy, we first delete everything and then we redo it from scratch.

This is fine for our scenario, the mock service, and in general it’s ok when both of these conditions are true:

1. Our Azure components have no state to lose (no databases etc) and
2. The down time doesn’t hurt.

For more complex scenarios, where you have real productive services, state, data etc this approach is not possible. In such cases you need to keep somewhere your plan and state files. This, and the best practice to do so, is explained here by the Terraform team and here by Microsoft.
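
Just to give you the idea, such a setup would add a remote backend block to the terraform configuration, e.g. one backed by an Azure storage account -a sketch, where the resource group, storage account and container are placeholders you’d need to create beforehand:

terraform {
  backend "azurerm" {
    # A pre-existing storage account where terraform keeps its
    # state file, instead of the build server's local disk.
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "graubfinancemock.tfstate"
  }
}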

Having created the auto deployment process, let’s add the missing sparkle and create the auto deployment pipeline.