Software engineer from Crete living in Switzerland; C# & Azure paladin; economics hobbyist; firearm enthusiast; perpetually tormented by 3 beautiful women :-)
I recently bought a bread machine, an Unold 8695 Onyx, and I’m very, very happy with it. Simple machine, nothing fancy (whenever I hear of appliances that are “connected”, “internet enabled” or, god forbid, “on the blockchain” I run away) but great value for money and gets the job done, very well.
The manual is excellent, with detailed timing tables and recipes, which I fully recommend. That said, I took the recipes I liked most (the humble white bread and the farmer's bread) and customized them a bit.
These are the ingredients, in the order in which I put them in the bowl:
Brioche

Ingredient                           For 600 gr bread
White flour (Zopfmehl, type 405)     390 ml
Salt                                 3/4 teasp. (4 gr)
Sugar                                2 tblsp. (40 gr)
Vanilla sugar                        1 pkg (8 gr)
Whole egg                            1
Egg yolk                             1
Yeast, fresh                         1/2 cube
Milk                                 160 ml
Butter                               80 gr
Important note: put everything in the bread maker bowl, in that order, except the milk and the butter. Then heat the milk and the butter just slightly (do not boil!) until the butter is almost melted. Then pour the milk-butter mix in the bowl over the other ingredients.
Use the Sweet (“Hefekuchen”) or Quick (“Schnell”) program, size 1 (“Stufe 1”) and light crust setting.
White bread

Ingredient                                                                For 500 gr bread     For 800 gr bread
Water                                                                     230 ml               300 ml
Salt                                                                      3/4 teasp. (4 gr)    1 teasp. (6 gr)
Honey                                                                     2 tblsp. (40 gr)     2.5 tblsp. (52 gr)
Wheat semolina (or corn polenta)                                          100 gr               126 gr
Whole wheat flour (Ruchmehl) or light whole wheat flour (Halbweissmehl)   20 gr                30 gr
White flour (Weissmehl, type 550, preferably with vitamins)               280 gr               356 gr
Yeast (if fresh yeast is used, use 1/2 a cube in both cases)              5 gr                 7 gr (1 package)
Farmer’s bread

Ingredient                                                         For 800 gr bread
Water                                                              320 ml
Leaven (Sauerteig; in CH, I can only find leaven powder in Coop)   10 gr (1 package)
Salt                                                               1 teasp. (6 gr)
Butter or margarine                                                20 gr
Honey                                                              2.5 tblsp. (52 gr)
Light whole wheat flour (Halbweissmehl)                            400 gr
White flour (Weissmehl, type 550, preferably with vitamins)        100 gr
Yeast, fresh                                                       1/2 cube
For both of them, I then use the “Quick” (“Schnell”) program, with light or medium crust. 1h 40min later, it’s ready.
I recently changed from Win10 to Ubuntu 18.04 as my main OS at home. I still have Windows in a few VMs, as I need to do the occasional development with Visual Studio.
But one problem I had was that I needed to connect to the office when working from home.
Now, at work we have Citrix Netscaler Gateway. And there’s a Linux client available. It worked, but not as smoothly as I hoped 🙂
Here’s what I did:
From Ubuntu’s Software Center, I installed Citrix Receiver.
Then it asked for the server and tried to connect, but I was getting an error: “An SSL connection to the server could not be established because the server’s certificate could not be trusted.”
So I opened a terminal and gave the following commands (source):
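The commands themselves didn't survive the formatting here. The fix commonly cited for this Citrix error is making the system's trusted CA certificates visible to the ICA client, which ships with an almost empty certificate store; the paths below assume a default install location and are given as a sketch, not as the exact commands from the original post:

```
# Assumption: Citrix Receiver / ICA client installed under /opt/Citrix/ICAClient
# Link the system CA certificates into the keystore the Citrix client checks
sudo ln -sf /usr/share/ca-certificates/mozilla/* /opt/Citrix/ICAClient/keystore/cacerts/
# Rebuild the hash links the client uses to look up certificates
sudo /opt/Citrix/ICAClient/util/ctx_rehash
```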
After that it connected, but I still got an error: “A protocol error occurred while communicating with the Authentication Service”
So after some sleuthing, I opened my browser (Chrome) and connected to my company’s Citrix server address (https://server). When I clicked the apps there, it worked.
[Update June 2020] There’s a newer post that does the same as this and is more complete -it includes paging and updating records. You might want to check it out here.
If you’ve used Microsoft CRM as a power user (on-premise or online), chances are you’ve come across the standard way of querying CRM data, FetchXml.
You can run this by hand, but of course its real power comes from using it to automate tasks. And another great way to automate tasks in Windows is, naturally, PowerShell.
So here’s a script I’m using to run a FetchXml query and export the results to a CSV file:
#
# Source: DotJim blog (http://dandraka.com)
# Jim Andrakakis, May 2018
#
# Prerequisites:
# 1. Install PS modules
# Run the following in a powershell with admin permissions:
# Install-Module -Name Microsoft.Xrm.Tooling.CrmConnector.PowerShell
# Install-Module -Name Microsoft.Xrm.Data.PowerShell -AllowClobber
#
# 2. Write password file
# Run the following and enter your user's password when prompted:
# Read-Host -assecurestring | convertfrom-securestring | out-file C:\temp\crmcred.pwd
#
# ============ Constants to change ============
$pwdFile = "C:\temp\crmcred.pwd"
$username = "myusername@mycompany.com"
$serverurl = "https://my-crm-instance.crm4.dynamics.com"
$fetchXmlFile = "c:\temp\fetch.xml"
$exportfile = "C:\temp\crm_export.csv"
$exportdelimiter = ";"
# =============================================
# ============ Login to MS CRM ============
$password = get-content $pwdFile | convertto-securestring
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $username,$password
try
{
$connection = Connect-CRMOnline -Credential $cred -ServerUrl $serverurl
# for on-prem use :
# $connection = Connect-CrmOnPremDiscovery -Credential $cred -ServerUrl $serverurl
# you can also use interactive mode if you get e.g. problems with multi-factor authentication
# $connection = Connect-CrmOnlineDiscovery -InteractiveMode -Credential $cred
# or you can use a connection string if you want to use e.g. OAuth or a Client Secret
# but then the password must be plaintext which is kind of a security no-no
# $connString = "AuthType=ClientSecret;url=$serverurl;ClientId=$username;ClientSecret=$password"
# $connection = Get-CrmConnection -ConnectionString $connString
}
catch
{
Write-Host $_.Exception.Message
exit
}
if($connection.IsReady -ne $True)
{
$errorDescr = $connection.LastCrmError
Write-Host "Connection not established: $errorDescr"
exit
}
else
{
Write-Host "Connection to $($connection.ConnectedOrgFriendlyName) successful"
}
# ============ Fetch data ============
$fetchXml = [xml](Get-Content $fetchXmlFile)
$result = Get-CrmRecordsByFetch -conn $connection -Fetch $fetchXml.OuterXml
# ============ Write to file ============
# Obviously here, instead of writing to csv directly, you can loop and do whatever suits your needs, e.g. run a db query, call a web service etc etc
$result.CrmRecords | Select -Property lastname, firstname | Export-Csv -Encoding UTF8 -Path $exportfile -NoTypeInformation -Delimiter $exportdelimiter
When you use your own FetchXml, do remember to change the properties in the last line (lastname, firstname).
For a quick test, the example FetchXml I’m using is the following:
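The XML itself got lost in the formatting here. A minimal FetchXml that matches the lastname/firstname columns used in the script above would look something like the following; the entity and attribute names are those of the stock contact entity, so adjust them to your own schema:

```xml
<fetch version="1.0" output-format="xml-platform" mapping="logical">
  <entity name="contact">
    <attribute name="lastname" />
    <attribute name="firstname" />
    <order attribute="lastname" descending="false" />
  </entity>
</fetch>
```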
Anyone who develops software that interacts with a database knows (read: should know) how to read a query execution plan, given by “EXPLAIN PLAN”, and how to avoid at least the most common problems like a full table scan.
It is obvious that a plan can change if the database changes. For example if we add an index that is relevant to our query, it will be used to make our query faster. And this will be reflected in the new plan.
Likewise if the query changes. If instead of
SELECT * FROM mytable WHERE somevalue > 5
the query changes to
SELECT * FROM mytable WHERE somevalue IN
(SELECT someid FROM anothertable)
the plan will of course change.
So during a database performance tuning seminar at work, we came to the following question: can the execution plan change if we just change the filter value? Like, if instead of
SELECT * FROM mytable WHERE somevalue > 5
the query changes to
SELECT * FROM mytable WHERE somevalue > 10
It’s not obvious why it should. The columns used, both in the SELECT and in the WHERE clause, do not change. If a human looked at these two queries, they would choose the same way of executing them (e.g. using an index on somevalue, if one is available).
But databases have a knowledge we don’t have. They have statistics.
Let’s do an example. We’ll use Microsoft SQL server here. The edition doesn’t really matter, you can use Express for example. But the idea, and the results, are the same for Oracle or any other major RDBMS.
First off, let’s create a database. Open Management Studio and paste the following (changing the paths as needed):
CREATE DATABASE [PLANTEST]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'PLANTEST',
FILENAME = N'C:\DATA\PLANTEST.mdf' ,
SIZE = 180MB , FILEGROWTH = 10% )
LOG ON
( NAME = N'PLANTEST_log',
FILENAME = N'C:\DATA\PLANTEST_log.ldf' ,
SIZE = 20MB , FILEGROWTH = 10%)
GO
Note that, by default, I’ve allocated a lot of space, 180MB. There’s a reason for that: we know we’ll pump in a lot of data, and we want to avoid the delay of the database files growing.
Now let’s create a table to work on:
USE PLANTEST
GO
CREATE TABLE dbo.TESTWORKLOAD
( testid int NOT NULL IDENTITY(1,1),
testname char(10) NULL,
testdata nvarchar(36) NULL )
ON [PRIMARY]
GO
And let’s fill it (this can take some time, say around 5-10 minutes):
DECLARE @cnt1 INT = 0;
DECLARE @cnt2 INT = 0;
WHILE @cnt1 < 20
BEGIN
SET @cnt2 = 0;
WHILE @cnt2 < 100000
BEGIN
insert into TESTWORKLOAD (testname, testdata)
values ('COMMON0001', CONVERT(char(36), NEWID()));
SET @cnt2 = @cnt2 + 1;
END;
insert into TESTWORKLOAD (testname, testdata)
values ('SPARSE0002', CONVERT(char(36), NEWID()));
SET @cnt1 = @cnt1 + 1;
END;
GO
What I did here is fill the table with 2 million (20 * 100000) plus 20 rows. Almost all of them (the 2 million) have the value “COMMON0001” in the testname field; only 20 have a different value, “SPARSE0002”.
Essentially the table is our proverbial haystack. The “COMMON0001” rows are the hay, and the “SPARSE0002” rows are the needles 🙂
Let’s examine how the database will execute these two queries:
SELECT * FROM TESTWORKLOAD WHERE testname = 'COMMON0001';
SELECT * FROM TESTWORKLOAD WHERE testname = 'SPARSE0002';
Select both of them and, in management studio, press Control+L or the “Display estimated execution plan” button. What you will see is this:
What you see here is that both queries will do a full table scan. That means that the database will go and grab every single row from the table, look at the rows one by one, and return only the ones that match (the ones with COMMON0001 or SPARSE0002, respectively).
That’s ok when you don’t have a lot of rows (say, up to 5 or 10 thousand), but it’s terribly slow when you have a lot (like our 2 million).
So let’s create an index for that:
CREATE NONCLUSTERED INDEX [IX_testname] ON [dbo].[TESTWORKLOAD]
(
[testname] ASC
)
GO
And here’s where you watch the magic happen. Select the same queries as above and press Control+L (or the “Display estimated execution plan” button) again. Voila:
What you see here is that, even though the only difference between the two queries is the filter value, the execution plan changes.
Why does this happen? And how?
Well, here’s where statistics are handy. On the Object Explorer of management studio, expand (the “+”) our database and table, and then the “Statistics” folder.
You can see the statistic for our index, IX_testname. If you open it (double click and then go to “details”) you see the following:
So (I’m simplifying a bit here, but not a lot) the database knows how many rows have the value “COMMON0001” (2 million) and how many the value “SPARSE0002” (just 20).
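If you prefer a query window over clicking around in Object Explorer, the same numbers can be pulled with DBCC SHOW_STATISTICS; the HISTOGRAM result set shows the estimated row count per sampled value:

```sql
-- Show the statistics object behind our index;
-- the HISTOGRAM section contains EQ_ROWS for each sampled testname value
DBCC SHOW_STATISTICS ('dbo.TESTWORKLOAD', IX_testname);
```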
Knowing this, it concludes (that’s the job of the query optimizer) that the best way to execute the 2 queries is different:
The first one (WHERE testname = ‘COMMON0001’) will return almost all the rows of the table. Knowing this, the optimizer decides that it’s faster to just get everything (aka Full Table Scan) and filter out the very few rows we don’t need.
For the second one (WHERE testname = ‘SPARSE0002’), things are different. The optimizer knows that it’s looking only for a few rows, and it’s smartly using the index to find them as fast as possible.
In plain English, if you want the hay out of a haystack, you just get the whole stack. But if you’re looking for the needles, you go find them one by one.
So you went for vacations in Greece or Cyprus or southern Italy and liked the cold coffee they serve there? Or maybe you have a Greek colleague who’s busting your balls non stop about how great cold coffee is, and just want him to shut up? You’re at the right place!
Now you’re talking!
This recipe covers both espresso freddo and cappuccino freddo, which are exactly the same thing; you just add cold milk foam on top of the espresso freddo to make the cappuccino version.
Over the years I’ve tried to simplify the recipe a bit. It’s not barista-level good, but anyone who’s tried it tells me it’s pretty decent.
You can see the video here:
To begin with, here’s the equipment you need:
A strong coffee mixer. This is an absolute must, you can’t do without it. Outside of Greece they are called “drink mixers” (you can find them in amazon.de for example). They look like this:
One or more suitable tall glasses. You need them to be around 200-250 ml for espresso freddo and 300-350 ml for cappuccino freddo. The ones from IKEA are fine.
Two cocktail shakers, one for the milk and one for the coffee. It’s ok if you don’t have shakers though, you can just use normal glasses. But you can also buy them from amazon.de.
Now let’s see the stuff you need to prepare every time before you make cold coffee.
I’m sure you’ll be surprised to learn that you need coffee! Basically you need a double espresso, around 100ml. What I usually do is use the Lungo capsules for my Dolce Gusto machine, and set it to 3 lines instead of 4.
You also need straws, medium or thin ones. Don’t get the thick ones, they’re good for smoothies but not cold coffee.
You need ice cubes. For every coffee, you need 5-6.
If you’re going to make cappuccino (not espresso) freddo, you need milk, and you need it cold. Let me say that again, because it’s really, really important: COLD. Ideally it should be at 2 degrees. That means you need to keep it at the back of the fridge, not in the door, where it’s a bit warmer. I usually put it in the freezer about 10 min before I start. Keep it in the fridge until the moment you actually need it.
You also need to experiment a bit with the kind of milk you use. I’ve found that the best, at least among the ones you find in a regular supermarket, is full-fat UHT milk, 3.5%. The fresh milk from the supermarket’s refrigerated section isn’t as good; no idea why. If you find a “barista milk”, get it; they have more protein, so they froth better.
One of the shakers, the one used for the milk, has to be really, really cold. Put it in the freezer for at least an hour before making the coffee.
The basic idea is that, in order to make the milk foam, the milk has to be cold and stay cold. That’s why its container needs to be frozen too.
Now that we’ve prepared everything, let’s get to work.
The first thing you need to do is prepare the coffee. If you also want sugar, you need to add it immediately afterwards, while the coffee is still hot, and stir it a bit with the mixer; that way it will melt nicely and you won’t get the awful crunchy feeling of unmelted sugar.
Now we need to get our coffee ice cold. Put 5 or 6 ice cubes in the shaker or glass. Pour the coffee swiftly over the ice cubes. Stir it a bit with the mixer, but not too much; you don’t want it to turn into foam. 5-6 seconds should be enough. Then pour everything (coffee + ice cubes) into the glass.
If you want an espresso freddo, you can add a straw and stop here; you’re done. Otherwise, you have one more step: preparing the cold milk foam.
Get the milk and the 2nd shaker (or glass) out of the fridge. Fill the shaker just below half full. Stir it with the mixer for some time (at least 30 sec, can be more) until the surface is smooth and free of bubbles. This part is exactly why the shaker has to be cold. If it’s not, it will warm up the milk and it will be impossible to turn into foam.
Pro (well, sort of) tip: when holding the shaker with the milk and stirring, try to grab it from the top, not the middle or the bottom. That way the heat from your hand will affect the milk as little as possible.
The result should, ideally, look like this:
The water -always with ice cubes!- is mandatory. The beach isn’t, but it’s a very nice addition 😉
Coders used to C#, Java etc. know there are two ways to evaluate a logical AND. In C# you can do either
if (test1 & test2)
{
// whatever
}
or
if (test1 && test2)
{
// whatever
}
The difference, of course, is that in the first case (&) BOTH test1 and test2 are evaluated. This doesn’t matter much if test1 and test2 are variables, but it matters a lot if they’re methods. Think of the following example:
if (reserveItemsForOrder() && sendOrderToErp())
{
// whatever
}
In this fictional case, && means that the order will be sent to the ERP system only if items can be reserved. If the single & is used, however, it will be sent anyway, even if not enough stock can be found.
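Since the behavior is the same in Java (which the post mentions alongside C#), here is a small self-contained demonstration you can run; the `check` method is a hypothetical predicate that just counts how often it gets called:

```java
public class ShortCircuitDemo {
    static int calls;

    // Hypothetical predicate that records each invocation
    static boolean check(boolean result) {
        calls++;
        return result;
    }

    // Count how many operands the non-short-circuit '&' evaluates
    static int demoBitwiseAnd() {
        calls = 0;
        if (check(false) & check(true)) { }
        return calls; // both operands are evaluated
    }

    // Count how many operands the short-circuit '&&' evaluates
    static int demoLogicalAnd() {
        calls = 0;
        if (check(false) && check(true)) { }
        return calls; // evaluation stops at the first false
    }

    public static void main(String[] args) {
        System.out.println("& evaluated operands:  " + demoBitwiseAnd());
        System.out.println("&& evaluated operands: " + demoLogicalAnd());
    }
}
```

With `&` both methods run (2 calls); with `&&` the second one is never invoked (1 call), which is exactly why the order in the fictional example above matters.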
This is well known in languages like C, C++, C#, Java etc. But how is AND evaluated in Oracle?
In short, it’s the same as &&. But for a more complete explanation, let’s read it from Oracle itself:
Short-Circuit Evaluation
When evaluating a logical expression, PL/SQL uses short-circuit evaluation. That is, PL/SQL stops evaluating the expression as soon as the result can be determined. This lets you write expressions that might otherwise cause an error. Consider the following OR expression:
DECLARE
…
on_hand INTEGER;
on_order INTEGER;
BEGIN
..
IF (on_hand = 0) OR ((on_order / on_hand) < 5) THEN
…
END IF;
END;
When the value of on_hand is zero, the left operand yields TRUE, so PL/SQL need not evaluate the right operand. If PL/SQL were to evaluate both operands before applying the OR operator, the right operand would cause a division by zero error. In any case, it is a poor programming practice to rely on short-circuit evaluation.
As part of an investigation project at work, we had to create a number of graphs. Of course our first idea was using Excel; but it turns out that in a lot of scenarios it’s ambiguous, time consuming and sometimes outright frustrating. So we ended up doing it with Gnuplot, which provided a much better experience.
This article is not meant to give extended coverage of course; there are many FAQs and other documents available online for that (a small collection is given at the end of the article). It’s meant to cover basic usage and some common scenarios, namely:
How to download and install
How to plot a simple function
How to plot data points from a file
How to plot multiple functions and/or data points
How to setup the plot (axes etc.)
How to fill the area between functions
How to export the plot for MS Office
How to plot using batch files
Links and FAQs
Gnuplot is a really powerful tool. This article won’t cover many things, like 3D plots, polar coordinates, binary data, financial-style graphs and others; take a look at the demo library for that (link given at the end).
How to download and install
Download is provided from SourceForge. Go to http://www.gnuplot.info/download.html and get the current version. After downloading, installation is pretty easy and straightforward. Just click “next” at every step and you’ll be ok.
After installation, start Gnuplot from the desktop icon. You’ll get a command prompt (gnuplot>).
How to plot a simple function
A major problem with MS Excel is that you cannot create a graph for a function; you have to create the data in cells, using a formula. And of course, the values will not be continuous but discrete.
So let’s say you want to make a graph of a function f(x)=x^2+10/x, for values of x between -10 and 10. Enter these commands to the command prompt, pressing ENTER after each line (lines that begin with # are comments):
# setup the x axis range
set xrange [-10:10]
# plot our function
f(x)=x**2+10/x
plot f(x)
To change the line color, the easiest way is to use one of the available linestyles:
plot f(x) linestyle 3
In order to see the readily available linestyles, just enter:
test
How to plot data points from a file
For our example, we have two text tab-separated files, c:\temp\out1.txt and c:\temp\out2.txt, that look like this:
The first has just two columns, x and y. The second has four columns: the first is an incremental number (so gnuplot knows which row comes first, second etc.), the second contains the labels for the x axis, and the third and fourth are measurement values (y).
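The screenshots of the files didn't survive here; based on the description, the two tab-separated files would look something like this (the values are invented, purely for illustration):

```
# out1.txt: x, y
100	250
200	430
300	610

# out2.txt: index, label, measurement 1, measurement 2
1	Jan	150	170
2	Feb	300	280
3	Mar	450	500
```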
Let’s plot the first one:
# make sure we're in the correct dir
cd 'c:\temp\gnuplot'
set xrange [0:1000]
set yrange [0:1000]
plot 'out1.txt' using 1:2
Note the 1:2 here. This tells gnuplot that the 1st column of the file will be used for x and the 2nd for y.
If you want to connect the points, the last line would be:
plot 'out1.txt' using 1:2 with lines
If instead of simply connecting the points you would need to do a ‘best fit’ with a given function, say g(x)=a*x+c :
g(x)=a*x+c
fit g(x) 'out1.txt' using 1:2 via a,c
# here you get a list of the calculations gnuplot is doing, the parameters used and the standard error
plot 'out1.txt' using 1:2, g(x)
Of course, that’s not a very accurate fit, but that’s not our point here 🙂
Let’s now plot the second file. Our goal here is to create a bar chart:
cd 'c:\temp\gnuplot'
# 'set autoscale' automatically sets ranges for x,y
set autoscale
set boxwidth 0.5
set style fill solid
plot 'out2.txt' using 1:3:xtic(2) with boxes
Note the 1:3:xtic(2). This tells gnuplot that the 1st column is to be used for x, the 3rd for y and the 2nd (xtic(2)) for the x-axis labels.
Now let’s try to plot two data series in the same bar chart:
cd 'c:\temp\gnuplot'
set style data histogram
set style histogram cluster gap 1
set style fill solid border -1
set boxwidth 0.9
plot 'out2.txt' using 3:xtic(2) title 'Measurement 14-Feb-2014', 'out2.txt' using 4:xtic(2) title 'Measurement 17-Feb-2014'
How to plot multiple functions and/or data points
Actually, we already did that in the fit and bar chart examples: we just give multiple functions/files, separated by commas. As an example:
cd 'c:\temp\gnuplot'
a=5
f(x)=a*x
plot f(x), 'out1.txt' using 1:2
Let’s add a line and a legend, shall we? The last line will become:
plot f(x) title 'My function', 'out1.txt' using 1:2 with line title 'My data'
How to setup the plot (axes etc.)
Let’s see an (almost) all-inclusive example:
# Chart title
set title 'Workflow performance (AWTs/sample)'
# Get the legend out of the chart
set key outside
# place the legend
# here you can use 'left', 'right', 'center' and 'top', 'bottom', 'cent'
set key right cent
# Setup the axes range using, e.g.
# From-to
set xrange [1:500]
set yrange [1:1000]
# logarithmic
set logscale x
set logscale y
# axes titles
set xlabel 'Samples'
set ylabel 'AWTs per sample'
# let's see what we've done
replot
How to fill the area between functions
What if you want to fill the area below a curve, or between two curves?
First, let’s fill the area between a curve f(x)=x^2 and the x axis:
set xrange [0:10]
# c(x) is the same as the x axis
c(x)=0
f(x)=x**2
# '+' is the pseudofile, you can read about it in the documentation
plot '+' using 1:(c($1)):(f($1)) with filledcurves closed linestyle 3 title 'Filled area'
After this, it should be obvious how you can fill the area between two curves; just use a function instead of c(x)=0. Let’s say we use g(x)=x^1.8
set xrange [0:10]
g(x)=x**1.8
f(x)=x**2
plot '+' using 1:(g($1)):(f($1)) with filledcurves closed linestyle 4 title 'Area between two functions'
How to export the plot for MS Office
Although of course you can do a printscreen, the best format to use with Word, Powerpoint etc. is the Enhanced Metafile Format (.emf). The best thing about it is that it’s scalable. Surprisingly, if you have a .emf image and preview it (Windows uses Paint by default) it looks awful; but if you insert it into Word it looks great.
So, in order to create a plot and get it as an .emf you need to do something like this:
# output directory and file name
cd 'c:\temp\gnuplot'
set terminal emf enhanced
set output 'plot.emf'
# create the plot
set xrange [-100:100]
f(x)=x**2-10*x
plot f(x) title 'f(x)=x^2-10x'
# because of 'set output' above, plot creates
# the file on disk instead of showing it on the screen
# NOTE: until you 'unset output', the plot file (.emf, .pdf, whatever)
# is locked and cannot be accessed
unset output
# normally you need to return the plot output to the screen
set terminal wxt
How to plot using batch files
Creating a batch file is useful in several scenarios. For example, you might have a data file (like out1.txt above) which changes every day; and every day you need to create the same graph, but with the fresh data.
So in order to do this, just write all gnuplot commands to a text file and execute it with gnuplot.exe. See the example below (the backslash means that the line is continued):
# Save as 'C:\temp\dailychart.plt'
cd 'c:\temp\gnuplot'
set terminal pdf enhanced
set output "plot.pdf"
f(x) = 980/x
c(x) = 650
plot \
f(x) title '980 limit' linestyle 1, \
c(x) title '2 sec limit' linestyle 2, \
'out1.txt' using 1:2 title 'External data' linestyle 3, \
'out2.txt' using 1:3 title 'External data 2' linestyle 4
unset output
quit
The .plt extension is the gnuplot default, but you can name the file anything (e.g. .txt). Now open a Windows command prompt and type:
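The command itself got lost in formatting; assuming gnuplot.exe is on your PATH (otherwise use its full install path), it would be along the lines of:

```
gnuplot.exe C:\temp\dailychart.plt
```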
Short answer: 4-5 capsules per day, 3 for Intenso-type pods.
For more details, and an answer taking into account the specific type of capsule, read on.
To answer this (very important 😊) question, I’ll concentrate on Dolce Gusto capsules for the simple reason that that’s what I have at home (well, that, plus a Krups filter coffee machine, plus an Izzy traditional espresso machine, plus my one-time favorite Bialetti brikka). The results for Nespresso et al. should be similar.
Do note that I’m only considering caffeine content, but that’s not always the only factor. E.g. if you drink anything near 400 cups of Lungo decaffeinato in a single day, you will have non-caffeine-related problems (WC attendance comes readily to mind! 😊).
The usual number given as “safe” caffeine per day for adults is 400 mg (e.g. see “Caffeine: How much is too much?” from the Mayo Clinic here). So this amounts to:
Max. caffeine per day: 400 mg

Capsule                     mg caffeine per capsule    Max capsules per day

Specialty Coffee:
Cappuccino                  107                        4
Cappuccino Skinny           90                         4
Latte Macchiato             85                         5
Vanilla Latte Macchiato     83                         5
Caramel Latte Macchiato     83                         5
Latte Macchiato Skinny      83                         5
Mocha                       45                         9
Cappuccino Ice              35                         11

Coffee Drinks:
Caffe Grande Intenso        130                        3
Espresso Intenso            115                        3
Grande Mild                 106                        4
Light Roast                 106                        4
Medium Roast                103                        4
Café au Lait                92                         4
Lungo                       89                         4
Espresso                    80                         5
Lungo decaffeinato          1                          400

Non-Coffee Drinks:
Chai Tea Latte              34                         12
Nestea Peach Iced Tea       9                          44
Spreadsheet is here if you want to play with the numbers (you can also comment on it). I’ve rounded to closest integer because nobody makes 3 cups of coffee and then decides to have another 0.8 😊
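The arithmetic behind the table is simply the daily limit divided by the per-capsule caffeine content, rounded to the nearest integer. As a sanity check, here it is in Python (caffeine numbers taken from the table above, a few entries shown):

```python
# Maximum "safe" caffeine per day for adults, in mg (the Mayo Clinic figure)
DAILY_LIMIT_MG = 400

# mg of caffeine per capsule, as listed in the table above
capsules = {
    "Cappuccino": 107,
    "Espresso Intenso": 115,
    "Lungo": 89,
    "Espresso": 80,
    "Lungo decaffeinato": 1,
}

# Max capsules per day = daily limit / per-capsule content, rounded
max_per_day = {name: round(DAILY_LIMIT_MG / mg) for name, mg in capsules.items()}

for name, n in max_per_day.items():
    print(f"{name}: {n} capsules/day")
```

Running this reproduces the table's values, e.g. 4 for Cappuccino and 400 for Lungo decaffeinato.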
That’s a piece of news, tweeted by a random guy, that went viral. It’s false, but it didn’t matter at all.
Note that this is NOT a typical fake news case. The “guilty” guy, the one who tweeted the wrong info, actually made some effort to verify his claim. Not much, but that’s totally understandable given that he had, like, 40 followers (basically his friends). And when his tweet went viral and was shared hundreds of thousands of times, he tried to find the truth. When he did, he admitted it, deleted the original tweet and posted that it was false.
It didn’t matter.
These are not the buses you’re looking for
It immediately became, and still is, “proof” for a lot of people, asserting a fact that never happened.
I’m becoming increasingly desperate. There really doesn’t seem to be any way out of this mess. People will believe anything, if they want to believe it. And through the internet, it’s all too easy to find it. True or not, it doesn’t matter.
Yesterday (Mon 16-May-2016) a court in northern Greece convicted, for the first time, a journalist/blogger for spreading a hoax.
A hoax is a piece of fake and (usually) emotionally charged news. The usual drivers behind this are “like farming” (earning a small amount of money for every click via Google ads) and selling bogus “health” products on the side. It’s very common for hoaxes to go hand in hand with conspiracy theories, like “chemtrails” (“we are being sprayed with chemicals from airplanes!”) or, as in this case, “harmful vaccines” (“vaccines cause autism”, “pharma companies spread cancer through vaccines!”).
Until now, the economics were firmly on the side of the scammers propagating the hoaxes: there was only profit to make, no real cost and, more importantly, no risk. So they would (and are) spreading whatever b*****t they can think of, with no or fake proof but lots of emotional content (“cancer to children!!!”) and pocket the profits.
The hoax in this specific case was titled “Shock: See how companies are spreading cancer through a vaccine”. It was about a girl, identified only by her first name, who supposedly received the MMR vaccine and then died from a brain tumor.
The story is full of sh*t; it was thoroughly debunked here.
This conviction is the only one I’m aware of globally (I do hope there are more, but I haven’t heard of any). And it may be, however slowly, a turning of the tide. Organized society needs to fight against this, and such cases are long overdue.