Let us start by stating the obvious: password management for programs and services is a huge pain for developers.
It’s one of the things that is always overlooked during development, where you’re just trying to make the thing work. It doesn’t get much attention during testing either, where people are usually focused on whether it works correctly in normal use and edge cases, on the UI, usability and so on.
But come deployment time, the admins start complaining. Storing passwords in plain text files is, to put it mildly, BLOODY HORRIBLE from a security perspective. And storing them in better ways takes a surprising amount of time, just when the devs thought they were almost finished.
So having fewer passwords to store and secure is very helpful for everyone. And one thing many applications need is the credentials for a database.
Fortunately, if your application is running as a Windows service and your database is SQL Server, you don’t need a password. You can use integrated security. All you need to do is grant the service user access to read data from SQL Server.
Now here’s the thing: if you’re using a domain user to run the service, that’s obvious. You just create the user in SQL and grant access as needed (you can even use the script below and change the user). But what happens when, as is very common, the application is running under the Local System account?
Turns out, fortunately, there’s a solution for that as well. When a service running as Local System accesses the network, it authenticates as the computer’s Active Directory account, which is named “hostname$”. E.g. if the hostname of the application server is MYSERVER, the user name will be MYDOMAIN\MYSERVER$.
So you can run the following SQL to grant access:
/*
Source: DotJim blog (http://dandraka.com)
Jim Andrakakis, April 2023
*/
-- suppose you work on domain MYDOMAIN
-- and the server that hosts the Windows
-- service is MYSERVER
-- this is the name given by the hostname command
USE mydatabase;
GO
CREATE LOGIN [MYDOMAIN\MYSERVER$] FROM WINDOWS;
GO
CREATE USER [MYDOMAIN\MYSERVER$] FOR LOGIN [MYDOMAIN\MYSERVER$];
GO
/* db_datareader grants read-only access */
ALTER ROLE [db_datareader] ADD MEMBER [MYDOMAIN\MYSERVER$];
GO
/* if you want to insert, update or delete, add db_datawriter */
ALTER ROLE [db_datawriter] ADD MEMBER [MYDOMAIN\MYSERVER$];
GO
That done, you can use the following connection string to connect to the database:
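For .NET, a minimal example looks like this (MYSERVER and mydatabase are placeholders, obviously):

Server=MYSERVER;Database=mydatabase;Integrated Security=SSPI;

No user name, no password anywhere; the connection authenticates as the service user, i.e. MYDOMAIN\MYSERVER$.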
I hate, hate, hate clicking and clicking again, especially for tasks that can be automated. And one of these tasks is doing DB backups, which I have to do every time before starting a deployment.
So here’s what I’ve come up with. This generates a backup of every database except the system ones (see the WHERE clause in the script below) as databasename_yyyymmdd.bak, e.g. ERP-UAT-DB_20230321.bak.
/*
Source: DotJim blog (http://dandraka.com)
Jim Andrakakis, March 2023
*/
DECLARE @name NVARCHAR(256) -- database name
DECLARE @path NVARCHAR(512) -- path for backup files
DECLARE @fileName NVARCHAR(512) -- filename for backup
DECLARE @fileDate NVARCHAR(40) -- used for file name
/* specify database backup directory */
SET @path = 'D:\myBackups\'
/* get date as yyyyMMdd */
SELECT @fileDate = CONVERT(NVARCHAR(20),GETDATE(),112)
DECLARE db_cursor CURSOR READ_ONLY FOR
SELECT name
FROM master.sys.databases
WHERE name NOT IN ('master','model','msdb','tempdb') -- exclude these databases
AND state = 0 -- database is online
AND is_in_standby = 0 -- database is not read only for log shipping
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
BACKUP DATABASE @name TO DISK = @fileName
FETCH NEXT FROM db_cursor INTO @name
END
CLOSE db_cursor
DEALLOCATE db_cursor
Change the path, obviously, and you can also change the WHERE name NOT IN ('master','model','msdb','tempdb'). E.g. you can do something like WHERE name LIKE '%_PRODUCTION' to suit your scenario.
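One small gotcha if you go the LIKE route: in T-SQL the underscore is itself a wildcard that matches any single character. To match a literal underscore, wrap it in brackets. A quick way to check which databases the adjusted filter picks up:

-- [_] matches a literal underscore; a bare _ matches any single character
SELECT name
FROM master.sys.databases
WHERE name LIKE '%[_]PRODUCTION'
AND state = 0
AND is_in_standby = 0;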
When your SQL Server DB log files are growing and your disk is close to full (or, as happened to me this morning, fills up completely, preventing any DB operation whatsoever and bringing the affected system down!), you need to shrink them.
What this means, basically, is that you create a backup (do NOT skip that!) and then you delete the information that allows you to recover the database to any point in time before the backup. That’s what SET RECOVERY SIMPLE and DBCC SHRINKFILE do. And since you kept a backup, you no longer need this information. You do want point-in-time recovery for operations after the backup though; that’s why we go back to full recovery mode with SET RECOVERY FULL at the end.
So what you need is to log in to your SQL Server with admin rights and:
USE DatabaseName
GO
BACKUP DATABASE DatabaseName
TO DISK = 'C:\dbbackup\DatabaseName.bak'
WITH FORMAT,
MEDIANAME = 'DatabaseNameBackups',
NAME = 'Full Backup of DatabaseName';
GO
ALTER DATABASE DatabaseName SET RECOVERY SIMPLE;
GO
CHECKPOINT;
GO
DBCC SHRINKFILE ('DatabaseName_Log', 10);
GO
ALTER DATABASE DatabaseName SET RECOVERY FULL;
GO
Notice the 10 there; that’s the size, in MB, that the DB log file will shrink to. You probably need to change that to match your DB’s needs. Also, DatabaseName_Log is the logical name of your DB log. You can find it in the DB properties. You probably also need to change the backup path from the example C:\dbbackup\DatabaseName.bak.
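One caveat: switching to SIMPLE breaks the transaction log backup chain, so once you’re back in FULL, take a fresh full backup; transaction log backups will only work again after that. A minimal sketch, reusing the placeholder names and path from above (the file name is just an example):

BACKUP DATABASE DatabaseName
TO DISK = 'C:\dbbackup\DatabaseName_postshrink.bak'
WITH FORMAT,
MEDIANAME = 'DatabaseNameBackups',
NAME = 'Full Backup of DatabaseName after shrink';
GO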
Anyone who develops software that interacts with a database knows (read: should know) how to read a query execution plan, given by EXPLAIN PLAN or its equivalent, and how to avoid at least the most common problems, like a full table scan.
It is obvious that a plan can change if the database changes. For example if we add an index that is relevant to our query, it will be used to make our query faster. And this will be reflected in the new plan.
Likewise if the query changes. If instead of
SELECT * FROM mytable WHERE somevalue > 5
the query changes to
SELECT * FROM mytable WHERE somevalue IN
(SELECT someid FROM anothertable)
the plan will of course change.
So during a database performance tuning seminar at work, we came to the following question: can the execution plan change if we just change the filter value? Like, if instead of
SELECT * FROM mytable WHERE somevalue > 5
the query changes to
SELECT * FROM mytable WHERE somevalue > 10
It’s not obvious why it should. The columns used, both in the SELECT and in the WHERE clause, do not change. So if a human looked at these two queries, they would choose the same way of executing them (e.g. using an index on somevalue, if one is available).
But databases have a knowledge we don’t have. They have statistics.
Let’s do an example. We’ll use Microsoft SQL Server here. The edition doesn’t really matter; you can use Express, for example. But the idea, and the results, are the same for Oracle or any other major RDBMS.
First off, let’s create a database. Open Management Studio and paste the following (changing the paths as needed):
CREATE DATABASE [PLANTEST]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'PLANTEST',
FILENAME = N'C:\DATA\PLANTEST.mdf' ,
SIZE = 180MB , FILEGROWTH = 10% )
LOG ON
( NAME = N'PLANTEST_log',
FILENAME = N'C:\DATA\PLANTEST_log.ldf' ,
SIZE = 20MB , FILEGROWTH = 10%)
GO
Note that I’ve allocated a lot of space from the start, 180MB. There’s a reason for that: we know that we’ll pump in a lot of data, and we want to avoid the delay of the DB files growing.
Now let’s create a table to work on:
USE PLANTEST
GO
CREATE TABLE dbo.TESTWORKLOAD
( testid int NOT NULL IDENTITY(1,1),
testname char(10) NULL,
testdata nvarchar(36) NULL )
ON [PRIMARY]
GO
And let’s fill it (this can take some time, say around 5-10 minutes):
DECLARE @cnt1 INT = 0;
DECLARE @cnt2 INT = 0;
WHILE @cnt1 < 20
BEGIN
SET @cnt2 = 0;
WHILE @cnt2 < 100000
BEGIN
insert into TESTWORKLOAD (testname, testdata)
values ('COMMON0001', CONVERT(char(36), NEWID()));
SET @cnt2 = @cnt2 + 1;
END;
insert into TESTWORKLOAD (testname, testdata)
values ('SPARSE0002', CONVERT(char(36), NEWID()));
SET @cnt1 = @cnt1 + 1;
END;
GO
What I did here is, basically, fill the table with 2 million (20 * 100000) plus 20 rows. Almost all of them (the 2 million) have the value “COMMON0001” in the testname field. But a few, only 20, have a different value, “SPARSE0002”.
Essentially the table is our proverbial haystack. The “COMMON0001” rows are the hay, and the “SPARSE0002” rows are the needles 🙂
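By the way, the double loop above is slow because every INSERT runs as a separate statement and transaction. If the 5-10 minute wait bothers you, here’s a set-based sketch that generates the same data much faster; it assumes sys.all_objects contains at least a couple of thousand rows, which holds on any normal instance:

-- 2 million "hay" rows in one statement; the CROSS JOIN only serves
-- to produce enough source rows, and NEWID() is evaluated once per row
INSERT INTO TESTWORKLOAD (testname, testdata)
SELECT TOP (2000000) 'COMMON0001', CONVERT(char(36), NEWID())
FROM sys.all_objects a CROSS JOIN sys.all_objects b;

-- and the 20 "needle" rows
INSERT INTO TESTWORKLOAD (testname, testdata)
SELECT TOP (20) 'SPARSE0002', CONVERT(char(36), NEWID())
FROM sys.all_objects;
GO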
Let’s examine how the database will execute these two queries:
SELECT * FROM TESTWORKLOAD WHERE testname = 'COMMON0001';
SELECT * FROM TESTWORKLOAD WHERE testname = 'SPARSE0002';
Select both of them and, in Management Studio, press Control+L or the “Display estimated execution plan” button. What you will see is this:
What you see here is that both queries will do a full table scan. That means that the database will go and grab every single row from the table, look at the rows one by one, and give us only the ones that match (the ones with COMMON0001 or SPARSE0002, respectively).
That’s ok when you don’t have a lot of rows (say, up to 5 or 10 thousand), but it’s terribly slow when you have a lot (like our 2 million).
So let’s create an index for that:
CREATE NONCLUSTERED INDEX [IX_testname] ON [dbo].[TESTWORKLOAD]
(
[testname] ASC
)
GO
And here’s where you watch the magic happen. Select the same queries as above and press Control+L (or the “Display estimated execution plan” button) again. Voila:
What you see here is that, even though the only difference between the two queries is the filter value, the execution plan changes.
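If you’d rather see the plans as text instead of the graphical view, SET SHOWPLAN_TEXT shows the same information; note that while it’s ON, the queries are not executed, only their plans are returned:

SET SHOWPLAN_TEXT ON;
GO
SELECT * FROM TESTWORKLOAD WHERE testname = 'COMMON0001';
SELECT * FROM TESTWORKLOAD WHERE testname = 'SPARSE0002';
GO
SET SHOWPLAN_TEXT OFF;
GO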
Why does this happen? And how?
Well, here’s where statistics come in handy. In the Object Explorer of Management Studio, expand (the “+”) our database and table, and then the “Statistics” folder.
You can see the statistics object for our index, IX_testname. If you open it (double click, then go to “Details”) you see the following:
So (I’m simplifying a bit here, but not a lot) the database knows how many rows have the value “COMMON0001” (2 million) and how many have the value “SPARSE0002” (just 20).
Knowing this, it concludes (that’s the job of the query optimizer) that the best way to execute the 2 queries is different:
The first one (WHERE testname = ‘COMMON0001’) will return almost all the rows of the table. Knowing this, the optimizer decides that it’s faster to just get everything (aka Full Table Scan) and filter out the very few rows we don’t need.
For the second one (WHERE testname = ‘SPARSE0002’), things are different. The optimizer knows that it’s looking only for a few rows, and it’s smartly using the index to find them as fast as possible.
In plain English, if you want the hay out of a haystack, you just get the whole stack. But if you’re looking for the needles, you go find them one by one.
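By the way, you don’t have to click through the GUI to see these numbers; DBCC SHOW_STATISTICS prints the histogram the optimizer uses:

-- the histogram shows, per range of testname values,
-- how many rows the optimizer expects to find
DBCC SHOW_STATISTICS ('dbo.TESTWORKLOAD', IX_testname) WITH HISTOGRAM;
GO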
Coders used to C#, Java etc. know there are two ways to evaluate a logical AND. In C# you can do either
if (test1 & test2)
{
// whatever
}
or
if (test1 && test2)
{
// whatever
}
The difference, of course, is that in the first case (&) BOTH test1 and test2 are evaluated, whereas with && the evaluation is short-circuited: test2 is only evaluated if test1 is true. This doesn’t matter much if test1 and test2 are variables, but it matters a lot if they’re methods. Think of the following example:
if (reserveItemsForOrder() && sendOrderToErp())
{
// whatever
}
In this fictional case, && means that the order will be sent to the ERP system only if items can be reserved. If the single & is used, however, it will be sent anyway, even if not enough stock can be found.
This is well known in languages like C, C++, C#, Java etc. But how is AND evaluated in Oracle?
In short, it’s the same as &&. But for a more complete explanation, let’s read it from Oracle itself:
Short-Circuit Evaluation
When evaluating a logical expression, PL/SQL uses short-circuit evaluation. That is, PL/SQL stops evaluating the expression as soon as the result can be determined. This lets you write expressions that might otherwise cause an error. Consider the following OR expression:
DECLARE
…
on_hand INTEGER;
on_order INTEGER;
BEGIN
..
IF (on_hand = 0) OR ((on_order / on_hand) < 5) THEN
…
END IF;
END;
When the value of on_hand is zero, the left operand yields TRUE, so PL/SQL need not evaluate the right operand. If PL/SQL were to evaluate both operands before applying the OR operator, the right operand would cause a division by zero error. In any case, it is a poor programming practice to rely on short-circuit evaluation.