Creating a trigger to protect a specific row or group of rows from deletion

-- Create test table:
CREATE TABLE dbo.TriggerTest
(
    [_ID] INT IDENTITY NOT NULL,
    [TESTdata] [VARCHAR](500) NOT NULL
)
ON [PRIMARY];
GO

-- Add a row to delete:
INSERT INTO [dbo].[TriggerTest]
        ( [TESTdata] )
VALUES  ( 'Fubar1' );
GO

-- Add a row to protect:
INSERT INTO [dbo].[TriggerTest]
        ( [TESTdata] )
VALUES  ( 'Fubar2' );
GO

-- Create trigger:
CREATE TRIGGER ProtectTriggerTest ON dbo.TriggerTest
    FOR DELETE
AS
BEGIN
    DECLARE @Tdata VARCHAR(500);

    -- Check whether the protected row is among the rows being deleted
    SELECT @Tdata = TESTdata
    FROM Deleted
    WHERE Deleted.TESTdata = 'Fubar2';

    IF @Tdata IS NOT NULL
    BEGIN
        -- Undo the delete, then alert someone
        ROLLBACK TRANSACTION;

        -- add in other code here!!!!!!
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'a_sql', -- sysname
            @recipients = 'mark.horninger@xxx.com', -- varchar(max)
            @copy_recipients = '', -- varchar(max)
            @blind_copy_recipients = '', -- varchar(max)
            @subject = N'DELETE WARNING!!!!', -- nvarchar(255)
            @body = N'Someone tried to delete a row they were not supposed to!', -- nvarchar(max)
            @mailitem_id = 0, -- int
            @from_address = 'DoNotReply@xxx.com', -- varchar(max)
            @reply_to = 'DoNotReply@xxx.com'; -- varchar(max)

        --RAISERROR('You cannot delete that row from TriggerTest!', 16, 1);

        RETURN;
    END;
END;

GO

SELECT *
FROM dbo.TriggerTest;
GO

-- This delete succeeds (the protected row is not affected):
DELETE FROM dbo.TriggerTest
WHERE TESTdata = 'Fubar1';

-- These deletes are rolled back by the trigger:
DELETE FROM dbo.TriggerTest
WHERE TESTdata = 'Fubar2';

DELETE FROM dbo.TriggerTest;

SELECT *
FROM dbo.TriggerTest;
GO

-- Remove table:
--DROP TABLE dbo.TriggerTest;

SSIS Tuning – Interesting thought

Baseline source system extract speed.

Understand your source system and how fast you extract from it. After all, Integration Services cannot be tuned beyond the speed of your source – i.e., you cannot transform data faster than you can read it.

Measure the speed of the source system by creating a very simple package that reads data from your source into a Row Count destination:

Execute the package from the command line (DTEXEC) and measure the time it took for it to complete its task. Use the Integration Services log output to get an accurate calculation of the time. You want to calculate rows per second:

Rows / sec = Row Count / Time taken by the Data Flow

Based on this value, you now know the maximum number of rows per second you can read from the source – this is also the roof on how fast you can transform your data.
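For example (using made-up numbers), if the baseline package reads 10,000,000 rows and the log shows the data flow took 400 seconds, your ceiling is 10,000,000 / 400 = 25,000 rows per second.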

Transactions

What are transactions?

Microsoft defines transactions as:
“A transaction is a sequence of operations performed as a single logical unit of work. A logical unit of work must exhibit four properties, called the atomicity, consistency, isolation, and durability (ACID) properties, to qualify as a transaction.
Atomicity
A transaction must be an atomic unit of work; either all of its data modifications are performed, or none of them is performed.
Consistency
When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction’s modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.
Isolation
Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either recognizes data in the state it was in before another concurrent transaction modified it, or it recognizes the data after the second transaction has completed, but it does not recognize an intermediate state. This is referred to as serializability because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.
Durability
After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.”

Sounds great! Sign me up! Well, there's a drawback to transactions: they cause locking. Depending on how many records you change, whole tables could get locked.

How a lock is applied to an object is complicated: there are row locks, page locks, and table locks. SQL Server internally has rules for when to escalate from row locks to page locks to table locks.

When an object is locked, nothing else can change that object.

So the long and the short of the matter is: use transactions sparingly. Be really sure you need them, and keep them short.

Transactions can also cause log growth (careful you don't run out of disk space!), and a long transaction can take a long time to roll back.
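Here's a minimal sketch of that advice (the dbo.Accounts table and the dollar amounts are made up for illustration): keep the related changes inside one short, explicit transaction, and roll everything back if any part fails.

BEGIN TRY
    BEGIN TRANSACTION;

    -- Two changes that must succeed or fail together
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Undo both updates if either one failed
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW; -- requires SQL Server 2012 or later; use RAISERROR on older versions
END CATCH;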

WARNING!
Transactions are case-sensitive

BEGIN TRAN x
.
.
.
COMMIT TRAN X

Is not the same as:

BEGIN TRAN X
.
.
.
COMMIT TRAN X

Collation….

This is a quick post to talk about collation – and why [NOT] to change it from the default.

Microsoft defines collation as:

“Collations specify the rules for how strings of character data are sorted and compared, based on the norms of particular languages and locales. For example, in an ORDER BY clause, an English speaker would expect the character string ‘Chiapas’ to come before ‘Colima’ in ascending order. However, a Spanish speaker in Mexico might expect words beginning with ‘Ch’ to appear at the end of a list of words starting with ‘C’. Collations dictate these kinds of sorting and comparison rules. The Latin1_General collation will sort ‘Chiapas’ before ‘Colima’ in an ORDER BY ASC clause, whereas the Traditional_Spanish collation will sort ‘Chiapas’ after ‘Colima’.”

If you have two tables with different collations and try to join them on a character column, you're going to get an error like: Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AI" in the equal to operation.

Why jack with collation in the first place? 99% of the time you should just leave it alone, unless you have a really good reason….

Here’s the workaround when someone throws this at you.

INNER JOIN #Translations b
    ON a.SourceID COLLATE SQL_Latin1_General_CP1_CI_AS = b.SourceID COLLATE SQL_Latin1_General_CP1_CI_AS;

This forces the join to use the same collation.
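If you want to see the conflict and the fix end to end, here's a quick self-contained repro using made-up temp tables:

-- Two temp tables whose character columns use different collations
CREATE TABLE #A ( SourceID VARCHAR(10) COLLATE SQL_Latin1_General_CP1_CI_AS );
CREATE TABLE #B ( SourceID VARCHAR(10) COLLATE Latin1_General_CI_AI );

INSERT INTO #A ( SourceID ) VALUES ( 'X1' );
INSERT INTO #B ( SourceID ) VALUES ( 'X1' );

-- This join fails with the collation conflict error:
--SELECT * FROM #A a INNER JOIN #B b ON a.SourceID = b.SourceID;

-- Forcing both sides to the same collation makes it work:
SELECT *
FROM #A a
INNER JOIN #B b
    ON a.SourceID COLLATE SQL_Latin1_General_CP1_CI_AS = b.SourceID COLLATE SQL_Latin1_General_CP1_CI_AS;

DROP TABLE #A;
DROP TABLE #B;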

Lock Pages in Memory Setting?

So what the heck is Lock Pages in Memory? It's a Windows user right that lets SQL Server allocate memory the operating system can't page out to disk, so the buffer pool stays in physical RAM.

To enable the Lock Pages in Memory option:

1. On the Start menu, click Run. In the Open box, type gpedit.msc.

2. On the Local Group Policy Editor console, expand Computer Configuration, and then expand Windows Settings.

3. Expand Security Settings, and then expand Local Policies.

4. Select the User Rights Assignment folder. The policies will be displayed in the details pane.

5. In the pane, double-click Lock pages in memory.

6. In the Local Security Setting – Lock pages in memory dialog box, click Add User or Group.

7. In the Select Users, Service Accounts, or Groups dialog box, add an account with privileges to run sqlservr.exe.

8. Log out and then log back in for this change to take effect.

It's a user right for a Windows account and can be enabled using secpol.msc or gpedit.msc.

Why do we need this privilege for the SQL Server startup account?

There are three different memory models in 64-bit SQL Server: conventional, locked pages, and large pages.

Locked pages memory model: In the locked pages memory model, SQL Server uses the AllocateUserPhysicalPages and MapUserPhysicalPages functions to allocate memory. The caller's token must have the LPIM privilege or those calls fail, which is why the SQL Server startup account needs LPIM to use the locked pages memory model.

Large pages memory model: SQL Server uses the large pages memory model when trace flag 834 is enabled in Enterprise Edition on systems with more than 8 GB of physical memory. In this memory model SQL Server calls the VirtualAlloc API with the MEM_LARGE_PAGES allocation type, and using MEM_LARGE_PAGES in VirtualAlloc also requires the caller's token to have the LPIM privilege.

Memory allocated using the AWE allocator APIs, or via VirtualAlloc with MEM_LARGE_PAGES, is not part of the process working set. It therefore cannot be paged out, and it does not show up in Task Manager's private bytes or working set, or in the Perfmon Process: Private Bytes and Process: Working Set counters.

In most cases, lock pages in memory should be enabled.
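If you want to confirm which memory model the instance is actually using, here's a quick check (sql_memory_model_desc is only available on newer versions, roughly SQL Server 2016 SP1 and later; on older builds, search the error log instead):

-- Returns CONVENTIONAL, LOCK_PAGES, or LARGE_PAGES
SELECT sql_memory_model_desc
FROM sys.dm_os_sys_info;

-- Older builds: look for the 'Using locked pages in the memory manager' startup message
EXEC sys.xp_readerrorlog 0, 1, N'locked pages';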

 

A good bit of this is at:

SQL Server lock pages in memory

 

So what's faster: WHERE ... IN or INNER JOIN? The answer is, it depends!

 

Let's see what the execution plans look like in different scenarios.

First, let's create our test environment:

 

[Attachment: p3_build_data - script that builds the test data]
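The build script itself is in the attachment above; based on the table names and row counts in the results below, it presumably does something along these lines (column names and exact row counts are guesses):

-- A large numbers table and a much smaller table holding a subset of those numbers
CREATE TABLE dbo.NumbersTest  ( Number INT NOT NULL );
CREATE TABLE dbo.NumbersTest2 ( Number INT NOT NULL );

-- Load roughly a million sequential numbers into the big table
INSERT INTO dbo.NumbersTest ( Number )
SELECT TOP ( 1000000 )
       ROW_NUMBER() OVER ( ORDER BY ( SELECT NULL ) )
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;

-- Load a small subset into the second table
INSERT INTO dbo.NumbersTest2 ( Number )
SELECT Number
FROM dbo.NumbersTest
WHERE Number % 100 = 0;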

Queries: Here are the queries we’ll try:

[Attachment: p3_selects_data - the test queries]
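The exact queries are in the attachment; based on the summary at the end of the post they are variations along these lines (approximations only), each run with SET STATISTICS IO ON:

SET STATISTICS IO ON;

-- WHERE ... IN
SELECT n.Number
FROM dbo.NumbersTest n
WHERE n.Number IN ( SELECT n2.Number FROM dbo.NumbersTest2 n2 );

-- INNER JOIN
SELECT n.Number
FROM dbo.NumbersTest n
INNER JOIN dbo.NumbersTest2 n2 ON n.Number = n2.Number;

-- WHERE EXISTS
SELECT n.Number
FROM dbo.NumbersTest n
WHERE EXISTS ( SELECT 1 FROM dbo.NumbersTest2 n2 WHERE n2.Number = n.Number );

-- CROSS APPLY
SELECT n.Number
FROM dbo.NumbersTest n
CROSS APPLY ( SELECT n2.Number FROM dbo.NumbersTest2 n2 WHERE n2.Number = n.Number ) x;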

 

 

Results with no optimizations (no indexes):

(50000 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest'. Scan count 1, logical reads 2225, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(50000 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest'. Scan count 1, logical reads 2225, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(50000 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest'. Scan count 1, logical reads 2225, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(50000 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest'. Scan count 1, logical reads 2225, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(50000 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest'. Scan count 1, logical reads 2225, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

[Attachments: p3_PLAn1a / p3_plan1b - execution plan screenshots]

 

 

Similar results for all queries. Each took roughly the same time and resources.
Optimizations:

Create a simple non-clustered index.
[Attachment: p3_optimize_data - index creation script]
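The attached script presumably boils down to something like this (the index name is a guess):

-- A simple non-clustered index on the big table's join column
CREATE NONCLUSTERED INDEX IX_NumbersTest_Number
    ON dbo.NumbersTest ( Number );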

(50000 row(s) affected)
Table 'NumbersTest'. Scan count 10000, logical reads 32044, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(50000 row(s) affected)
Table 'NumbersTest'. Scan count 10000, logical reads 32044, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(50000 row(s) affected)
Table 'NumbersTest'. Scan count 10000, logical reads 32044, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(50000 row(s) affected)
Table 'NumbersTest'. Scan count 10000, logical reads 32044, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

(50000 row(s) affected)
Table 'NumbersTest'. Scan count 10000, logical reads 32044, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'NumbersTest2'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(1 row(s) affected)

[Attachments: p3_PLAn2a / p3_PLAn2b - execution plan screenshots]

 

Summary: in this case, where there's a large indexed table (approximately 1 million rows) and we are looking for a subset of that table (50K rows), INNER JOIN will outperform WHERE ... IN and WHERE ... EXISTS. CROSS APPLY performs the same as INNER JOIN.

Getting Started – the Select Statement

If you understand the SELECT statement thoroughly, you have understood 80% of SQL, because this is the most commonly used statement, as well as the most complex (when you understand its ramifications in terms of all the things it lets you do). The original definition of SQL included only the SELECT statement.

SELECT * FROM tblClientData


This means all rows and all columns.

This will be a lot of data. In fact, this query would be almost useless, because it would return too much data and could bring the server to its knees. Generally, this kind of query is a bad idea.
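A better habit (the column names and date filter below are made up, since we don't know the real schema) is to ask for only the columns and rows you actually need:

-- Only the columns we care about, and only the rows we care about (illustrative names)
SELECT ClientID,
       ClientName
FROM   tblClientData
WHERE  CreatedDate >= '2015-01-01';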