Read Committed does not read ‘all’ committed data

This is an excerpt of a blog post by Adam Haines from DEMYSTIFYING SQL SERVER.
http://jahaines.blogspot.com/2009/10/missing-committed-rows-in-read.html
This is a quick recap of the post:

  • When SQL Server scans a given table it takes a shared lock one row at a time
  • If a row is committed at a position the scan has already passed, that row does not appear in the result set
  • When a clustered index is scanned, it returns the data in the order of the index sort
  • When a heap is scanned, it returns data as it finds it on disk, unless an ORDER BY is specified
  • No sort is ever guaranteed without an ORDER BY clause
  • The direction of the scan impacts which data will be missing and which data is displayed
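The behavior above can be reproduced with two sessions; this is a sketch, and the table and values are hypothetical:

```sql
-- Session 1: scan a clustered index under READ COMMITTED (hypothetical table).
-- The scan takes shared locks one row at a time as it moves forward in key order.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT * FROM dbo.Orders;   -- long scan in clustered key order

-- Session 2: while the scan is in flight, commit a row whose key sorts
-- BEFORE the scan's current position.
BEGIN TRANSACTION;
INSERT INTO dbo.Orders (OrderID, Amount) VALUES (1, 10.00);
COMMIT;

-- If the scan has already passed key 1, the committed row is simply missed:
-- Read Committed guarantees you read only committed data, not ALL committed data.
```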

Truncate table process blocked by select with(nolock)

I had to kill a user query today because it was causing blocking. The user did exactly what is commonly advised, used NOLOCK, yet the query still had to be killed because of this.

no-nonsense SQL Server

Where you would expect that writers don’t get blocked by readers using dirty reads, this is not the case when the writer executes a TRUNCATE TABLE command, because a truncate is regarded as a DDL operation, not DML. More technically, a SELECT statement holds a Sch-S (schema stability) lock on the object, even under isolation level READ UNCOMMITTED, while a truncate requires a Sch-M (schema modification) lock on the object, and these locks are not compatible.
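The incompatibility can be observed directly; this is a sketch, and the table name is hypothetical:

```sql
-- Session 1: even a dirty read takes a Sch-S (schema stability) lock.
SELECT COUNT(*) FROM dbo.BigTable WITH (NOLOCK);   -- assume this runs for a while

-- Session 2: TRUNCATE is DDL and needs a Sch-M (schema modification) lock,
-- so it waits behind the Sch-S lock held by the NOLOCK scan.
TRUNCATE TABLE dbo.BigTable;

-- Session 3: watch the blocked Sch-M request while both sessions are active.
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND request_mode IN ('Sch-S', 'Sch-M');
```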


Not every error can be handled..!

Did you know that there are a whole bunch of errors that cannot be trapped by a T-SQL TRY…CATCH block?

Behold ..!

Microsoft Docs says…

Errors Unaffected by a TRY…CATCH Construct

TRY…CATCH constructs do not trap the following conditions:

  • Warnings or informational messages that have a severity of 10 or lower.
  • Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has severity of 20 or higher and the database connection is not disrupted, TRY…CATCH will handle the error.
  • Attentions, such as client-interrupt requests or broken client connections.
  • When the session is ended by a system administrator by using the KILL statement.

The following types of errors are not handled by a CATCH block when they occur at the same level of execution as the TRY…CATCH construct:
  • Compile errors, such as syntax errors, that prevent a batch from running.
  • Errors that occur during statement-level recompilation, such as object name resolution errors that occur after compilation because of deferred name resolution.

These errors are returned to the level that ran the batch, stored procedure, or trigger.

    If an error occurs during compilation or statement-level recompilation at a lower execution level (for example, when executing sp_executesql or a user-defined stored procedure) inside the TRY block, the error occurs at a lower level than the TRY…CATCH construct and will be handled by the associated CATCH block.
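The same-level vs lower-level rule can be illustrated with a deferred name resolution error; in this sketch, dbo.NoSuchTable is a deliberately missing object:

```sql
-- Referencing a missing table directly in the TRY block: the name resolution
-- error occurs at the same level as the TRY...CATCH and is NOT caught here;
-- it is returned to whatever ran this batch.
BEGIN TRY
    SELECT * FROM dbo.NoSuchTable;
END TRY
BEGIN CATCH
    PRINT 'not reached for a same-level name resolution error';
END CATCH

-- The same statement executed one level down (via sp_executesql) fails at a
-- lower level than the TRY...CATCH construct, so the CATCH block handles it.
BEGIN TRY
    EXEC sp_executesql N'SELECT * FROM dbo.NoSuchTable;';
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS Err, ERROR_MESSAGE() AS Msg;
END CATCH
```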


Kill Negative SPID -2 in SQL Server 2014

There can be a rare scenario where a database restore or a query is blocked by negative SPIDs when you try to find the lead blocker by running sp_who2 or using Dynamic Management Views. Negative SPIDs cannot be killed with the regular KILL command. This article explains how to kill negative SPIDs in SQL Server and what they mean.

Source: Kill Negative SPID in SQL Server – MyTechMantra.com

I had an ugly case of persistent blocking on a Sunday morning at 7 AM: database blocking with SPID -2 as the lead blocker. SPID -2 is an orphaned MS DTC transaction.

You cannot just do a

KILL -2

I found out as much information as I could about the offending query for blaming purposes 😉, such as which database this was happening in, using the resource_database_id column of sys.dm_tran_locks for request_session_id = -2. Now it’s time to kill the offending orphaned DTC transaction.

Identify the offending transaction:

SELECT DISTINCT request_owner_guid
FROM sys.dm_tran_locks
WHERE request_session_id = -2;
GO

Guess what: I had multiple Unit of Work (UOW) IDs:

00767C6D-5981-4488-B151-43E0A2DC5C31
3AFCD1C4-581E-4F68-A11F-5D8C6B5EABB4
B36C240B-EEF5-41A1-9863-C4DD8C313F8A
D826E956-39ED-45FB-9B57-DBF5DB236644
185E7533-6532-4639-A023-EC0371EE952E

and this is how I killed them all:

KILL '00000000-0000-0000-0000-000000000000'
-- The distributed transaction with UOW {00000000-0000-0000-0000-000000000000} does not exist.
KILL '00767C6D-5981-4488-B151-43E0A2DC5C31'
KILL '3AFCD1C4-581E-4F68-A11F-5D8C6B5EABB4'
KILL 'B36C240B-EEF5-41A1-9863-C4DD8C313F8A'
KILL 'D826E956-39ED-45FB-9B57-DBF5DB236644'
KILL '185E7533-6532-4639-A023-EC0371EE952E'

Once I killed the orphaned -2 transactions, the blocking cleared and the application started behaving normally again.
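Rather than copying each GUID by hand, the KILL statements can be generated from the same DMV; this is a sketch of that approach:

```sql
-- Build one KILL statement per orphaned DTC unit of work held by SPID -2.
SELECT DISTINCT
       'KILL ''' + CAST(request_owner_guid AS VARCHAR(36)) + ''';' AS kill_cmd
FROM sys.dm_tran_locks
WHERE request_session_id = -2;
-- Review the generated statements, then run them in a new query window.
```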

Who deleted my database?

Context: SQL Server RDBMS

Often, in database environments where more than one person possesses the sysadmin role, things can get messy, especially when you realize an object (database, stored procedure, table, etc.) was deleted and you have no idea by whom or when. Unfortunately the SQL Server error log

EXEC xp_readerrorlog

doesn’t do a very good job of showing entries related to deleted objects. There are answers; you just need to know where to look. SQL Server’s default trace comes to your rescue. The T-SQL below helps you identify who deleted the object and when, whether or not you know the database/object name.

SELECT  t.DatabaseID,
        t.NTUserName,
        t.HostName,
        t.LoginName,
        t.StartTime
FROM sys.fn_trace_gettable(CONVERT(VARCHAR(150),
        (SELECT TOP 1 f.[value]
         FROM sys.fn_trace_getinfo(NULL) f
         WHERE f.property = 2)), DEFAULT) t
JOIN sys.trace_events te ON t.EventClass = te.trace_event_id
WHERE te.trace_event_id = 47        -- 47 = Object:Deleted
  AND t.DatabaseName = 'delete'     -- the database name to search
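If you don’t know which database the object lived in, the same default trace can be searched by object name instead; this is a sketch, and MyDroppedTable is a hypothetical name:

```sql
-- Locate the default trace file path (property 2 = trace file name).
DECLARE @path NVARCHAR(260);
SELECT @path = CAST(f.[value] AS NVARCHAR(260))
FROM sys.fn_trace_getinfo(NULL) f
WHERE f.property = 2;

-- Search Object:Deleted events for the object name across all databases.
SELECT t.DatabaseName, t.ObjectName, t.LoginName, t.HostName, t.StartTime
FROM sys.fn_trace_gettable(@path, DEFAULT) t
JOIN sys.trace_events te ON t.EventClass = te.trace_event_id
WHERE te.trace_event_id = 47             -- Object:Deleted
  AND t.ObjectName = 'MyDroppedTable';   -- hypothetical object name
```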

The result will look like this:

[Screenshot: trace results]

Thanks for reading.