Monday 11 July 2016

Removing Execution Plans from the Procedure Cache


To manually remove a single plan or all plans from the cache, we use DBCC FREEPROCCACHE (Transact-SQL).
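For example, you can clear everything or evict one plan at a time (a minimal sketch; the LIKE filter and the placeholder handle are illustrative, not from a real server):

-- Clear the entire plan cache (use with care on a production server)
DBCC FREEPROCCACHE;

-- Or evict a single plan: first find its plan_handle...
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%your query text here%';   -- hypothetical filter

-- ...then pass the returned handle to DBCC FREEPROCCACHE, e.g.
-- DBCC FREEPROCCACHE (0x0600...);              -- paste the full handle here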

Note: If you are not already aware of what an execution plan is, please go through the links below. (The articles may be a bit lengthy, but to this day no one has documented execution plans the way Grant Fritchey did.)

https://www.simple-talk.com/sql/performance/execution-plan-basics/
https://www.simple-talk.com/sql/performance/graphical-execution-plans-for-simple-sql-queries/

How does SQL Server do this for us automatically?

Execution plans remain in the procedure cache as long as there is enough memory to store them.
When memory pressure exists, the Database Engine uses a "COST-BASED" approach to determine which execution plans to remove from the procedure cache.
To make a cost-based decision, the Database Engine increases and decreases a current cost variable for each execution plan according to the following factors.

1) When a user process inserts an execution plan (a new query) into the cache, the user process sets the current cost equal to the original query compile cost;
for ad-hoc execution plans (plans compiled for ad-hoc Transact-SQL batches), the user process sets the current cost to zero. Thereafter, each time a user process references an execution plan, it resets the current cost to the original compile cost;
for ad-hoc execution plans the user process increases the current cost. For all plans, the maximum value for the current cost is the original compile cost. (You can observe these costs with the query sketched after this list.)

2) When memory pressure exists, the Database Engine responds by removing execution plans from the procedure cache.
To determine which plans to remove, the Database Engine repeatedly examines the state of each execution plan and removes plans when their current cost is zero.
An execution plan with zero current cost is not removed automatically when memory pressure exists; it is removed only when the Database Engine examines the plan and the current cost is zero.
When examining an execution plan, the Database Engine pushes the current cost towards zero by decreasing the current cost if a query is not currently using the plan.

3) The Database Engine repeatedly examines the execution plans until enough have been removed to satisfy memory requirements.
While memory pressure exists, an execution plan may have its cost increased and decreased more than once.
When memory pressure no longer exists, the Database Engine stops decreasing the current cost of unused execution plans and all execution plans remain in the procedure cache, even if their cost is zero.

4) The Database Engine uses the resource monitor and user threads to free memory from the procedure cache in response to memory pressure.
The resource monitor and user threads can examine plans concurrently to decrease the current cost of each unused execution plan.
The resource monitor removes execution plans from the procedure cache when global memory pressure exists. It frees memory to enforce policies for system memory, process memory, resource pool memory, and maximum size for all caches.
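If you want to see the original and current cost values described above, they are exposed in sys.dm_os_memory_cache_entries; the query below is a sketch (it assumes VIEW SERVER STATE permission, and joins on memory_object_address, a technique commonly used for exactly this purpose):

-- Observe how plan costs rise and decay in the two plan cache stores
SELECT st.text, cp.objtype, cp.usecounts,
       ce.original_cost, ce.current_cost
FROM sys.dm_exec_cached_plans AS cp
JOIN sys.dm_os_memory_cache_entries AS ce
    ON cp.memory_object_address = ce.memory_object_address
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.cacheobjtype = 'Compiled Plan'
  AND ce.type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
ORDER BY ce.current_cost DESC;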

The following examples illustrate which execution plans get removed from the procedure cache:

An execution plan is frequently referenced so that its cost never goes to zero. The plan remains in the procedure cache and is not removed unless there is memory pressure and the current cost is zero.

An ad-hoc execution plan is inserted and is not referenced again before memory pressure exists. Since ad-hoc plans are initialized with a current cost of zero, when the database engine examines the execution plan, it will see the zero current cost and remove the plan from the procedure cache. The ad-hoc execution plan remains in the procedure cache with a zero current cost when memory pressure does not exist.

Where can I see plan cache information?

dbcc memorystatus; -- (requires sysadmin privileges)
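If you prefer the DMVs over DBCC MEMORYSTATUS, here is a quick sketch (requires VIEW SERVER STATE; the pages_kb column assumes SQL Server 2012 or later):

-- Memory used by the two plan cache stores
SELECT type, SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
WHERE type IN ('CACHESTORE_SQLCP',    -- ad-hoc and prepared plans
               'CACHESTORE_OBJCP')    -- stored procedure, trigger, and function plans
GROUP BY type;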

The much-awaited ability to persist execution plan information (even across SQL Server instance restarts) arrives in SQL Server 2016 as the Query Store.
I will cover the Query Store features in the next article.

Monday 4 July 2016

Delayed Durability in SQL Server 2014



Firstly, what is Delayed Durability?

We are all aware of the ACID properties; the last letter, "D" (Durability), is what brings up this topic.

In SQL Server, changes to data are written to the log first. This is called write ahead logging (WAL).
Control isn't returned to the application until the log record has been written to disk (a process referred to as "hardening").
Delayed durability allows you to return control back to the application before the log is hardened.
This can speed up transactions if you have issues with log performance. Nothing is free, though, and here you sacrifice recoverability.
Should the database go down before the log is hardened to disk, you lose those transactions forever.

History behind Transaction Commit:

Whenever we commit a transaction (SQL Server uses autocommit by default), the log buffer is the first thing flushed to disk, even before the modified data (sitting in the data buffer) is written to the physical data files.
The transaction's locks cannot be released until the log flush completes; on completion of the flush, all locks associated with the transaction are released.
So once the log buffer entries have been written to the physical log files, the transaction attains the final ACID property: Durability.

Normal Transaction vs Delayed Durability Transaction 

Under normal circumstances, when a transaction commits, the commit doesn't complete until the log block for the transaction has been flushed to disk.
With delayed durability, the transaction is considered complete even before the log flush occurs.
Hence other transactions can acquire the locks held by the current transaction sooner.

Scenario:

Think of a workload in which all the other transactions are waiting for the one that is committing, because they all need the same locks; in this case, Transactions/sec is tied to Log Flushes/sec.

With delayed durability, the transaction commit proceeds without the log block flush occurring – hence the act of making the transaction durable is delayed.
Under delayed durability, log blocks are only flushed to disk when they reach their maximum size of 60KB.
This means that transactions commit a lot faster, hold their locks for less time, and so Transactions/sec increases greatly (for this workload).
Log Flushes/sec decreases greatly as well: instead of flushing lots of tiny log blocks, the engine now flushes only maximum-sized log blocks.
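You can watch these two counters yourself through sys.dm_os_performance_counters (a sketch; 'YourDatabase' is a placeholder, and the values are cumulative, so sample twice and take the difference to get a per-second rate):

SELECT instance_name AS database_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name IN ('Transactions/sec', 'Log Flushes/sec')
  AND instance_name = N'YourDatabase';   -- hypothetical database name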

Advantages/Benefits: 

By enabling delayed durability, the number of transactions per second can be greatly improved,
since other transactions no longer need to wait for the current transaction to be logged.

Disadvantages: 

Your transactions aren't durable when they commit. If the system crashes, you will lose any transactions
that were still in the log buffer (even though they were committed).
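One partial mitigation worth knowing: since SQL Server 2014 you can force all outstanding delayed durable transactions to be hardened to disk, for example just before a planned shutdown:

-- Flush any outstanding delayed durable log records to disk
EXEC sys.sp_flush_log;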

Key Notes:

Delayed Durability can be enabled at database level, COMMIT level, or ATOMIC block level in Natively Compiled Stored Procedures.
For more details please refer: https://msdn.microsoft.com/en-us/library/dn449490.aspx
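As a quick sketch of the database-level and COMMIT-level options ('YourDatabase' is a placeholder):

-- Allow individual transactions to opt in to delayed durability
-- (other options: FORCED, DISABLED)
ALTER DATABASE YourDatabase SET DELAYED_DURABILITY = ALLOWED;

-- Opt in at COMMIT level
BEGIN TRANSACTION;
    -- ... your DML here ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);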