Tuesday 1 November 2016

Filtered Index in SQL Server

Is it possible to create a unique constraint/unique index on a column which already has duplicate values?

The answer is "YES", with the help of a filtered index.

In general, when we create an index, every value of the column is moved to the index pages, whereas with a filtered index (since we use a WHERE clause to filter the data) only the rows that match the filter condition are moved to the index pages.

create unique index IX_UNIQUE_TABLE_TEST_COL1 on TABLE_TEST(COL1) 
where id > 1000;

The above statement enforces uniqueness on COL1 only for the rows whose ID is greater than 1000 (assume that the duplicate values exist only up to ID = 1000).

If we are already aware of the duplicate values, then we can do something like the below:

create unique index IX_UNIQUE_TABLE_TEST_COL1 on TABLE_TEST(COL1) 
where ID not in (<list of duplicate values>)
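The behaviour can be sketched outside SQL Server too: SQLite supports the same concept as a partial (filtered) unique index, so here is a quick demonstration (the table and column names are made up for illustration, mirroring the TABLE_TEST example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_test (id INTEGER PRIMARY KEY, col1 TEXT)")

# Duplicate COL1 values already exist below the threshold (id <= 1000).
conn.executemany("INSERT INTO table_test VALUES (?, ?)",
                 [(1, "A"), (2, "A"), (1001, "B")])

# Filtered (partial) unique index: uniqueness is enforced only where id > 1000.
conn.execute("CREATE UNIQUE INDEX ix_unique_col1 ON table_test(col1) WHERE id > 1000")

# Another duplicate below the filter is still accepted...
conn.execute("INSERT INTO table_test VALUES (3, 'A')")

# ...but a duplicate above the filter is rejected by the unique index.
try:
    conn.execute("INSERT INTO table_test VALUES (1002, 'B')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # True
```

The engine differs, but the semantics match: existing duplicates outside the filter do not block index creation, and new rows are checked only when they fall inside the filter.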

Monday 24 October 2016

SQL Service Restarts Time/TempDB Creation Time

As we all know, tempdb is recreated every time the SQL Server service restarts.

How do we find the timestamp of the last SQL Server service restart (i.e. the tempdb creation time) through a query?

1) Using Dynamic Management View: (Not everyone will have this permission)

SELECT SQLSERVER_START_TIME FROM SYS.DM_OS_SYS_INFO

2) Anyone with read access can use the below query.

SELECT CREATE_DATE AS LAST_RESTART_TIME
FROM SYS.DATABASES
WHERE  NAME = 'tempdb'

(or)

SELECT CRDATE AS LAST_RESTART_TIME
FROM MASTER.DBO.SYSDATABASES
WHERE NAME = 'tempdb'

Thursday 20 October 2016

Retrieve Error Message in SSIS

Is there a way to capture, in a string variable, the error message shown in the Execution Results tab of a package?
Assume you encountered an error in your SSIS package and now need to log that error into your own table (other than the SSIS catalog DBs). You can create an Execute SQL Task with a parameterized INSERT script. The parameter you need to map in the Parameter Mapping tab is System::ErrorDescription.
INSERT INTO LOG_TABLE(ID,ERROR,ERROR_TIMESTAMP) VALUES(1,?,GETDATE());
Note: You can utilize other system variables like PackageName, MachineName, etc. to log into the table.
Please refer to the screenshot below:

Friday 30 September 2016

MySQL Stored Procedure and Error Handling Construct

/*Changing Default Delimiter from ; to $$ for creating stored procedures.*/

DELIMITER $$ 
DROP PROCEDURE IF EXISTS SP_PD_CREATE_EMPLOYEE$$

CREATE PROCEDURE SP_PD_CREATE_EMPLOYEE()
BEGIN
-- Variable Declaration goes here.

DECLARE V_EMP_CITY VARCHAR(20);
DECLARE V_EMP_ID BIGINT;  

/*
(Note: Variables must be declared before handlers, and assignments to variables should come only after the handler declarations.)
*/

DECLARE exit handler for sqlexception
  BEGIN
    -- ERROR
  ROLLBACK;
END;
  
DECLARE exit handler for sqlwarning
 BEGIN
    -- WARNING
 ROLLBACK;
END;

START TRANSACTION;

SET V_EMP_CITY  = 'CHENNAI';

  INSERT INTO table_name (id, name, address) values ('1','Avinash','xpertdeveloper.com');
  UPDATE second_table set name="xyz" where id=4;
COMMIT;
END $$

DELIMITER ; -- Switch to Default Delimiter ; from $$



-- Error handling - rollback, capturing error messages and logging them to a table.
-- (The variables used by the handler must be declared first.)

DECLARE code CHAR(5) DEFAULT '00000';
DECLARE msg TEXT;
DECLARE ERRNO INT;
DECLARE FORMATTED_ERR_MSG TEXT;

DECLARE EXIT HANDLER FOR SQLEXCEPTION -- ERROR HANDLING STARTS HERE (CATCH BLOCK)
  
BEGIN

ROLLBACK;
    
GET DIAGNOSTICS CONDITION 1
code = RETURNED_SQLSTATE, msg = MESSAGE_TEXT, ERRNO = MYSQL_ERRNO;
    
SET FORMATTED_ERR_MSG = CONCAT('[SQLSTATE : ',code,'] [MYSQL_ERRNO : ',ERRNO ,'] MESSAGE : ',msg);

 /*Log Fail Status; */
    
INSERT INTO LOG(ERROR_MSG) VALUES(FORMATTED_ERR_MSG);

END;
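To make the handler output concrete, here is what the CONCAT above produces, sketched in Python with sample (made-up) diagnostics values:

```python
def formatted_err_msg(sqlstate, errno, message):
    # Mirrors: CONCAT('[SQLSTATE : ', code, '] [MYSQL_ERRNO : ', ERRNO, '] MESSAGE : ', msg)
    return ("[SQLSTATE : " + sqlstate + "] [MYSQL_ERRNO : " + str(errno)
            + "] MESSAGE : " + message)

msg = formatted_err_msg("23000", 1062, "Duplicate entry '1' for key 'PRIMARY'")
print(msg)
# [SQLSTATE : 23000] [MYSQL_ERRNO : 1062] MESSAGE : Duplicate entry '1' for key 'PRIMARY'
```

One line per failure in the log table, with SQLSTATE, error number and message text all in one place.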

Thursday 22 September 2016

Move Multiple Files from Source to Destination using Execute Process Task

When multiple files have to be moved from source to destination, we might use a ForEachLoop Container with a File System Task in it (the ForEachLoop Container iterates and moves one file at a time to the destination location using the File System Task).

There is another way, using the Execute Process Task, through which we can move all files at once from source to destination without any iteration logic.

The Execute Process Task is the SSIS task in which we can run Windows command-line (DOS) commands, just as we would in the cmd prompt.

In cmd, we would move files from one location to another as below:

move "F:\Entertainment\*.*"  "D:\MyFolder\Entertainment\"

*.* matches any file name with any extension.
We can add filters by placing wildcard characters as per our needs.
For example:
moving only movies whose names start with the character "S" and are of type mkv from
F:\Entertainment to D:\MyFolder\Entertainment can be achieved as below:

move "F:\Entertainment\S*.mkv"  "D:\MyFolder\Entertainment\"

In a similar way, in SSIS we write these DOS commands in the Execute Process Task.
Below is a screenshot of the Execute Process Task for reference.



First, switch to the Process tab and paste the below path into the Executable field:

C:\Windows\System32\cmd.exe

Second, write the DOS command for our requirement (moving files, in our case) in the Arguments field:

/c move /y "F:\Entertainment\S*.mkv"  "D:\MyFolder\Entertainment\"

/c – required when running Windows commands from the Execute Process Task.

/y – suppresses Windows prompts such as "Do you want to overwrite an existing destination file?".

Now we are all set to execute the task and move multiple files from one place to another using the Execute Process Task.
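The move-with-wildcard behaviour can be sketched in Python as well, which is handy for testing a pattern before putting it into the task (the paths here are temporary directories created just for the demonstration, standing in for the folders above):

```python
import glob
import os
import shutil
import tempfile

src = tempfile.mkdtemp()   # stands in for F:\Entertainment
dst = tempfile.mkdtemp()   # stands in for D:\MyFolder\Entertainment
for name in ("Sample.mkv", "Other.mkv", "Song.mp3"):
    open(os.path.join(src, name), "w").close()

# Equivalent of: move "SRC\S*.mkv" "DST\"
for path in glob.glob(os.path.join(src, "S*.mkv")):
    shutil.move(path, os.path.join(dst, os.path.basename(path)))

print(sorted(os.listdir(dst)))  # ['Sample.mkv']
```

Only the file matching S*.mkv moves; Song.mp3 starts with "S" but fails the extension part of the pattern, and Other.mkv fails the prefix.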

Additional Comments:

If you want to make the same configurable through expressions, use the below construct:

Create two SSIS variables for storing SRC and DEST location as follows:

@[User::V_FILE_SRC_LOCATION] = F:\Entertainment
@[User::V_FILE_DEST_LOCATION] = D:\MyFolder\Entertainment

Switch to Expressions tab and paste the below in Arguments field:

"C:\\Windows\\System32\\cmd.exe /c move /y \"" +  @[User::V_FILE_SRC_LOCATION]  + "\\S*.mkv\"" + " \""+ @[User::V_FILE_DEST_LOCATION] + "\\\""
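Because the SSIS expression language escapes backslashes as \\ and embedded quotes as \", it is easy to lose track of the final text. This Python sketch rebuilds the same Arguments value (with the sample folder values above as stand-ins for the two variables) to show what the expression evaluates to:

```python
# Stand-ins for @[User::V_FILE_SRC_LOCATION] / @[User::V_FILE_DEST_LOCATION]
src = "F:\\Entertainment"
dst = "D:\\MyFolder\\Entertainment"

args = ('C:\\Windows\\System32\\cmd.exe /c move /y "'
        + src + '\\S*.mkv" "' + dst + '\\"')
print(args)
# C:\Windows\System32\cmd.exe /c move /y "F:\Entertainment\S*.mkv" "D:\MyFolder\Entertainment\"
```

The evaluated string is exactly the command we typed manually earlier, with the two paths substituted in.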




Thursday 8 September 2016

SSIS Run64BitRunTime

I created a simple package which transfers data (table data) from an Execute SQL Task to a Flat File Destination.

While creating the OLEDB connection manager, I ensured I could reach the server by clicking "Test Connection"; it connected successfully.

When I then tried to execute the package, I came across the below error:

Source: Start Execute SQL Task 
   Description: Failed to acquire connection "<Connection Name>". Connection may not be configured correctly or you may not have the right permissions on this connection.

The solution I found was to change the Debug option Run64BitRunTime to False (by default it is set to True).
How to reach the below screen:
right-click the package name -> Properties -> Debugging (under Configuration Properties).
Please refer to the screenshot below:



The error generated has no obvious relation to the solution, but I am still documenting it here so the fix can be found in future searches.

Thursday 1 September 2016

Read and Write Object Variable in SSIS


Please consider the below scenario:



The aim is to load the Excel sheet data into an RDBMS table purely through SSIS
(i.e. there is a limitation that we cannot create any staging tables for separating the comma-separated values).

1) Create two variables (User::V_SOURCE_DATA, User::V_DEST_DATA) of Object type in SSIS.

2) I directed all the rows read from the Excel sheet into a Recordset Destination.
So now all my source data is captured in the object named User::V_SOURCE_DATA.

3) Now I am going to create a simple C# script which reads data from the object (User::V_SOURCE_DATA), splits the HOBBIES column on ","
and populates the result into the object (User::V_DEST_DATA).

using System.Data;
using System.Data.OleDb;

// Create a data table holding the output format, with two columns (EMP_ID, HOBBIES).

DataTable dest = new DataTable();
dest.Columns.Add("EMP_ID");
dest.Columns.Add("HOBBIES");

// Read the source data from the SSIS object variable.

DataTable dt = new DataTable();
OleDbDataAdapter adapter = new OleDbDataAdapter();
adapter.Fill(dt, Dts.Variables["User::V_SOURCE_DATA"].Value);

string emp_id = "";
string hobbies_str = "";

foreach (DataRow row in dt.Rows)
{
    emp_id = row[0].ToString();
    hobbies_str = row[1].ToString();

    // MessageBox.Show("EMP_ID = " + emp_id + "; HOBBIES = " + hobbies_str);

    string[] hobbies_str_split = hobbies_str.Split(',');

    foreach (string item in hobbies_str_split)
    {
        // Fill the new DataTable with EMP_ID and one hobby per row.
        DataRow _fill_data = dest.NewRow();
        _fill_data["EMP_ID"] = emp_id;
        _fill_data["HOBBIES"] = item;
        dest.Rows.Add(_fill_data);
    }
}

Dts.Variables["User::V_DEST_DATA"].Value = dest;

4) Now the output data is available in the object named User::V_DEST_DATA.

5) At last, the data from the object is read using a ForEachLoop Container and passed to the destination using an Execute SQL Task INSERT script.

Note: When reading from the object, map the output column values to Object-type variables, else a datatype mismatch problem will arise.

INSERT INTO EMP_HOBBY(EMP_ID,HOBBY) VALUES (?,?);
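For clarity, the core split-to-rows logic of the script can be sketched language-independently; here in Python with made-up sample rows standing in for User::V_SOURCE_DATA:

```python
# Sample source rows as (EMP_ID, HOBBIES) pairs, mirroring the recordset.
source = [("1", "Cricket,Chess"), ("2", "Reading")]

dest = []
for emp_id, hobbies in source:
    for hobby in hobbies.split(","):
        dest.append((emp_id, hobby))  # one (EMP_ID, HOBBY) row per value

print(dest)
# [('1', 'Cricket'), ('1', 'Chess'), ('2', 'Reading')]
```

Each comma-separated value becomes its own destination row, keyed by the same EMP_ID, which is exactly what the nested foreach in the C# script does.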

Monday 29 August 2016

Refresh Table Changes made on DB to SSIS package Data Flow Items

Recently I came across a situation where additional columns had to be added to an existing table; the change then needs to be reflected in the SSIS packages as well, for the tasks which make use of that table.

In order to do that: right-click the component -> Show Advanced Editor... -> click Refresh at the bottom-left corner of the Show Advanced Editor window. Please refer to the screenshot below.


Thursday 18 August 2016

NULL values in field(s) or row(s) handling in SSIS Expression

Use the below construct to convert your field from empty string to NULL.

ColumnName == "" ? NULL(DT_WSTR,50) : ColumnName


I have used the above construct in my Derived Column Transformation editor as follows:



To eliminate NULL rows, we can check any particular column in a Conditional Split as below:

ISNULL(FUND_ID) ==  FALSE 






Monday 11 July 2016

Removing Execution Plans from the Procedure Cache


To manually remove a single plan or all plans from the cache, we use DBCC FREEPROCCACHE (Transact-SQL).

Note: For those who are not aware of what an execution plan is, please go through the below links (the articles may be a bit lengthy, but to this day no one has documented the topic the way Grant Fritchey did):

https://www.simple-talk.com/sql/performance/execution-plan-basics/
https://www.simple-talk.com/sql/performance/graphical-execution-plans-for-simple-sql-queries/

How does SQL Server do this automatically for us?

Execution plans remain in the procedure cache as long as there is enough memory to store them.
When memory pressure exists, the Database Engine uses a "COST-BASED" approach to determine which execution plans to remove from the procedure cache.
To make a cost-based decision, the Database Engine increases and decreases a current cost variable for each execution plan according to the following factors.

1) When a user process inserts an execution plan (a new query) into the cache, it sets the current cost equal to the original query compile cost;
for ad-hoc execution plans, the user process sets the current cost to zero. Thereafter, each time a user process references an execution plan, it resets the current cost to the original compile cost;
for ad-hoc execution plans, the user process increases the current cost. For all plans, the maximum value for the current cost is the original compile cost.

2) When memory pressure exists, the Database Engine responds by removing execution plans from the procedure cache.
To determine which plans to remove, the Database Engine repeatedly examines the state of each execution plan and removes plans when their current cost is zero.
An execution plan with zero current cost is not removed automatically when memory pressure exists; it is removed only when the Database Engine examines the plan and the current cost is zero.
When examining an execution plan, the Database Engine pushes the current cost towards zero by decreasing the current cost if a query is not currently using the plan.

3) The Database Engine repeatedly examines the execution plans until enough have been removed to satisfy memory requirements.
While memory pressure exists, an execution plan may have its cost increased and decreased more than once.
When memory pressure no longer exists, the Database Engine stops decreasing the current cost of unused execution plans and all execution plans remain in the procedure cache, even if their cost is zero.

4) The Database Engine uses the resource monitor and user threads to free memory from the procedure cache in response to memory pressure.
The resource monitor and user threads can examine plans run concurrently to decrease the current cost for each unused execution plan.
The resource monitor removes execution plans from the procedure cache when global memory pressure exists. It frees memory to enforce policies for system memory, process memory, resource pool memory, and maximum size for all caches.

The following examples illustrate which execution plans get removed from the procedure cache:

An execution plan is frequently referenced so that its cost never goes to zero. The plan remains in the procedure cache and is not removed unless there is memory pressure and the current cost is zero.

An ad-hoc execution plan is inserted and is not referenced again before memory pressure exists. Since ad-hoc plans are initialized with a current cost of zero, when the database engine examines the execution plan, it will see the zero current cost and remove the plan from the procedure cache. The ad-hoc execution plan remains in the procedure cache with a zero current cost when memory pressure does not exist.
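The cost-based eviction described above can be modelled with a toy sketch (the plan names, costs and single-step decay are simplifications for illustration, not the real engine behaviour):

```python
def examine_pass(plans):
    """One sweep over the cache under memory pressure:
    evict plans whose current cost is zero, then decay the rest
    (assuming none of them are currently in use)."""
    evicted = [name for name, cost in plans.items() if cost == 0]
    for name in evicted:
        del plans[name]
    for name in plans:
        plans[name] -= 1
    return evicted

# An ad-hoc plan starts at cost 0; a compiled plan starts at its compile cost.
plans = {"adhoc_plan": 0, "stored_proc_plan": 2}

first = examine_pass(plans)   # the ad-hoc plan goes immediately
second = examine_pass(plans)  # nothing evicted yet; the proc plan decays
third = examine_pass(plans)   # the proc plan finally reaches zero and goes
print(first, second, third)   # ['adhoc_plan'] [] ['stored_proc_plan']
```

This mirrors the two examples: the unreferenced ad-hoc plan is removed as soon as it is examined, while the costlier plan survives several passes before its cost decays to zero.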

Where can I see plan cache information?

dbcc memorystatus; -- (requires sysadmin privileges)

The most advanced and much-awaited feature for persisting execution plans (even across SQL instance restarts), the Query Store, was introduced in SQL Server 2016.
I will cover the Query Store features in the next article.

Monday 4 July 2016

Delayed Durability in SQL Server 2014



Firstly what is Delayed Durability?

We are all aware of the ACID properties; the last letter, "D" (Durability), is what brings up this topic.

In SQL Server, changes to data are written to the log first. This is called write ahead logging (WAL).
Control isn't returned to the application until the log record has been written to disk (a process referred to as "hardening").
Delayed durability allows you to return control back to the application before the log is hardened.
This can speed up transactions if you have issues with log performance. Nothing is free, though, and here you sacrifice recoverability.
Should the database go down before the log is committed to disk, then you lose those transactions forever.

History behind Transaction Commit:

Whenever we commit a transaction (SQL Server is auto-commit by default), the log buffer is flushed to disk first,
even before the actual data (present in the data buffer) reaches the physical disk. The transaction's locks cannot be released until the log flush completes.
So once the log buffer entries are written to the physical log files, the transaction attains the final property of Durability.

Normal Transaction vs Delayed Durability Transaction 

Under normal circumstances, when a transaction commits, the commit doesn’t complete until the log block for the transaction has been flushed to disk.
Whereas in the case of delayed durability transactions are considered to be complete, even before log flush occurs.
Hence other transactions can acquire locks held by current transaction.

Scenario:

Think of a workload where all the other transactions are waiting for the one that is committing, as they all need the same locks; Transactions/sec is tied to Log Flushes/sec in this case.

With delayed durability, the transaction commit proceeds without the log block flush occurring – hence the act of making the transaction durable is delayed.
Under delayed durability, log blocks are only flushed to disk when they reach their maximum size of 60KB.
This means that transactions commit a lot faster, hold their locks for less time, and so Transactions/sec increases greatly (for this workload).
You can also see that the Log Flushes/sec decreased greatly as well, as previously it was flushing lots of tiny log blocks and then changed to only flush maximum-sized log blocks.

Advantages/Benefits: 

By enabling delayed durability, the number of transactions per second improves greatly,
since other transactions don't need to wait for the current transaction to be logged.

Disadvantages: 

Your transactions aren't durable when they commit. If the system crashes, we end up losing the transactions (though they are committed)
that are still in the log buffer.

Key Notes:

Delayed Durability can be enabled at database level, COMMIT level, or ATOMIC block level in Natively Compiled Stored Procedures.
For more details please refer: https://msdn.microsoft.com/en-us/library/dn449490.aspx

Thursday 12 May 2016

How to save query results into an Excel file with column names in MySQL


Include headers when using SELECT INTO OUTFILE?


If you ever tried the above scenario, your solution is here.


If you are using MySQL Workbench, you can achieve this with the help of the Export/Import icon in the query output window.

Please refer to the image below for the same.

Execute the query whose results you would like to extract, then click the highlighted icon shown above.

Browse to the desired path and choose Excel Spreadsheet as the Save-as type in the file dialog box.


Tuesday 10 May 2016

Snippets in SQL Server Management Studio(SSMS)


SQL Server Management Studio offers simple snippets which help to create database objects with ease, without bothering much about the syntax of each object.

Snippets can be included in the query window by browsing through
Edit -> IntelliSense -> Insert Snippet... (keyboard shortcut Ctrl+K, Ctrl+X)

(or)

Edit -> IntelliSense -> Surround With... (keyboard shortcut Ctrl+K, Ctrl+S)

Please have a look at the picture below for details of the above mentioned keyboard shortcuts.

The above picture shows the Begin End, If and While block constructs, which you can include anywhere in the query window just by hitting the keyboard shortcut Ctrl+K, Ctrl+S.




The above picture shows the object creation templates (a table, in this case). By clicking on any of the listed objects, the respective construct is inserted for the user to edit.

Wednesday 27 April 2016

Drop all temp tables(local and global) of a session in SQL Server.


The below code is helpful when you are not aware of the temp tables created in your current session,
for example when you debug a procedure that doesn't have its own DROP and CREATE constructs.



DECLARE @DropGlobal bit=0 --Default: don't drop global temp tables
DECLARE @DROP_STATEMENT nvarchar(1000)
DECLARE cursorDEL CURSOR FOR
SELECT 'DROP TABLE '
   + case
           when name like '##%' then name
           when name like '#%' then SUBSTRING(name, 1, CHARINDEX( '____', name)-1)
    end as DropSQL
from tempdb..sysobjects
WHERE name LIKE '#%'
   AND OBJECT_ID('tempdb..' + name) IS NOT NULL
   AND name not like case
                       when @DropGlobal=0 then '##%' --//Exclude global temp
                       else '#######%'    --//dummy pattern that matches nothing,
                                           --//so global temp tables are also
                                           --//selected for delete
                    end

   --//Even though we have selected all records from sysobjects,
   --//one can access only the temp tables created by the same
   --//connection executing this code.

OPEN cursorDEL
FETCH NEXT FROM cursorDEL INTO @DROP_STATEMENT
WHILE @@FETCH_STATUS = 0
BEGIN
--EXEC (@DROP_STATEMENT)
print @DROP_STATEMENT
FETCH NEXT FROM cursorDEL INTO @DROP_STATEMENT
END
CLOSE cursorDEL
DEALLOCATE cursorDEL
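The SUBSTRING/CHARINDEX trick above relies on local temp table names in tempdb being padded with underscores before a hex suffix. A small Python sketch of the same stripping, using a fabricated name of that shape:

```python
# A local temp table named #MyTemp appears in tempdb..sysobjects roughly as
# the visible name, underscore padding, then a hex suffix (fabricated here):
name = "#MyTemp" + "_" * 40 + "00000000001F"

# SUBSTRING(name, 1, CHARINDEX('____', name) - 1) keeps everything
# before the first run of underscores:
visible = name[:name.index("____")]
print(visible)  # #MyTemp
```

That recovered visible name is what gets concatenated into the DROP TABLE statement.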




Monday 25 April 2016

Find Dependent objects related to TableName in SQL Server.


The below query will fetch the list of dependent objects (stored procedures, views or functions) for the given table name:


SELECT B.NAME as OBJECT_NAME,A.[DEFINITION],B.TYPE_DESC
FROM SYS.SQL_MODULES A
JOIN SYS.OBJECTS B
ON A.OBJECT_ID = B.OBJECT_ID
WHERE A.DEFINITION LIKE '%F_PARTY_TRANCHE_ROLE%'












Note: sys.syscomments is a deprecated system view, so in the above query I used sys.sql_modules, which Microsoft recommends for future development.


Friday 25 March 2016

Parent Package Variable in SSIS


There are situations wherein a value assigned to a variable in one package needs to be used in another package.

There exists a way in SSIS to access a variable defined in one (parent) package and use it in another (child) package.

Just visualize a scenario where you have a parent package and "n" child packages. Through Execute Package Tasks you call all the children from the parent.

All child packages should get values from the parent package (values such as a server name or DB name).

When we want to make use of variable values defined and assigned in another package (let's say the parent), the place we need to target is the child package.

Configuration Steps as follows:

1) Go to the (child) package, right-click anywhere in the Control Flow designer area and choose Package Configurations.

2) The "Package Configurations Organizer" window will pop up; tick the "Enable package configurations" check-box, which enables the "Add" button. Click Add.

3) The Package Configuration Wizard (welcome page) will launch; just click Next. (You may or may not come across this welcome page.)

4) When you click Next, the Select Configuration Type page appears.

5) From the Configuration type drop-down (which defaults to "XML configuration file"), choose "Parent package variable".

6) Now we have reached the most important step of this article, which is defining the parent variable.

7) Here we need to give the name of the variable from which we are going to pull the value (the parent package variable name).
   Note: The variable name is case sensitive, hence you need to use the exact name defined in the parent package. Now we are all set to assign the value to the variable in the child package.

8) Click Next; the Select Properties to Export page will appear.
9) Click on the variable name, expand Properties and click on the "Value" property.
10) Click Next; the completion page will summarize the configuration. Click Finish.

Now whatever value assigned to Parent Package Variable will be assigned to Child Package during run time.

The best example I can think of: say you developed packages in the dev environment (with OLEDB connections in the parent and all children). When the time comes to move to UAT or SIT, instead of changing the OLEDB connection in each and every child package, we change it in the parent package and it is reflected in the child packages (provided your server name and DB name are configurable through expressions with variables).

Friday 11 March 2016

How to rename files in SSIS using Script task?


Input Filenames(under my root path folder): 

DW_DATA_MAS_UNIT_03112016
DW_DATA_MAS_PARCEL_03112016
DW_DATA_TRAN_ACCOUNT_03112016
DW_DATA_TRAN_BANKDETAILS_03112016
DW_DATA_CH_ACCDETAILS_03112016

Output (in the same root path folder): I segregated master, transaction and child tables based on the keywords (MAS, TRAN, CH) present in the input file names.

MASTER_DW_DATA_MAS_UNIT_03112016
MASTER_DW_DATA_MAS_PARCEL_03112016
TRAN_DW_DATA_TRAN_ACCOUNT_03112016
TRAN_DW_DATA_TRAN_BANKDETAILS_03112016
CHILD_DW_DATA_CH_ACCDETAILS_03112016

Code to achieve the same:

using System.IO;

string rootfolderPath = Dts.Variables["User::SOURCE_PATH"].Value.ToString();

string[] files = Directory.GetFiles(rootfolderPath, "*.xml", SearchOption.AllDirectories);

foreach (string s in files)
{
    string trgt = "";
    if (Path.GetFileName(s).Contains("MAS_"))
    {
        trgt = Path.GetDirectoryName(s) + "\\MASTER_" + Path.GetFileName(s);
    }
    else if (Path.GetFileName(s).Contains("TRAN_"))
    {
        trgt = Path.GetDirectoryName(s) + "\\TRAN_" + Path.GetFileName(s);
    }
    else if (Path.GetFileName(s).Contains("CH_"))
    {
        trgt = Path.GetDirectoryName(s) + "\\CHILD_" + Path.GetFileName(s);
    }

    // Rename only when a keyword matched and no file exists at the target path.
    if (trgt != "" && !File.Exists(trgt))
    {
        File.Move(s, trgt);
    }
}

Explanation: 

Firstly, add a variable (to store the root path of the files location) and a Script Task (to rename the files using C#.NET).

1)
Change the variable datatype to String and assign it with a RootFolder path value.

For Ex: C:\Users\kn\Documents\NewFolder

2) 
Make the variable ReadOnly in the Script Task.

 string rootfolderPath = Dts.Variables["User::SOURCE_PATH"].Value.ToString();

The above line copies the root path into a string variable rootfolderPath (the value is pulled from the SSIS variable).

string[] files = Directory.GetFiles(rootfolderPath, "*.xml", SearchOption.AllDirectories);

The above line reads all files of the specified type (in the root folder and its subfolders) and stores them in an array called "files".

3)
The next step is to loop through each file in the array and build its new name based on certain conditions.

foreach (string s in files)
{
    string trgt = "";
    if (Path.GetFileName(s).Contains("MAS_"))
    {
        trgt = Path.GetDirectoryName(s) + "\\MASTER_" + Path.GetFileName(s);
    }
    else if (Path.GetFileName(s).Contains("TRAN_"))
    {
        trgt = Path.GetDirectoryName(s) + "\\TRAN_" + Path.GetFileName(s);
    }
    else if (Path.GetFileName(s).Contains("CH_"))
    {
        trgt = Path.GetDirectoryName(s) + "\\CHILD_" + Path.GetFileName(s);
    }
}

Example : 

Before Change : C:\Users\kn\Documents\NewFolder\DW_DATA_MAS_UNIT_03112016.txt
After  Change : C:\Users\kn\Documents\NewFolder\MASTER_DW_DATA_MAS_UNIT_03112016.txt

Note: 
Path.GetDirectoryName(C:\Users\kn\Documents\NewFolder\DW_DATA_MAS_UNIT_03112016.txt) will return the file Parent Directory i.e C:\Users\kn\Documents\NewFolder.

Path.GetFileName(C:\Users\kn\Documents\NewFolder\DW_DATA_MAS_UNIT_03112016.txt) will return file name from fully qualified path i.e DW_DATA_MAS_UNIT_03112016.txt

Variable "s" holds the existing filename and the variable "trgt" holds the new filename.

Now with the help of methods under File Class we can rename file.

if (!File.Exists(trgt))
{
    File.Move(s, trgt);
}

The above code renames the file after ensuring that no file already exists at the target path.

Note : 
File.Exists(destination_path)
File.Move(source_path,destination_path)

Same can be done in SSIS using FOR EACH LOOP Container, File System Task and Expression Task as well.
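The keyword-based rename can also be sketched and tested outside SSIS. Here is a Python version of the same logic (the file names and the temporary folder are made up for the demonstration):

```python
import os
import tempfile

# Keyword -> prefix mapping, as in the C# script above.
prefix_map = {"MAS_": "MASTER_", "TRAN_": "TRAN_", "CH_": "CHILD_"}

root = tempfile.mkdtemp()
for name in ("DW_DATA_MAS_UNIT_03112016.txt", "DW_DATA_CH_ACCDETAILS_03112016.txt"):
    open(os.path.join(root, name), "w").close()

for fname in os.listdir(root):
    for keyword, prefix in prefix_map.items():
        if keyword in fname:
            target = os.path.join(root, prefix + fname)
            if not os.path.exists(target):   # mirrors the File.Exists/File.Move pair
                os.rename(os.path.join(root, fname), target)
            break

print(sorted(os.listdir(root)))
# ['CHILD_DW_DATA_CH_ACCDETAILS_03112016.txt', 'MASTER_DW_DATA_MAS_UNIT_03112016.txt']
```

Each file gets exactly one prefix because the inner loop breaks after the first matching keyword, just like the if/else-if chain in the script.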

Wednesday 2 March 2016

Drop Constraint of a column(s) in SQL Server




How to drop particular(FK OR UQ OR DF) constraint based on given COLUMN_NAME in SQL Server?

Key Note: 
Generally you will come across this situation when you don't create the constraint names on your own.
When you are asked to drop "only one unique constraint" on a particular column (or a set of columns) of a table (assume the table has 'n' unique constraints on other columns as well), how do you do it when all the constraint names are system-generated?

--Input for a single column: TABLE_NAME & COLUMN_NAME  

DECLARE @TABLE_NAME VARCHAR(100) = 'TRANSACTION_PARCEL' 
DECLARE @COLUMN_NAME VARCHAR(100) = 'fromdate' 
DECLARE @SQL NVARCHAR(max) = '' 

SELECT @SQL += 'ALTER TABLE ' + @TABLE_NAME 
               + ' DROP CONSTRAINT ' + constraint_name + ';' 
FROM   information_schema.constraint_column_usage A 
       JOIN sys.objects B 
         ON A.constraint_name = B.NAME 
WHERE  --B.TYPE = 'UQ' AND  
  A.table_name = @TABLE_NAME 
  AND A.column_name = @COLUMN_NAME 
GROUP  BY A.constraint_name 

PRINT @SQL 

EXEC Sp_executesql 
  @SQL 


----Input for multiple columns: TABLE_NAME & COLUMN_LIST

DECLARE @TABLE_NAME VARCHAR(100) = 'TRANSACTION_PARCEL' 
DECLARE @SQL NVARCHAR(max) = '' 

SELECT @SQL += 'ALTER TABLE ' + @TABLE_NAME 
               + ' DROP CONSTRAINT ' + constraint_name + ';' 
FROM   information_schema.constraint_column_usage A 
       JOIN sys.objects B 
         ON A.constraint_name = B.NAME 
WHERE  --B.TYPE = 'UQ' AND  
  A.table_name = @TABLE_NAME 
  AND A.column_name IN ('PARCEL_BSKEY','TRAN_BSKEY','TRAN_PARCEL_BALANCE_TYPE') 
GROUP  BY A.constraint_name 

PRINT @SQL 

EXEC Sp_executesql 
  @SQL

Note: Code can be customized to pull out particular keys as well (F - foreign key, D - Default constraint, UQ - Unique constraint, PK - Primary Key constraint)

Before you drop a constraint of type PK, you need to drop the FKs which reference that PK.
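The string aggregation that builds @SQL above can be sketched in Python (the constraint names below are hypothetical stand-ins for the system-generated ones the lookup query returns):

```python
table_name = "TRANSACTION_PARCEL"
# Hypothetical system-generated constraint names returned by the lookup query.
constraint_names = ["UQ__TRANSACT__3213E83E", "DF__TRANSACT__FROMDATE"]

sql = "".join(
    "ALTER TABLE " + table_name + " DROP CONSTRAINT " + name + ";"
    for name in constraint_names
)
print(sql)
# ALTER TABLE TRANSACTION_PARCEL DROP CONSTRAINT UQ__TRANSACT__3213E83E;ALTER TABLE TRANSACTION_PARCEL DROP CONSTRAINT DF__TRANSACT__FROMDATE;
```

The T-SQL `SELECT @SQL += ...` accumulates one ALTER TABLE ... DROP CONSTRAINT statement per matching row in just the same way, and the result is then executed via sp_executesql.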

Friday 19 February 2016

MODEM Connectivity Problem (Airtel or BSNL)

Have you ever come across a situation wherein the ADSL light on your modem goes off repeatedly, every 5 to 10 minutes, disrupting the Internet connection?

There is a workaround to make it stable and to use the Internet connection without any interruption.

Solution: Take the receiver off the phone (lift the receiver and set it aside).
With the receiver off the hook, the phone line no longer listens for calls from the outside network, since it is in engaged mode. Your Internet line will then have stable connectivity, since the voice line is effectively off.

Note: This is just a workaround; you will not be able to receive calls from outside, but at the same time you can enjoy an uninterrupted Internet connection.

I will update this post with a permanent solution shortly.

Monday 25 January 2016

Command Prompt : PAUSE keyword usage in BATCH file and Redirect output to File.


When we create a batch file to execute SQL commands using the sqlcmd utility (or to execute SSIS packages using the DTEXEC utility), we know that once the execution is over, the black screen (the cmd prompt window) flashes off within a second.

We are often interested in seeing the output status of the execution (successful, or failed for whatever reason) once it is done.

All we need to achieve this is to use the PAUSE keyword at the end of the batch file.

The PAUSE keyword ensures control gets back to the user by prompting the message
"Press any key to continue . . .".
Only after the user hits a key will the cmd prompt window close.

1)using SQLCMD

sqlcmd ....................
PAUSE

(or)

2)using DTEXEC

DTEXEC
PAUSE

Note: The PAUSE keyword is case insensitive.

3) Create or overwrite

If you want to redirect the output status to a file, you can use the > (greater-than) operator followed by the file path and file name.


sqlcmd .................................. > "C:\Users\TEST\output.txt"
PAUSE

Note: The file will be created if it does not exist. The file content will be overwritten if a file with the same name already exists.

4) Append

If you want to append instead, use the >> operator.



sqlcmd .................................. >> "C:\Users\TEST\output.txt"
PAUSE
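The difference between > and >> is exactly overwrite vs append. A small Python sketch of the same semantics, using a temporary file in place of output.txt:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "output.txt")

with open(path, "w") as f:   # like > : create or overwrite
    f.write("run 1\n")
with open(path, "w") as f:   # a second > run replaces the content
    f.write("run 2\n")
with open(path, "a") as f:   # like >> : append to the existing file
    f.write("run 3\n")

with open(path) as f:
    print(f.read())  # only "run 2" and "run 3" remain; "run 1" was overwritten
```

So use > when each batch run should produce a fresh status file, and >> when you want a running history across executions.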

Tuesday 5 January 2016

Extract text from image

Is there a way to copy out or grab the text content from Image?

Answer is YES.

Sometimes I get images containing SQL queries (with complicated join conditions and alias-prefixed column lists) which I need to type out in SSMS.

The traditional way of doing this is to keep both windows (image and SSMS) side by side and kick-start the tedious work (type until your energy lasts).

An easier way of doing the same is to use Microsoft OneNote (one of the applications in the Microsoft Office bundle, alongside Word, Excel, Outlook, etc.).

OneNote supports Optical Character Recognition (OCR).

To extract text from a picture:

1) Copy and paste the image into OneNote, or use File -> Open to bring the image into the OneNote editor.

2) Right-click the picture, and click Copy Text from Picture

3) Click where you’d like to paste the copied text, and then press Ctrl+V.