
Saravana Kumar
Vinod Kumar

Introduction to SQL Server 2005 DTS 

DTS has come a long way in SQL Server 2005. Its architecture has changed to the point where users familiar with previous versions will have to rethink how DTS works; a fresh perspective helps in understanding this new release. DTS has certainly become more mature, and in this article we will take a quick tour of how a simple DTS package can be created and used.

Creating your first DTS Project

To get our first project up and running, we need to open the Business Intelligence project folder. The BI environment lets us develop projects for SQL Server while working disconnected from the server. You will find that some of the interesting project types include DTS, Report Designer, Analysis Services projects, etc.

Click on the DTS Project template and specify the location where the project should be created, along with the other parameters.


As soon as the project opens, the Solution Explorer looks like the screenshot above. In this project I have renamed my DTS package file.

Selecting the DTS package presents it in the main application frame, and looking at the Design view we find some interesting tabs.

1. Control Flow: Gives a logical window to look at tasks. The concept is the same as it used to be in the DTS 2000 world. This window introduces some interesting new tasks, constraints and looping containers such as the FOR Loop, FOR EACH Loop, SEQUENCE, etc. This is the main task window, and all the related control flow tasks are presented in the adjacent task toolbox.

2. Data Flow: The Data Flow designer manages all of the data movement and transformations between a source and a target. To include a data flow within your package, you must manually add a Data Flow task to the Control Flow designer, or let DTS add one for you when you open the Data Flow designer window. Each package can have multiple Data Flow tasks, and each task can have source and destination activities. Various transformations are possible, such as Aggregation and Derived Column. A typical data flow looks like the screenshot below:

In the example above we have added a Flat File Source task. This task uses a flat file as the input to our first DTS package.
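For reference, a flat file source like this typically points at a delimited text file. The layout below is a hypothetical example to make the rest of the walkthrough concrete; the column names, including the weight column used later, are assumptions for illustration and not from the original sample file:

```text
ProductID,ProductName,Weight
1,Widget,12.50
2,Gadget,8.75
3,Sprocket,20.00
```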

3. Event Handlers: This is the interface through which tasks raise events and expose the opportunity for those events to be handled. DTS events for every executable container task within the package's runtime environment are exposed in the user interface, and each event can have its own event handler design surface for implementing compound workflows. Some of the event handlers provided are: OnCustomEvent, OnError, OnExecStatusChanged, OnNMProgress, OnPostExecute, OnPostValidate, OnPreExecute, OnPreValidate, OnProgress. All the parameters available for the current package are also displayed in this section.

4. Package Explorer: This gives us an interface through which we can execute the package and view the health of the running package. It shows statistics for the various pipelines available, how they were executed, and how they were synchronized. The total execution time is also monitored through this window. A typical execution gives an output like:

Building our First Package

Having laid a nice foundation for understanding DTS packages, let us build one. In the previous Data Flow task we created a flat file source; now we need to provide the connection details for it. The steps are as follows: double-click the Flat File Source in the Data Flow window, and you will be presented with:

Click on "New Connection" and a new connection is created, and we are then presented with a whole host of other information. For the new connection we need to provide the path ("File Name(s)" in the diagram) of the file we are going to load. I like this part, because there seems to be some intelligence built into the system: it automagically uses the file's contents to populate the field data. You are free to change these values later.

Provide the above information and move to the next tab; the grid is automatically loaded with sample data, as shown below:

The best part lies in the next tab, column properties. Here we map the input columns to the corresponding data types. As mentioned earlier, the intelligent DTS service lets you use the "Suggest Types" button, and the columns from our data file are mapped to appropriate data types based on the data present in those fields. I feel DTS has become more usable and mature now that such complex work has become simple.

With that, we have defined everything required for our input file to be loaded.

Manipulating data

The next step is to create a sample transformation. Here we require a "Derived Column" data flow task, with which we plan to transform some data from our data file before we actually load it into our tables. For this example we change the weight column to 0.25 times its actual value.
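Inside the Derived Column editor, this transformation is written in the DTS expression language. A minimal sketch, assuming the input column is named `Weight` (the column name is an assumption for illustration):

```text
[Weight] * 0.25
```

Setting the task to "Replace 'Weight'" (rather than adding a new column) keeps the downstream column list unchanged.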

These tasks are connected using the green task flow arrow, so the transformation task takes the output of the flat file source and transforms the data for loading into the table.

Load into Table

Inserting the data into a table is our next objective. For this, drag and drop an OLE DB Destination task. This task requires connection information, so provide the details to connect to your localhost SQL Server 2005 destination. You can also connect to SQL Server 2000 or 7.0; this works flawlessly. Click the new table button. I like this option because, depending on the input given to the task, DTS automatically gives us a table structure to work with. For our example I've changed it as follows:
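The generated script can be edited before you accept it. A hedged sketch of what such a script might look like for the hypothetical flat file columns above (the table and column names are assumptions, not the exact script DTS produces):

```sql
CREATE TABLE [ProductLoad]
(
    [ProductID]   INT,
    [ProductName] VARCHAR(50),
    [Weight]      NUMERIC(9, 2)   -- receives the derived value: 0.25 * original weight
)
```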

On clicking OK the table is created in the OLE DB destination. With the connection established and the destination objects created, our focus now is to map the input and output columns. For this, click on the Mappings tab:

We will not get deep into the "Error Output" tab, but here you can configure what happens for error cases in the input data. You can redirect failing rows to a different destination, or even abort the whole DTS flow. For the present example I've configured it to ignore any errors, since this is a very simple task.

With around 10,000 rows in the input file, running the DTS package produces the output below:

As the execution explorer output above shows, it takes just around 1.5 seconds to load this data into SQL Server 2005, and the PC I ran it on is just a 512 MB RAM Windows XP machine. On a higher-end machine the performance can surely be many folds faster. So let's take a look at the table's data:
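A quick way to inspect the result is to query the destination from Management Studio. The sketch below assumes the hypothetical table name used earlier:

```sql
-- Confirm the row count and spot-check the transformed weights
SELECT COUNT(*) AS RowsLoaded FROM [ProductLoad];

SELECT TOP 10 * FROM [ProductLoad];
```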


I think this article has taken a simple tour of creating your first really simple DTS package. The opportunities DTS opens up in SQL Server 2005 are enormous, and in future articles we will surely tour some of the simple new DTS tasks available in this version. Do drop a line with your views on this article anytime. And again, I'll just take this opportunity to say "Yukon Rocks"!!!

Update: By popular demand we have added the source code and sample data file. Yukon Beta 2 users can use them:

Download the sample datafile

Download Source DTSX file