Posts tagged ‘Performance’

Tuning SSAS Processing Performance

Thanks to all those who attended either the webcast of this presentation that I did for the PASS BI virtual chapter, or the session at SQLSaturday #74 in Jacksonville this weekend. I really appreciate all the great feedback that I’ve received from the attendees at both events.

I’ve attached the slides from the presentation. I’ve also attached a few definition files, one for a Performance Monitor data collector set, and two trace templates, one for SSAS and one for SQL Server. Feel free to customize these files as necessary for your own tuning efforts. Also, while these trace templates can be used in Profiler, the best thing to do is to generate the scripts from them to run the trace directly on the server – lower overhead and less impact on performance.

The file is located on my SkyDrive.

Performance of Raw Files vs. Flat Files

It’s pretty well accepted that raw files in SSIS are a very fast means of getting data in and out of the pipeline. Jamie Thomson has referenced the use of raw files a number of times, and Todd McDermid recently posted about using them for staging data. It occurred to me that even though I’d always heard they were faster than other options, I’d never actually tested them to see exactly how much of a difference it would make. So, below I’ve posted some admittedly unscientific performance testing comparing raw files and flat (or text) files.

I tested two variations of flat files, delimited and ragged right. The delimited file was configured with a vertical bar (|) as the column delimiter and CR/LF as the row delimiter. The ragged right file was configured as a fixed width with row delimiters – each column had a fixed width, and a final, zero-width column was appended with CR/LF as the delimiter. The same data was used for each test, the following columns being defined:

Name             Data Type       Precision  Scale  Length
TestInt32        DT_I4           0          0      0
TestString       DT_STR          0          0      50
TestBool         DT_BOOL         0          0      0
TestCurrency     DT_CY           0          0      0
TestDBTimestamp  DT_DBTIMESTAMP  0          0      0
TestWString      DT_WSTR         0          0      50
TestNumeric      DT_NUMERIC      18         6      0

One thing to note is that when importing from flat files, everything was imported as strings, to avoid any data conversion issues. This is one of the strengths of raw files – no data conversion necessary. But for this test, I was primarily looking at speed of getting the data on and off disk. I also looked at the difference in file sizes between the formats.

I tested each option with 500,000, 1 million, and 10 million rows. I ran each one 4 times for each row count, and discarded the first run to offset the effects of file caching. The results of the runs were averaged for comparison.
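
If it helps to make that methodology concrete, here’s a minimal sketch of how each set of runs was reduced to a single number (the timings below are made up, and this is not the actual test harness): discard the first run, then average the rest.

Module RunTimings
    Sub Main()
        ' Hypothetical timings, in seconds, for one format / row count combination.
        ' The first run is discarded to offset the effects of file caching.
        Dim runs() As Double = New Double() {2.95, 2.62, 2.6, 2.61}

        Dim total As Double = 0
        For i As Integer = 1 To runs.Length - 1
            total += runs(i)
        Next

        Console.WriteLine("Average of the last {0} runs: {1:F2} seconds", _
                          runs.Length - 1, total / (runs.Length - 1))
    End Sub
End Module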

When writing files, there are no big surprises between the options. Raw files are faster on 10 million rows, by 7.8 seconds over delimited and 9.8 seconds over ragged right. The difference on smaller numbers of rows is pretty insignificant. Here’s a chart showing the times (the raw data is at the end of the post):

[Chart: average write times, in seconds, for each format and row count]

Reading files did show a difference that I didn’t expect. Read speeds on raw files and delimited files are fairly comparable, with raw files still having the edge in speed. However, reads on ragged right files are significantly slower – roughly three and a half times slower than raw files.

[Chart: average read times, in seconds, for each format and row count]

File sizes were also as expected, with delimited files having a slight edge over raw files, likely because the string values I used were not all 50 characters in length.

[Chart: file sizes, in KB, for each format and row count]

In summary, it’s clear that raw files have an advantage in speed. However, the differences weren’t as large as I was expecting, except in the case of ragged right files. So, in general, raw files are the best choice for performance, but if you are dealing with fewer than 1 million rows, the difference is small enough that it will only matter if you are trying to squeeze out every last second. Of course, there are plenty of other differences between the formats, and I’d encourage you to research them before making a decision.

Here’s the raw data, showing the number of seconds to write and then read each file:

                      500,000   1,000,000   10,000,000
Write To Delimited       2.61        5.16        47.02
Write To Ragged          2.66        5.31        49.03
Write To Raw             2.21        4.23        39.21

                      500,000   1,000,000   10,000,000
Read From Delimited      0.77        1.52        16.59
Read From Ragged         2.74        5.89        35.39
Read From Raw            0.60        1.08        10.03

and the file size in KB for each:

             500,000   1,000,000   10,000,000
Delimited     44,624      89,792      946,745
Ragged        92,286     184,571    1,845,704
Raw           47,039      94,402      973,308

Please let me know if you’d like more details or have any questions.

SSIS Slow Down Performance Script Component

I’ve been meaning to post this for a while, but real life keeps intruding. I did a presentation on SSIS performance for the Charlotte SQL Server User Group at the end of November. It went well, though I was trying to squeeze a 90 minute presentation into 60 minutes, so I had to skim over some of the good stuff. You can find the presentation here (it’s the first link, at least till we have another meeting).

One item that I did need for the demos in the presentation was the ability to slow down the data flow. Normally, SSIS processes data so fast that you can’t really see what’s happening. I found a few examples of script components that introduced a delay every time a buffer was processed. Unfortunately, that wasn’t flexible enough for me, as part of my demo involved adjusting the number of rows per buffer. I really wanted something that would introduce a delay every X rows, while being easy to configure without having to edit the script each time. I ended up writing the following code:

Public Class ScriptMain
    Inherits UserComponent

    Private RowCount As Integer
    Private rowDelay As Integer
    Private everyNRows As Integer

    Public Overrides Sub PreExecute()
        MyBase.PreExecute()

        ' Read the package variables once, up front, and cache the values
        ' in instance variables for use in ProcessInputRow.
        Dim vars As Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSVariables90
        Me.VariableDispenser.LockForRead("EveryNRows")
        Me.VariableDispenser.LockForRead("MSTimeDelay")
        Me.VariableDispenser.GetVariables(vars)
        rowDelay = CType(vars("MSTimeDelay").Value, Integer)
        everyNRows = CType(vars("EveryNRows").Value, Integer)
        vars.Unlock()
    End Sub

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
        RowCount += 1

        ' Every everyNRows rows, pause the data flow for rowDelay milliseconds.
        If RowCount Mod everyNRows = 0 Then
            System.Threading.Thread.Sleep(rowDelay)
        End If
    End Sub
End Class

This relies on two integer variables being declared in the package. EveryNRows specifies how many rows should be processed before introducing a delay. MSTimeDelay sets how long the delay should be, in milliseconds.

One thing to be aware of – in the above script, I am reading the package variables in the PreExecute method and storing them in instance variables. Locking and unlocking the variables is time consuming, and will drastically slow things down if you do it in the ProcessInputRow method.
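
For contrast, here’s a minimal sketch of the per-row approach to avoid: the same logic, but with the variables locked and read inside ProcessInputRow for every row. This is only an illustration of the anti-pattern, not something you’d want to use:

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
        RowCount += 1

        ' Anti-pattern: locking and unlocking the variable dispenser for every
        ' row adds a lot of overhead and slows the entire data flow down.
        Dim vars As Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSVariables90
        Me.VariableDispenser.LockForRead("EveryNRows")
        Me.VariableDispenser.LockForRead("MSTimeDelay")
        Me.VariableDispenser.GetVariables(vars)

        If RowCount Mod CType(vars("EveryNRows").Value, Integer) = 0 Then
            System.Threading.Thread.Sleep(CType(vars("MSTimeDelay").Value, Integer))
        End If

        vars.Unlock()
    End Sub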

SSIS Performance Webcast for the Charlotte SQL Server User Group

I’ll be doing a webcast for the Charlotte SQL Server User Group, our local PASS chapter, on SSIS Performance Tuning on Nov. 28, 2007 at 12:00 EST. From the abstract:

“This presentation will cover the various aspects of performance tuning in SQL Server Integration Services. We will start by covering some general best practices for performance. We will then drill into details of performance optimization for SSIS data flows, including optimizing execution trees, managing buffer size, and evaluating performance metrics to determine where the bottlenecks are. We’ll also cover the logging options and performance counters that can be used to monitor performance.”

If you are interested in tuning in, please sign up here. We’ll be giving away a few copies of Windows Server 2003 Enterprise Edition. To sign up as a member of the user group, go here and select the sign-up option.

Comparing methods for translating values in SSIS

A question came up recently on the SSIS forums on translating lookup codes into a description within the data flow. That is, given code “A”, how do you translate it into a more descriptive value, such as “Apple”?

As a general rule of thumb, if the codes and values might change over time, or if there are a significant number of them, I would store them in a database table and use the Lookup component to handle the translation. This makes maintenance much easier, and Lookup components have great performance. In fact, I do this in most cases, since even things that aren’t supposed to change have a habit of doing so.

The question posed on the forum was for a small set of 4 translations. In this case, assuming these values were not going to change often or at all, the suggestions were to use one of three methods: the script component, the derived column component, or the lookup component with a hard-coded list of values (suggested by Phil Brammer http://www.ssistalk.com).

I decided to run each method through its paces to determine how each one performs. I was fairly confident that the script component would be slower than the other options, but I was less sure about the lookup and derived column. So I set up a test package for each scenario and ran some rows through them to see how they did.

I’ve got the testing methodology below, if you’re interested, but I’m going to jump right into the results. I ran each package 5 times for 500,000 rows and another 5 times for 1,000,000 rows. The lookup performed the best in each test, and the script performed the worst. The derived column transform came close to the lookup, and actually beat it once during the million row test, but the lookup still came out ahead based on the overall average. We are talking about a couple of seconds over a million rows, but every little bit helps.

So, it looks safe to continue using the lookup transform for any type of translation, even when the values are hard-coded.

Scenarios for Testing

Each package has a single data flow in it. The script package has the following in the data flow:

[Screenshot: the script package’s data flow]

and has this code in it:

Select Case Row.LOOKUPVALUE
    Case "A"
        Row.Description = "APPLE"
    Case "B"
        Row.Description = "BANANA"
    Case "C"
        Row.Description = "CANTALOUPE"
    Case "D"
        Row.Description = "DATE"
    Case "E"
        Row.Description = "EGGPLANT"
    Case Else
        Row.Description = "UNKNOWN"
End Select


The derived column flow looks like:

[Screenshot: the derived column package’s data flow]

The derived column is using this code:

LOOKUP_VALUE == "A" ? "APPLE" : LOOKUP_VALUE == "B" ? "BANANA" : LOOKUP_VALUE == "C" ? "CANTALOUPE" : LOOKUP_VALUE == "D" ? "DATE" : LOOKUP_VALUE == "E" ? "EGGPLANT" : "UNKNOWN"

The lookup looks like:

[Screenshot: the lookup package’s data flow]

and uses this to generate the values:

SELECT 'A' AS VALUE, 'APPLE' AS DESCR
UNION ALL
SELECT 'B', 'BANANA'
UNION ALL
SELECT 'C', 'CANTALOUPE'
UNION ALL
SELECT 'D', 'DATE'
UNION ALL
SELECT 'E', 'EGGPLANT'

The unknown values are dealt with by ignoring failures in the lookup, and setting the value in the derived column transform:

TRIM(LOOKUP_VALUE) == "" ? "UNKNOWN" : DESCRIPTION

I used 6 values (A, B, C, D, E, and an empty string) in the source. A, B, C, and D were each used in 20% of the source rows, and E and the empty string accounted for 10% of the source rows each.
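
The source packages themselves aren’t included in this post, but if you want to generate a similar distribution of test data, here’s a minimal sketch of a script component source that produces it. It assumes an output named “Output 0” with a single DT_STR column named LOOKUPVALUE, and a hard-coded row count of 1,000,000; adjust as needed:

Public Class ScriptMain
    Inherits UserComponent

    Public Overrides Sub CreateNewOutputRows()
        ' A, B, C, and D each make up 20% of the rows; E and the empty
        ' string make up 10% each.
        Dim pattern() As String = New String() {"A", "A", "B", "B", "C", "C", "D", "D", "E", ""}

        For i As Integer = 0 To 999999
            Output0Buffer.AddRow()
            Output0Buffer.LOOKUPVALUE = pattern(i Mod pattern.Length)
        Next
    End Sub
End Class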