Warning about Using String Variables in Derived Column Expressions

I ran across an interesting behavior in SSIS this week, and thought it was worth sharing. Occasionally, I’ll have the need to use the value of a variable (set in the control flow) in the data flow. The typical way to accomplish this is to use a Derived Column transformation to introduce the variable’s value into the data flow. However, you need to be cautious if you are doing this with a String variable.

When you are working with Derived Column expressions, the output type of the Derived Column expression is based on the input values. For example, inputting an expression like 1+2 will result in an output column data type of DT_I4 (an integer). Basically, the editor guesses the data type based on the input values. Generally, this is OK – if you are referencing other data flow columns, or variables that aren’t of type string, the lengths of the values are static. However, when the editor calculates the length of string variables, it uses the current value in the variable. So, if your variable has 3 characters, the output data type only expects three characters. If the value in the variable is static and never changes, this is fine. But if the value is updated regularly (perhaps it’s a variable in a template package) or the value is set through an expression (so that the value changes at runtime), this can cause truncation errors in the package.

I created a sample package that demonstrates this. The package is pretty simple: a single Data Flow task, with an OLE DB Source, a Derived Column transformation, and a Union All (just there to terminate the data flow).


There is a single variable named TestVar that contains the value “1234”.


In the Derived Column transformation, there is a single new column added. Note that the calculated length for the output column is 4, matching the value “1234” in the variable.


If the package is run right now, everything works fine. But that’s not the case if you change the value in the variable to “123456”.


Running the package after changing the value results in the following error:

[Derived Column [91]] Error: The "component "Derived Column" (91)" failed because truncation occurred, and the truncation row disposition on "output column "ColumnFromStringVar" (289)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.

This occurs because the metadata in the Derived Column transformation isn’t updated when the variable’s value is changed. So, to avoid seeing this error occur in your packages, you need to explicitly set the output column’s length.

In 2005, you could change the calculated data type by editing the data type, length, precision, and scale fields. In 2008, however, those values are locked. You can change the data type by going into the advanced editor for the Derived Column. However, it's easier to simply cast the string value (for example, with an expression like (DT_WSTR, 50)@[User::TestVar]) to force the Derived Column editor to treat it as having a constant length.


By using this approach, as long as your string variable’s value is less than 50 characters, the Derived Column will continue to work. It’s best to set the length of the cast to the same value as the destination column’s length.

There’s a Connect submission on improving this behavior, either by updating the pipeline’s metadata as the string variable’s value changes, or by throwing a validation warning or error if the current value of the variable exceeds the output length in the Derived Column transformation. If you agree that this could use some improvement, you can vote here: https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=470995

In the meantime, I’d highly recommend performing an explicit cast in any Derived Column expression that uses a string variable, particularly if the value is subject to change.

The sample package is on my SkyDrive, if you’d like to see the error in action.

Posted in Uncategorized | Comments Off on Warning about Using String Variables in Derived Column Expressions

Filtering Objects in PowerShell based on a List of Accepted Values

I was writing a script the other day where I want to return a collection of Services based on the name.  It took me a few minutes to figure out how to do this, so I thought I’d jot it down.  Nothing revolutionary, but I’ve definitely found this pattern to be handy.

Let’s start with a comma separated list of the values we want to filter by.  In this case I only want to return objects whose Name is “a”, “c”, “e”, or “g”.  I’ll take that list, and split it into an array.

# List we want to filter against
# Split it into an array
$filterList = 'a,c,e,g'.Split(",")

Next, I’ll generate a collection of objects to try and filter.  I’ll just use a function to make life easy:

# Function to generate a collection of objects to test on
function Create-TestCollection {
    # Turn this into an object to filter
    $objectList = 'a,b,c,d,e,f,g'.Split(",")
    
    foreach($object in $objectList)
    {
        $output = new-object PSObject
        $output | add-member noteproperty Name $object
        
        Write-Output $output 
    }
}

# Populate the variable to test with our collection of objects
$collection = Create-TestCollection

Now $collection contains a list of objects with a single Name property containing “a”, “b”, “c”, “d”, etc.

Now let’s filter $collection using a Where:

Write-Host "Objects from the Collection where the Name property exists in an Array"
# Filter the collection of objects based on a CSV list
$collection | Where {$_.Name -and $filterList -eq $_.Name }

And it will return only the objects from our initial comma separated list:

Name                                                                                     
----                                                                                     
a                                                                                        
c                                                                                        
e                                                                                        
g    

This also works with other strings:

# An array we want to filter
$collection2 = 'a,d,g,j,m'.Split(",")

Write-Host "Strings in an Array where that exist in an Array"
# Filter the collection of objects based on a CSV list
$collection2 | Where {$_ -and $filterList -eq $_ }

Strings from an Array that exist in the filter Array
a
g
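As an aside, the same filtering can also be written with the -contains operator, which reads a little more naturally when checking membership in a list; this is just an equivalent alternative to the pattern above:

# Equivalent filters using -contains (array on the left, value on the right)
$collection | Where { $filterList -contains $_.Name }
$collection2 | Where { $filterList -contains $_ }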

Enjoy…

David

 

Posted in Uncategorized | Tagged | Comments Off on Filtering Objects in PowerShell based on a List of Accepted Values

Building a SQL Server Analysis Services .ASDatabase file from a Visual Studio SSAS Project

There are several methods available for deploying Analysis Services databases once you’ve built your solution, including direct connections, generating XMLA to deploy, and using the Deployment Wizard with answer files.  Still, building and deploying AS Databases can sometimes be a challenge in enterprise development scenarios.  One common scenario is when you have multiple developers working on a single solution where all the files are under source control.  I wrote a blog post about SQL Server Analysis Services Projects with Multiple Developers recently, and this is one of the issues that you tend to run into.  I created a sample you can use to help with this problem.  I’m not going to go over how to deploy AS solutions (it’s documented plenty of other places)… this is just a tool to address this particular scenario.

I created the SsasHelper sample to demonstrate the functionality, and posted it to the Microsoft SQL Server Community Samples: Analysis Services site on CodePlex.  I posted about this functionality in my blog SQL Server Analysis Services ‘Project Helper’ if you want details on the mechanics.

Background

So you’ve built your solution, tested it, and everything works great.  All of your developers are hard at work constantly improving your OLAP capabilities, and they’re storing all of their changes under source control.  Now you need to do routine deployments to multiple environments.  One of the problems you might have run into is that you have to use Visual Studio to actually do a deployment.  Even if you’re deploying via the Deployment Wizard (using answer files), you still need to generate the .ASDatabase file by building the solution.  This problem becomes a little bigger if you want to use a Build Server that doesn’t have Visual Studio, where you compile your solution, test it, deploy it, etc.  Currently, the only option you really have for this scenario is to make sure every developer builds the solution after every change and checks in a new .ASDatabase file.  This is a bit of a pain, and tends to lead to (at least occasional) issues with deploying updated versions.

SSAS Helper

I created a class library to work with Analysis Services projects.  This library has a couple of neat pieces of functionality, including being able to de-serialize a SSAS project to an AMO database, write an AMO database back out to component files, clean an AS project (removing volatile, non-essential fields… I went over it in SQL Server Analysis Services Projects with Multiple Developers), and create a .ASDatabase file based on a project.  The library is written in C# 3.5, and is designed/tested with SSAS 2008 (though it should work with 2005… be sure to test it).

Using SSAS Helper

If you’re running into this problem, you probably already have a good build solution, so I won’t go over that here.  What you’ll want to do is create a new build task (using whatever framework you’re using) based on this component.  As an example, you can create a custom MSBuild task using this library that takes the SSAS project as an input, and delivers a .ASDatabase file as an output.  The task would then be added to the build file, so you will no longer have a dependency on Visual Studio to compile all your .cube, .dim, .role, etc. files into something that can be deployed.  I’ll try and post a sample in the next few weeks, but it shouldn’t be a major task.
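To give a rough idea of the shape such a task could take, here is a minimal PowerShell sketch.  The assembly path and the ProjectHelper.GenerateASDatabaseFile method name are placeholders made up for illustration; check the SsasHelper source on CodePlex for the actual class and method names.

# Minimal sketch only -- the paths and the method name below are hypothetical placeholders
$asm = [System.Reflection.Assembly]::LoadFrom("C:\Build\Tools\SsasHelper.dll")

$ssasProjectFile = "C:\Source\MyOlapProject\MyOlapProject.dwproj"
$asDatabaseFile  = "C:\Build\Output\MyOlapProject.ASDatabase"

# De-serialize the project into an AMO database and write it back out as a .ASDatabase file
# (GenerateASDatabaseFile is a stand-in name for the library's real entry point)
[SsasHelper.ProjectHelper]::GenerateASDatabaseFile($ssasProjectFile, $asDatabaseFile)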

Caveats

I built this tool and tested it on my current project, as well as with Adventure Works… I haven’t seen any problems with it, but your mileage may vary.  Make sure you test with your project.

Here are some known issues for this functionality:

  1. Partitions are reordered when De-Serialized/Serialized. Makes it a pain to validate, but I’ve seen no ill effects. Use the SortSssasFile functionality to make a copy of the files for easier comparison.
  2. Some fields maintained for Visual Studio (State (Processed, Unprocessed), CreatedTimestamp, LastSchemaUpdate, LastProcessed, dwd:design-time-name, CurrentStorageMode) are lost when a project is De-Serialized/Serialized.

When I use this tool on a project, I work on a copy, and then do a file diff to check the output the first time.  I used Beyond Compare (http://www.scootersoftware.com/) to compare the entire directory, and each file side by side, just to make sure there are no unintended side effects.  I would recommend you do the same to make sure… this works fine on the projects I’ve used it on, but you need to make sure there’s nothing special about yours so you don’t accidentally destroy something.

How it Works

This project works by de-serializing all the files referenced by the .dwproj file into AMO objects, then serializing the entire database.  There is more detail on de-serializing/serializing the objects in my post SQL Server Analysis Services ‘Project Helper’.  The code is fairly well commented (well… I think it is 🙂) and should be fairly straightforward.

Next Steps

To use this, you’ll need to create a build task in your framework of choice, and just plug it in to your solution.  If this proves difficult for people, I’ll try and provide a sample, but it should be fairly straightforward to do.  If it doesn’t work right out of the box, it should work with minimal modification.  Just make sure all of your project files (.dwproj, .cube, .dim, .role, .dsv, .ds, .dmm, etc.) are published and you have a way to push the .ASDatabase file to your target.

Conclusion

That’s about it… I hope this helps some folks out.  Let me know if you have any problems, and we’ll see what we can get working.

Cheers,

David

Posted in Uncategorized | Comments Off on Building a SQL Server Analysis Services .ASDatabase file from a Visual Studio SSAS Project

Using PowerShell to Manipulate SQL Server Analysis Services Traces

I recently started using SSAS Server Traces a lot with SQL Server Analysis Services.  This type of trace is basically the same trace you can create with SQL Server Profiler, but it runs without Profiler, uses fewer resources, and can be persisted across reboots.  They’re a really handy tool.

I started using these when I built some AS monitoring tools based on the “Solution for Collecting Analysis Services Performance Data for Performance Analysis”  sample on CodePlex.  Seriously, totally revolutionized my life (at least the part related to administering complex AS installations and projects).  After installing, adapting, and enhancing the functionality there I found I wanted more and easier ways to control AS traces, so I built some PowerShell functions to help manage them.  These functions basically just wrap XMLA commands to make them easier to use.

Here are some sample files I’ll be talking about in this post:  Download Sample Files

Don’t worry about copying the sample code out of the post… it’s all included as part of the sample.

Creating SSAS Traces

You use an XMLA command to create a trace.  A trace looks something like this:

1: <Batch xmlns=”http://schemas.microsoft.com/analysisservices/2003/engine” xmlns:soap=”http://schemas.xmlsoap.org/soap/envelope/”>

   2:   <Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">

   3:     <ObjectDefinition>

   4:       <Trace>

   5:         <ID>My Trace</ID>

   6:         <Name>My Trace</Name>

   7:         <Events>

   8:           <Event>

   9:             <EventID>15</EventID>

  10:             <Columns>

  11:               <ColumnID>28</ColumnID>

  12:               <!-- ... More Columns ... -->

  13:               <ColumnID>3</ColumnID>

  14:             </Columns>

  15:           </Event>

  16:           <Event>

  17:             <EventID>16</EventID>

  18:             <Columns>

  19:               <ColumnID>24</ColumnID>

  20:               <!-- ... More Columns ... -->

  21:               <ColumnID>36</ColumnID>

  22:             </Columns>

  23:           </Event>

  24:           <!-- ... More events ... -->

  25:         </Events>

  26:         <Filter>

  27:           <NotLike>

  28:             <ColumnID>37</ColumnID>

  29:             <Value>Application I don't care about events from</Value>

  30:           </NotLike>

  31:         </Filter>

  32:       </Trace>

  33:     </ObjectDefinition>

  34:   </Create>

  35: </Batch>

Not the most fun to create by hand, but you could make it happen.  However, there is an easier way to come up with the CREATE statement for your trace.  Just do the following:

  1. Start up a SQL Server Profiler session and monitor the AS instance you’re working on.  You only need to capture the Command Begin event.
  2. Start up a 2nd instance of SQL Server Profiler.  Use the GUI to create the trace you’re actually interested in, with all the events, columns, and filters.  Then start the trace.
  3. Snag the CREATE XMLA from the 1st Profiler session and save it off.

Now you have XMLA you can use as the base for the trace you want.  You’ll want to add a few more elements to the XMLA to make the server trace work though.  It will look something like this:

1: <Batch xmlns=”http://schemas.microsoft.com/analysisservices/2003/engine” xmlns:soap=”http://schemas.xmlsoap.org/soap/envelope/”>

   2:   <Create mlns="http://schemas.microsoft.com/analysisservices/2003/engine">

   3:     <ObjectDefinition>

   4:       <Trace>

   5:         <ID>My Trace</ID>

   6:         <Name>My Trace</Name>

   7:         <LogFileName>\MyServerTraceFilesMyTrace.trc</LogFileName>

   8:         <LogFileAppend>0</LogFileAppend>

   9:         <AutoRestart>1</AutoRestart>

  10:         <LogFileSize>100</LogFileSize>

  11:         <LogFileRollover>1</LogFileRollover>

  12:         <Events>

  13:           <!-- ... The rest of the Create statement you just generated ... -->

There are just a few extra fields there.  Here’s what they’re used for:

LogFileName: Name of the log file.  Must end in .trc.  The AS Service Account must have permission to write to the directory.
LogFileAppend: 0 for Overwrite, 1 for Append.
AutoRestart: 0 for No, 1 to restart when the server restarts.
LogFileSize: Size in MB.  The log will roll over when it reaches this size.
LogFileRollover: 1 means create a new log file (it appends 1, 2, 3, etc. for each new log) when the LogFileSize is reached.

Deleting SSAS Traces

So, we’ve created a trace that auto restarts.  How do you get rid of it?

1: <Batch xmlns=”http://schemas.microsoft.com/analysisservices/2003/engine” xmlns:soap=”http://schemas.xmlsoap.org/soap/envelope/”>

   2:   <Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">

   3:     <Object>

   4:       <TraceID>My Trace</TraceID>

   5:     </Object>

   6:   </Delete>

   7: </Batch>

SSAS Trace PowerShell Library

I found that I wanted a convenient way to see what traces were running on a server, create them, delete them, and flush them (i.e., close out the current trace and create a new one, so you can process or otherwise work with events that were just logged).  I have included two versions of my library in this sample.  The first (SsasTraceLibrary.ps1) runs with PowerShell V1.  The second (SsasTraceV2Library.ps1) is basically identical, but uses the function header and parameter functionality from PowerShell V2 CTP3.  I keep the V1 version around to deploy to servers (more on this later), but load the V2 version in my environment to take advantage of the examples, help, and all of the other V2 goodness.  I would encourage you to go with the V2 version, as it includes easy-to-use descriptions, examples, and better parameter help.

I created the following functions as part of this library:

Get-SsasTrace: Get details of a specific trace
Get-SsasTraceExists: Check if a specific trace exists
Get-SsasTraces: Get all traces running on a server
Start-SsasTrace: Start a new trace based on a stored template
Delete-SsasTrace: Delete an existing trace
Flush-SsasTrace: Stop/restart an existing trace
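Here are a few example calls to give a feel for how these are used.  The parameter names for Flush-SsasTrace and Delete-SsasTrace are assumed to mirror the Start-SsasTrace call shown later in this post, so check the library’s help for the exact signatures.

# Example usage (parameter names assumed; see the library's help for the real signatures)
Get-SsasTraces LocalHost
Flush-SsasTrace -ServerName 'MyOlapServer' -TraceID 'Analysis Services Performance and Error Trace'
Delete-SsasTrace -ServerName 'MyOlapServer' -TraceID 'Analysis Services Performance and Error Trace'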

A Sample Function

Most of the functions in this library require the SSAS assemblies:

# Load Required SSAS Assemblies
$asm = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")
$asm = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.Xmla")

You typically follow the pattern of connecting to a server, executing an XMLA command, outputting the results, and disconnecting from the server.

# Connect to the server
$xmlaClient = new-object Microsoft.AnalysisServices.Xmla.XmlaClient
$xmlaClient.Connect($serverName)

$xmlStringResult = "" # Initialize the variable so that it can be passed by [ref]

# Fire off the discover command to return all traces
$xmlaClient.Discover("DISCOVER_TRACES", "", "", [ref] $xmlStringResult, 0, 1, 1)

# Convert the result to XML to make it easier to deal with
[xml]$xmlResult = $xmlStringResult

# Disconnect the session before returning the results
$xmlaClient.Disconnect()

return $xmlResult.return.Root.row

Occasionally we want to work with the XML result set a little bit to verify the results, but usually nothing major.

# Create the trace
$xmlaClient.Execute($createTraceXmla, "", [ref] $xmlStringResult, 0, 1)

# Convert the result to XML to make it easier to deal with
[xml]$xmlResult = $xmlStringResult

$xmlResultException = $xmlResult.return.results.root | ? {$_.Exception -ne $null}

if ($xmlResultException -ne $null)
{
    throw $xmlResultException.Messages.Error.Description
}

The PowerShell is really just a wrapper around XMLA commands… it just makes it easier to use.
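As a concrete example of that pattern, a Delete-SsasTrace style wrapper would simply fire the Delete XMLA shown earlier through the same client; this is a rough sketch of the idea, not the library’s actual function:

# Rough sketch: remove a trace by sending the Delete XMLA through the XmlaClient
$serverName = 'LocalHost'
$deleteTraceXmla = @"
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Object>
      <TraceID>My Trace</TraceID>
    </Object>
  </Delete>
</Batch>
"@

$xmlaClient = new-object Microsoft.AnalysisServices.Xmla.XmlaClient
$xmlaClient.Connect($serverName)

$xmlStringResult = "" # Initialize the variable so that it can be passed by [ref]
$xmlaClient.Execute($deleteTraceXmla, "", [ref] $xmlStringResult, 0, 1)

$xmlaClient.Disconnect()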

Using the SsasTraceLibrary.ps1 in the Dev Environment

I’ve found I use these functions a decent bit as part of my day to day operations.  I have the following command in my Profile.ps1 to load all the script files ending with “Library.ps1” in a given directory… I store command libraries like SsasTraceLibrary.ps1 in this folder, so they’re automatically loaded when PowerShell starts.

$powerShellScriptsDirectory = "c:\PowerShellScripts"
if (!$powerShellScriptsDirectory.EndsWith("\")) { $powerShellScriptsDirectory += "\" }

Write-Host Welcome $Env:Username

foreach($filename in Get-ChildItem $powerShellScriptsDirectory* -Include "*Library.ps1")
{
    & $filename
}

Now, you just have to start PowerShell and run a command like

Get-SsasTraces LocalHost

to return all the traces running on your local machine.

Using the SsasTraceLibrary.ps1 in the Server Environment

I mentioned earlier that I also deploy this script to my various AS instances.  I do this because various people need to work on the machine, and I want an easy (read: single click) way to do things like start/stop/flush the trace on the machine.  This also makes it easy to automate these actions as part of an ETL or job.

I use a batch file with the following commands:

ECHO Setting System Variables
SET DATA_COLLECTION_PATH=[INSTALLDIR]
SET SSAS_TRACE_UNC=\\[OLAPSERVER]\[TRACE_FILE_SHARE_NAME]\[OLAPSERVER]_SsasPerformanceErrorMonitoringTrace.trc
SET SSAS_SERVER=[OLAPSERVER]
SET SSAS_TRACE_FILE_SIZE_MB=100

ECHO Running Commands
REM: Create the Data Collection Trace File
PowerShell -Command "& {&'%DATA_COLLECTION_PATH%\DataCollection\SsasTraceLibrary.ps1'; Start-SsasTrace -ServerName '%SSAS_SERVER%' -TraceID 'Analysis Services Performance and Error Trace' -TraceName 'Analysis Services Performance and Error Trace' -UncFileName '%SSAS_TRACE_UNC%' -FileSizeInMB '%SSAS_TRACE_FILE_SIZE_MB%'}"
ECHO Script Complete!
pause

The parameters encased in ‘[‘ and ‘]’ are replaced with values specific to the target environment whenever the scripts are deployed to a server.  Someone can now just run one of the batch files to Start, Stop, or Flush a trace on the server.  I also typically call the file to Flush the trace file as part of my Processing job, so I can immediately load the results into a database for analysis.
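If you want to automate that token replacement instead of doing it by hand, a few lines of PowerShell are enough; the template and output file names here are just examples:

# Replace the [TOKEN] placeholders in the batch template for a specific environment
$cmd = Get-Content 'StartTrace.template.cmd'
$cmd = $cmd -replace '\[OLAPSERVER\]', 'MyOlapServer'
$cmd = $cmd -replace '\[TRACE_FILE_SHARE_NAME\]', 'TraceFiles'
$cmd = $cmd -replace '\[INSTALLDIR\]', 'D:\DataCollection'
Set-Content 'StartTrace.MyOlapServer.cmd' $cmd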

Performance

So a question that will always come up when running traces like this is the amount of overhead they require.  And of course they require some, both in terms of CPU to log the events and Disk to write them.  I’ve typically seen this to be in the single digits of CPU, and I always write to a location where there isn’t disk contention.  You’ll of course want to test in your environment, but I haven’t seen a performance hit that makes the ROI of running these traces not worth it.  If you’re concerned, you could consider turning them on/off as part of a scheduled job, or just running them on an as needed basis.  Personally, I’ve seen a huge benefit from running them 24/7 as I capture detailed processing information (how long each step takes), query information (who is doing what, how bad, and how often) and error information (some errors that aren’t caught in any other logs are captured via traces).

Next Steps

Take these libraries and modify them to your heart’s content.  I use a template in the scripts that is my standard, but you can replace it, add more, or whatever you want to do.  You could also add a little better error handling if desired.

Conclusion

So, included here are some functions that will help you with some basic functionality around SSAS traces.  Feel free to post back if you have any ideas for improvements or things that would be cool to do.

Cheers,

David

Posted in Uncategorized | Tagged , | Comments Off on Using PowerShell to Manipulate SQL Server Analysis Services Traces

SQL Server Analysis Services Projects with Multiple Developers

A topic that often comes up when discussing enterprise level development with SSAS is how to have multiple developers work on the same project at the same time.  This issue doesn’t come up for many installations… a lot of teams get away with just having a single person working on their OLAP capabilities.  However, for a decent sized implementation, you’re going to want to have more than one person working on the solution at the same time.  I’ll be discussing some of the issues, workarounds, and tools you can use to make concurrent SSAS development easier.


Here’s a link to the source code from later in the post if that’s all you’re looking for.


Background


Analysis Services objects (cubes, dimensions, roles, etc.) are manipulated either programmatically or in Visual Studio (Visual Studio is the normal method). These objects are persisted by serializing them to XML. When you deploy an Analysis Services database, you either connect directly to an instance of Analysis Services, or you save off XMLA scripts/files that can be used to create the database on a remote server.


If you have a single person working on an AS project, you don’t have a problem.  If you’re using source control with exclusive locks (i.e., only one person can edit a file at a given time) you can have multiple people working on the same solution, but not on the same object at the same time.  This is somewhat complicated by the fact that modifying one object (such as a dimension) may require a change in associated objects (such as a cube where it is included).  You’re still fairly limited in the amount of work you can do concurrently.


The way to have multiple developers working concurrently is to use source control with non-exclusive check-outs, so multiple people can work on each file at the same time.  The down side is that you eventually have to merge the copies each person is working on back together.  Since the SSAS files are large, complicated XML documents this isn’t necessarily an easy task.  Most source control systems will attempt to automatically merge non-conflicting changes, but this usually doesn’t work very well with SSAS files (for reasons I’ll go into in just a minute).  There are, however, some things we can do to make the task a bit easier.


Challenges with Merging SSAS Files


When SSAS objects are persisted to XML, they contain structural information about the objects (which is required) as well as environmental and formatting metadata (which can be helpful in Visual Studio, but is not required for the solution to work correctly when deployed). The environment and formatting metadata elements tend to be extremely volatile, and vary for each developer. Stripping the volatile fields from the XML files will make the merge process easier without affecting the cubes and dimensions that are deployed.


Ex. A developer checks out “Adventure Works.cube”, fixes an error in a Description field, then deploys the cube to test. When he checks the file in, he will have to merge a large XML file. He has only changed one line, but large sections of the file will be different from copies checked out to other developers due to metadata capturing the state of Visual Studio and the local AS server.


SSAS Elements that can be Removed


The following represent the environment and formatting metadata elements that are persisted in Analysis Services files. These fields can all be safely stripped from Analysis Services files prior to merging to remove the large number of unimportant conflicts that normally occur.

CreatedTimestamp: When the object was created.

LastSchemaUpdate: When the schema was last pushed to a SSAS DB. Updated when an object is deployed using the development environment.

LastProcessed: When the object was last processed. Updated when an object is processed using the development environment.

State: The state (processed, unprocessed) of the object. Updated based on actions in the development environment.

CurrentStorageMode: The current storage mode of the object. Updated based on actions in the development environment.

Annotations: Annotations and metadata around the positioning of various objects (such as tables) on the canvas. This data is usually updated every time an object is opened. This element does have a user impact: the annotations section is where the layout of DSV objects is stored, and there is value in arranging those objects. However, this is where most conflicts occur, so it is often worth removing this section and losing custom positioning.

design-time-name: A GUID assigned to each object. It is generated when an object is created (either by a user in BIDS or by reverse engineering an existing database).


Programmatically Removing SSAS Elements


I’ve created a PowerShell function ‘Clean-SsasProject’ that will iterate over all writable SSAS objects in a directory and remove the volatile elements by manipulating the XML.  The function will make a copy of every file it modifies.  It is written using PowerShell v2 CTP3, but should be easy to back-port if you need to.  I’ve included a commented-out section that will process the .ASDatabase file as well… this is used for a particular scenario on our team, but I’m including it in case it is handy for anybody.  Use the $WhatIf and $Debug flags to know what the function will do before you do it for real.  This code is geared to the project I’m working on currently, and you may want to modify it to meet your precise needs.
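For anyone who just wants the gist before downloading the source, the core idea is simply “strip the volatile elements out of the XML and save the file.”  A bare-bones sketch of that idea for a single file (not the actual Clean-SsasProject function, which adds $WhatIf/$Debug support, writability checks, and more) looks something like this:

# Bare-bones illustration of the idea behind Clean-SsasProject (not the real function)
$volatileElements = 'CreatedTimestamp', 'LastSchemaUpdate', 'LastProcessed',
                    'State', 'CurrentStorageMode', 'Annotations'

$file = 'C:\Source\MyOlapProject\Adventure Works.cube'    # hypothetical file

Copy-Item $file "$file.bak"                               # keep a copy, like the real function does

[xml]$ssasXml = Get-Content $file
foreach ($elementName in $volatileElements)
{
    # Remove every element with this local name, regardless of namespace
    foreach ($node in @($ssasXml.SelectNodes("//*[local-name()='$elementName']")))
    {
        [void]$node.ParentNode.RemoveChild($node)
    }
}
$ssasXml.Save($file)

The real function also covers design-time-name and works across a whole directory, so use it rather than this sketch for actual cleanup.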


I would recommend creating a backup of your solution before you try this script, just in case.  I’ve been using this for awhile with no ill effects, but you could have a scenario I never dreamed about, so…


***DO THIS AT YOUR OWN RISK.  IT WORKS ON MY MACHINE.  ***


Consider comparing the cleaned XML side by side with the original to make sure this process works for you… it’s worked fine for every project I’ve used it on, but better safe than sorry.


You can download the source here.


Using Clean-SsasProject (for an individual)


I have my environment configured to load all files with the pattern ‘*Library.ps1’ when PowerShell loads via the following script in my ‘Profile.ps1’ file:



$powerShellScriptsDirectory = "c:\PowerShellScripts"
if (!$powerShellScriptsDirectory.EndsWith("\")) { $powerShellScriptsDirectory += "\" }

Write-Host Welcome $Env:Username

foreach($filename in Get-ChildItem $powerShellScriptsDirectory* -Include "*Library.ps1")
{
    & $filename
}

I store the .ps1 file with Clean-SsasProject and the other functions it depends on in my PowerShell scripts directory, so it’s loaded every time the PowerShell environment loads.  You can then just run ‘Clean-SsasProject’ from the PowerShell prompt.  I also have a .Cmd file in my path to automatically clean my normal SSAS project.  It just uses the following commands:



SET SSAS_PROJECT_LOCATION=C:\Source\MyProject

PowerShell -Command "Clean-SsasProject %SSAS_PROJECT_LOCATION%"

Running that command file will strip the volatile fields out of any file in the directory that is writable (i.e., checked-out of my source control system).


Using Clean-SsasProject (for a team)


This tool is designed to work when every team member does the following:



  1. Check-out all files required for a change.  Remember that modifying one object may require that another object be updated, so make sure to check out all objects that could possibly be affected.

  2. Make the change.

  3. Clean the files.

  4. Merge/Resolve conflicts.

  5. Build the project output (if required for your solution… I’ll be posting on how to ease project builds/deployments in a few days)

  6. Check-in all files required for a change.

General Best Practices


There are some other general things you can do to make concurrent development a little bit easier (most of these go for software development in general, not just Analysis Services).  If you’ve attempted to have multiple developers work on a project, you’re probably doing all these things already.  Remember that it is always faster and easier when you don’t have to merge at all.


Do Separate AS Cubes and Databases by Subject Area


Including only related objects in a cube/database is a standard best practice. This approach avoids potential performance issues, increases manageability and maintainability, and improves the presentation and understandability for the end user. This design pattern also lessens the chance that multiple developers will need to be working on the same object at the same time.


Don’t Combine Unrelated Attributes in a Single Dimension


Including unrelated attributes in a single dimension causes problems with performance, maintainability, and general use of the solution. Including unrelated attributes also promotes conflicts by increasing the chance that developers working on unrelated areas will need to work on the same file.


Do Communicate and Schedule Work for Minimum Conflicts


Make sure to communicate with other developers to avoid working on the same objects when possible. If you need to work on the same object, ensure the design changes are compatible and that there is no way to optimize the work.


Major changes that will dramatically affect source merging should be performed with an exclusive lock on the file.


Ex. A developer wants to re-order the 200 calculated members in the calculate script. The developer should wait until everyone else has submitted their changes, then make the change and submit it.


Do Check-out late and Check-in Early


Minimize the time you keep AS files checked out. While it may take some time to develop new functionality for AS (modifying the source database, creating an ETL to load the database from a source system, etc.) the work in AS is typically fairly quick to do if properly designed and prepared for. Complete the design and other development before checking out the Analysis Services files.


Do Use Tools to Help Merge


Use a side-by-side differencing tool to compare and merge different versions of Analysis Services files. A good diffing tool will have features to make this operation significantly easier. Consider using a tool such as Beyond Compare for this task.  You can use this process to verify that Clean-SsasProject works for your solution the first time you use it.


 


Next Steps


Modify the provided source/process to meet your needs and environment.  There is no 100% “right way” to handle development like this… everyone’s situation will be just a little bit different, and require a little bit of customization.  I’m just trying to give you the tools to make it a little bit easier.


Conclusion


That’s all there is.  If you use the tools, techniques, and approach above it should make developing Analysis Services solutions with multiple developers a bit easier for you.  You’ll still have some of the headaches normally associated with this type of work, but hopefully you’ll have an easier time of it.


Cheers,


David

Posted in Uncategorized | Comments Off on SQL Server Analysis Services Projects with Multiple Developers

Creating Multiple Rows in a Text File from a Single Row

A recent post on the SSIS forums was asking about creating multiple rows in a text output file from a single row in the data flow. Given a set of rows like this:

John Smith 1/1/1900 Value A
Jane Smith 12/1/1900 Value B

the poster wanted this output in the text file:

John Smith  
1/1/1900 Value A
Jane Smith  
12/1/1900 Value B

Basically, the poster wanted a line break in the middle of a row of data, while keeping a line break at the end.

There are a couple of ways to accomplish this in SSIS. One way is the use of a script task to create the file, which gives you complete control over the format of the file. There’s also a couple of ways to do it directly in SSIS. The first way is to use a Multicast transform to create two copies of each row, perform some string concatenation, and then combine them using a Union All or a Merge.


The Derived Column transforms are used to put the multiple columns into a single column, so that a variable length record can be written to the flat file. The Sort transforms and the Merge combine the rows into the proper order before sending them to a flat file.

The other option (and one that probably falls under the category of stupid SSIS tricks) is to hack the flat file connection manager a little bit. You can set the column delimiters so that a carriage return/linefeed is inserted in the middle of the row. However, this isn’t as simple as just choosing {CR}{LF} as the column delimiter. SSIS checks to make sure that none of the column delimiters are the same as the row delimiter. Why it does that check, I don’t know, given the way it parses flat files. Regardless, you have to work around it. So, you can simply select the column where you want to introduce the break, and set its delimiter to {CR}.


Then insert a new column immediately following that column, set the output width to 0, and set the column delimiter to {LF}.


Now the output will include a carriage return / linefeed between the columns.

The sample package for this is located here. It is SSIS 2008, but the concepts are the same for 2005.

Posted in Uncategorized | Comments Off on Creating Multiple Rows in a Text File from a Single Row

SQL Database Tuning Advisor Output Renamer

I’ve uploaded the ‘Database Tuning Advisor Output Renamer’ at http://DtaOutputRenamer.codeplex.com/.


OK… so Friday marked the first day I’ve ever gotten sunburned while coding.  I had a little bit of free time while at an outdoor event, and whipped up a little utility to help apply standards to DTA recommendations.


I use the SQL Database Tuning Advisor (DTA) a lot to generate basic recommendations for indexes and statistics based on a workload.  In my team, we store all index and statistics creation scripts in .SQL files, which are then run as part of our deployments.  We use a standard naming convention for each of the objects to enhance the maintainability.


Last week I ran the DTA against a workload I generated based on running reports on some new schema… not surprisingly, quite a few recommendations were generated.  It occurred to me my time could be better spent doing something besides renaming 50 database objects based on their definitions.  I decided to write a small application to help change the default names (such as ‘_dta_index_SsasProcessingRunArchive_c_7_1677965054__K10’) to something a little more user friendly (like ‘IX_dbo_SsasProcessingRunArchive_ObjectType_EventClass_SessionID_I_StartTime’).  You’ll want to modify the application to match your local coding standards, but it should be pretty straightforward to do.


This application only handles a few cases, but does cover Clustered/Non-Clustered Indexes (with and without INCLUDE columns) and Statistics.  It should be easy to extend it if you need to.  This app is just something I whipped up in an hour or two, so it isn’t the most robust thing ever created.


I created the following regex (remove the line breaks… I just used those for presentation) to capture the index name, table name, and column/include lists for the indexes:



   1: CREATEs(?<NonClustered>NON)?CLUSTEREDsINDEXs[(?<IndexName>.*?)].*?ONs
   2: [(?<Schema>.*?)].[(?<Table>.*?)].*?((?<ColumnList>.*?))s*?(?:INCLUDEs
   3: ((?<IncludeList>.*?)))??s*?(?:WITHs*?(.*?)s*?ONs[.*?])

I created the following regex (remove the line breaks… I just used those for presentation) to capture the statistics name, table name, and column list for the statistics:



CREATE\sSTATISTICS\s\[(?<StatisticsName>.*?)\].*?ON\s\[(?<Schema>.*?)\]
.\[(?<Table>.*?)\].*?\((?<ColumnList>.*?)\)

I then just update the object names with a new name created based on the column lists and such.  I also threw in functionality to strip ‘go’ statements from the input.
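To illustrate the renaming step, here’s a rough PowerShell equivalent of what the utility does, using a simplified version of the index regex above; the input file name and the exact naming convention are just examples, so adapt them to your own standards:

# Illustration only: apply a simplified version of the index regex and build a friendlier name
$indexRegex = 'CREATE\s(?<NonClustered>NON)?CLUSTERED\sINDEX\s\[(?<IndexName>.*?)\].*?ON\s\[(?<Schema>.*?)\].\[(?<Table>.*?)\].*?\((?<ColumnList>.*?)\)\s*?(?:INCLUDE\s\((?<IncludeList>.*?)\))?'

$script = [System.IO.File]::ReadAllText('C:\Temp\DtaRecommendations.sql')    # hypothetical input file

foreach ($match in [regex]::Matches($script, $indexRegex, 'Singleline'))
{
    # Turn "[Col1] asc, [Col2] asc" into "Col1_Col2"
    $columns = ($match.Groups['ColumnList'].Value -split ',' |
                ForEach-Object { ($_ -replace '[\[\]]', '' -replace '\s+(asc|desc)', '').Trim() }) -join '_'

    $newName = 'IX_{0}_{1}_{2}' -f $match.Groups['Schema'].Value, $match.Groups['Table'].Value, $columns
    if ($match.Groups['IncludeList'].Success)
    {
        $includes = ($match.Groups['IncludeList'].Value -split ',' |
                     ForEach-Object { ($_ -replace '[\[\]]', '').Trim() }) -join '_'
        $newName = $newName + '_I_' + $includes
    }

    # Swap the generated _dta_ name for the friendlier one
    $script = $script.Replace($match.Groups['IndexName'].Value, $newName)
}

$script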


Enjoy…

Posted in Uncategorized | Comments Off on SQL Database Tuning Advisor Output Renamer

Setting “Work Offline” without Opening the Project

If you’ve done much work with SSIS, you’re probably aware that on opening a package in BIDS, SSIS validates all the objects in the package. This can cause the packages to open very slowly, particularly if it has connections to a database that is currently unavailable.


I recently needed to upgrade several older packages to SSIS 2008. Unfortunately, these packages were referencing a database that I no longer have access to. On top of that, there are a number of data flows in each package, all of which use this non-existent database. Opening a package in BIDS was taking more than 10 minutes, which is about 9 minutes and 55 seconds past the point where my patience runs out. Normally, this wouldn’t be that big of a deal. Once the project was opened, I would just set the Work Offline option (located under the SSIS menu in BIDS), which prevents the validation from running.




However, each package was in its own project (for reasons I won’t go into in this post, but primarily failure to plan ahead), so I was looking at a very slow and painful process to upgrade these packages.


Fortunately, there is a way to enable the Work Offline option prior to actually opening the project. Locate the BIDS *.user file associated with the project. For SSIS, this file should be located in the same folder as the project (.dtproj) file, and will have a filename like “<project name>.dtproj.user”. Open this file in Notepad, and you should see something like the following (it’s got a few additional tags in 2008, but the general format is the same):

<?xml version=“1.0” encoding=“utf-8”?>
<DataTransformationsUserConfiguration xmlns:xsi=“http://www.w3.org/2001/XMLSchema-instance” xmlns:xsd=“http://www.w3.org/2001/XMLSchema”>
<Configurations>
<Configuration>
<Name>Development</Name>
<Options>
<UserIDs />
<UserPasswords />
<OfflineMode>false</OfflineMode>
</Options>
</Configuration>
</Configurations>
</DataTransformationsUserConfiguration>

Locate the <OfflineMode> tag (in red above) and change the value from false to true. Now, when the project is opened, it will already be in Offline mode, so you won’t have to suffer through a lengthy validation process.

Posted in Uncategorized | Comments Off on Setting “Work Offline” without Opening the Project

SSWUG Free Community Event

There is a free community event from SSWUG coming up on April 17th. For people interested in Analysis Services (and who isn’t?), you’ll be able to see a webcast from Donald Farmer on developing high performance cubes. If you haven’t heard Donald speak before, I highly recommend signing up. If you have heard him, you’ve probably already clicked the link to register.

If you are interested in attending the Business Intelligence vConference, but haven’t made up your mind yet, you can now preview some of the sessions. A preview of my session on automating SSIS is available, but please don’t let that scare you off. There is going to be a lot of good content. You can still use the code SPVJWESP09 to get $10 off the registration.

Posted in Uncategorized | Comments Off on SSWUG Free Community Event

PowerShell Script to reset the local instance of SQL Server

I use virtual machines a lot for development and testing.  I typically start with a sysprepped base image that I then initialize every time I need a new machine.  One issue is that SQL Server doesn’t know it has been sysprepped… if you execute



SELECT @@SERVERNAME

 


You will get the name of the machine from when you installed SQL Server.


I use the following PowerShell script to reset the name of the local instance to the current name of the machine:

# Load Assemblies we need to access SMO
$asm = [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo")
$asm = [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$asm = [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum")
$asm = [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlEnum")
$asm = [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.WmiEnum")
$asm = [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement")
###############################################################################
# Description:
#   Change the SQL Server instance name (stored inside SQL Server) to the name
#   of the machine. When a machine is unboxed after being sysprepped, it will
#   still use the original SQL Server name as the instance name for SQL Server.
#
# Input:
#
# Output:
#
# Author: DDarden
# Date  : 200904030748
#
# Change History
# Date     Author         Description
# -------- -------------- -------------------------------------------------
#
###############################################################################
function global:Set-SqlServerInstanceName{
    Write "Renaming SQL Server Instance"
    $smo = 'Microsoft.SqlServer.Management.Smo.'

    $server = new-object ($smo + 'Server') .
    $database = $server.Databases["master"]
    $mc = new-object ($smo + 'WMI.ManagedComputer') .

    $newServerName = $mc.Name

    $database.ExecuteNonQuery("EXEC sp_dropserver @@SERVERNAME")
    $database.ExecuteNonQuery("EXEC sp_addserver '$newServerName', 'local'")

    Write-Host "Renamed server to '$newServerName'`n"
}

# Set the SQL Server instance name to the current machine name
# MSSQLSERVER service needs to be restarted after this change
Set-SqlServerInstanceName
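As the comment above notes, the rename doesn’t take effect until the SQL Server service restarts; for a default instance that’s just:

# Restart the default instance so the new name takes effect
# (-Force also restarts dependent services such as SQL Server Agent)
Restart-Service MSSQLSERVER -Force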

Posted in Uncategorized | Comments Off on PowerShell Script to reset the local instance of SQL Server