I recently started using SSAS Server Traces a lot with SQL Server Analysis Services. This type of trace is basically the same trace you can create with SQL Server Profiler, but it runs without Profiler, uses fewer resources, and can be persisted across reboots. They’re a really handy tool.
I started using these when I built some AS monitoring tools based on the “Solution for Collecting Analysis Services Performance Data for Performance Analysis” sample on CodePlex. Seriously, totally revolutionized my life (at least the part related to administering complex AS installations and projects). After installing, adapting, and enhancing the functionality there I found I wanted more and easier ways to control AS traces, so I built some PowerShell functions to help manage them. These functions basically just wrap XMLA commands to make them easier to use.
Here are some sample files I’ll be talking about in this post: Download Sample Files
Don’t worry about copying the sample code out of the post… it’s all included as part of the sample.
Creating SSAS Traces
You use an XMLA command to create a trace. A trace looks something like this:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <ObjectDefinition>
      <Trace>
        <ID>My Trace</ID>
        <Name>My Trace</Name>
        <Events>
          <Event>
            <EventID>15</EventID>
            <Columns>
              <ColumnID>28</ColumnID>
              <!-- ... More Columns ... -->
              <ColumnID>3</ColumnID>
            </Columns>
          </Event>
          <Event>
            <EventID>16</EventID>
            <Columns>
              <ColumnID>24</ColumnID>
              <!-- ... More Columns ... -->
              <ColumnID>36</ColumnID>
            </Columns>
          </Event>
          <!-- ... More Events ... -->
        </Events>
        <Filter>
          <NotLike>
            <ColumnID>37</ColumnID>
            <Value>Application I don't care about events from</Value>
          </NotLike>
        </Filter>
      </Trace>
    </ObjectDefinition>
  </Create>
</Batch>
Not the most fun to create by hand, but you could make it happen. However, there is an easier way to come up with the CREATE statement for your trace. Just do the following:
- Start up a SQL Server Profiler session and monitor the AS instance you’re working on. You only need to capture the Command Begin event.
- Start up a 2nd instance of SQL Server Profiler. Use the GUI to create the trace you’re actually interested in, with all the events, columns, and filters, then start the trace.
- Grab the CREATE XMLA from the 1st Profiler session and save it off.
Now you have XMLA you can use as the base for the trace you want. You’ll want to add a few more elements to the XMLA to make the server trace work though. It will look something like this:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <ObjectDefinition>
      <Trace>
        <ID>My Trace</ID>
        <Name>My Trace</Name>
        <LogFileName>\\MyServer\TraceFiles\MyTrace.trc</LogFileName>
        <LogFileAppend>0</LogFileAppend>
        <AutoRestart>1</AutoRestart>
        <LogFileSize>100</LogFileSize>
        <LogFileRollover>1</LogFileRollover>
        <Events>
          <!-- ... The rest of the Create statement you just generated ... -->
There are just a few extra fields there. Here’s what they’re used for:
| Element | Description |
| --- | --- |
| LogFileName | Name of the log file. Must end in .trc, and the AS service account must have permission to write to the directory. |
| LogFileAppend | 0 to overwrite, 1 to append. |
| AutoRestart | 0 for no, 1 to restart the trace when the server restarts. |
| LogFileSize | Size in MB. The log will roll over when it reaches this size. |
| LogFileRollover | 1 means create a new log file (it appends 1, 2, 3, etc. for each new log) when you reach the LogFileSize. |
Deleting SSAS Traces
So, we’ve created a trace that auto restarts. How do you get rid of it?
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <Object>
      <TraceID>My Trace</TraceID>
    </Object>
  </Delete>
</Batch>
SSAS Trace PowerShell Library
I found that I wanted a convenient way to see which traces are running on a server, and to create, delete, and flush them (i.e., close out the current trace file and create a new one, so you can process or otherwise work with events that were just logged). I’ve included two versions of my library in this sample. The first (SsasTraceLibrary.ps1) runs with PowerShell V1. The second (SsasTraceV2Library.ps1) is basically identical, but uses the function-header and parameter functionality from PowerShell V2 CTP3. I keep the V1 version around to deploy to servers (more on this later), but load the V2 version in my environment to take advantage of the examples, help, and all of the other V2 goodness. I’d encourage you to go with the V2 version, as it includes easy-to-use descriptions, examples, and better parameter help.
I created the following functions as part of this library:
| Function | Description |
| --- | --- |
| Get-SsasTrace | Get details of a specific trace |
| Get-SsasTraceExists | Check if a specific trace exists |
| Get-SsasTraces | Get all traces running on a server |
| Start-SsasTrace | Start a new trace based on a stored template |
| Delete-SsasTrace | Delete an existing trace |
| Flush-SsasTrace | Stop/Restart an existing trace |
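To give a feel for how these fit together, here’s a quick sketch of a session using the library. `Get-SsasTraces` and the `Start-SsasTrace` parameters match how they’re used later in this post; the parameter names for `Flush-SsasTrace` and `Delete-SsasTrace` are my assumption and may differ in your copy of the library:

```powershell
# Dot-source the library so its functions load into the current session
. .\SsasTraceV2Library.ps1

# List all traces currently running on the local AS instance
Get-SsasTraces LocalHost

# Start a trace from the stored template (parameters as used later in this post)
Start-SsasTrace -ServerName 'LocalHost' -TraceID 'My Trace' -TraceName 'My Trace' `
    -UncFileName '\\MyServer\TraceFiles\MyTrace.trc' -FileSizeInMB 100

# Close out the current trace file so the logged events can be processed,
# then remove the trace entirely (parameter names assumed, not confirmed)
Flush-SsasTrace -ServerName 'LocalHost' -TraceID 'My Trace'
Delete-SsasTrace -ServerName 'LocalHost' -TraceID 'My Trace'
```

Everything here runs against a live Analysis Services instance, so adjust the server name and UNC path to your environment before trying it.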
A Sample Function
Most of the functions in this library require the SSAS assemblies:
# Load required SSAS assemblies
$asm = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")
$asm = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.Xmla")
You typically follow the pattern of connecting to a server, executing an XMLA command, outputting the results, and disconnecting from the server.
# Connect to the server
$xmlaClient = New-Object Microsoft.AnalysisServices.Xmla.XmlaClient
$xmlaClient.Connect($serverName)

# Initialize the variable so that it can be passed by [ref]
$xmlStringResult = ""

# Fire off the discover command to return all traces
$xmlaClient.Discover("DISCOVER_TRACES", "", "", [ref] $xmlStringResult, 0, 1, 1)

# Convert the result to XML to make it easier to deal with
[xml]$xmlResult = $xmlStringResult

# Disconnect the session before returning (a return above this line would skip it)
$xmlaClient.Disconnect()

return $xmlResult.return.Root.row
Occasionally we want to work with the XML result set a little bit to verify the results, but usually nothing major.
# Create the trace
$xmlaClient.Execute($createTraceXmla, "", [ref] $xmlStringResult, 0, 1)

# Convert the result to XML to make it easier to deal with
[xml]$xmlResult = $xmlStringResult

# Look for any Exception nodes in the result and surface them as errors
$xmlResultException = $xmlResult.return.results.root | ? { $_.Exception -ne $null }

if ($xmlResultException -ne $null)
{
    throw $xmlResultException.Messages.Error.Description
}
The PowerShell is really just a wrapper around XMLA commands… it just makes them easier to use.
Using the SsasTraceLibrary.ps1 in the Dev Environment
I’ve found I use these functions a fair bit as part of my day-to-day operations. I have the following command in my Profile.ps1 to load all the script files ending with “Library.ps1” in a given directory. I store command libraries like SsasTraceLibrary.ps1 in this folder, so they’re automatically loaded when PowerShell starts.
$powerShellScriptsDirectory = "c:\PowerShellScripts"
if (!$powerShellScriptsDirectory.EndsWith("\")) { $powerShellScriptsDirectory += "\" }

Write-Host Welcome $Env:Username

# Dot-source each library so its functions are available in the current session
foreach ($filename in Get-ChildItem $powerShellScriptsDirectory* -Include "*Library.ps1")
{
    . $filename
}
Now, you just have to start PowerShell and run a command like
Get-SsasTraces LocalHost
to return all the traces running on your local machine.
Using the SsasTraceLibrary.ps1 in the Server Environment
I mentioned earlier that I also deploy this script to my various AS instances. I do this because various people need to work on the machine, and I want an easy (read: single click) way to do things like start/stop/flush the trace on the machine. This also makes it easy to automate these actions as part of an ETL or job.
I use a batch file with the following commands:
ECHO Setting System Variables
SET DATA_COLLECTION_PATH=[INSTALLDIR]
SET SSAS_TRACE_UNC=\\[OLAPSERVER]\[TRACE_FILE_SHARE_NAME]\[OLAPSERVER]_SsasPerformanceErrorMonitoringTrace.trc
SET SSAS_SERVER=[OLAPSERVER]
SET SSAS_TRACE_FILE_SIZE_MB=100

ECHO Running Commands
REM Create the Data Collection Trace File
PowerShell -Command "& {. '%DATA_COLLECTION_PATH%\DataCollection\SsasTraceLibrary.ps1'; Start-SsasTrace -ServerName '%SSAS_SERVER%' -TraceID 'Analysis Services Performance and Error Trace' -TraceName 'Analysis Services Performance and Error Trace' -UncFileName '%SSAS_TRACE_UNC%' -FileSizeInMB '%SSAS_TRACE_FILE_SIZE_MB%'}"
ECHO Script Complete!
pause
The parameters encased in ‘[‘ and ‘]’ are replaced whenever the scripts are deployed to a server with values specific to that environment. Someone can now just run one of the batch files to start, stop, or flush a trace on the server. I also typically call the flush batch file as part of my processing job, so I can immediately load the results into a database for analysis.
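For the processing-job scenario, the flush batch file might look something like the sketch below. This is my illustration, not the shipped script: the library path mirrors the Start batch file above, and `Flush-SsasTrace`’s parameter names are assumed to match `Start-SsasTrace`’s:

```shell
ECHO Flushing the Data Collection Trace File
SET SSAS_SERVER=[OLAPSERVER]

REM Close out the current trace file and start a new one, so the just-logged
REM events can be loaded into a database (parameter names are assumptions)
PowerShell -Command "& {. '%DATA_COLLECTION_PATH%\DataCollection\SsasTraceLibrary.ps1'; Flush-SsasTrace -ServerName '%SSAS_SERVER%' -TraceID 'Analysis Services Performance and Error Trace'}"
ECHO Flush Complete!
```

Calling this at the start of the processing job means the trace file being loaded is complete and no longer locked by the server.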
Performance
A question that always comes up when running traces like this is how much overhead they add. They do require some, both CPU to log the events and disk I/O to write them. I’ve typically seen the CPU cost in the single digits (percent), and I always write to a location where there isn’t disk contention. You’ll of course want to test in your environment, but I haven’t seen a performance hit that outweighs the ROI of running these traces. If you’re concerned, you could consider turning them on/off as part of a scheduled job, or just running them on an as-needed basis. Personally, I’ve seen a huge benefit from running them 24/7, as I capture detailed processing information (how long each step takes), query information (who is doing what, how badly, and how often), and error information (some errors that aren’t caught in any other logs are captured via traces).
Next Steps
Take these libraries and modify them to your heart’s content. I use a template in the scripts that is my standard, but you can replace it, add more, or do whatever you want. You could also add somewhat better error handling if desired.
Conclusion
So, included here are some functions that will help you with basic SSAS trace functionality. Feel free to post back if you have any ideas for improvements or things that would be cool to do.
Cheers,
David