Melinda Cole – Agile BI
http://agilebi.com
A community for sharing ideas about business intelligence development using agile methods.

Cannot Create a BI Semantic Model Connection to Tabular Cube
http://agilebi.com/blog/2012/11/03/cannot-create-a-bi-semantic-model-connection-to-tabular-cube/
Sat, 03 Nov 2012

Here's the scenario:

Within the PowerPivot Gallery in SharePoint 2010 you create a new “BI Semantic Model Connection”.  In the “New BI Semantic Model” page you specify the name of the connection, the Workbook URL, and Database.  When you click OK the following error is displayed along with a checkbox to “Save the link file without connection validation”.  If you tick that checkbox and click OK then the BISM connection is created and works fine.

There were errors found while validating the page:

Cannot connect to the server or database.

The documentation from Microsoft does a really good job of explaining what is going on and what to do:

http://msdn.microsoft.com/en-us/library/hh230972.aspx#bkmk_ssas

Here’s the text:

Grant Analysis Services Administrative Permissions to Shared Service Applications


Connections that originate from SharePoint to a tabular model database on an Analysis Services server are sometimes made by a shared service on behalf of the user requesting the data. The service making the request might be a PowerPivot service application, a Reporting Services service application, or a PerformancePoint service application. In order for the connection to succeed, the service must have administrative permissions on the Analysis Services server. In Analysis Services, only an administrator is allowed to make an impersonated connection on behalf of another user.

Administrative permissions are necessary when the connection is used under these conditions:

  • When verifying the connection information during the configuration of a BI semantic model connection file.
  • When starting a Power View report using a BI semantic model connection.
  • When populating a PerformancePoint web part using a BI semantic model connection.

To ensure these behaviors perform as expected, grant to each service identity administrative permissions on the Analysis Services instance. Use the following instructions to grant the necessary permission.

Add service identities to the Server Administrator role

  1. In SQL Server Management Studio, connect to the Analysis Services instance.
  2. Right-click the server name and select Properties.
  3. Click Security, and then click Add. Enter the Windows user account that is used to run the service application.

    You can use Central Administration to determine the identity. In the Security section, open the Configure service accounts to view which Windows account is associated with the service application pool used for each application, then follow the instructions provided in this topic to grant the account administrative permissions.

Go ahead and follow the directions to check Central Administration to determine the identity of the SharePoint service account; just make sure to select the correct application on the Credential Management page.  In this case, "Service Application Pool – SharePoint Web Services System" should be the correct application.

Make note of the service account and add it to the Analysis Services server administrator role.

Also, make sure you’re adding the service account as a server admin, not a database admin.

If that doesn’t work it could be that you didn’t add the right service account.  A good way to find out exactly what service account is being used, without having to fumble around with Central Administration, is to use the SQL Server Profiler.  Start a new Profiler session on the Analysis Services server.  While the Profiler is running, attempt to create another BISM connection.  This is the result:

Look for the "Error" event class.  The service account listed under NTUserName is the account that needs to be added as a server admin for Analysis Services.

 

BISM Connection to Tabular Cube and Data Link Properties
http://agilebi.com/blog/2012/11/03/bism-connection-to-tabular-cube-and-data-link-properties/
Sat, 03 Nov 2012

Here's the scenario:

You have a PowerPivot Gallery within SharePoint 2010.  Within the gallery is a BISM connection that points to a Tabular cube.  You attempt to use the BISM connection to open a new Excel workbook by clicking on the Excel icon.

After Excel opens you get a Data Link Properties dialog box.

Trying to modify the Data Link Properties in any way is futile.  Entering a value for "Location:" won't do anything, Analysis Services only runs with Windows NT Integrated security (so entering a specific username and password is useless), and clicking the down arrow to select the initial catalog will yield a Data Link Error…

The following system error occurred: The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail.

And once you click OK you get another lovely error…

Login failed. Catalog information cannot be retrieved.

Chances are you need to install the "Microsoft Analysis Services OLE DB Provider for Microsoft SQL Server 2012", which is located here:

http://www.microsoft.com/en-us/download/details.aspx?id=29065

Microsoft® Analysis Services OLE DB Provider for Microsoft® SQL Server® 2012

      The Analysis Services OLE DB Provider is a COM component that software developers can use to create client-side applications that browse metadata and query data stored in Microsoft SQL Server 2012 Analysis Services. This provider implements both the OLE DB specification and the specification’s extensions for online analytical processing (OLAP) and data mining.
How to Open Dashboard Designer in SharePoint 2010
http://agilebi.com/blog/2012/11/03/how-to-open-dashboard-designer-in-sharepoint-2010/
Sat, 03 Nov 2012

So you've got a new SharePoint 2010 server configured and now you want to create a PerformancePoint dashboard.  The first thing you need to do is create a new "Business Intelligence Center" site.  You can find it under the Data category in the Create window.

Once the site is created you’ll see a page that looks like this:

Ok.  Now we’re ready for business.  In order to create a PerformancePoint dashboard you’ll need to open Dashboard Designer and there are several ways to do this.

Method #1

If this is your very first time opening Dashboard Designer you'll need to open it from the Business Intelligence Center.  On the right-hand side of the page you'll see three orange headings: Monitor Key Performance, Build and Share Reports, and Create Dashboards.  Hover over the first heading, Monitor Key Performance, and click on the third link in the center of the page called "Start using PerformancePoint Services".  The next page that opens will have a large button called "Run Dashboard Designer".  Click it.  You may be prompted to run an application; don't be alarmed, this is actually the install for Dashboard Designer.  It takes just a minute or so to complete, and once finished Dashboard Designer will open.  Of course it goes without saying that the install for Dashboard Designer only happens the very first time you click the "Run Dashboard Designer" button.  After that, any time you click the button Dashboard Designer will open right up.

Method #2

Now that Dashboard Designer has been installed you can open it from the Start menu.  Click on All Programs, expand the SharePoint folder, and click on PerformancePoint Dashboard Designer.  Make sure you expand the “SharePoint” folder and not the “Microsoft SharePoint 2010 Products” folder. 

Method #3

Once you have some PerformancePoint objects deployed to SharePoint you can open Dashboard Designer from the PerformancePoint Content list.  By default you can access this list from the left-hand nav bar, but if it's not there you can get to it by clicking "All Site Content".  To open Dashboard Designer from the PerformancePoint Content list, click the down arrow for one of the items and select "Edit in Dashboard Designer".

Method #4

The final, and in my opinion, best way to open Dashboard Designer is by using a URL.  Not a URL to the “Run Dashboard Designer” page from Method #1, but a URL that opens Dashboard Designer directly.  To do this you’re going to have to do a little sleuthing.  First thing to do is navigate to the “Run Dashboard Designer” page from Method #1.  Assuming you’re using Internet Explorer, make sure the Menu bar is displayed in your browser.  Click on View and select “Source”.  This will open up the HTML source code behind the webpage.  Since we know that clicking the button automagically opens Dashboard Designer, do a search in the HTML source code for the text on the button, “Run Dashboard Designer”.  Notice the OnClick method of the button is calling a Javascript function called “OpenDD”. 

Now search for the OpenDD function name, it will probably be defined towards the top of the document.

Once it has been found, copy the function code, paste it into a Management Studio query window, and transform it into working SQL code.  You don't have to do this, but I recommend it because the final expression is a bit tricky to do in your head.  At least, it's tricky for me to do in my head…

Here’s the actual SQL code…

DECLARE @designerRedirect AS VARCHAR(MAX)
SET @designerRedirect = '_layouts/ppswebparts/designerredirect.aspx'

DECLARE @siteCollection AS VARCHAR(MAX)
SET @siteCollection = '/sites/BI/'

DECLARE @siteLocation AS VARCHAR(MAX)
SET @siteLocation = '/sites/BI/Finance/' -- note the trailing slash, without it the generated URL is missing the "/" before "_layouts"

DECLARE @siteCollectionUrl AS VARCHAR(MAX)
SET @siteCollectionUrl = 'http://SQL2012Dev' + @siteCollection --= 'location.protocol + "//" + location.host' + @siteCollection

DECLARE @siteLocationUrl AS VARCHAR(MAX)
SET @siteLocationUrl = REPLACE(@siteLocation, @siteCollection, '') --=siteLocation.replace(siteCollection,"");

DECLARE @URL AS VARCHAR(MAX)
SET @URL = 'http://SQL2012Dev' + @siteLocation + @designerRedirect + '?SiteCollection=' + @siteCollectionUrl + '&SiteLocation=' + @siteLocationUrl

SELECT @URL

Make sure the values reflect your environment.  When you run the query you should end up with something like this:

http://SQL2012Dev/sites/BI/Finance/_layouts/ppswebparts/designerredirect.aspx?SiteCollection=http://SQL2012Dev/sites/BI/&SiteLocation=Finance/

To test that it works, copy the URL into your browser and verify that Dashboard Designer opens with your content.

YTD Calculations
http://agilebi.com/blog/2012/01/14/how-to-generate-dates-for-ytd-calculations/
Sat, 14 Jan 2012

Whether you're creating a report, chart, KPI, or calculated measure, at some point you'll probably need to add year-to-date calculations.  It doesn't matter if you're using SQL or MDX, the technique to generate the From and To dates for a YTD calculation remains the same.

The general concept goes like this:

Step 1: Get the current year and prefix it with “01/01/”. This is the From date.
Step 2: Get today’s date. This is the To date.

It seems simple enough but if your ETL process runs daily and pulls "yesterday's" data then you'll need to account for a 1-day lag on the To date.  For example, if today is June 15th, 2011 then your warehouse will only contain data up to June 14th.  Using the logic described above, the YTD calculation will select data from "01/01/2011" to "06/15/2011", even though data for June 15th hasn't yet been loaded into the warehouse.  Although not entirely accurate, it won't hurt anything; the YTD calculation just won't return any data for the 15th.

Now let's say today is December 31st, 2011.  On this day the warehouse will only contain data up to December 30th and the YTD calculation will select data from "01/01/2011" to "12/31/2011".  Are you beginning to see the problem?  Here's one last example to drive the point home: let's advance one day to January 1st, 2012.  On this day the warehouse will only contain data up to December 31st, 2011 but the YTD calculation will select data from "01/01/2012" to "01/01/2012".  In this case the YTD calculation will return an empty resultset and the business users will never get a full 365-day view of the data.

Ok, so we know we have to update the YTD calculation to handle the one-day lag in the ETL.  The logic now looks like this:

Step 1: Get the current year of yesterday and prefix it with “01/01/”. This is the From date.
Step 2: Get yesterday’s date. This is the To date.
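
In T-SQL, that logic boils down to something like the following (a minimal sketch; it assumes the warehouse is loaded through yesterday, US-style "MM/DD/YYYY" date literals, and SQL Server 2008 or later for the DATE type):

DECLARE @ToDate AS DATE
SET @ToDate = DATEADD(DAY, -1, CAST(GETDATE() AS DATE)) -- yesterday, to account for the 1-day ETL lag

DECLARE @FromDate AS DATE
SET @FromDate = CAST('01/01/' + CAST(YEAR(@ToDate) AS VARCHAR(4)) AS DATE) -- January 1st of yesterday's year

SELECT @FromDate AS FromDate, @ToDate AS ToDate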

To summarize, here’s what the YTD calculation will select using the original logic:

  • June 15th, 2011: “01/01/2011” through “06/15/2011”
  • December 31st, 2011: “01/01/2011” through “12/31/2011”
  • January 1st, 2012: “01/01/2012” through “01/01/2012”

And here’s what the YTD calculation will select using the new logic:

  • June 15th, 2011: “01/01/2011” through “06/14/2011”
  • December 31st, 2011: “01/01/2011” through “12/30/2011”
  • January 1st, 2012: “01/01/2011” through “12/31/2011”

As you can see, given the 1-day ETL lag, the new YTD calculation will always select the correct range of data, even on the first of the year.

But let's take it a step further.  I propose working with the "last completed month" or even the "last completed week".  This way the calculation always covers whole periods and is handled more gracefully.  In other words, the YTD calculation is based on January through the last completed month or week.  Keep in mind that we still have to account for the 1-day ETL lag.

This is what would happen if we decided to operate on the last completed month:

  • June 15th, 2011: “01/01/2011” through “05/31/2011”
  • December 31st, 2011: “01/01/2011” through “11/30/2011”
  • January 1st, 2012: “01/01/2011” through “12/31/2011”

In order to achieve this behavior we must implement the following logic:

Step 1: Get the current year of yesterday and prefix it with "01/01/". This is the From date.
Step 2: Get the month and year of today's date, build the first day of the current month ("MM/01/YYYY"), and subtract one day. This is the To date, the last day of the most recently completed month.  (Using today's month rather than yesterday's handles the case where yesterday was itself the last day of a month, such as on January 1st.)
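
Again, a rough T-SQL sketch of the month-based logic (same assumptions as above: data loaded through yesterday, US-style date literals, SQL Server 2008 or later):

DECLARE @Yesterday AS DATE
SET @Yesterday = DATEADD(DAY, -1, CAST(GETDATE() AS DATE)) -- yesterday, to account for the 1-day ETL lag

DECLARE @FromDate AS DATE
SET @FromDate = CAST('01/01/' + CAST(YEAR(@Yesterday) AS VARCHAR(4)) AS DATE) -- January 1st of yesterday's year

DECLARE @ToDate AS DATE
SET @ToDate = DATEADD(DAY, -1, CAST(CAST(MONTH(GETDATE()) AS VARCHAR(2)) + '/01/' + CAST(YEAR(GETDATE()) AS VARCHAR(4)) AS DATE)) -- first of the current month minus one day = last day of the last completed month

SELECT @FromDate AS FromDate, @ToDate AS ToDate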

The benefit of this approach is that business users have up to a month to analyze data produced by year-to-date calculations.  However, it's a double-edged sword because business users also have to wait up to a month for new data to be included.  Admittedly this approach may be a better choice for "rolling" calculations, but this will depend on what works best for the business.  In some cases using the last completed week may be preferred over using the last completed month since users still have time to analyze data but don't have to wait long for new data to be included in the calculation.

When To Use NVARCHAR
http://agilebi.com/blog/2011/09/09/when-to-use-nvarchar/
Thu, 08 Sep 2011

The Short Answer

Only when you need to store multi-lingual (Unicode) data.

The Nitty Gritty

The fundamental unit of storage in a SQL Server database is an 8 KB page.  To paraphrase the MSDN, SQL Server reads and writes whole data pages, and as a result the maximum amount of data that can be contained within a single row on a page is 8,060 bytes.

The page size in turn limits the maximum size of VARCHAR, a variable-length non-Unicode character datatype, to 8,000 bytes.  Take note!  The operative word here is non-Unicode.  A non-Unicode character is stored in a single-byte code page (determined by the column's collation) and requires one byte of storage per character, meaning that the VARCHAR datatype may contain at most 8,000 characters.

In contrast, NVARCHAR is a variable-length Unicode datatype.  Unlike non-Unicode characters, Unicode characters are stored as UTF-16 (16 bits) and require two bytes of storage per character.  Because of the SQL Server page size, NVARCHAR datatypes have the same length restriction as their VARCHAR cousins, 8,000 bytes.  This means that an NVARCHAR datatype may contain, at most, 4,000 characters.  The net result is that NVARCHAR datatypes take up twice as much space as VARCHAR datatypes.  Said another way, NVARCHAR(4000) is the same size as VARCHAR(8000).
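
A quick way to see the difference is to compare the bytes used by the same value in each datatype (a minimal sketch; the value is arbitrary):

DECLARE @v AS VARCHAR(10)
SET @v = 'Hello'

DECLARE @n AS NVARCHAR(10)
SET @n = N'Hello'

SELECT DATALENGTH(@v) AS VarcharBytes,  -- 5 bytes, one per character
       DATALENGTH(@n) AS NvarcharBytes  -- 10 bytes, two per character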

But the NVARCHAR datatype shouldn't be dismissed entirely.  English and most Western European languages fit comfortably in a single-byte code page, making VARCHAR the datatype of choice.  However, other languages, such as Japanese, Hebrew, Arabic, etc., have characters that fall outside the single code page a VARCHAR column is tied to, especially once you need to store more than one of them side by side.  In order to store data in these languages you must use the NVARCHAR datatype, but that should be the only time.

Warehouse Beware

The rule for when to use an NVARCHAR datatype applies to both OLTP systems and OLAP systems.  If your OLTP system is riddled with unnecessary NVARCHAR datatypes you may want to consider converting them to VARCHAR in the data warehouse to save space.  It’s tempting just to convert all NVARCHARS to VARCHARS but you always have to ask yourself, was the column defined as an NVARCHAR for a reason?  This is a question you’ll need to have answered by the business users.  Is your company planning to expand their business globally?  If so, it might make sense to keep the NVARCHARs, but only for columns where it really matters.
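
If you do decide to convert, the change can be made right in the query that loads the warehouse table.  Here's a sketch (the staging table and column names are made up):

SELECT CAST(CustomerName AS VARCHAR(100)) AS CustomerName,
       CAST(AddressLine1 AS VARCHAR(120)) AS AddressLine1
FROM staging.Customer

Just be aware that any character the target code page can't represent is silently converted to a question mark, so it's worth profiling the data before flipping the datatype.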

How to Generate Insert Scripts for Existing Data
http://agilebi.com/blog/2011/04/04/how-to-generate-insert-scripts-for-existing-data/
Mon, 04 Apr 2011

Let's say you have a bunch of data stored in a table and you need to generate an insert script for each record.  Maybe you need to save the scripts in source control or maybe you need the scripts to initialize the same table in another environment.  What would you do?

Traditional Methods

Several ideas come to mind, all painfully tedious:

  • You could dynamically create the insert statements with a stored procedure

If you do a quick search you’ll find several stored procedures that will generate insert scripts. This isn’t a bad way to go in terms of effort but I would definitely scrutinize the procedure before using it. And as you’ll see in a bit, there’s an even better approach.

  • You could dynamically create the insert statements with a SQL query

Following this approach your SQL query would look something like this:

SELECT 'INSERT INTO dbo.MyTable (MyCol1, MyCol2) VALUES (''' + MyCol1 + ''', ''' + MyCol2 + ''')' AS InsertStatement FROM dbo.MyTable

This assumes both columns are character data; numeric columns have to be CAST to a string (and don't get the surrounding quotes), NULLs have to be handled, and any embedded single quotes have to be doubled up.  If your table is very wide then the query can be very complex to write and difficult to read and maintain.
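
For example, with a numeric column and a nullable character column the query already starts to sprawl (illustrative column names):

SELECT 'INSERT INTO dbo.MyTable (MyIntCol, MyCol2) VALUES ('
       + CAST(MyIntCol AS VARCHAR(12)) + ', '
       + ISNULL('''' + REPLACE(MyCol2, '''', '''''') + '''', 'NULL')
       + ')' AS InsertStatement
FROM dbo.MyTable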

  • You could dynamically create the insert statements with Excel

This is a two-step approach where first you select all data from the table and then copy it into Excel.  You then add columns before and after each value so you can create an expression for the insert statement.  For example, cell A1 would look something like this:

="INSERT INTO dbo.MyTable (MyCol1, MyCol2) VALUES ("

Using the fill handle you drag the expression down for each row and repeat for each of the other columns (remember, each value you insert needs to be separated by a comma and if you have character data then it must also be enclosed in single quotes). Once you copy your script out of Excel you will notice that the values in each column are separated by a tab. If you are inserting character data you will need to either update your expression and surround it with the RTRIM() and LTRIM() functions, or manually remove the tab yourself. This can get very ugly very quickly.

  • You could manually type the insert statements

When all else fails you can always start to write the script yourself.  Depending on how many rows of data you have, and the width of the table, this would require a tremendous amount of effort.

  • You could convince someone else to generate the scripts for you

À la Tom Sawyer and the fence, your mileage may vary with this approach.

A Better Way

Luckily, there is a far easier approach where SQL Server does all the work.  Unfortunately, if you’re running SQL Server 2005 or earlier you’re still limited to the approaches mentioned above – this well-hidden gem is only available in SQL Server 2008 and later.

Starting in SQL Server Management Studio, navigate to the database where your data lives in the Object Explorer.  Right click on the database name, go to Tasks, and select Generate Scripts.

The SQL Server Scripts Wizard will appear, click Next to begin.

Select the database where your data lives.  In this example I am choosing AdventureWorksDW2008.

On the Choose Script Options page there are a number of properties that you can tweak.  This is where the magic happens!  In order to generate insert scripts you need to set the "Script Data" property located in the "Table/View Options" group to True (by default it's set to False).

Select the types of objects you want scripted.  Since we want to generate insert scripts, the Tables object type must be selected.

Select the tables for which you want to generate insert scripts.  Here I’ve chosen DimAccount.

Select your output option. 

Review your settings.  Notice that “Script Data” is set to True.

Let the wizard spin…

And voila!  An insert statement for each record (along with the table definition) has been generated!
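
The generated script follows this general shape (a trimmed, illustrative sketch using the MyTable example from earlier; the real script for DimAccount lists every column and row):

CREATE TABLE [dbo].[MyTable](
    [MyCol1] [varchar](50) NULL,
    [MyCol2] [varchar](50) NULL
)
GO
INSERT [dbo].[MyTable] ([MyCol1], [MyCol2]) VALUES ('Value 1', 'Value 2')
INSERT [dbo].[MyTable] ([MyCol1], [MyCol2]) VALUES ('Value 3', 'Value 4')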

So Now What?

I *highly* recommend that you check out and familiarize yourself with all of the options on the Choose Script Options page as it contains numerous ways to customize your scripts that could end up saving you a lot of time!

Virtual PC and Windows 7
http://agilebi.com/blog/2010/10/07/virtual-pc-and-windows-7/
Thu, 07 Oct 2010

This morning while trying to spin up one of my Virtual PCs I ran into some peculiar error messages.  In case the same thing happens to you, the workaround is pretty simple.

When I opened Virtual PC and selected my VMC file I received the following message:

Could not register the virtual machine.  The virtual machine configuration could not be added. User does not have sufficient access rights.

Even if I ran Virtual PC as administrator I still got that error.  The fix is to create a new virtual machine.  When you do so, make sure to point to your existing VHD file.  At this point you'll probably get this message:

Virtual PC . . . was unable to write to one of its virtual hard disks.

The fix here is to reference the VHD file by share name, not drive name.  So instead of simply pointing to C:\MyVirtualPCs\MyFavoriteVirtualPC.vhd, you have to point to \\ComputerName\MyVirtualPCs\MyFavoriteVirtualPC.vhd.  Now, since I keep my VHD files on an external hard drive, this meant that I had to create a share on its parent directory.

Thanks to Damir Dobric for posting the solution!

Prevent SSRS 2008 from Overwriting Datasets
http://agilebi.com/blog/2010/09/13/prevent-ssrs-2008-from-overwriting-datasets/
Mon, 13 Sep 2010

Report development is one of my favorite areas of Business Intelligence and I have to say I've been fairly happy with the new interface for SSRS 2008.  Except for when it comes to MDX queries.

Off the top of my head I can think of three major issues with the way SSRS 2008 handles MDX queries.  To be fair, only one of those issues is related to 2008.  The other two issues, which I’m saving for another blog post (no, I’m not even going to hint at them!), are related to SSRS in general, not 2008 specifically.

So what issue am I talking about?  Well, if you’ve ever developed a report in SSRS 2008 that used a parameterized MDX query then I’m pretty sure you’re familiar with it: after switching to MDX mode, making and saving changes to the report query, the MDX Query Designer overwrites the parameter datasets and wipes out all changes made to these parameter datasets.

If you define a parameter in your MDX query, SSRS 2008 will automagically create the parameter, and its dataset, for you.  I have to admit, that’s pretty slick.  Unless the dataset already exists, in which case it gets overwritten.  In fact, anytime you modify an MDX query that references the parameter, the dataset will get overwritten.  If you’re using the generic dataset then this isn’t a problem.  But if you’ve customized the dataset in any way, then it gets annoying fast.  Really fast.  Can you imagine having to update an MDX query that referenced several parameters?  And can you imagine having to make multiple tweaks to said MDX query, just so it would operate just right?

After suffering a mild aneurysm I finally said Enough is Enough.  Something must be done.  And, thanks to Teo, something has been done.  Well, kinda-sorta-not really.  There is a bug listed on the Microsoft Connect site, but because a fix would introduce backward-compatibility issues, Microsoft is unable to change this behavior in the current release.  However, they have promised to fix the behavior in a future release.  (Has it been fixed in R2?)

In the meantime, you can perform the workaround described below.  But be warned, it involves manually editing the XML code, so you should probably make a backup of your RDL in case things go horribly awry. 

How to Prevent SSRS 2008 from Overwriting Datasets:

  1. Open the report in XML mode.  There are two ways to do this: One way is to navigate to the RDL file in Windows Explorer, right click on the RDL, and select Open With…  and then choose Notepad.  The second way is to open the report in BIDS, right click on the report in the Solution Explorer, and select View Code. 
  2. Do a search on "<DataSet>", and add "<rd:SuppressAutoUpdate>true</rd:SuppressAutoUpdate>" inside the "<Query>…</Query>" tags, as shown below.  Just to be on the safe side, I added this line to all of my datasets.
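
Here's roughly what the edited dataset looks like in the RDL (a trimmed sketch; the dataset name, data source name, and query text are placeholders):

<DataSet Name="ParameterDataset">
  <Query>
    <DataSourceName>MyCubeDataSource</DataSourceName>
    <CommandText>...</CommandText>
    <rd:SuppressAutoUpdate>true</rd:SuppressAutoUpdate>
  </Query>
  ...
</DataSet>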


SQL Server 2005 Installation Failures
http://agilebi.com/blog/2010/06/24/sql-server-2005-installation-failures/
Thu, 24 Jun 2010

Recently I was trying to install SQL Server 2005 on a workstation running Windows XP Professional SP3 and for some reason the installation kept failing.  The first item in the list that failed was MSXML 6.0.  After combing through the log file I found the following message: "Product: MSXML 6 Service Pack 2 (KB954459) — Configuration failed."

According to http://support.microsoft.com/kb/968749, when SQL Server Setup tries to install MSXML 6.0 SP2, it detects that the version of MSXML 6.0 that is described in Knowledge Base article 954459 is already installed on the computer. Therefore, the validation process indicates that the current version of MSXML 6.0 should not be replaced. This stops the installation of both MSXML 6.0 and SQL Server 2005.

To work around this issue, use the Windows Installer CleanUp utility to remove references to MSXML 6.0 Service Pack 2. The Windows Installer CleanUp utility can be downloaded here: http://support.microsoft.com/kb/290301. Once installed, simply run the utility, select MSXML6 Service Pack 2 (KB954459) [6.20.1099.0] in the list and click Remove. After MSXML 6.0 has been removed then you should be able to install SQL Server 2005 without any further problems.

SharePoint JavaScript Error: Library Not Registered
http://agilebi.com/blog/2010/01/05/sharepoint-javascript-error-library-not-registered/
Tue, 05 Jan 2010

The other day one of my clients upgraded from Outlook 2003 to Outlook 2007.  Since then, whenever she went to her company's SharePoint site, she received the following error:

================================================================== 

Website error details 

User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; 
.NET CLR 1.1.4322; MS-RTC LM 8; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
Timestamp: Mon, 4 Jan 2010 18:27:45 UTC

Message: Library not registered.

Line: 1935
Char: 4
Code: 0
URL: http://server/_layouts/1033/init.js?rev=ck%2BHdHQ8ABQHif7kr%2Bj7iQ%3D%3D

==================================================================

Apparently the Outlook upgrade did something to a DLL (removed it?) that SharePoint needs.
If you have just upgraded from Outlook 2003 to Outlook 2007 and you receive a similar error, simply run Office Diagnostics and it should fix the problem!

To run Office Diagnostics follow these steps:
1. Open Microsoft Outlook 2007
2. On the Help menu, click Office Diagnostics
3. Click Continue, and then click Start Diagnostics
