Joining the Digital Dots: Windows Azure SQL Reporting ends, replaced with Windows Azure VMs running SSRS – ringed with the Azure world

Apologies to Lord Tennyson for misquoting his wonderful poem “The Eagle”!
In the technical community, we do need to be ‘eagle eyed’ because there are a lot of infrastructure changes and people will need to keep watching out for them! If you like an in-person event, please take a look at SQLRelay and the Cloud OS CommunityRelay for news and technically-oriented sessions on SQL Server 2014, Windows Server 2012 R2 and Systems Center 2012 R2.
What’s the news here? On October 31, 2013, notification went out to current Windows Azure SQL Reporting customers that the service will be discontinued in 12 months. What was Windows Azure SQL Reporting? It was a cloud-based reporting service for the Windows Azure Platform, built on SQL Server Reporting Services technologies. As a big SSRS fan who has presented on SSRS and Azure together, I was disappointed to see the service go.
So, what’s the alternative? Welcome to SSRS on Windows Azure Virtual Machines

However, my disappointment quickly dissipated when I realised that Microsoft’s vision is for users to run VMs with SSRS as their Azure reporting instances. My belief is that people will find it fairly easy to move their Windows Azure SQL Reporting solutions to SSRS on Windows Azure VMs.
IT departments are used to VMs, and I see an increasing trend towards virtualisation amongst many of my customers already. Using a VM, you can deploy an operational reporting solution in the cloud that supports either the Native or SharePoint mode feature set.
Will I get the same features that I had before? Fortunately, a VM with SQL Server 2008 R2 or 2012 supports all Reporting Services features, including all supported data sources, customization and extensibility, and scheduled report execution and delivery. This means that users should see no change from their perspective, and that is a good thing.
What are the benefits?

Performance – It was well known that Windows Azure SQL Reporting report execution was slower than SSRS running on-premises. Moving to a VM makes sense, because the performance of SSRS on a Windows Azure Virtual Machine corresponds to an on-premises SSRS instance. Faster reports are always good news! Side-by-side testing has shown that the performance gains are attributable to having the report server catalog reside on a local disk in the VM.
What about my custom code? SSRS on an Azure VM supports custom code and assembly references in a report. Similarly, developers can replace or supplement report server operations by adding custom extensions. See Custom Code and Assembly References in Expressions and Reporting Services Extensions for details.
Mobility – this was my favourite feature of Windows Azure SQL Reporting, but all is not lost under the new vision. If the report is in the cloud, you can mobilise it from the VM just as you did previously with SSRS as a service.
Scheduled report execution and delivery – yes! See Schedules and Subscription and Delivery.
Integration with hybrid solutions – yes! You can join a Windows Azure VM to your corporate network. This is particularly useful for small to medium businesses who prefer an operational expenditure (OPEX) model to capital expenditure (CAPEX). It means that SMEs can add capacity quickly, without large up-front hardware costs. For more information, see the Windows Azure Virtual Network Overview.
Considering a new Reporting Solution on Windows Azure?
Here are some points to note:
A Windows Azure VM can use Windows authentication to support single sign-on. The configuration depends on your setup and your requirements, e.g. whether you require validation at the report server or at the backend.
To help you get started, take a look at the steps below to evaluate a cloud-based Azure VM reporting solution for new software development projects:
1. Before you start, learn about the basic capabilities of a Windows Azure VM by watching the videos and clicking the Explore links on the Virtual Machine page on the WindowsAzure.com web site.
2. Compare licensing costs between a predefined image and a Windows Server VM running a licensed copy of SQL Server that you purchase and install separately on the VM. Depending on which SQL Server features you require, you might find it more cost-effective to purchase a Windows VM and SQL Server (Enterprise, Standard or Web edition) separately. In that case, you might want to create a .vhd in-house using the installation media of the licensed copy of SQL Server, and then attach the disk to your Windows VM. As an alternative to SQL Reporting, you can use the Standard edition of SQL Server, but you might choose other editions depending on your feature requirements and workloads.
3. Choose the report server mode and features that best satisfy your business requirements. The report server mode determines which authentication subsystems and authorization models are available. While Native mode is closest to SQL Reporting, SharePoint mode provides out-of-box support for claims authentication, multi-tenancy, and load balancing. Note that a claims identity cannot be flowed to most backend data sources that exist outside the SharePoint environment, so if you use claims, stored credentials for a single user identity will most likely be required for backend data access.
4. Confirm your decisions about deployment, provisioning, report server mode, and features through proof-of-concept testing. This includes building and publishing simple reports that allow you to validate connections from client applications, so that you can test configuration, authentication, and authorization behaviors. During preliminary testing, retrieve enough data in each report to understand the expected latency for data retrieval and rendering, especially if you are testing a hybrid solution that combines cloud and on-premises services.
5. Finally, your evaluation should include a review of the high availability and scalable architectures that might be necessary to support a large volume of users or report executions.
Existing Projects using SQL Reporting on Windows Azure?

IT teams are accustomed to VMs, so the transition leverages skills already in-house. Here is some guidance to help you move existing SQL Reporting solutions over to Azure VMs. A few take-away points:
·         You will need to replace it with an alternative technology by October 2014.
·         Microsoft recommends a Windows Azure VM running SSRS in Native mode.
·         Choosing an SSRS VM preserves your existing investment in report design, so no real changes are needed to the reports themselves.
You are not charged for VMs that are turned off, which saves you money! If you only use reports at scheduled times (for example, month-end reporting), you can export a report to a static format, such as PDF, and then stop the VM while the report server is inactive.
How do you migrate to a VM? Simple! You can deploy a report server project to SSRS on a VM, setting the target server to the VM endpoint. For instructions on how to configure SSRS, set endpoints, configure the firewall, and publish and test reports, see SQL Server Business Intelligence in Windows Azure Virtual Machines.
Other aspects of a transition will require replacement functionality or manual changes, such as replacing report server authentication, or changes in how client applications connect to a report server. At a minimum, you will need to update the endpoint used on the connection. 
SSRS Native Mode on a VM versus SQL Reporting

SQL Reporting customers who are unfamiliar with SSRS can use the following comparison of the two platforms.
Features
SSRS Native mode on a Windows Azure VM: There are no feature restrictions for Reporting Services instances on a VM, except for features that vary by report server mode or by SQL Server edition. On a VM, reports can retrieve data from any supported data source. See Data Sources Supported by SSRS for details. For a feature comparison by mode or edition, see Reporting Services Report Server and Features by Edition SQL Server 2012.
SQL Reporting: SQL Reporting is limited to un-federated Windows Azure SQL Databases that are part of the same Windows Azure subscription. On-demand report execution and rendering is supported, but scheduling and subscription delivery is not available.
Billing model
SSRS Native mode on a Windows Azure VM: Billing is based on the compute resources required to support a VM in the data center. Different rates apply depending on the size of the VM, as VM size determines how much CPU, memory, and disk storage are allocated; Microsoft recommends Medium or Large VMs for SQL Server BI server applications, depending on report volume and the SQL Server features you plan to use. For operational reporting, you will need both Reporting Services and a Database Engine instance for the report server database. See Pricing Details for SQL Server for more information. Note that you are not charged for VMs that are turned off, so if you only use reports at certain times, you can export a report to a static format, such as PDF, and then stop the VM while the report server is inactive.
SQL Reporting: Billing is based on the number of report executions rather than compute resources. If additional capacity is required, an additional instance is added dynamically in the background, and your bill rises incrementally with the number of report executions.
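To make the trade-off between the two billing models concrete, here is a rough back-of-envelope comparison. The prices are purely illustrative assumptions, not real Windows Azure rates, which vary by region and VM size; check the pricing pages for actual figures.

```python
# Back-of-envelope comparison of per-execution vs per-hour billing.
# Both prices below are invented for illustration only.
PRICE_PER_EXECUTION = 0.01   # assumed SQL Reporting charge per report execution
VM_PRICE_PER_HOUR = 0.50     # assumed hourly rate for a Medium/Large VM

def monthly_cost_sql_reporting(executions_per_month):
    return executions_per_month * PRICE_PER_EXECUTION

def monthly_cost_vm(hours_running_per_month):
    # You are not charged for a VM that is turned off, so only count
    # the hours the report server is actually running.
    return hours_running_per_month * VM_PRICE_PER_HOUR

# A VM running around the clock (roughly 720 hours a month):
always_on = monthly_cost_vm(720)

# The per-execution model overtakes the always-on VM once report volume
# passes the break-even point (about 36,000 executions here):
break_even = always_on / PRICE_PER_EXECUTION

# Stopping the VM outside a three-day month-end reporting window cuts
# the VM cost to a small fraction of the always-on figure:
month_end_only = monthly_cost_vm(3 * 24)
```

The exact break-even point depends entirely on the real rates and your report volume, but the shape of the comparison holds: low, bursty report volumes favour per-execution billing or a VM that is stopped when idle, while sustained heavy volumes favour an always-on VM.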
Authentication and Authorization
SSRS Native mode on a Windows Azure VM: Users can authenticate to SSRS on a VM using Windows authentication or Forms authentication. Support for commonly used authentication subsystems allows for greater software integration opportunities and supports identity delegation across multiple applications. For database platforms that support Windows authentication, you can take advantage of identity delegation to flow a user identity from the calling application, to the report server, to the backend database. See Authenticate to a report server and Microsoft BI Authentication and Identity Delegation for more information. A report server on a VM uses a role-based authorization model; see Granting Permissions on a Native Mode Report Server.
SQL Reporting: SQL Reporting has a proprietary report server authentication subsystem, limited to defining report user identities used for sign-in and role assignments. A user identity cannot be delegated to other server applications. SQL Reporting uses Native mode Reporting Services roles.
Software integration and architecture
SSRS Native mode on a Windows Azure VM: Reporting Services is a middle-tier service that sits between backend data sources and front-end clients, such as a browser or a custom web page hosting a report. When evaluating Reporting Services on a VM as your operational reporting solution, your design should position Reporting Services as a middle-tier service accordingly. Architecturally and programmatically, a report server VM is equivalent to an on-premises server. Parity between cloud and on-premises architecture is best achieved when other applications, such as backend data sources or front-end applications providing embedded reports, also run within the same cloud service as the report server VM. In most cases, an end-to-end solution designed to run on-premises can be duplicated using a collection of VMs in a cloud service. See the Developer’s Guide (Reporting Services) for more information about SSRS programmability.
SQL Reporting: Report access is primarily through the HTTP endpoint for URL access, or the SOAP management endpoint, often using the ReportViewer control embedded in a form or web page. Note that on SQL Reporting, the ASP.NET MVC Web Application templates do not support the ReportViewer control.
More information
·         Sign up for Windows Azure

 I hope that helps someone!
Kind Regards,
Jen

Up and coming PASS BA Sessions, and Data Visualisation and Microsoft Workshops

If you’re here looking for precon notes, I will be posting these up in the next few weeks after I’ve finished my precons at SQLSaturday Denmark, Databasedays Switzerland and SQLBits in Nottingham, UK. So please keep checking back, or email me at jen.stirrup@copperblueconsulting.com 

I’m delighted and proud to be presenting at the PASS Business Analytics Conference this week, in Chicago, Illinois, on 11th and 12th April. 


I love SQLPass Summit, and it is one of the highlights of my year. The Business Analytics conference is happening for the first time, and I can’t wait to meet the new #SQLFamily – or should that be #PASSBAFamily – that I will meet! I’m already making arrangements to meet fellow BA and Twitterati ‘in person’, which is one of my personal favourite things to do! See you there!

I’m pleased and proud to be presenting preconference training in Denmark on April 19th at the Microsoft offices in Hellerup, Denmark, and a workshop in Pfaffikon, Switzerland.

With respect to Denmark, in case you’re not familiar, SQLSaturday events are often accompanied by one-day, very reasonably priced community training days. This means that attendees get a deep-dive training day on a specialised subject on the Friday preconference day, as well as being able to attend the range of sessions at the main SQLSaturday event.

The precon I’m delivering is an in-depth day, which is aimed at Business Intelligence specialists who focus on front-end reporting. We will be looking at some of the ‘new toys’ in the SQL Server 2012 toolbox, as well as having a fresh look at old favourites such as SQL Server Reporting Services.

Our data source is the Tabular Model.  I’m seeing more traction of the Tabular model at customer sites, but not everyone has had the chance to play with it yet. Therefore, I thought it worthwhile to use the Tabular model as a data source, in order to expose its usefulness to a wider audience. 
 
To register for the Denmark event, please click here
 
I’m holding a Data Visualisation workshop in Switzerland on 17th April.  

If you want to conduct advanced, business-oriented Business Intelligence analysis in SQL Server 2012, then it is essential to understand data visualisation. This session is aimed at developers who want to make informed data visualisation decisions about reporting, with supporting cognitive psychology theory where relevant.
The takeaways will focus on:
 
  • Introduction: Finding patterns in the data.
  • Further Data Visualisations: learn about visualisations that are perhaps not so well-known. This includes Stephen Few’s bullet charts and Tufte’s Sparklines in SSRS.
  • Multivariate data: We will cover the complexities of displaying multivariate data, since it is potentially more difficult to visualise. For example, we will look at Tufte’s “small multiples” in Power View and in Reporting Services.
  • Putting it all together: Considerations for Dashboards. This workshop aims to deliver a breadth of data visualisation knowledge, underpinned by cognitive psychology theory to provide deeper understanding.

To register for DatabaseDays, please go to http://www.databasedays.com/:

 
 
Last but not least, I’m holding a preconference training day at SQLBits in Nottingham, United Kingdom, on Thursday 2nd May. SQLBits is one of my favourite events, and I’m very glad to be going back to my roots after having been far travelled this year so far!
 
If you’d like to register for the SQLBits precon, the Training Day link is here. I hope to see you there! 


Is Power View supposed to be a replacement for Oracle BI Answers in OBIEE 11g?

In response to my last blog, I received the question: ‘Is Power View supposed to be a replacement for Oracle OBIEE 11g?’ This blog is aimed at answering that question. Before we start, however, I would like to say that the opinions expressed here are mine alone. Therefore, if I’ve got some of the Oracle details incorrect, please do feel free to correct me, and I will be grateful that you’ve pointed me in the right direction.

Power View, as I’ve said previously, is about helping business users to ‘surf’ their way through their data. It is designed to help them to think fast about the results of the data, and ask questions of it. It therefore contrasts with reporting packages which require a more ‘developer’ oriented focus, such as Business Objects WebI and Microsoft SQL Server Reporting Services.

My experience with users would make me put Oracle BI Answers in more of a ‘developer’ pot rather than a ‘think as fast as you click’ pot. If you take a look at the Microsoft Books Online Power View documentation, it shows you how easy it is to create small data visualisations in Power View. There is no discussion of variables, or syntax, or anything that requires typing.

On the other hand, when we look at the Oracle BI Answers manual, we find that the discussion changes to include variables, custom date/time strings, formatting results and so on. Although this may be within the reach of many business users, it isn’t for everybody. Power View, in contrast, requires minimal typing: its drag-and-drop functionality, with no requirement to create new formulae, makes it simpler to use, although it assumes that the underlying data model is clean, correct and in place.

I’m not criticising the Oracle BI Answers solution; instead, I’m saying that it seems aimed at report writers who expect to do additional work to meet a particular reporting requirement. On the other hand, Power View is aimed at those users who expect the data model to be cleansed and sorted for them already, without requiring further work to deliver the data visualisation other than point, click and publish.
To answer the initial question: Is Power View supposed to be a replacement for Oracle OBIEE 11g? I don’t think that is the case. I look forward to your comments on this question.

Is Power View meant to replace ReportBuilder?

At first glance, it could be easy to confuse Power View and Report Builder. This blog is aimed at helping you to understand the ‘why’ of these packages, so that you and your business can get the best out of these complementary reporting packages.
I do hear complaints now and again, that the Microsoft Business Intelligence reporting packages can offer similar functionality, so it isn’t clear to users whether they are using the best option available to them. The reporting packages, such as Excel, Reporting Services and so on, are aimed at specific users with specific needs. The answer to ‘which package is right for you?’ will depend on the user and the requirements for building and delivering the report, and it is good that there is more than one option.
One way of looking at the ‘Do I use Power View or Report Builder?’ debate is the distinction between a puzzle and a mystery, as described by the author Malcolm Gladwell. A puzzle can be viewed as a defined question with a defined answer; when all the pieces of the puzzle are in place, the puzzle is solved. A mystery, however, is more uncertain: we do not have all of the information in place, and finding an answer can involve surfing lots of ill-defined information.
Report Builder is aimed at business users who have puzzles, i.e. they have the information in place, and the data will give them the answer in the form of a designed chart. A puzzle could be a straightforward question, such as ‘What were our sales for the last quarter, for a specific product?’
Power View is aimed at business users who have a ‘mystery’ to solve. The question could be ill-defined i.e. ‘how do I find out the characteristics of the data?’ or ‘are there any anomalies in the data?’ The question could start off as a puzzle, answered quickly in Power View- but then it could spawn mysteries. So, we go from ‘what’ to ‘why?’ by looking at the data from different angles. 
My perspective in this is that options are always a good thing, particularly if it means an opportunity to learn something new. I’m aware that Power View is a new package for people, given that it’s available in SQL Server 2012. I’ve written this blog in order to help people to understand the ‘positioning’ of Power View in relation to its older cousin, Report Builder.
Power View
Power View is easy to use, and quick. It involves very little typing, assumes that the underlying data model is correct, and does not require a programming language.
Instead, it simply requires you to drag and drop, meaning that you are only ever a few clicks away from the data representation available to you. It makes it easy for data analysts to produce ‘ad hoc’ reports. In other words, it does not require you to know a ‘set’ question to answer before you start; it helps you to work with an unknown query definition, or workload. The point is that it helps you to ‘surf’ the data without the need for structured reporting requirements.
Report Builder
Report Builder is also easy to use, and is aimed at producing reports quickly and easily. Again, it is not aimed at report writers with programming knowledge. Instead, it is aimed at information users and analysts who require reports quickly. Report Builder helps you to work with known query definitions, or workloads, to produce a defined report. Report Builder can help you to surf the data to a degree, too, but it is mainly for structured reporting.
Although Report Builder has been available since SQL Server 2005, there are some interesting new features for Report Builder in SQL Server 2012 RC0. It’s no surprise, for example, that the Word and Excel renderers are compatible with Word and Excel 2010. This will only work, however, if the Microsoft Compatibility Pack for Word, Excel and PowerPoint is installed.
Report Builder is part of SQL Server 2012 RC0, and earlier versions can be found in SQL Server 2005 and 2008. In 2012, it comes in two versions: stand-alone, and ClickOnce, which is available in both SSRS native and Sharepoint integrated mode. In the stand-alone method, it needs to be installed on the computer, and it can be downloaded from here. The ClickOnce version is installed as part of the SQL Server 2012 RC0 installation sequence, and can be started from within Sharepoint or via Report Manager. In Sharepoint, it is nice to see that Report Builder reports are just seen as ‘Documents’ and you’ll be able to create one by selecting ‘New Document’ from the Sharepoint library.
On the other hand, Power View is solely part of Sharepoint Enterprise and SQL Server 2012. 
To summarise, both applications:
– Are easy to use
– Are aimed at information workers
Report Builder:
– Can be obtained from outside Sharepoint
– Emphasises straight reporting rather than analysis
Power View:
– Is only available in Sharepoint Enterprise
– Is only available in SQL Server 2012

To answer the question, the packages support and complement one another; Report Builder isn’t meant to be replaced by Power View, but offers a slightly different perspective on easy-to-access structured reporting.

I hope that helps. As always, any questions, please do leave a comment. 

Facilitating Comparison with Sparklines

Given that comparison is the starting point of any investigation of the data, it is important to ensure that people are not blindly going through the SSRS chart wizards, believing the representation to be correct. As this walkthrough shows, it is important to double-check that the SSRS mechanisms are being correctly deployed to produce the correct visualisation. This blog will help you to do that, using sparklines as an example.

Few (2009), in his work Now You See It, described comparison as ‘the beating heart of analysis’, since it is so fundamental and so vital to analytics. As Few rightly points out, it consists of two complementary activities:

– Comparison – looking for similarities
– Contrast – looking for differences
Fundamental to this activity is the comparison of magnitudes i.e. whether one value is greater than, smaller than, or the same as, another value. There are plenty of data visualisations which can help with this evaluation, including line graphs, bar charts and sparklines. However, to ensure the integrity of the visualisation, it is important to ensure that any missing data does not mislead the message of the data.
To ensure that this does not happen in the case of sparklines, SQL Server Reporting Services 2008 R2 introduced a new feature called ‘Domain Scope’. Essentially, this allows the report writer to align and synchronise group data, so that a column position is reserved regardless of whether or not there is data for a given quarter, for example. For comparison purposes, it is better to show a gap for missing data for a given category member, so that this can be compared with a category member where data is present.
This feature is particularly apparent when sparklines are being created. I first saw this noted in the Reporting Services Recipes book by Paul Turley, but I have extended it to include vertical axis comparison as well. Also, from the data visualisation perspective, I always think it is important for people to understand why they are conducting an activity to produce a report; so hopefully the comments will be helpful to you as we proceed.
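Conceptually, the Domain Scope behaviour described above can be sketched in a few lines of Python (purely illustrative, with invented category names and figures): each group is padded to the union of all category values, so a missing quarter becomes a visible gap rather than a shortened sparkline.

```python
# A rough sketch of what 'Domain Scope' does for sparkline alignment:
# every group is plotted against the union of all quarters seen anywhere,
# with None (a gap) where a quarter has no data.
sales = {
    "Locks":   {"2007Q1": 120, "2007Q2": 95},   # stopped selling mid-year
    "Pumps":   {"2007Q1": 80},                  # sparse data
    "Helmets": {"2007Q1": 300, "2007Q2": 320, "2007Q3": 340, "2007Q4": 360},
}

# The shared domain is the union of every quarter across all groups.
domain = sorted({q for series in sales.values() for q in series})

# Each sparkline is padded to the same domain, so column positions line up.
aligned = {cat: [series.get(q) for q in domain] for cat, series in sales.items()}

# 'Pumps' now shows one bar followed by three gaps, rather than a single
# bar stretched across the whole sparkline width.
```

Without this alignment, each sparkline would only plot the quarters it happens to have data for, and the bars in different rows would no longer represent the same time periods.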
Notes before you begin

This uses the database called AdventureWorksDW2008R2
This assumes you are using SQL Server Reporting Services 2008 R2
I have included a sample stored procedure for you to use. You may find it here: http://goo.gl/YeC5B
If you click on any of the pictures, it will take you through to my flickr account so you can take a closer look.
1. Create a new report called Sparkline Dashboard

2. Create a Dataset using the DS_CategoryResellerAndInternetOrderSales shared dataset.

b.Create a Dataset

3. Drag a Matrix element onto the canvas
From the ‘Report Data’ section on the left hand side, navigate to the ‘DS_CategoryResellerAndInternetOrderSales’ dataset.
The screen will now appear as follows:

c.Column and Row Grouping

4. Drag ‘Reseller Order Sales’ over to the box marked ‘Data’.

5. It’s then necessary to make a column for the sparkline to be put in place. This is done by clicking on ‘Order Year’ in the Column Groups section, then navigate to the ‘Add Total’ menu item, and choosing ‘Before’.

3. Add Total Before

6. The tablix is now prepared, ready for the sparkline chart. To do this, right-click on the column marked ‘Total’ to produce a menu, then select ‘Insert’ -> ‘Sparkline’. The screen will now appear as follows:

4. Insert Sparkline

7. Once the sparkline is in place, it requires the correct data. From ‘Report Data’ on the left hand side, drag ‘ResellerOrderSales’ to the ‘Values’ section.

8. On the ‘Details’ item under ‘Category Groups’, enter the following expression for the ‘Grouping’ expression:
=Fields!OrderYear.Value & Fields!OrderQtr.Value

9. On the ‘Details’ item under ‘Category Groups’, enter the following expression for the ‘Sorting’ expression:

=Fields!OrderYear.Value & Fields!OrderQtr.Value

The expression should appear as follows:

e.sparkline expression
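As a side note on why the concatenated expression sorts correctly: the year is always four digits and the quarter a single digit, so the resulting strings compare in chronological order. The Python sketch below (with invented sample values, purely to illustrate the string comparison) also shows why a variable-width part, such as a month number, would need zero-padding.

```python
# Year & quarter concatenated as strings, mirroring the SSRS expression
# =Fields!OrderYear.Value & Fields!OrderQtr.Value
keys = ["2008" + "1", "2007" + "4", "2008" + "3", "2007" + "2"]
assert sorted(keys) == ["20072", "20074", "20081", "20083"]  # chronological

# With months the same trick breaks: "200810" sorts before "20082"
# lexicographically, so a variable-width part must be zero-padded.
months = ["2008" + "2", "2008" + "10"]
assert sorted(months) == ["200810", "20082"]        # wrong order!
padded = ["2008" + "02", "2008" + "10"]
assert sorted(padded) == ["200802", "200810"]       # fixed by padding
```

For quarters (always 1 to 4) the plain concatenation is safe, which is why the expression works as written.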

10. Change the colour of the sparkline to use ‘GrayScale’
11. Now, preview the report. It will appear as follows:

f. First attempt at running report

It’s clear that it’s impossible to compare properly, since the bars are not aligned with one another. We need to do some more work to ensure that the bars are aligned for comparison purposes. Here are the steps:
12. On the sparkline chart, right click and choose ‘Horizontal Axis’. A property dialog appears as follows:

7. Horizontal Axis Properties Dialog

13. In the section entitled ‘Axis Range and Interval’, click the checkbox next to ‘Align Axes in:’ and select the name of the Tablix. In this example, the Tablix is called ‘Tablix_CategoryResellerAndInternetOrderSales’. Instead of the axis being aligned to the category i.e. Accessories, Bikes and so on, this means that the axis is aligned to the grouping set up on the Tablix. In this case, our grouping is set up on the Order Years and Order Quarters. The outcome of this is that, if any of the columns are missing, there will simply be a gap where the data should be located. This helps the process of comparison, because if there is no way of identifying how the columns relate to one another, then the visualisation is misleading.
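To see numerically why the aligned axis matters, consider what happens when each sparkline scales its bars to its own maximum: a small-selling category renders exactly like a large-selling one. This Python sketch uses invented sales figures purely for illustration.

```python
# Why axis alignment matters: bar heights as a fraction of the axis maximum.
accessories = [10, 20, 30, 40]        # invented quarterly sales, small seller
bikes = [1000, 2000, 3000, 4000]      # invented quarterly sales, big seller

def bar_heights(values, axis_max):
    """Render each value as a fraction of the axis maximum."""
    return [v / axis_max for v in values]

# Per-category axes: each series is scaled to its own maximum, so the two
# sparklines render identically despite a 100x difference in sales.
assert bar_heights(accessories, max(accessories)) == bar_heights(bikes, max(bikes))

# A shared (aligned) axis uses one maximum across the whole tablix, so the
# Accessories bars are visibly tiny next to the Bikes bars.
shared_max = max(accessories + bikes)
small = bar_heights(accessories, shared_max)
large = bar_heights(bikes, shared_max)
assert max(small) < max(large)
```

This is exactly the distortion that aligning the vertical axis across the tablix (step 15 below covers the vertical case) is designed to prevent.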

14. Click on the ‘OrderQtr’ group and look in the ‘Properties’ pane on the right hand side under ‘Group’, for the property item ‘Domain Scope’. Paste the name of the Tablix in this section. In this example, the Tablix is called ‘Tablix_CategoryResellerAndInternetOrderSales’. This window will now appear as follows:

8. Domain Scope

15. The only issue is that the comparison may be made more difficult if the vertical axis is not aligned. To do this, right-click on the sparkline, and select ‘Vertical Axis Properties’. As before, under the ‘Axis Range and Interval’ section, tick the box next to ‘Align Axes in’ and choose the Tablix called ‘Tablix_CategoryResellerAndInternetOrderSales’.

h.Vertical Axis

16. Once you have made the right-most column invisible, the final report looks as follows:

i.FinalSparklineReport

It’s possible to see that the columns are aligned with each other horizontally and vertically. This allows us to see that, for example, ‘Helmets’ and ‘Bike Racks’ were the biggest sellers. On a more detailed level, we can see that ‘Locks’ and ‘Pumps’ did not sell at all in the last two quarters, but that ‘Hydration Packs’ and ‘Bike Racks’ did sell during that time.
With data that has ‘dates’ as a category, normally a line graph is appropriate. However, in this case, since there are ‘missing’ data for certain quarters, the lines do not appear properly. Here is the same example, completed using line graph sparklines:

j.FinalLineSparklines

Since some of the data is missing, it seems more difficult to compare the ‘points’ of the line graph. However, this seems easier with the bar chart, since the bars serve as distinct points, representing each quarter.
To summarise, this report would form part of a dashboard, and it’s not intended to replace detail. Instead, it is supposed to help the data consumer to formulate patterns in their head in order to help them to draw conclusions from the data.

Project Crescent in Denali: BISM Summary

There has been a lot of buzz from SQLPass and in the SQL Server community about the Business Intelligence Semantic Model (BISM), which will be used to ‘power’ access to the data for the Microsoft Business Intelligence applications such as Excel, Reporting Services (SSRS) and Sharepoint. It is also intended that Project Crescent, the new self-service ad-hoc reporting tool available in SQL Server Denali, will be powered by the BISM.

Following on from some recent blogs, I was pleased to receive some direct questions from some business-oriented people, who wanted to know more about the ‘how’ of the BISM. It’s clear to me that business users are interested in how it will impact them.  The focus of this blog is to take the information from people like Chris Webb, Teo Lachev, Marco Russo and TK Anand, who have written clear and trusted accounts of the SQL Server Denali information thus far, and use it as a foundation to answer the questions from business-oriented users that I’ve received so far. So, to business!


How can I access the BISM?

– The Data Model Layer – this is what users connect to. It is underpinned by two further layers:
– The Business Logic Layer – encapsulates the business rules, and is supported by:
– The Data Access Layer – the point at which the data is integrated from various sources.

TK Anand has produced a nice diagram of the inter-relationships, and you can head over to his site to have a look.



How do I create a Business Intelligence Semantic Model?

This is done via Project Juneau, which is essentially a new version of the Business Intelligence Development Studio (BIDS).

It is also possible to produce a BISM model using Excel PowerPivot, which will help you to construct the relationships and elements contained in the model, such as the business calculations. These are written using Data Analysis Expressions (DAX), which covers everything from simple calculations through to more complex business logic such as Pareto computations, ranking, and time intelligence calculations. If you would like to know DAX in depth, then I suggest the book Microsoft PowerPivot for Excel 2010 by Marco Russo and Alberto Ferrari, which is accessible in its explanations of DAX constructions. Thank you to Sean Boon for his commentary on the involvement of PowerPivot in creating BISM models.
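To give a flavour of what those DAX calculations look like, here is a small sketch of some measures; the table and column names (Sales, ‘Date’, Product) are hypothetical examples for illustration, not taken from any real model:

```dax
-- A base measure: the total of the Amount column in a hypothetical Sales table
Total Sales := SUM ( Sales[Amount] )

-- Time intelligence: the year-to-date total, driven by a related Date table
Sales YTD := TOTALYTD ( [Total Sales], 'Date'[Date] )

-- Ranking: rank each product by its total sales across all products
Product Rank := RANKX ( ALL ( Product[ProductName] ), [Total Sales] )
```

The point to note is that measures build on each other: once Total Sales is defined, the time intelligence and ranking measures simply reuse it.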


How up-to-date is the data? Is it cached or accessed in real-time?

The Data Model Layer, accessed as the fundamental part of the BISM, can be cached or accessed in real-time. The main takeaway points are as follows:

Cached method: the upshot is that it is very, very fast to access the cached data. At the SQLPASS event, the demo showed instant querying on a 2 billion row fact table on a reasonable server. Specifically, the speed comes from the VertiPaq store, which holds the data ‘in memory’.

Real-time method: the queries go straight through the Business Logic Layer to go and get the data for the data navigator. 

A potential downside of the cached method is that the data needs to be loaded into the VertiPaq ‘in memory’ store before it can be accessed. It’s not clear how long this will take, so it is sounding like a ‘how long is a piece of string?’ question; in other words, it depends on your data, I suppose. Other technologies, like Tableau, also use in-memory data stores and data extracts. For example, Tableau offers you more calculations, such as CountD, if you use the data extracts instead of touching the source systems, thereby encouraging you to use their own data stores. In Denali, I will be interested to see if there are differences in the calculations offered by the cached and real-time methods.

To summarise, a careful analysis of the requirements will help to determine the methodology that your business needs. In case you need more technical detail: this in-memory BISM mode is a version of SQL Server Analysis Services, and I would head over to Chris Webb’s site for a deeper discussion.


How can I access the BISM without SharePoint?


In SQL Server Denali, it will be possible to install a standalone instance of the in-memory BISM mode. Essentially, this is a version of Analysis Services which does not need SharePoint. Until more details are clarified, it isn’t possible to say for certain how this version differs from the SharePoint-specific version. No doubt that will become clearer.

As an aside, I personally love SharePoint and I think that users can get a great deal from it generally, and not just in the Business Intelligence sphere. I would want to include SharePoint implementations as far as possible in any case.


What will BISM give me?


Project Crescent: The big plus is Project Crescent, the new ad-hoc data visualisation tool, which is planned to visualise data only via the BISM. Although you don’t need SharePoint to have a BISM, you do need it if you want to use Project Crescent.

Hitting the low and high notes: If you’ve ever had to produce very detailed, granular reports from a cube, then you will know that this can take time to render. The BISM will be able to serve up the detailed level data as well as the aggregated data, thereby hitting both notes nicely!

Role-based security: this will be available, making it possible to secure tables, rows or columns. As an aside, it will be important to plan out the roles and security so that they map to the business requirements around who can see the data.
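To illustrate the row-level flavour of this security, a role in the model can carry a DAX filter expression per table; the Sales table and Region column below are hypothetical examples, purely for illustration:

```dax
-- Row filter attached to a role: members of the role only ever see rows
-- of the Sales table where the Region column equals "Europe"
= Sales[Region] = "Europe"
```

Anyone connecting under that role would see totals, rankings and reports computed over the European rows only, which is why mapping the roles to business requirements matters so much.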



What will BISM not give me?

As I understand it, it will not support very advanced multidimensional calculations in Denali, since it is not as multidimensional as its more mature Analysis Services sibling, the Unified Dimensional Model (UDM). As with most things, a facility that is simpler to use will not be as advanced as its more complex counterparts. This can be an advantage, though, since it will be easier for many relational-oriented people to understand and access, especially for straightforward quick reports.

I hope that helps to answer the various questions I have received; if not, please don’t hesitate to get in touch again!

Project Crescent in Denali: A Kuhnian paradigm shift for business users?

What is Project Crescent? The Microsoft SQL Server team blog describes Project Crescent as a ‘stunning new data visualisation experience’ aimed at business users, by leveraging the ‘self-service business intelligence’ features available in PowerPivot. The idea is to allow business users to serve themselves to the data by interacting, exploring and having fun with it. The concept at the heart of Project Crescent is that “Data is where the business lives!” (Ted Kummert), so business users have access to the data directly.  

For many users, this new methodology of data visualisation could be a fundamental change in their way of looking at data – a real Kuhnian paradigm shift: instead of using basic reports, they would have a self-service way of understanding their data without a reliance on IT, and without invoking a waterfall methodology to get the data that they require in order to make strategic decisions.

What does this data visualisation actually mean for business users, however? Haven’t business users already got their data, in the form of Excel, Reporting Services, and other front-end reporting tools? The answer really depends on the particular reporting and data analysis process deployed by the business users. Let’s use an analogy to explain this. In the ‘Discworld’ series of books by Terry Pratchett, one book, ‘Mort’, contains a joke that the creation of the Discworld was similar to creating a pizza: the Creator only intended to create night and day, but got carried away by adding in the sea, birds, animals and so on, so the final outcome was far beyond the initial plan. The book likens this to making a pizza, whereby the intended outcome was only ‘cheese and tomato’ but the Creator ends up impulsively adding all sorts of additional toppings, and the final result is something over and above the original intention. Similarly, reporting and data analysis can surpass the original planned outcome through the addition of new findings and extrapolations that were not originally anticipated.

Put another way, there are two main ways of interacting with data via reporting; one is structured reporting, and the other is unstructured data analysis. In the first ‘structured’ route, the report is used to answer business questions such as ‘what were my sales last quarter?’ or ‘how many complaint calls did I receive?’ Here, the report takes the business user down a particular route in order to answer a specific question. This process is the most commonly used in reporting, and forms the basis of many strategic decisions. If this was Discworld, this is your base ‘cheese and tomato’ pizza!

On the other hand, unstructured data analysis allows the business user to take a look and explore the data without a preconceived business question in their heads. This allows the data to tell its own story, using empirical evidence based on the data, rather than using pre-conceived ideas to generate the data.  In our ‘Discworld’ analogy, this would be the final ‘toppings and all’ pizza, that contained so much more than the original intention.

So, Project Crescent is great news for business users for a number of reasons:

 – users will be able to use ‘self-service’ to create their own reports, with no reliance on IT staff
 – users will be able to do ad-hoc analysis on their data without being taken down a particular road by a structured report
 – the traditional ‘waterfall’ methodology of producing reports can be more easily replaced with an agile business intelligence methodology, since prototypes can be built quickly and then revised if they do not answer the business question.

At the time of writing, it is understood that the data visualisation aspect of Project Crescent will involve advanced charting, grids and tables. The users will be able to ‘mash up’ their data in order to visualise the patterns and outliers that are hidden in the data. Although it is difficult to quantify for a business case, it is interesting to note the contributions that visualisation has made to understanding data – or even occluding it, in some cases. One example is in the discovery of DNA: Rosalind Franklin’s photographs of the DNA structure revealed the double helix, which Crick and Watson went on to examine and describe, and this finding has contributed enormously to our understanding of science. On the other hand, Edward Tufte has proposed poor data visualisation as a contributor to the decision-making processes that potentially led to the Challenger disaster.

So far, it sounds like a complete ‘win’ for the business users. However, it may be a Kuhnian ‘paradigm shift’ in a negative way for some users, in particular for those who rely on intuition rather than empirical, data-aware attitudes to make strategic decisions. In other words, now that the ‘self-service’ Business Intelligence facilities of PowerPivot and Project Crescent are available, business users may find that they need to become more data-oriented when making assertions about the business. This ‘data-focused’ attitude will be more difficult for business users who use a ‘gut feel’, or intuition, to make their assertions about the business. This is particularly the case where business users have been with a company for a long time, and have a great deal of domain knowledge.

It is also important to understand that the ‘base’ reporting function is still crucial, and no business can function without basic reporting functionality. Thus, Reporting Services, whether facilitated through SharePoint or ‘native’, along with other reporting tools, will still be an essential part of the business intelligence owned by enterprises. If this were Discworld, this would be our ‘cheese and tomato’ pizza. Put another way, there would be no pizza if it weren’t for the base!

Terry Pratchett commented a while ago that ‘he would be more enthusiastic about encouraging thinking outside the box when there’s evidence of any thinking going on inside it’. This is very true of business intelligence systems, as well as the processes of creative writing. The underlying data needs to be robust, integrated and correct. If not, then it will be more difficult for business users to make use of their data, regardless of the reporting tool that is being used. In other words, the thinking ‘inside’ the box needs to be put in place before the ‘out of the box’ data analysis and visualisation can take place.

Project Crescent will be released as a part of SQL Server Denali, and will be mobilised by SharePoint and the Business Intelligence Semantic Model (BISM). A final release date for SQL Server Denali is still to be confirmed, but I hope to see it in 2011. To summarise, Project Crescent offers a new way of visualising and interacting with data, with an emphasis on self-service – as long as there is thinking inside the box, of course, regardless of whether you live in Discworld or not!