SQLPass 24hop Review: Slicers in Reporting Services

Twice a year, SQLPass puts on a stellar free '24 hour hop', which is aimed at engaging with SQL Server fans all over the world. Basically, SQL experts give an hour of their time to present you, the SQL Server viewing public, with some SQL goodness: advice, tips and tricks to make your lives easier. Ever since I got involved with the SQL Server community in the UK and the US, I have learned so much from my peers. SQLPass and SQLBits have both enriched my life enormously, both in terms of my skills and by introducing me to some wonderful friends; I've made my home in the community, and for that I am grateful. The 24 hop is one of the ways in which we can learn more about SQL!

The SQLPass 24 hop sessions, given in Fall 2011, have been designed to give you a flavour of the precons that will be offered as part of the SQLPass Summit. The 24 hop sessions are given by world-class SQL experts. This fall, I listened to a total of five different sessions and learned a lot!
I'd like to thank Denny Cherry, Stacia Misner, Paul Turley, Rob Farley, Simon Sabin and Peter Myers for their excellent sessions and for giving up their time to educate the wider SQL community (and me!) with free training. If you are interested, I'd recommend that you take the time to look at the SQLPass preconference training for each of these speakers, since you'd be trained by the 'best of the best'.

In this blog, I wanted to call out Simon's Reporting Services session, since I thought it was particularly outstanding. Simon's webinar focused on provisioning slicers for Reporting Services. Yes, you read that right! Slicers are available in Excel and in Project Crescent. Every time I show slicers to an Excel user for the first time, the customer is usually impressed by their simplicity and ease of configuration. It's also possible to 'theme' slicers so that they match the rest of the dashboard elements. They also increase usability because they are consistent with Shneiderman's Visual Information Seeking Mantra – 'overview first, zoom and filter, then details on demand' – which describes how data navigators 'surf' their way through the data.

Basically, Simon used VB and SSRS to produce the slicers in the report. For SSRS people who'd like to know more about what .NET can offer them, this is an excellent route towards learning some .NET whilst enhancing report usability in line with the best data visualisation thinking, as advised by gurus like Shneiderman. What I especially liked is that Simon paid attention to the .NET requirement from the SSRS writer's perspective, and was careful to call out any potential pitfalls or mistakes that the SSRS report writer might make.

The end result was great to see, and Simon produced the report, which you can see on his blog, in less than an hour. It seems to me that his SSRS precon would be especially interesting, since you'd obtain lots of useful practical advice, packed into a one-day event, that would really make a difference to writing SSRS reports.

As a business intelligence specialist, I believe firmly that accurate and useful reporting can drive a business from data towards business intelligence and customer intelligence: listening to the stories that the data is telling you. If you can't 'hear' the data because the reports are poor or don't have the user in mind, then you are still not using the data properly – even if you have lots of it. Quantity of data is not quality of data – it has to be clean and well-presented before it can be used to support the enterprise.

It seems to me that an investment in report writing is fundamentally an investment in the business. An SSRS precon and training like Simon's session offer a real 'value-add' to the business in the long term, by supporting the provisioning of reports to the decision-makers who need them to drive the business forward. SSRS can get left behind a bit in the fanfare over Excel Services and, of course, Project Crescent. Despite the new technologies and the new 'self-service' business intelligence outlook, there is always a role for straight reporting in running a business knowledgeably and accurately, based on the data.

If you’re interested in the SQLPass pre-cons, then please do head over to the site and take a look! If you decide to go, please do let me know by tweeting me and hopefully I’ll see you at SQLPass Summit!

Business data: 2D or 3D?

One debate in data visualisation concerns the deployment of 2D versus 3D charts. There is an interesting assessment of this, conducted by Alastair Aitchison, and it is well worth a read.
3D visualisations are good for certain types of data, e.g. spatial data. One good example of 3D in spatial analysis is given by Lie, Kehrer and Hauser (2009), who provide visualisations of Hurricane Isabel. 3D has also been shown to be extremely useful for medical visualisation, and there are many examples of this application. One example for many parents is a simple, everyday miracle: anyone who has known the experience of seeing their unborn child on a screen will be able to tell you of the utter joy of seeing their healthy child grow in the womb via the magic of medical imaging technology. Other work has been conducted in cancer studies, where researchers have used 3D visualisation to detect brain tumours (Islam and Alias, 2010).
For me, data visualisation is all about trying to get the message of the data out to as many people as possible. Think of John Stuart Mill's principle of utilitarianism – the greatest happiness for the greatest number of people. Something similar applies in data visualisation: we can make people happy if we help them get at their data. However, for the 'lay public' and for business users, 3D isn't good for business data because people just don't always 'get' it easily. Note that medical staff undertake intensive training in order to assess scans and 3D images, so this subset is excluded from the current discussion, as is spatial data. Hopefully, by restricting the 'set' of users to business users, the argument goes from the general to the specific, where it is easier to clarify and give firmer answers on the 'grey' subject of data visualisation.
Data visualisation is not about what or how you see; it's 'other-centric'. It's about getting inside the head of the audience and understanding how to help them see the message best. It is often difficult to judge what business users – or people in general – will find easiest to understand, and it is also difficult to ascertain which visualisations can best support a given task. Ultimately, I like to stick to best practices in order to answer the data visualisation question as well as I can, and to make things as clear for everyone as possible.
Part of my passion for data visualisation comes from personal experience; I was told when I was quite young that I was going blind in one eye. Fortunately, this proved not to be the case, and I can see with two eyes. When my son was born, I saw him with two eyes, and for that I am extremely grateful. Having been through the experience of learning that I might go through life with impaired vision, I have been blessed to understand how precious our sight is, and to try to do something positive for others who have struggled with theirs. That experience is why I am so passionate about making things as clear as possible for everyone else, and about making data visualisation accessible to everyone, as far as possible.
One particularly relevant issue in data visualisation is the debate over 2D versus 3D – namely, whether to use 3D in data visualisation or not. Here, I specifically refer to the visualisation of business data, not infographics.
On one hand, 3D can make a chart or dashboard look ‘pretty’ and interesting. In today’s world, where we are bombarded with images and advanced graphical displays, we are accustomed to expecting ‘more’ in terms of display. We do live in a 3D world, and our visual systems are tuned to perceive the shapes of a 3D environment (Ware, 2004). 
The issue comes when we try to project 3D onto a 2D surface; we are trying to squeeze an additional plane onto a flat display. This is a key issue in data visualisation, since we are essentially trying to represent high-dimensional entities on a two-dimensional display, whether it is a screen or paper.
Generally speaking, 3D graphs take longer for people to assimilate than 2D graphs, and they are more difficult to understand. Not everyone has good eyesight or good innate numerical ability, and it's about getting the 'reach' of the data to as many people as possible without hindering or patronising. Perceptually, 2D is the simplest option, and the occlusion of data points is not an issue. Business users are also often more familiar with this type of rendering, and it is the 'lowest common denominator' in making the data approachable to the greatest number of people.
On the other hand, there is some evidence to suggest that 3D graphs can, on occasion, be more memorable initially, but this isn't much use if the data wasn't understood properly in the first place. It can also be more difficult to represent labels and textual information on a 3D graph.
In terms of business data, however, 3D graphs can break 'best practice' on a number of issues:
 – Efficiency. 3D is inefficient because it makes values difficult to compare. "Comparison is the beating heart of analysis" (Few). In other words, we should be trying to help users get at their data in a way that facilitates comparison. If comparison isn't facilitated, it becomes more difficult for users to understand the message of the data quickly and easily.
 – Meaningful. A graph should require minimal explanation. If users take longer to read it, and it increases their cognitive load, then it can be difficult to draw meaningful conclusions. Introducing 3D can add chartjunk, which artificially crowds the 'scene' without adding any value, and a crowded 'scene' naturally distracts rather than informs.
 – Truthful. The data can be distorted; occluded bars are just one example. If labels are misaligned or missing, this can also make a 3D chart difficult to read.
 – Aesthetics. 3D can make a graph look pretty, but there are other ways to achieve this which don't distract or occlude.
Stephen Few has released a lot of information about 3D, and I suggest that you head over to his site and take a look. Alternatively, I can recommend his book 'Now You See It' for deeper reading, since it describes these topics in more detail, along with beautiful illustrations that allow you to 'see' for yourself.
To summarise, what should people do – use 2D only? Here is the framework of a strategy towards a decision:
 – Look at the data. The data might be astrophysics data, in which the type of the stars could be conveyed by colour and brightness as well as by location. If the data is best suited to 3D, such as spatial, astrophysics or medical data, then that's the right thing to do. If the data is business data, where it is important to get the 'main point' across as clearly and simply as possible, then 2D is best since it reduces the likelihood of misunderstandings in the audience. Remember that not everyone will be as blessed with good sight or high numerical ability as you are!
 – Look at the audience. 3D can be useful if the audience is familiar with the data. I had a look at Alastair's 3D chart and I have to say that I am not sure what the chart is supposed to show, probably because I'm not clear on the data. I am not an expert in spatial data, so I don't 'get' it; I ask for Alastair's understanding here, and I will be glad to defer to his judgement in this area (no pun intended). If you can't assume that the viewers are familiar with the data, then it's probably common sense to make it as simple as possible.
 – Look at the vendors. Some vendors, e.g. Tableau, do not offer 3D visualisations at all, and bravely take the 'hit' from customers, saying that they are sticking to best-practice visualisations and that's the second, third, fourth, fifth and final opinion on the matter.
In terms of multi-dimensional data representation, there are different methodologies available to display business data that don't require 3D, such as parallel co-ordinates, RadViz, lattice charts, SPLOMs (scatterplot matrices) and scattergrams. I have some examples on this blog and will produce more over time. Further, it is also possible to filter and 'slice' the data in order to focus it towards the business question at hand, so that it is easier for business users to understand.
I hope that SQL Server Denali's Project Crescent will help business users to produce beautiful, effective and truthful representations of business data. I believe that business users will eventually start doing data visualisation 'by default' because it is built in to the technology that they are using. Think of sparklines, which are now available in Excel 2010 – this was exciting stuff for me! Hopefully Project Crescent will go down this route towards excellent data visualisation, but I recognise it will take time.
To summarise, the way around the '3D or not to 3D' question for business data is to offer such beautiful, effective, truthful visualisations of business users' data that adding 3D wouldn't add anything more to them. The focus here has been on business users, since that's where my experience lies; there are plenty of good examples of 3D in spatial, astrophysics and medical imaging work.
To conclude, my concern is to get the message of the data across clearly to the maximum number of people – think John Stuart Mill again!

Knocking down Stephen Few’s Business Intelligence Wall?

Stephen Few blogged recently about Business Intelligence hitting a 'wall'; if you haven't read the post, I strongly recommend that you do. I have enjoyed many of Few's blogs over the years, and this particular one was very insightful. It focuses on the dissonance between 'old' business intelligence and 'new' business intelligence.
'Old' business intelligence is hallmarked by its emphasis on the engineering aspects of the 'technical scaffolding' which supports business intelligence solutions. Traditionally, this is owned by the IT department, which may be localised in one place or, in my experience, spread out across the world via outsourcers, data centres and so on. The focus here is on technology, hardware and software.
Somewhere in the traditional sphere of business intelligence, the user has been forgotten; side-lined with acronyms or perhaps even patronised with concepts which are, fundamentally, within their grasp. ‘New’ business intelligence is hallmarked by its emphasis on the importance of the user; this focuses on analysis, visualisation, and drawing conclusions from the data for the purpose of moving the business forward.
It seems to me that 'new' business intelligence is coming. Businesses need their data, and are starting to understand that there are different ways of obtaining it, in addition to the traditional business user method of using Excel for everything. Peter Hinssen, in his book 'The New Normal', talks about technology in terms of its accessibility to all: it's no longer such a mysterious entity, restricted to the rich or technologically advanced few. Instead, it is moving towards becoming 'the new normal' – accessible and affordable to everybody, and particularly taken for granted by younger generations, the Generation Y people born after 1978.
As Hinssen puts it, business users are starting to ask 'Explain to me why' questions of IT departments. These questions include: 'Explain to me why it takes you 18 months to implement a system which only updates overnight, when I can book Easyjet flights and see my data on their site immediately after I've booked it?' and 'Explain to me why I can Google for every piece of data on the Internet, but I can't access Excel spreadsheets on our system?'
In the meantime, the common currency for discussion between business users and IT will probably remain focused on specific technology. However, new business intelligence isn't focused on new technology, although it is probably more commonly associated with cool new visualisation technologies such as Tableau. 'New' business intelligence is a paradigm shift towards enabling a user-centred approach to collecting, analysing, predicting and using data.
However, this does not mean that data warehouses are going away anytime soon, and the concerns of traditional IT departments are valid too. As the guardians of the data, IT departments are tasked with looking after data on their terms, and protecting it from any potential vandalism from internal or external influences, which includes business users. IT departments are, quite rightly, reluctant to allow unrestricted access to the data sources that they are required to protect. It's not immediately obvious how to balance the pressures on data from business users, who feel entitled to their data, with the needs of the IT department to protect it. Is there a solution?
I was speaking to some of the team at Quest Software about this very issue. In order to respond to the needs of business users as well as IT teams, Quest Software are working on a Data Hub which is aimed at provisioning data to the business users, whilst ensuring that IT can carry out their guardianship role of protecting the data. 
In other words, the Data Hub surfaces data from all of the data sources as a 'one-stop shop' that business users can use as a 'window' to access the data that they need. Given that the Quest Data Hub could talk to many common data sources, and the business users could consume the data as they like, this would please the business users. Very often, business users don't really care about the actual source of the data; they just need to know that the number is correct.
On the other hand, if the Quest Data Hub can be configured and set up by IT or a data-savvy business analyst, then IT can maintain guardianship over the data. It also means that they still 'own' the technical scaffolding that provides the data, and can insulate it from inadvertent mishaps. This is particularly important when the data may well be farmed out from one source to another, across firewalls and out to other companies who consume the data; the potential 'clean-up' consequences are enormous. Further, it means that data warehouses remain in place, serving up data as they have always done.
As I understand it, the Quest Data Hub is not dependent on any particular technology. This means that it should be possible for business users to connect straight to the data sources via the hub, regardless of the type of data source, e.g. Oracle, SQL Server, DB2 or even Excel.
In the Microsoft sphere, the best way to leverage Business Intelligence is to have an integrated SharePoint environment; this includes Reporting Services, PowerPivot, and possibly even Project Crescent. Don't misunderstand me: I have implemented SharePoint on a number of occasions; I can see what it does for customers, and I love seeing customers really make the most of it. However, in my opinion, this dependency on a particular framework isn't good for the Microsoft stack; in my experience I have come across plenty of DBAs who do not like SharePoint. Take one example: depending on the environment, it can be difficult to set up Kerberos authentication. This means that customers struggle and get put off the technology at an early stage; in the worst cases, they can even give up, or settle for a stand-alone, non-integrated implementation.
To set up SharePoint properly, it is vital to recognise that it isn't a 'next-next-next' installation. It needs to be properly planned, and it requires a variety of skill sets to make an enterprise solution that becomes embedded in the organisation.
That does not mean that the Quest Data Hub would be trivial to set up; my inclination is that this job would be best done by IT, since they already know the data sources well. I would also want to see an emphasis on both structural and guide metadata. The structural metadata would indicate the structure of the tables, keys, and so on; the guide metadata would present this information to users in a language that is meaningful to the business. I haven't seen the Quest Data Hub yet, but I would be interested to know more about the plans for allowing easy documentation of the structural metadata.
To summarise, 'new' business intelligence is coming, and the needs of business users have to be addressed. I have seen software that sits distinctly on either side of Stephen Few's Business Intelligence wall; the closest I have seen to straddling both sides is the entire Microsoft stack. The Quest Data Hub, however, is subversive in nature: since it is technology-independent, it should communicate with everything, and it does not lock businesses into one technology or another; it moves as the business moves, and stretches and grows according to the business need. I look forward to seeing ways in which Stephen Few's business intelligence wall can be broken!

Project Crescent in Denali: BISM Summary

There has been a lot of buzz from SQLPass and in the SQL Server community about the Business Intelligence Semantic Model (BISM), which will be used to 'power' access to the data for Microsoft Business Intelligence applications such as Excel, Reporting Services (SSRS) and SharePoint. It is also intended that Project Crescent, the new self-service ad-hoc reporting tool available in SQL Server Denali, will be powered by the BISM.

Following on from some recent blogs, I was pleased to receive direct questions from business-oriented people who wanted to know more about the 'how' of the BISM. It's clear to me that business users are interested in how it will impact them. The focus of this blog is to take the information from people like Chris Webb, Teo Lachev, Marco Russo and TK Anand, who have written clear and trusted accounts of SQL Server Denali so far, and use it as a foundation to answer the questions from business-oriented users that I've received. So, to business!


How can I access the BISM?

The Data Model Layer – this is what users connect to. This is underpinned by two layers:

The Business Logic Layer – encapsulates the business rules, which is supported by:

The Data Access Layer – the point at which the data is integrated from various sources.

TK Anand has produced a nice diagram of the inter-relationships, and you can head over to his site to have a look.



How do I create a Business Intelligence Semantic Model?

This is done via a new development environment, essentially a new version of BIDS, called Project Juneau.

It is also possible to produce a BISM model using Excel PowerPivot, which will help you to construct the relationships and elements contained in the model, such as the business calculations. This is done using Data Analysis Expressions (DAX), which lets you build anything from simple calculations through to more complex business calculations such as Pareto computations, ranking, and time intelligence. If you would like to know DAX in depth, then I suggest that you have a look at the book Microsoft PowerPivot for Excel 2010 by Marco Russo and Alberto Ferrari, which is accessible in its explanations of the DAX constructions. Thank you to Sean Boon for his commentary on the involvement of PowerPivot in creating BISM models.
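To give a flavour of what DAX looks like, here is a minimal sketch of the kind of measures you might define in a PowerPivot or BISM model. The table and column names (Sales, Dates, Region and so on) are hypothetical, purely for illustration, and I have written the measures in the 'Measure := expression' convention:

    -- Simple aggregation over a hypothetical Sales table
    Total Sales := SUM(Sales[SalesAmount])

    -- Time intelligence: year-to-date total, using a hypothetical Dates table
    Sales Year To Date := TOTALYTD([Total Sales], Dates[Date])

    -- A ratio of one region's sales to sales across all regions
    Sales All Regions := CALCULATE([Total Sales], ALL(Sales[Region]))
    Region % of Total := [Total Sales] / [Sales All Regions]

The first two are the sort of thing you can put together in minutes; the last pair gives a hint of how CALCULATE and ALL combine for the more involved business calculations that Marco and Alberto's book covers in depth.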


How up-to-date is the data? Is it cached or accessed in real-time?

The Data Model Layer – the fundamental part of the BISM that users connect to – can be cached or accessed in real time. The main take-away point is as follows:

Cached method: the upshot is that access to the cached data is very, very fast. At the SQLPASS event, the demo showed near-instant querying of a 2 billion row fact table on a reasonable server. Specifically, the speed comes from using the VertiPaq store to hold the data 'in memory'.

Real-time method: the queries go straight through the Business Logic Layer to get the data for the data navigator.

A potential downside of the cached method is that the data needs to be loaded into the VertiPaq 'in memory' store before it can be accessed. It's not clear how long this will take, so it sounds like a 'how long is a piece of string?' question; in other words, it depends on your data, I suppose. Other technologies, like Tableau, also use in-memory data stores and data extracts. For example, Tableau offers you more calculations, such as CountD, if you use its data extracts instead of touching the source systems, thereby encouraging you to use its own data stores. In Denali, I will be interested to see whether there are differences in the calculations offered by the cached and real-time methods.

To summarise, a careful analysis of the requirements will help to determine the methodology that your business needs. In case you need more technical detail, this BISM in-memory mode is a version of SQL Server Analysis Services; for more, I would head over to Chris Webb's site.


How can I access the BISM without SharePoint?


In SQL Server Denali, it will be possible to install a standalone instance of the in-memory BISM mode. Essentially, this is a version of Analysis Services which does not need SharePoint. Until more details are clarified, it isn't possible to say for certain how this version differs from the SharePoint-specific version; no doubt that will become clearer.

As an aside, I personally love SharePoint and I think that users can get a great deal from it generally, and not just in the Business Intelligence sphere. I would want to include SharePoint implementations as far as possible in any case.


What will BISM give me?


Project Crescent: The big plus is Project Crescent, the new ad-hoc data visualisation tool, which is planned to visualise data only via the BISM. Although you don't need SharePoint to have a BISM, you do need it if you want to use Project Crescent.

Hitting the low and high notes: If you've ever had to produce very detailed, granular reports from a cube, then you will know that these can take time to render. The BISM will be able to serve up detail-level data as well as aggregated data, thereby hitting both notes nicely!

Role-based security: this will be available, and it will be possible to secure tables, rows or columns. As an aside, it will be important to plan out the roles and security so that they map to business requirements around who can see the data.
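To illustrate the row-level side of this, security in the BISM tabular model is expected to work through DAX filter expressions attached to a role. The following is only a sketch, with hypothetical table and column names, and the exact mechanics may well change before Denali ships:

    -- Row filter on a hypothetical Sales table for a 'European Analysts' role
    = Sales[Region] = "Europe"

    -- Row filter tying visibility to the connected user, assuming an
    -- AccountManager column that stores Windows logins
    = Sales[AccountManager] = USERNAME()

Anyone who is not in the appropriate role simply never sees the filtered rows, which is another reason why mapping the roles to business requirements up front is so important.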



What will BISM not give me?

As I understand it, it will not support very advanced multidimensional calculations in Denali, since it is not as multidimensional as its more mature Analysis Services sibling, the Unified Dimensional Model (UDM). As with most things, something that is simpler to use won't be as advanced as more complex facilities. This can be an advantage, since it will be easier for many relationally-oriented people to understand and access, especially for straightforward quick reports.

I hope that helps to answer the various questions I have received; if not, please don’t hesitate to get in touch again!

Project Crescent in Denali: A Kuhnian paradigm shift for business users?

What is Project Crescent? The Microsoft SQL Server team blog describes Project Crescent as a 'stunning new data visualisation experience' aimed at business users, leveraging the 'self-service business intelligence' features available in PowerPivot. The idea is to allow business users to help themselves to the data by interacting, exploring and having fun with it. The concept at the heart of Project Crescent is that "Data is where the business lives!" (Ted Kummert), so business users should have direct access to the data.

For many users, this new way of visualising data could be a fundamental change in how they look at data – a real Kuhnian paradigm shift: instead of relying on basic reports, they would have a self-service way of understanding their data, without a reliance on IT and without invoking a waterfall methodology to get the data that they require in order to make strategic decisions.

What does this data visualisation actually mean for business users, however? Haven't business users already got their data, in the form of Excel, Reporting Services and other front-end reporting tools? The answer really depends on the particular reporting and data analysis process deployed by the business users. Let's use an analogy to explain this. In the 'Discworld' series of books by Terry Pratchett, one book, 'Mort', contains a joke about the creation of the Discworld being similar to creating a pizza. The Creator only intended to create night and day, but got carried away by adding in the sea, birds, animals and so on, so the final outcome was far beyond the initial plan – much like making a pizza that was only ever intended to be 'cheese and tomato', but to which the creator impulsively adds all sorts of additional toppings. Similarly, reporting and data analysis can surpass the original planned outcome through the addition of new findings and extrapolations that were not originally anticipated.

Put another way, there are two main ways of interacting with data via reporting; one is structured reporting, and the other is unstructured data analysis. In the first ‘structured’ route, the report is used to answer business questions such as ‘what were my sales last quarter?’ or ‘how many complaint calls did I receive?’ Here, the report takes the business user down a particular route in order to answer a specific question. This process is the most commonly used in reporting, and forms the basis of many strategic decisions. If this was Discworld, this is your base ‘cheese and tomato’ pizza!

On the other hand, unstructured data analysis allows the business user to take a look and explore the data without a preconceived business question in their heads. This allows the data to tell its own story, using empirical evidence based on the data rather than preconceived ideas imposed upon it. In our 'Discworld' analogy, this would be the final 'toppings and all' pizza, which contains so much more than the original intention.

So, Project Crescent is great news for business users for a number of reasons:

 – users will be able to use ‘self-service’ to create their own reports, with no reliance on IT staff
 – users will be able to do ad-hoc analysis on their data without being taken down a particular road by a structured report
 – the traditional ‘waterfall’ methodology of producing reports can be more easily replaced with an agile business intelligence methodology, since prototypes can be built quickly and then revised if they do not answer the business question.

At the time of writing, it is understood that the data visualisation aspect of Project Crescent will involve advanced charting, grids and tables. Users will be able to 'mash up' their data in order to visualise the patterns and outliers that are hidden in it. Although it is difficult to quantify for a business case, it is interesting to note the contributions that visualisation has made to understanding data – or even occluding it, in some cases. One example is in the discovery of the structure of DNA: Rosalind Franklin's X-ray photographs of DNA revealed the double helix that Crick and Watson went on to describe, a finding that has contributed enormously to our understanding of science. On the other hand, Edward Tufte has proposed poor data visualisation as a contributor to the decision-making processes that led to the Challenger disaster.

So far, it sounds like a complete 'win' for the business users. However, it may be a Kuhnian 'paradigm shift' in a negative way for some users, in particular those who rely on intuition rather than empirical, data-aware attitudes to make strategic decisions. In other words, now that the 'self-service' Business Intelligence facilities of PowerPivot and Project Crescent are available, business users may find that they need to become more data-oriented when making assertions about the business. This 'data-focused' attitude will be more difficult for those who rely on 'gut feel', particularly business users who have been with a company for a long time and have a great deal of domain knowledge.

It is also important to understand that the 'base' reporting function is still crucial; no business can function without basic reporting. Thus Reporting Services, whether facilitated through SharePoint or 'native', along with other reporting tools, will still be an essential part of the business intelligence owned by enterprises. If this were Discworld, this would be our 'cheese and tomato' pizza. Put another way, there would be no pizza if it weren't for the base!

Terry Pratchett commented a while ago that ‘he would be more enthusiastic about encouraging thinking outside the box when there’s evidence of any thinking going on inside it’. This is very true of business intelligence systems, as well as the processes of creative writing. The underlying data needs to be robust, integrated and correct. If not, then it will be more difficult for business users to make use of their data, regardless of the reporting tool that is being used. In other words, the thinking ‘inside’ the box needs to be put in place before the ‘out of the box’ data analysis and visualisation can take place.

Project Crescent will be released as a part of SQL Server Denali, and will be mobilised by SharePoint and the Business Intelligence Semantic Model (BISM). A final release date for SQL Server Denali has not yet been confirmed, but I hope to see it in 2011. To summarise, Project Crescent offers a new way of visualising and interacting with data, with an emphasis on self-service – as long as there is thinking inside the box, of course, regardless of whether you live in Discworld or not!

Project Crescent: When is it best applied?

From what I've read and understood, Project Crescent is best applied to ad-hoc, undirected analysis work rather than to directed reports displaying current operational data activity. This blog aims to understand more about Project Crescent, and when it is best applied to business questions.

From the front-end reporting perspective, Project Crescent is a major new Business Intelligence addition to Microsoft SQL Server Denali. It is primarily designed to assist business users across an organisation to 'storyboard' their data by facilitating unstructured analyses, and to share the results in the Microsoft Office applications that we are familiar with already. Project Crescent has its genesis in the SSRS team, and is focused on allowing business users to analyse, mine and manipulate their data 'on the fly'. The solution will be browser-based, using VertiPaq and Silverlight to produce rapid BI results by allowing users to interact with their data. Further, this analysis will be powered by the Business Intelligence Semantic Model (BISM), and will only be available in SharePoint mode. These criteria would need to be met in order to use Project Crescent, so it may not work in every environment.


How does this perspective differ from reporting? Well, standard reporting takes business users down a particular, structured route; for example, the parameters are pre-prepared in the report in drop-down format, and the dataset already determines the data that can be shown. This offers limited options for business users to interact with the data. On the other hand, Project Crescent will allow unstructured analysis of the data, which does not lead business users down a particular route. Rather than producing reports, unstructured analysis offers no real separation between 'design' and 'preview'. Instead, users 'play' with their data as they go along, 'thinking' as fast as they 'see'.

This 'unstructured' type of data analysis works well with most data sources. For straightforward operational needs, however, the structured reporting facilities will answer the immediate, operational business question. This is particularly the case when considering the display of data from an Operational Data Store (ODS). What does the ODS give you? The ODS basically supports people who conduct business at the operational level and need information quickly in order to react. Essentially, it sits between the source systems and the data warehouse downstream, and the data it holds usually has a short shelf-life. It is an optional aspect of the business intelligence architecture, and worth considering since many organisations have at least one ODS. There are many different definitions of the ODS, but here are some common characteristics:

– it takes in data from disparate sources, and cleanses and integrates it
– it is designed to relieve pressure on the source systems
– it stores data that isn't yet loaded into the data warehouse, but is still required now

One example of ODS deployment is in contact centre management: the contact centre manager will need to know if there is a high abandoned call rate. As a result, they may need to temporarily direct calls to another team in order to meet the unusually high demand. To do this, however, they need to be aware that the abandoned call rate has reached a particular threshold. This reporting requirement is operational rather than strategic; a strategic decision might involve using a number of metrics to identify whether some longer-term re-organisation is required, but the ODS helps the business to run optimally here and now.
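If those abandoned-call figures were surfaced through PowerPivot (one of the tools mentioned below), the threshold logic might look something like this sketch in DAX. The Calls table, its Outcome column and the 5% threshold are all hypothetical, purely for illustration:

    -- Counts over a hypothetical Calls table
    Calls Offered := COUNTROWS(Calls)
    Calls Abandoned := CALCULATE(COUNTROWS(Calls), Calls[Outcome] = "Abandoned")

    -- Abandoned call rate, guarding against division by zero
    Abandon Rate := IF([Calls Offered] = 0, BLANK(), [Calls Abandoned] / [Calls Offered])

    -- Flag that a KPI or report can use to alert the contact centre manager
    Abandon Rate Breach := IF([Abandon Rate] > 0.05, 1, 0)

The same calculation could equally live in the ODS itself or in an SSRS dataset; the point is simply that the threshold is defined once and reused wherever the manager looks at the numbers.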

This leads us to another question: how can we best display the data in the ODS? Data visualisation of ODS information has very specific requirements, since it needs to show, at a glance, how the business is operating. There are plenty of data visualisations that can show this 'at-a-glance', 'here and now' information very effectively. For example, for information on bullet charts to display target information, please see an earlier blog. It is also straightforward to produce KPIs in Reporting Services; if you're interested, please see this post by Jessica Moss. In SQL Server Denali, these visualisations can be produced using SSRS, SharePoint, Excel and PowerPivot, not to mention a whole range of third-party applications such as Tableau. There are more details about the Microsoft-specific technologies in the SQL Server Denali roadmap, which can be found on TK Anand's blog.

To summarise, from what I've read so far, Project Crescent is aimed more at ad-hoc analysis work than at displaying current operational data activity. From the roadmap, I gather that Microsoft SQL Server Denali will have something to please the business users who require 'unstructured' data analyses, in addition to the high standard of functionality already available for those who require structured data analyses in response to a changing business environment. I look forward to its release!