Knocking down Stephen Few’s Business Intelligence Wall?

Stephen Few blogged recently about Business Intelligence hitting a ‘wall’; if you haven’t read his post, I strongly recommend that you do. I have enjoyed many of Few’s posts over the years, and this one is particularly insightful. It focuses on the dissonance between ‘old’ business intelligence and ‘new’ business intelligence.
‘Old’ business intelligence is hallmarked by its emphasis on the engineering aspects of the ‘technical scaffolding’ which supports business intelligence solutions. Traditionally, this is owned by the IT department, which may be localised in one place or, in my experience, spread out across the world via outsourcers, data centres and so on. The focus here is on technology, hardware, and software.
Somewhere in the traditional sphere of business intelligence, the user has been forgotten; side-lined with acronyms or perhaps even patronised with concepts which are, fundamentally, within their grasp. ‘New’ business intelligence is hallmarked by its emphasis on the importance of the user; this focuses on analysis, visualisation, and drawing conclusions from the data for the purpose of moving the business forward.
It seems to me that ‘new’ business intelligence is coming. Businesses need their data, and are starting to understand that there are different ways of obtaining it beyond the traditional business user method of using Excel for everything. Peter Hinssen, in his book ‘The New Normal’, talks about technology in terms of its accessibility to all: it’s no longer such a mysterious entity, restricted to the rich or technologically advanced few. Instead, it is moving towards becoming ‘the new normal’ – accessible and affordable to everybody, and particularly taken for granted by younger generations, the Generation Y people born after 1978.
As Hinssen puts it, business users are starting to ask ‘Explain to me why’ questions of IT departments. These questions include: ‘Explain to me why it takes you 18 months to implement a system which only updates overnight, when I can book Easyjet flights and see my data on their site immediately after I’ve booked it?’ ‘Explain to me why I can google for every piece of data on the Internet, but I can’t access Excel spreadsheets on our system?’
In the meantime, the common currency for discussion between business users and IT will probably remain focused on specific technology. However, new business intelligence isn’t focused on new technology, although it is probably more commonly associated with cool new visualisation technologies such as Tableau. ‘New’ Business Intelligence is a paradigm shift towards a user-centred approach to collecting, analysing, predicting and using data.
However, this does not mean that data warehouses are going away any time soon, and the concerns of traditional IT departments are valid. As the guardians of the data, IT departments are tasked with looking after data on their terms, and protecting it from potential vandalism by internal or external parties, which includes business users. IT departments are, quite rightly, reluctant to allow unrestricted access to the data sources that they are required to protect. It’s not immediately obvious how to balance the pressure from business users, who feel entitled to their data, against the IT department’s need to protect it. Is there a solution?
I was speaking to some of the team at Quest Software about this very issue. In order to respond to the needs of business users as well as IT teams, Quest Software are working on a Data Hub which is aimed at provisioning data to the business users, whilst ensuring that IT can carry out their guardianship role of protecting the data. 
In other words, the Data Hub surfaces data as a ‘one-stop shop’ for all of the data sources, which the business users can use as a ‘window’ to access the data that they need. Given that the Quest Data Hub could talk to many common data sources, and the business users could consume the data as they like, this would please the business users. Very often, business users don’t really care about the actual source of the data; they just need to know that the number is correct.
On the other hand, if the Quest Data Hub could be configured and set up by IT or by a data-savvy business analyst, then IT could maintain guardianship over the data. It also means that they still ‘own’ the technical scaffolding that provides the data, and can insulate it from inadvertent mishaps. This is particularly important when data may well be farmed out from one source to another, passed across firewalls and sent out to other companies that consume it; the potential ‘clean-up’ consequences are enormous. Further, this means that data warehouses remain in place, serving up data as they have always done.
As I understand it, the Quest Data Hub is not dependent on any particular technology. This means that it should be possible for business users to connect straight to the data sources via the hub, regardless of the type of data source, e.g. Oracle, SQL Server, DB2 or even Excel.
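A data hub of this kind is essentially a uniform query interface sitting in front of heterogeneous back-ends. As a rough illustration only – this is not Quest’s actual API, just a hypothetical sketch using Python, with the standard library’s sqlite3 standing in for a real warehouse so the example runs anywhere:

```python
import sqlite3

class DataHub:
    """Hypothetical hub: one query interface over many registered sources.

    A real hub would register Oracle, SQL Server, DB2 or Excel adapters;
    here an in-memory SQLite database stands in for a warehouse.
    """

    def __init__(self):
        self._sources = {}

    def register(self, name, connection):
        # IT registers and governs the connection; users only see 'name'.
        self._sources[name] = connection

    def query(self, source, sql, params=()):
        # Business users ask for data by source name, not by technology.
        cur = self._sources[source].cursor()
        cur.execute(sql, params)
        return cur.fetchall()

# Demo with an in-memory stand-in for a warehouse table.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (region TEXT, amount REAL)")
warehouse.executemany("INSERT INTO sales VALUES (?, ?)",
                      [("EMEA", 120.0), ("APAC", 95.5)])

hub = DataHub()
hub.register("warehouse", warehouse)
rows = hub.query("warehouse",
                 "SELECT region, amount FROM sales ORDER BY region")
print(rows)  # [('APAC', 95.5), ('EMEA', 120.0)]
```

The point of the sketch is the separation of roles: IT controls what gets registered and how each connection is secured, while the business user only ever deals in named sources and queries.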
In the Microsoft sphere, the best way to leverage Business Intelligence is to have an integrated SharePoint environment; this includes Reporting Services, PowerPivot, and possibly even Project Crescent. Don’t misunderstand me, I have implemented SharePoint on a number of occasions; I can see what it does for customers, and I love seeing customers really make the most of SharePoint. However, in my opinion, this dependency on a particular framework isn’t good for the Microsoft stack, and in my experience I have come across plenty of DBAs who do not like SharePoint. Take one example: depending on the environment, Kerberos authentication can be difficult to set up. This means that customers struggle and are put off the technology at the early stages; in the worst cases, they can even give up, or just implement a stand-alone, non-integrated SharePoint.
To set up SharePoint properly, it is vital to recognise that it isn’t a ‘next-next-next’ installation. It needs to be properly planned, and it takes a variety of skill sets to make an enterprise solution that becomes embedded in the organisation.
That does not mean that the Quest Data Hub is trivial to set up; my inclination is that this job would be best done by IT, since they already know the data sources well. I would also want to see an emphasis on both structural and guide metadata. The structural metadata will indicate the structure of the tables, keys and so on. The guide metadata would provision this information to users in language that is meaningful to the business. I haven’t seen the Quest Data Hub yet, but I would be interested to know more about the plans for allowing easy documentation of the structural metadata.
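To make the structural/guide distinction concrete, here is a minimal, purely hypothetical sketch in Python – the table and column names are invented, and this is my illustration of the idea rather than anything from the Quest product:

```python
# Structural metadata: how the data is physically defined -
# tables, columns, types and keys, in IT's technical vocabulary.
structural = {
    "table": "dim_cust_01",
    "columns": {
        "cust_id": "INTEGER PRIMARY KEY",
        "seg_cd": "CHAR(2)",
    },
}

# Guide metadata: the same objects translated into business language,
# so users can find data without decoding technical names.
guide = {
    "dim_cust_01": "Customers",
    "cust_id": "Customer number",
    "seg_cd": "Market segment code",
}

def describe(structural, guide):
    """Render a business-friendly view of a technical table definition."""
    lines = [guide[structural["table"]] + ":"]
    for col in structural["columns"]:
        lines.append(f"  {guide[col]} ({col})")
    return "\n".join(lines)

print(describe(structural, guide))
# Customers:
#   Customer number (cust_id)
#   Market segment code (seg_cd)
```

Kept side by side like this, the two layers stay in step: when IT changes the structural definition, the missing or stale guide entries are easy to spot, which is exactly the kind of documentation support I would hope to see in a hub.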
To summarise, ‘new’ business intelligence is coming, and the needs of business users must be addressed. I have seen software that sits distinctly on either side of Stephen Few’s Business Intelligence wall; the closest I have seen to straddling both sides is the entire Microsoft stack. The Quest Data Hub, however, is subversive in nature precisely because it is technology-independent: it should communicate with everything, and does not lock businesses into one technology or another; it moves as the business moves, and stretches and grows according to the business need. I look forward to seeing ways in which Stephen Few’s business intelligence wall can be broken!

3 thoughts on “Knocking down Stephen Few’s Business Intelligence Wall?”

  1. Thanks Jen – was great meeting you. The only little point of clarification is that we already have the Quest Data Hub as part of Toad for Cloud Databases. It allows you to work with Hadoop and other NoSQL stores such as Azure Table Services. We have a small number of data analysts using it today to join data between RDBMS and NoSQL. We are extending its capabilities as your blog indicates. Look forward to giving you an update and demo soon.