SSRS Red-Yellow-Green Indicators: An Alternative Approach?

The release of SQL Server Reporting Services 2008 R2 was accompanied by much fanfare regarding the new gauges that were available. This blog will look at ways of implementing KPIs that do not require the built-in gauges or indicators. The reason for this is that, by default, the SSRS Indicator gauges do not always follow the best principles for data visualisation as expressed by experts such as Stephen Few or Edward Tufte. As a note, this blog formed part of a webinar given by the author for the ‘Women In Technology’ SQLPass ‘24Hop’ webinar series in March 2011.

To be clear: in SSRS, Indicators are minimal gauges that display the state of a single data value, intended to be read ‘at a glance’. The icons that represent indicators and their states are simple, for example traffic lights. They are used to display trends, states, ratings or conditions. In the SQL Server Performance Dashboard example we will look at, the Indicator is used to display ‘Wait Time’ for a SQL Server.

Before we dive into the implementation in SSRS, let’s take a look at the Indicator gauges. Here is an example of the indicators that are available:


Figure 1 Default KPI Indicators in SSRS

In terms of data visualisation, these traffic lights could be improved by taking on board the following points:
The Red-Yellow-Green colour system is not suitable for people who are colour-blind. Approximately 12% of men, and 1% of women, are colour-blind (Few, 2008). Stephen Few’s rule number 8 specifies that ‘To guarantee that most people who are colour-blind can distinguish groups of data that are color-coded, avoid using a combination of red and green in the same display’. In addition, people who are short-sighted and wear strong glasses can also be affected: when viewing objects off-axis, away from the centre of the lens, the lens acts like a prism and the colours separate. The result is that the glasses wearer experiences a colour-fringing effect around strongly contrasting colours, known as chromatic aberration, which can be distracting. It is worth pointing out that near-sighted contact lens wearers do not experience this, because the contact lens moves with the cornea, eliminating the chromatic effect.

Given that Red-Yellow-Green is a problematic combination, what can we do to resolve this issue? Stephen Few (2008) recommends using red and blue instead; this is perceptually distinct enough to carry the message of the data visualisation, whilst reducing the impact on colour-blind and very near-sighted individuals.

The current example will use the SQL Server Performance Dashboard reports to display the indicator, and will only show red if there is an issue; otherwise, nothing is shown at all. Instead of using the default Indicator gauges, it was decided to keep the visual representation as simple as possible, so a small red square is used. Since the red square is only present if there is an issue, this reduces the chartjunk on the page, because only essential items are shown. Further, by keeping non-data ink to a minimum, the data-ink ratio is improved.
In this article, we will improve one of the SQL Server Performance reports to show more visual information. The original Historical Waits Report looks like this:


Figure 2 Original Historical Waits report

One thing we can do to improve this report is to use colour to convey a message about the information. From the above report, we can see that the top ‘% Wait Time’ is 76.93%. We can use colour to draw attention to the highest wait time, and this is the purpose of the current article.
For this article, the following is assumed:

The SQL Server Performance Dashboard reports have been downloaded from the Microsoft Download Centre
The SQL Server Performance Dashboard setup script has executed successfully. The script was written against SQL Server 2005, in which sys.dm_os_sys_info exposed a column called ‘cpu_ticks_in_ms’; in later versions this column is ‘ms_ticks’, so the reference in the script needs to be changed to ‘ms_ticks’ before it will run. IntelliSense will help you to identify the incorrect column.
1. Import the SQL Server Performance Dashboard reports into a SQL Server Reporting Services project.
2. Open the ‘Historical Waits.rdl’ report and navigate to the column entitled ‘% Wait Time’. This is outlined in the blue box in the following diagram:


Figure 3 Sample Text Box for amendment

3. Right-click the box and select ‘Text Box Properties’. We will need to do two things: first, change the value of the textbox from the existing expression to the letter ‘o’; then, change the font to Wingdings, which renders the ‘o’ we typed as a simple square shape.

In order to change the value of the textbox to an ‘o’, type ‘o’ in the Value box of the Text Box Properties dialog. This can be seen in the following diagram:


Figure 4 Changing the value of the Textbox
      
4. Now it is time to change the font to Wingdings. This will give us a simple square, rather than a trumpeting arrow as per the SSRS Indicator gauges. This can be seen in the following image:


Figure 5 Change Font to WingDings
Once this has been done, click ‘OK’.
5. Now it is time to update the colour of the square indicator to reflect the status of the Wait Time. The rule specifies that, if the wait time is greater than or equal to 70%, the textbox shows a red square; otherwise, nothing is displayed. To ensure nothing is displayed for values below 70%, the square is set to white, which makes it invisible against the white background.
In order to update the colour, we will copy and paste the following formula into the ‘Color’ box of the ‘Properties’ window. If the ‘Properties’ window is not showing, click on the textbox and then press the ‘F4’ key. This will reveal the Properties window as follows:
Figure 6 KPI Textbox Properties Window
6. Choose the ‘Color’ item. Clear the contents of the expression editor window, and copy and paste the following in its place. When this is done, click OK.

=Switch(
    (Sum(Fields!wait_time.Value) / Sum(Fields!wait_time.Value, "DM_OS_WAIT_STATS")) >= 0.7, "Red",
    (Sum(Fields!wait_time.Value) / Sum(Fields!wait_time.Value, "DM_OS_WAIT_STATS")) < 0.7, "White"
)
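Note that the second argument to the inner Sum, "DM_OS_WAIT_STATS", names the scope (typically the report's dataset or data region) over which the overall wait time is totalled, so each row's wait time is compared against the grand total. Since there are only two outcomes, an equivalent and slightly more compact expression would be the following (a sketch only; it reuses the same field and scope names as the formula above):

=IIf(
    (Sum(Fields!wait_time.Value) / Sum(Fields!wait_time.Value, "DM_OS_WAIT_STATS")) >= 0.7,
    "Red",
    "White")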
7. Save and preview the report. The report should appear as follows:
Figure 7 Completed Report
It is now clear that, instead of percentages, there is a small indicator which shows that the ‘Other’ Wait Category exceeds the specified threshold.
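As an optional refinement (not part of the original report, and only a sketch that reuses the same field and scope names as the colour expression), the underlying percentage can still be made available on mouse-over by setting the textbox’s ToolTip property to an expression such as:

=Format(Sum(Fields!wait_time.Value) / Sum(Fields!wait_time.Value, "DM_OS_WAIT_STATS"), "P2")

This keeps the display itself minimal, while preserving the exact figure for anyone who needs it.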
References:
Stephen Few (2008). Practical Rules for Using Color in Charts. Perceptual Edge, February 2008.

Data Fundamentals in Data Visualisation: lesson from the history of astronomy

Data visualisation can be defined as the graphical representation of data in order to reveal the ‘stories’ within it. However, it is important to ensure that the data itself is properly in place before starting. It is easy to get sidetracked by the choice of visualisation itself, for example: a sparkline? A trellis chart? This selection is obviously a core piece, but if the correct data isn’t in place, then there is no confidence that the data visualisation is correct. This emphasis on data collection and integrity hasn’t always been around, however; some might say that it does not happen even today!
In the history of astronomy, we can find an example where the importance of the data itself makes interesting reading. Tycho Brahe (1546–1601) laid the foundations for today’s astronomy by emphasising the rigorous and clean collection of data regarding the planets and the stars, which was a real innovation at that time.
Tycho believed that astronomy could not be pursued through loose, non-rigorous collations of astronomical data; it could only be understood through systematic collection. This also meant including redundant observations, work that Tycho was prepared to put in: he conducted this study for almost twenty years and, amazingly, without a telescope, since his observations predate the refracting (Keplerian) telescope described by Kepler in 1611.
Significant to us today, however, is that Tycho produced the most accurate and systematic astronomical data of his time; he successfully managed to note the orbits of the planets to a very close degree. Tycho systematically collected the triangulated locations of the planets and stars throughout the course of the year, believing that the factual, observed data was the only way forward. Tycho produced hundreds and hundreds of statements about the location of each planet over the course of the year, for example ‘On the 15th March 1572, at 2.04am the planet Mars was 32’38’’ above the horizon, and 12’30 west of the pole star’.
Tycho’s data was much sought after, and Tycho was in demand as a teacher because of it. Eventually, Johannes Kepler became Tycho’s apprentice. Kepler took Tycho’s data and used it to derive his Laws of Planetary Motion. Key to this story is that Kepler had held the view that orbits were circular, but the evidence showed instead that the planets move in elliptical orbits. Newton once described himself as ‘standing on the shoulders of giants’, and rightly so, since he used Kepler’s work to inform his work on gravity. Building on Tycho Brahe’s data, Isaac Newton (1642–1727) later deduced the fundamental mechanisms underlying the movements of the planets. Newton’s Three Laws of Motion (uniform motion, force = mass * acceleration, action-reaction), along with his Law of Universal Gravitation, therefore trace directly back to Tycho’s original observations.
Until the mid-18th century, the known planets were Mercury, Venus, Earth (obviously), Mars, Jupiter and Saturn. A detailed investigation of the orbit of Saturn showed that it was not a perfect ellipse; there was a slight deformation, which could be explained by another planet whose gravitational pull was affecting Saturn’s orbit. In 1740, predictions about the existence of another planet, derived using Newton’s mathematics, were published, and in 1781 Uranus was found, within two arc minutes of the predicted location. Uranus’ orbit was, in turn, shown to deviate from a perfect ellipse, and again another planet was posited; the discrepancy was noted in 1821, and, based on predictions made using Newton’s mathematics, Neptune was discovered in 1846.
Thus, Newton’s theory, together with accurate data, made it possible to deduce the existence and precise orbits of two previously undiscovered planets. The theory didn’t fit all the facts, however; Mercury’s orbit is not perfectly elliptical either. Newton’s theory required the presence of another planet, located between Mercury and the Sun, to pull Mercury’s orbit slightly away from its ellipse. As an aside, this hypothetical planet was to be called Vulcan, which Star Trek fans will know as the home planet of Spock. In the end, however, Mercury’s orbit was explained not by a hidden planet but by Einstein’s General Theory of Relativity.
Central to this story, however, is Tycho’s stringent collection of data and his emphasis on rigorous data integrity. It is important to note that Tycho was an innovator in his determination to collect careful, repeatable observations; until this point, no-one had done this systematically. Without his work and emphasis on clean data, the achievements of astronomy might have taken much longer to arrive.
To summarise, it is not just all about finding patterns; the data has to be right in the first place. There has to be enough data for trial and test, along with a reduction in missing data, and even repeatable observations where possible. This applies not just to data visualisation, of course!

Knocking down Stephen Few’s Business Intelligence Wall?

Stephen Few blogged recently about Business Intelligence hitting a ‘wall’; if you haven’t read this post, I strongly recommend that you read it. I have enjoyed many of Few’s blogs over the years, and this particular blog was very insightful. It focuses on the dissonance between ‘old’ business intelligence and ‘new’ business intelligence. 
‘Old’ business intelligence is hallmarked by its emphasis on the engineering aspects of the ‘technical scaffolding’ which supports business intelligence solutions. Traditionally, this is owned by the IT department, which may be localised in one place or, in my experience, spread out across the world via outsourcers, data centres and so on. The focus here is on technology, hardware, and software.
Somewhere in the traditional sphere of business intelligence, the user has been forgotten; side-lined with acronyms or perhaps even patronised with concepts which are, fundamentally, within their grasp. ‘New’ business intelligence is hallmarked by its emphasis on the importance of the user; this focuses on analysis, visualisation, and drawing conclusions from the data for the purpose of moving the business forward.
It seems to me that ‘new’ business intelligence is coming. Businesses need their data, and are starting to understand that there are different ways of obtaining it, in addition to the traditional business-user method of using Excel for everything. Peter Hinssen, in his book ‘The New Normal’, talks about technology in terms of its accessibility to all: it is no longer a mysterious entity, restricted to the rich or the technologically advanced few. Instead, it is becoming ‘the new normal’: accessible and affordable to everybody, and particularly taken for granted by younger generations, the Generation Y people born after 1978.
As Hinssen puts it, business users are starting to ask ‘Explain to me why’ questions to IT departments. These questions include: ‘Explain to me why it takes you 18 months to implement a system which only updates overnight, when I can book Easyjet flights and see my data on their site immediately after I’ve booked it?’ ‘Explain to me why I can google for every piece of data on the Internet, but I can’t access Excel spreadsheets on our system?’
In the meantime, the common currency for discussion between business users and IT will probably remain focused on specific technology.  However, new business intelligence isn’t focused on new technology, although it is probably more commonly associated with cool new visualisation technologies such as Tableau.  ‘New’ Business Intelligence is a paradigm shift towards an enablement of a user-centred approach in collecting, analysing, predicting and using data.
However, this does not mean that data warehouses are going away anytime soon, and the concerns of traditional IT departments are valid. As the guardians of the data, IT departments are tasked with looking after data on their terms, and protecting it from any potential vandalism from internal or external influences, which includes business users. IT departments are, quite rightly, reticent to allow unrestricted access to the data sources that they are required to protect. It’s not immediately obvious how to balance the pressures on data from business users, who feel entitled to their data, with the needs of the IT department to protect it. Is there a solution?
I was speaking to some of the team at Quest Software about this very issue. In order to respond to the needs of business users as well as IT teams, Quest Software are working on a Data Hub which is aimed at provisioning data to the business users, whilst ensuring that IT can carry out their guardianship role of protecting the data. 
In other words, the Data Hub solution surfaces data as a ‘one-stop shop’ across all of the data sources, which the business users can use as a ‘window’ to access the data that they need. Given that the Quest Data Hub could talk to many common data sources, and the business users could consume the data as they like, this would please the business users. Very often, business users don’t really care about the actual source of the data; they just need to know that the number is correct.
On the other hand, if the Quest Data Hub could be configured and set up by IT or a data-savvy business analyst, then IT could maintain guardianship over the data. It also means that they still ‘own’ the technical scaffolding that provides the data, and can insulate it from inadvertent mishaps. This is particularly important when the data may well be farmed out from one source to another, across firewalls, and sent out to other companies who consume the data; the potential ‘clean-up’ consequences are enormous. Further, this means that data warehouses remain in place, serving up data as they have always done.
As I understand it, the Quest Data Hub is not dependent on any particular technology. This means that it should be possible for business users to connect straight to the data sources via the hub, regardless of the type of data source, e.g. Oracle, SQL Server, DB2 or even Excel.
In the Microsoft sphere, the best way to leverage Business Intelligence is to have an integrated SharePoint environment; this includes Reporting Services, PowerPivot, and possibly even Project Crescent. Don’t misunderstand me, I have implemented SharePoint on a number of occasions; I can see what it does for customers, and I love seeing customers really make the most of SharePoint. However, in my opinion, this dependency on a particular framework isn’t good for the Microsoft stack; in my experience I have come across plenty of DBAs who do not like SharePoint. Take one example: depending on the environment, it can be difficult to set up Kerberos authentication. This means that customers struggle and get put off the technology at the early stages; in the worst cases, they can even give up, or simply implement the stand-alone, non-integrated installation instead.
To set up SharePoint properly, it is vital to recognise that it isn’t a ‘next-next-next’ installation. It needs to be properly planned, and it takes a variety of skill sets to make an enterprise solution that becomes embedded in the organisation.
That does not mean that the Quest Data Hub is trivial to set up; my inclination is that this job would be best done by IT since they already know the data sources well. I would also want to see an emphasis on both structural and guide metadata. The structural metadata will indicate the structure of the tables, keys, and so on. The guide metadata would provision this information to users in a language that is meaningful to the business. I haven’t seen the Quest Data Hub yet, but I would be interested to know more about the plans for allowing easy documentation for the structural metadata.
To summarise, ‘new’ business intelligence is coming, and the needs of business users need to be addressed. I have seen software that sits distinctly on either side of Stephen Few’s Business Intelligence wall; the closest I have seen to straddling both sides is the entire Microsoft stack. However, because it is technology-independent, the Quest Data Hub is subversive in nature: it should communicate with everything, and does not lock businesses into one technology or another; it moves as the business moves, and stretches and grows according to the business need. I look forward to seeing ways in which Stephen Few’s business intelligence wall can be broken!

Facilitating Comparison with Sparklines

Given that comparison is a starting point to any investigation of the data, it is important to ensure that people are not blindly going through SSRS chart wizards, believing the representation to be correct. As this illustration shows, it is important to double-check that the SSRS mechanisms are being correctly deployed to produce the correct visualisation. This blog will help you to do that, using sparklines as an example.

Few (2009), in his work Now You See It, described comparison as ‘the beating heart of analysis’, since it is so fundamental, so vital to the analytical activity. As Few rightly points out, it consists of two complementary activities:

– Comparison – looking for similarities
– Contrast – looking for differences
Fundamental to this activity is the comparison of magnitudes i.e. whether one value is greater than, smaller than, or the same as, another value. There are plenty of data visualisations which can help with this evaluation, including line graphs, bar charts and sparklines. However, to ensure the integrity of the visualisation, it is important to ensure that any missing data does not mislead the message of the data.
In order to ensure that this does not happen in the case of sparklines, SQL Server Reporting Services 2008 R2 introduced a new feature called ‘Domain Scope’. Essentially, this allows the report writer to align and synchronise group data, so that a column position is reserved regardless of whether or not there is data for a given quarter, for example. For comparison purposes, it is better to show a gap for missing data for a given category member, so that this feature of the data can be compared with a category member where data is present.
This feature is particularly apparent when sparklines are being created. I first saw this noted in the Reporting Services Recipes book by Paul Turley, but I have extended it to include vertical axis comparison as well. Also, from the data visualisation perspective, I always think it is important for people to understand why they are conducting an activity to produce a report; so hopefully the comments will be helpful to you as we proceed.
Notes before you begin

This uses the database called AdventureWorksDW2008R2
This assumes you are using SQL Server Reporting Services 2008 R2
I have included a sample stored procedure for you to use. You may find it here: http://goo.gl/YeC5B
If you click on any of the pictures, it will take you through to my flickr account so you can take a closer look.
1. Create a new report called Sparkline Dashboard

2. Create a Dataset using the DS_CategoryResellerAndInternetOrderSales shared dataset.

Figure: Create a Dataset

3. Drag a Matrix element onto the canvas
From the ‘Report Data’ section on the left hand side, navigate to the ‘DS_CategoryResellerAndInternetOrderSales’ dataset.
The screen will now appear as follows:

Figure: Column and Row Grouping

4. Drag ‘Reseller Order Sales’ over to the box marked ‘Data’.

5. It’s then necessary to make a column for the sparkline to be placed in. This is done by clicking on ‘Order Year’ in the Column Groups section, navigating to the ‘Add Total’ menu item, and choosing ‘Before’.

Figure: Add Total Before

6. The tablix is now prepared, ready for the sparkline chart. To add it, right-click on the column marked ‘Total’ to produce a menu, select ‘Insert’ and then ‘Sparkline’, and choose a column-style sparkline. The screen will now appear as follows:

Figure: Insert Sparkline

7. Once the sparkline is in place, it requires the correct data. From ‘Report Data’ on the left hand side, drag ‘ResellerOrderSales’ to the ‘Values’ section.

8. On the ‘Details’ item under ‘Category Groups’, enter the following expression for the ‘Grouping’ expression:
=Fields!OrderYear.Value & Fields!OrderQtr.Value

9. On the ‘Details’ item under ‘Category Groups’, enter the following expression for the ‘Sorting’ expression:

=Fields!OrderYear.Value & Fields!OrderQtr.Value

The expression should appear as follows:

Figure: Sparkline expression
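The same expression is used for both grouping and sorting so that each year and quarter combination becomes a single bar on the sparkline, in chronological order; because OrderYear is a four-digit year and OrderQtr a single digit (as in AdventureWorksDW2008R2), the concatenated string sorts correctly. Purely as a hypothetical readability variant, not required by the report, a separator can be added to make the key easier to read while debugging:

=Fields!OrderYear.Value & "-Q" & Fields!OrderQtr.Value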

10. Change the colour of the sparkline to use ‘GrayScale’
11. Now, preview the report. It will appear as follows:

Figure: First attempt at running the report

It’s clear that it’s impossible to compare properly, since the bars are not aligned with one another. We need to do some more work to ensure that the bars are aligned for comparison purposes. Here are the steps to do this:
12. On the sparkline chart, right click and choose ‘Horizontal Axis’. A property dialog appears as follows:

Figure: Horizontal Axis Properties dialog

13. In the section entitled ‘Axis Range and Interval’, click the checkbox next to ‘Align Axes in:’ and select the name of the Tablix. In this example, the Tablix is called ‘Tablix_CategoryResellerAndInternetOrderSales’. Instead of the axis being aligned to the category i.e. Accessories, Bikes and so on, this means that the axis is aligned to the grouping set up on the Tablix. In this case, our grouping is set up on the Order Years and Order Quarters. The outcome of this is that, if any of the columns are missing, there will simply be a gap where the data should be located. This helps the process of comparison, because if there is no way of identifying how the columns relate to one another, then the visualisation is misleading.

14. Click on the ‘OrderQtr’ group and look in the ‘Properties’ pane on the right hand side under ‘Group’, for the property item ‘Domain Scope’. Paste the name of the Tablix in this section. In this example, the Tablix is called ‘Tablix_CategoryResellerAndInternetOrderSales’. This window will now appear as follows:

Figure: Domain Scope

15. The only remaining issue is that the comparison may be made more difficult if the vertical axis is not aligned. To align it, right-click on the sparkline and select ‘Vertical Axis Properties’. As before, under the ‘Axis Range and Interval’ section, tick the box next to ‘Align Axes in’ and choose the Tablix called ‘Tablix_CategoryResellerAndInternetOrderSales’.

Figure: Vertical Axis

16. Finally, hide the right-most column (right-click its column handle, choose ‘Column Visibility’, and set it to be hidden). Once this column is invisible, the final report looks as follows:

Figure: Final sparkline report

It’s possible to see that the columns are aligned with each other horizontally and vertically. This allows us to see that, for example, ‘Helmets’ and ‘Bike Racks’ were the biggest sellers. On a more detailed level, we can see that ‘Locks’ and ‘Pumps’ did not sell at all in the last two quarters, but that ‘Hydration Packs’ and ‘Bike Racks’ did sell during that time.
With data that has ‘dates’ as a category, a line graph is normally appropriate. However, in this case, since there is ‘missing’ data for certain quarters, the lines do not render properly. Here is the same example, completed using line-graph sparklines:

Figure: Final line sparklines

Since some of the data is missing, it seems more difficult to compare the ‘points’ of the line graph. However, this seems easier with the bar chart, since the bars serve as distinct points, representing each quarter.
To summarise, this report would form part of a dashboard, and it’s not intended to replace detail. Instead, it is supposed to help the data consumer to formulate patterns in their head in order to help them to draw conclusions from the data.

Dashboard Design using Microsoft Connect item data for SQL Server Denali

I am presenting at the SQLPass ’24Hop’ ‘Women in Technology‘ event on March 15th 2011. The topic is Dashboard Design and Practice using SSRS, and this blog is focused on a small part of the overall SQLPass presentation. Here, I will talk about some of the design around a dashboard which displays information about the Microsoft Connect cases focused on SQL Server Denali. Before I dive in, this dashboard was produced  as a team effort between myself, who did the data visualisation, and Nic Cain and Aaron Nelson, who bravely got me the data, sanitised it, and served it up for consumption by the dashboard, and also Rob Farley, who helped put us in touch with one another. So I wanted to say ‘Thank you’ to the guys for their help, and if you like it, then please tweet them up to say ‘Thank You’ too 🙂 You’ll find them at Aaron Nelson (Twitter),  Nic Cain (Twitter) and Rob Farley (Twitter). 


Before we begin, here is the dashboard:


Figure: Connect Items Dashboard

Well, what is a dashboard? At first, it simply looks like a set of reports nailed together on a page. However, this misses an important point about dashboards, which is that they give the viewer something ‘over and above’ what the individual reports give to the data consumer. A dashboard can mean different things to different people, and there are a number of different types, which are listed here:



Strategic Dashboard – overview of how well the business is performing
Faceted Analytical Display – a multi-chart analytical display (Stephen Few’s terminology). This will be discussed in more depth below.
Monitoring Dashboard – this displays reactionary information for review only; this data is often short-term, perhaps a day old or less.


Each dashboard type has got the following elements in common:

  • Dashboards are intended to provide ‘actionability’ in addition to insight; they help the data consumer both to understand the presented data and to act upon it.
  • The reports on the page support a ‘theme’, which is the fundamental business question answered by the dashboard. In other words, what is it that the business needs to know, and what is it that they need to act upon?
  • Further, the dashboard should rest on a fundamental data model, with data that is common to all of the reports; the reports should not be completely disparate. If they are, then the data’s message may become diluted as distractions are added.
In order to explore the idea of the Faceted Analytical Display, I have used data from Microsoft Connect items focused on SQL Server Denali. This dashboard shows different perspectives on the numbers, types and statuses of Connect items opened for SQL Server Denali. To explore further, it is possible to select relevant years on the right hand side to show how the data has changed over time. If you click on the image below, it will take you to the Tableau Public website so that you can have a play for yourself!

Thus, this dashboard type is, in Stephen Few’s terminology, a “faceted analytical display”. Few defines this as a set of interactive charts (primarily graphs and tables) that simultaneously reside on a single screen, each of which presents a somewhat different view of a common dataset, and which are used together to analyse that information. I recommend that you head over to his site in order to read more about the definitional issues around dashboards, along with practical advice regarding their construction.

This dashboard isn’t a straightforward ‘Monitoring’ dashboard, because it does allow some analysis. It is also possible to ‘brush’ the data, which means that it is possible to highlight some bars and dashboard elements at the expense of other elements.  There are other considerations in the creation of the dashboard:

Colour – I used a colour-blind-friendly palette, so there are no reds or greens; orange and blue are ‘safe’, perceptually distinct colours. At the foot of the dashboard, the same colours were assigned to Connect call statuses, so ‘Fixed’ has the same colour for both ‘Closed’ and ‘Resolved’ Connect calls, and the same applies to the other status types.

Bar charts – for representing quantity and for the purposes of reading left-to-right, and for facilitating comparison within dashboard elements. 

Continuous data – the number of Connect items opened at any point is given as a continuous line chart. This line chart is interesting, since it shows that the number of Connect items has increased dramatically since the start of 2011. It’s great that everyone is getting involved by raising Connect items!

I will be interested in your feedback; please leave a comment below!
Jen x