Recently, I did a presentation on report design in SSRS at SQLBits, which is Europe’s largest SQL Server community conference. If you are interested, please click here to view the video.
The attendees are encouraged to submit session feedback. I have decided to visualise the feedback that I was given for my session here on my blog, and the full, unedited feedback can be downloaded as a text file. If you provided feedback for my session, thank you very much. As a speaker, I do pay attention to it in order to improve. The comments don’t always make easy reading, but you can view the video and comments, and decide for yourself.
Generally speaking, rating data is interesting from a cognitive psychology perspective, since it involves thinking about how humans categorise, rank and rate objects and ideas. If you’re interested in reading more about this, I can suggest Lakoff’s Women, Fire and Dangerous Things as an introduction to the topic.
Specifically, the SQLBits session feedback data is interesting because it gives me data about myself that I can investigate. The data itself is non-parametric: this means that very few assumptions are made about the data. For instance, no assumption is made about the shape of the data, and only ordinal data is assumed. Ordinal data means that points on the scale are ordered, e.g. a score of 9 for ‘Quality of Demos’ is higher than a score of 8 for the same metric.
For the data to be considered parametric, it would need to be interval data at a minimum. An example of interval data is temperature, because the units allow us to equate intervals between points, e.g. the difference between 10 and 20 degrees Centigrade is the same as the difference between 20 and 30 degrees Centigrade. Rating data doesn’t meet this bar: what one person gives as a ‘5’ out of a possible 9 isn’t equivalent to another person’s ‘5’, since each respondent applies their own criteria. Further, it isn’t possible to say that the differences between the points are equivalent across all cases; so a person who gives a rating of 9 isn’t exactly three times more satisfied than the person who gives a rating of 3.
As long as the issues around rating data are borne in mind, the data can still yield insights via data visualisation, whereby we look for patterns in the data. In order to do this, I have produced a heatmap to display the data, which you can see below. The ‘Rating (bin)’ columns refer to the actual rating number given, where ‘1’ is the lowest level of satisfaction, and ‘9’ is the highest:
Heatmaps can be described as a visualisation where colour is primarily used to encode quantitative data, or where there is a large amount of data that would produce over-plotting on a line graph. Heatmaps can take many forms. For example, they can be used for binary displays, i.e. one colour for ‘true’ and another colour for ‘false’. Alternatively, heatmaps can simply look like a coloured ‘block’ spreadsheet, where the cells form a matrix of rows and columns. I used Tableau to quickly produce the above visual.
Heatmaps, as in the above example, can be used to display multivariate data. Here, the ratings are ‘bucketed’ as per the columns, so we can see approximately how many people placed a rank in each category. The rows display the quantitative variables, which consist of the following items:
- Speaker’s knowledge of the subject area
- Speaker’s presentation skills
- Quality of Demos
- Accuracy of session abstract
- Were your expectations met
- Overall performance of the session
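As a rough sketch of how the underlying counts for such a heatmap might be assembled, here is a few lines of Python. The feedback scores below are made up purely for illustration (they are not my actual ratings), and the criteria names are just two of the items listed above:

```python
from collections import Counter

# Hypothetical feedback: each respondent gives a 1-9 rating per criterion.
# These numbers are invented for illustration only.
feedback = {
    "Speaker's knowledge of the subject area": [9, 8, 9, 7, 8],
    "Quality of Demos": [8, 8, 7, 9, 6],
}

# Build a criterion x rating-bin matrix of counts, mirroring the
# 'Rating (bin)' columns of the heatmap: one count per bin 1..9.
bins = range(1, 10)
matrix = {
    criterion: [Counter(ratings)[b] for b in bins]
    for criterion, ratings in feedback.items()
}

for criterion, counts in matrix.items():
    print(criterion, counts)
```

Each row of `matrix` is then a candidate heatmap row, with the count driving the colour of each cell.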
The combination of the colours displays the profile of the rating criteria. I personally see this as a ‘bird’s eye view’ of a line graph. In other words, where does the darkest colour reside, and where is the ‘tail’? Fortunately for me, the darker colours are on the right-hand side of the visualisation, which shows that I received more ratings at the ‘higher’ end of the scale than at the lower.
To make the heatmap clearer, here are some notes on the design choices:
- Higher values are represented by darker colours; a lighter colour means that fewer ratings were given.
- Blue has been chosen, since red and green are not ‘colour blind’ friendly.
- Element size is also used to double-encode the quantity. In other words, the smaller ‘blocks’ show that fewer ratings were given for the ranks ‘1’, ‘2’ and so on, and more votes were given at the higher end of the scale.
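The double-encoding idea can be sketched in a few lines: both the colour intensity and the block size are driven by the same count, so a cell with more ratings is both darker and larger. The scaling functions below are my own illustrative choices, not what Tableau does internally:

```python
def encode_cell(count, max_count):
    """Map a rating count to a (colour intensity, block size) pair.

    Both channels are driven by the same value (double encoding):
    more ratings -> darker shade AND bigger block.
    Intensity runs 0.0 (lightest) to 1.0 (darkest blue);
    size has a floor of 0.2 so occupied cells never vanish.
    """
    if max_count == 0:
        return 0.0, 0.0
    intensity = count / max_count
    size = 0.2 + 0.8 * (count / max_count)
    return intensity, size

# A cell holding the maximum count is fully dark and full-sized.
print(encode_cell(10, 10))  # -> (1.0, 1.0)
```

Driving two visual channels from one value is redundant on purpose: if a reader struggles to compare colour intensities, the size differences carry the same message.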
So, all that remains is for you to have a look at the video and see what you think! I look forward to your comments, as always.
2 thoughts on “SQLBits Feedback Post-Mortem”
Interesting stuff. You left a comment on my blog post (http://sqlblog.com/blogs/jamie_thomson/archive/2011/01/15/analysing-sqlbits-feedback.aspx) offering to produce the same for me so if it isn't too much trouble for you I'd really appreciate you doing that.
The raw numbers are in the spreadsheet that I linked to. You can send it to me at jamie[at]jamie[HYPHEN]thomsonDOTnet
This is a great post, and I appreciate your description of the different kinds of parameters one finds in data: that's all new stuff to me.
Looks like you got great feedback too. The results prove that you cannot please all of the people all of the time, and that trying to do so is doomed to failure. Looks like your weakest area was accuracy of the abstract. What would be really interesting would be to compare your feedback to that of the other sessions, because I think getting an abstract right is very difficult; I expect many sessions would lose points on this category.