The Feedback Commons provides a time-saving workflow for listening to people and deciding how to improve your work based on their feedback. It works best when you ask a mix of staff and stakeholders for their perspectives on how things are going every few months.
By now you should have created a survey and sent it out. Whether you collected responses on paper and typed them in, deployed people with tablets running the survey in kiosk mode, or used our email campaign tool, all the data ends up in the same place - and you have probably received an email inviting you to review the results. Now you're going to explore the responses, looking for patterns that reveal more about the people whose feedback you are considering.
The first page shows the categories of feedback. If you have at least 30 responses, you'll also see several key findings below. These are your top strengths and weaknesses, based on a scan of every question and every combination (pair of questions) in the survey. Because others have asked similar questions before you, the Commons can detect when your feedback is markedly better or worse than the norms established by what others have observed.
For a typical 10-question survey, containing two multiple-choice questions about the person and three scale questions (0 to 10) about their perception of a relationship or progress, there are over 100 possible patterns that could correlate with broader trends in long-term outcomes.
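To see why the count grows so quickly, here is one rough way to model it. This is an illustrative sketch, not the Commons' actual formula: it assumes every question can be viewed on its own, and every question can also be segmented by every other question (an ordered pair, since "A broken down by B" is a different chart than "B broken down by A").

```python
def pattern_count(n_questions: int) -> int:
    """Illustrative model of how many patterns a survey can yield.

    Counts each single-question chart plus every ordered pair
    (question A segmented by question B). Not the Commons' real
    algorithm - just a back-of-the-envelope estimate.
    """
    singles = n_questions
    ordered_pairs = n_questions * (n_questions - 1)  # A-by-B differs from B-by-A
    return singles + ordered_pairs

print(pattern_count(10))  # 10 + 90 = 100
```

Even this simple model reaches 100 patterns for a 10-question survey, before counting per-answer-option breakdowns.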
Our algorithms scan all of these patterns within pairs of questions and report the combination with the sharpest contrast and the strongest evidence across all the data you collected.
In this example, answers from men and women show opposite trends.
The numbers in grey circles on the charts are net performance scores - a simple way to summarize each bar chart as a single number on a -100 to +100 scale, for quick comparison. The colored segments of each bar (red, yellow, green) show the proportions of negative, neutral, and positive scores for a given question, much as a pie chart does. (See this tutorial for more on net performance scores and benchmarking.)
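A minimal sketch of how such a score could be computed, assuming it works like a Net Promoter-style metric (percent positive minus percent negative). The cutoffs below follow the common NPS convention (9-10 positive, 0-6 negative); the Commons' exact thresholds may differ.

```python
def net_performance_score(scores, positive=(9, 10), negative=range(0, 7)):
    """Hypothetical net performance score on a -100 to +100 scale.

    Computed as the percentage of positive responses minus the
    percentage of negative responses. Threshold choices here are
    assumptions modeled on the Net Promoter convention.
    """
    n = len(scores)
    pos = sum(1 for s in scores if s in positive)
    neg = sum(1 for s in scores if s in negative)
    return round(100 * (pos - neg) / n)

print(net_performance_score([10, 9, 8, 7, 3, 2]))  # 2 positive, 2 negative of 6 -> 0
print(net_performance_score([10, 10, 10, 10]))     # all positive -> 100
```

A score of 0 means positives and negatives exactly cancel out, which is why the balanced example later in this article lands in the middle of the scale.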
Combinations of questions can often reveal more about a group of people than the component questions do on their own.
In this real example, the overall response is exactly balanced between positive, neutral, and negative feedback: as many people would recommend this program as would recommend some other program instead. But when you combine these responses with other data from the survey - in this case, the source of the relationship - it becomes clear that a few industry partners are promoters, while the vast majority of respondents - who received the survey because they signed up for the newsletter - are not.
Looking only at averages can hide the real story, but scanning hundreds of charts like this by hand isn't practical either. That's why the Commons saves you time by scanning your feedback and flagging the charts that are potentially interesting.
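The way an average can mask a segmented story can be shown with a few lines of code. The data below is invented for illustration (it is not the real example above), but it mirrors the same shape: a middling overall mean that splits into one enthusiastic segment and one unhappy one.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical responses: (source of relationship, 0-10 recommendation score)
responses = [
    ("industry partner", 10), ("industry partner", 9),
    ("newsletter", 3), ("newsletter", 2),
    ("newsletter", 4), ("newsletter", 6),
]

# The overall average looks unremarkable...
overall = mean(score for _, score in responses)

# ...but grouping by the source of the relationship reveals two very
# different stories hiding inside it.
by_source = defaultdict(list)
for source, score in responses:
    by_source[source].append(score)

segment_means = {source: mean(scores) for source, scores in by_source.items()}

print(round(overall, 1))  # 5.7 - looks middling
print(segment_means)      # partners near the top, newsletter signups near the bottom
```

This is exactly the kind of split the Commons scans for automatically across every pair of questions.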
As you begin to collect responses over time, you can see those trends too, and export them into your reports.
Note: our timeline analysis automatically groups responses by calendar month, and you can combine data from multiple surveys in your analysis.
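The calendar-month grouping described above can be sketched as follows. The dates and scores are hypothetical, and the bucketing key (year, month) is an assumption about how the grouping works; the Commons handles this for you automatically.

```python
from collections import defaultdict
from datetime import date

# Hypothetical timestamped scores, possibly pooled from several surveys
responses = [
    (date(2023, 1, 5), 7), (date(2023, 1, 20), 9),
    (date(2023, 2, 3), 4), (date(2023, 2, 28), 6),
]

# Bucket each response by calendar month
monthly = defaultdict(list)
for day, score in responses:
    monthly[(day.year, day.month)].append(score)

# Average each month's scores to produce a trend line
trend = {month: sum(scores) / len(scores) for month, scores in sorted(monthly.items())}
print(trend)  # {(2023, 1): 8.0, (2023, 2): 5.0}
```

Because the grouping key is just (year, month), responses from multiple surveys merge naturally into one timeline.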
We also include sentiment analysis wherever people provide longer answers. The algorithm looks at the words in each comment and scores the comment as positive or negative based on its word choices - a quick way to understand overall trends in open-ended feedback. In addition, the little button in the upper right exports these comments into a more advanced wordtree tool, which can reveal the main themes in the comments.
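A toy version of word-choice-based sentiment scoring looks like this. The word lists are invented for illustration and far smaller than any production lexicon, and the Commons' actual algorithm may weigh words differently; this just shows the general lexicon-counting idea.

```python
# Invented mini-lexicons for illustration only
POSITIVE = {"great", "helpful", "love", "excellent", "good"}
NEGATIVE = {"bad", "slow", "confusing", "poor", "hate"}

def comment_sentiment(comment: str) -> str:
    """Score a comment by counting positive vs negative words."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(comment_sentiment("The staff were great and very helpful"))  # positive
print(comment_sentiment("The process was slow and confusing"))     # negative
```

Real systems handle negation, punctuation, and word weights, but even this crude count surfaces the broad trend across many comments.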
Click any question in the side menu to reveal one or more chart types you can make, depending on the kind of data that question collects and the other questions in your survey it can be combined with.
If you have more questions, feel free to contact us and we can help.