Form Data Analysis provides analytics on any form created and used in intelliHR, making it the catch-all page for viewing responses to anything sent to your team. The page includes a Reliability Indicator that helps viewers judge how much confidence they can place in the accuracy of the responses, so the data can support solid data-driven decisions. In business cases, it provides statistical assurance that the responses you have received are valid.
This article covers:
- What is the reliability indicator?
- What is reliability?
- Who is responsible for various aspects of reliability?
- How is reliability assessed on the intelliHR Form Data Analysis Page?
- What can you do to increase the reliability of your data?
What is the reliability indicator?
The reliability indicator provides form data analytics users with additional insight into how they can use their data. It is visible on the Form Data Analysis home page as well as the form-specific data analysis page.
What is reliability?
A form’s reliability is the degree to which analysis of its responses can be taken to represent the entire cohort to which the form was issued.
Reliability is an important feature in determining how to use analytics to effect change. There are many different factors that can influence the reliability of data, including:
- The authenticity of each response. Is everyone motivated and comfortable enough to respond honestly?
- Whether the form is built to capture accurate responses. Is there a diverse enough array of response values for each respondent to select the most accurate one?
- Whether the majority of the population responded. Are there sufficient respondents to assume that the analysis will reflect the cohort to which the form was issued?
- The exclusion of illegitimate form responses. Does only the intended cohort have access to the form, and can each person respond only once?
Who is responsible for various aspects of reliability?
As the list above shows, many complex and somewhat overlapping factors inform overall reliability, and no single individual or analysis tool can assess it perfectly. Three parties play a part in improving and assessing reliability: the person designing the form, the form respondent, and intelliHR.
The Form Creator
The form creator is responsible for designing a form that allows for accurate responses. This can mean:
- Keeping the form short and simple
- Avoiding double-barrelled questions
- Accounting for central tendency bias (unsure respondents tend to pick the middle option, so it can help to omit one)
- Providing guides to the scales so everyone can respond consistently
- Using language that is clear and hard to misinterpret
The Form Respondent
It is the form respondent’s responsibility to answer the forms issued to them as honestly as possible, and to ensure that they, and they alone, complete them.
intelliHR
intelliHR provides statistical tools that describe the relationship between the number of respondents and the number of forms issued. This gives the user an insight into how reliably a form's responses represent the entire cohort to which it was issued.
This matters because assessing the reliability of your data lets you, as its reviewer, confirm that the analysis informing change has a high reliability factor. That factor determines whether the data collected should merely provide supporting evidence or actively drive data-led decisions.
How is reliability assessed on the intelliHR Form Data Analysis Page?
Completion Rate
Completion rate is a useful measure of the quality of data collected by a form. Ideally you want to collect as much data as possible, because fewer unknown responses yield more accurate and reliable insights. The completion rate is given by the below calculation:
Completion Rate = (Number of Responses ÷ Number of Forms Issued) × 100%
However, while it is always valuable to gather as many responses as possible, a limitation of completion rate is that it does not take into account the absolute number of forms issued: 3 responses out of 5 forms and 300 out of 500 both yield 60%.
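The completion-rate calculation described above can be sketched as follows (the function name is illustrative, not part of the intelliHR product):

```python
def completion_rate(responses: int, forms_issued: int) -> float:
    """Completion rate as a percentage of forms issued."""
    if forms_issued <= 0:
        raise ValueError("forms_issued must be positive")
    return 100.0 * responses / forms_issued

# The same 60% rate can arise from very different sample sizes,
# which is exactly the limitation noted above.
print(completion_rate(3, 5))      # 60.0
print(completion_rate(300, 500))  # 60.0
```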
Reliability Indicator
The reliability indicator, on the other hand, is designed to account for the total number of forms issued. An internal calculation maps the same data used for completion rate to a value that takes the number of forms issued into account, and that value is then placed into one of three buckets: Low, Moderate, or High accuracy. As a result, the page can show the following:
| Completion Rate | Responses/Forms Issued | Reliability Level |
| --- | --- | --- |
| 60% | 3/5 | Low |
| 60% | 30/50 | Moderate |
| 60% | 300/500 | High |
In practice, completion rate shows how complete a census is, whereas the reliability level indicates how the form's data and analyses should be used, and how far to trust them as a representation of the population.
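intelliHR does not publish the internal calculation, so the sketch below is only one plausible way such a bucketing could work: a standard margin-of-error estimate for a proportion with a finite population correction, with illustrative thresholds chosen so the three example rows above fall into Low, Moderate, and High. The function names, thresholds, and the approach itself are all assumptions, not the product's actual formula.

```python
import math

def margin_of_error(responses: int, forms_issued: int,
                    z: float = 1.96, p: float = 0.5) -> float:
    """Approximate 95% margin of error for a proportion, with a finite
    population correction. p = 0.5 is the worst case. Assumes at least
    one response and forms_issued > 1."""
    se = math.sqrt(p * (1 - p) / responses)
    fpc = math.sqrt((forms_issued - responses) / (forms_issued - 1))
    return z * se * fpc

def reliability_level(responses: int, forms_issued: int) -> str:
    """Hypothetical bucketing: thresholds are illustrative only."""
    moe = margin_of_error(responses, forms_issued)
    if moe <= 0.05:
        return "High"
    if moe <= 0.20:
        return "Moderate"
    return "Low"

for n, issued in [(3, 5), (30, 50), (300, 500)]:
    print(f"{n}/{issued}: {reliability_level(n, issued)}")
```

Note how the same 60% completion rate maps to different levels purely because of sample size, which is the behaviour the table above describes.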
How to interpret the reliability indicator?
Low Accuracy
Low accuracy means that the data and analyses should not be extrapolated to represent the whole cohort to whom the form was issued. In cases like these, the quantitative data or snippets of responses can be used anecdotally but should not be drivers of decisions affecting the cohort.
Moderate Accuracy
Moderate accuracy means that data and analyses should be treated with caution. While not as unreliable as low-accuracy data, which barely represents the cohort at all, insights at this level can corroborate trends seen in other forms or elsewhere in the business, but should not influence major business decisions on their own.
High Accuracy
High accuracy means that data and analyses can be trusted to represent the cohort issued the form. This data can be used as primary evidence to effect changes.
Not Applicable
Not Applicable can occur for two different reasons.
The first is on the form selection view when there are not yet any form responses. In this case, it is impossible to analyze reliability as there is no content or trends that could be extrapolated to the cohort.
The other reason it may appear is on the form data analysis page itself when questions are filtered. Filtering by questions means the base cohort is no longer identifiable: respondents may each have completed a different subset of the form, for example when they each choose to answer a different selection of optional questions.
What can you do to increase the reliability of your data?
Regardless of what tools are being used to evaluate reliability, there are a few overlapping common themes that will invariably increase reliability.
Encourage Good Form Completion Practice
Whether the group being surveyed is small or large, encouraging good form completion practices is at the heart of collecting reliable data. This can include encouraging employees to complete forms and setting aside sufficient time for them to respond accurately and reflectively.
Designing an Appropriate Group
One of the strengths of surveying a defined group, or subsets of it, such as a business, is the capability to collect census data, or at minimum to know what proportion of the population is represented in the data. An entire group of people can, and should, be asked to complete a form.