IRIS includes built-in validity safeguards to help ensure survey results accurately reflect a participant’s responses.
Every IRIS survey undergoes multiple automated validity checks during scoring. When the system detects response patterns that may affect reliability, a Validity Alert icon (⚠️) appears next to the survey.
Validity alerts protect the integrity of interpretation. They are not punitive.
What the Alert Icon Means
The alert icon indicates that one or more automated validity checks identified unusual response patterns.
If a survey's validity score falls below the recommended reliability threshold (generally 75%), the results should not be relied upon for interpretation or decision-making.
A validity alert does not mean someone answered “wrong.”
It means the data may require review.
Common Triggers
Validity alerts may appear due to patterns such as:
Completing the survey unusually quickly
Selecting an unusually low or high number of adjectives
Highly inconsistent or erratic response patterns
Uniform responding (selecting the same type of adjective repeatedly)
Extremely negative or improbable response patterns
Indicators of random answering
These checks are designed to detect reliability issues — not to judge character.
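To make the triggers above concrete, here is a minimal illustrative sketch of how checks of this kind could be expressed. The thresholds, field names, and flag labels are hypothetical examples, not IRIS's actual scoring rules.

```python
# Hypothetical illustration of trigger-style validity checks.
# All thresholds and names below are invented for this example;
# IRIS's real checks and cutoffs are not published here.

def validity_flags(duration_seconds, selected_count, positive_ratio):
    """Return a list of reasons a survey might be flagged."""
    flags = []
    if duration_seconds < 60:
        # Completing the survey unusually quickly
        flags.append("rushed completion")
    if selected_count < 10 or selected_count > 90:
        # Unusually low or high number of adjectives selected
        flags.append("atypical selection count")
    if positive_ratio > 0.95 or positive_ratio < 0.05:
        # Uniform responding (same type of adjective repeatedly)
        flags.append("uniform responding")
    return flags

# A rushed survey with very few selections trips two checks:
print(validity_flags(duration_seconds=45, selected_count=8, positive_ratio=0.5))
```

The point of the sketch is that each trigger is a mechanical pattern check, which is why a flag signals "review the data," not "the participant answered wrong."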
Minimal Engagement
Occasionally, a validity alert may result from minimal engagement — for example, quickly selecting descriptors without thoughtful reflection.
The survey works best when participants take their time and select adjectives that genuinely reflect them. There is no advantage to selecting as few or as many as possible.
If minimal engagement is suspected, clarify expectations and consider inviting a retake in a focused environment.
When Should Someone Retake the Survey?
A retake is recommended when:
The validity score is below 75%
The participant acknowledges distraction, misunderstanding, or stress
The results appear clearly inconsistent with known behavior
The survey was completed in a rushed or compromised environment
A retake may not be necessary if:
The participant confirms thoughtful responding
The context explains unusual patterns
The results align with observed behavior
Use professional judgment.
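The retake criteria above can be summarized as a simple decision rule. This is an illustrative sketch only; the 75% threshold comes from this article, the parameter names are invented, and the final call always rests on professional judgment.

```python
# Illustrative summary of the retake guidance above.
# Parameter names are hypothetical; only the 75% threshold
# comes from the article itself.

def retake_recommended(validity_score, acknowledged_issue,
                       inconsistent_with_behavior, compromised_environment):
    """Return True when any of the article's retake criteria applies:
    score below 75%, acknowledged distraction/misunderstanding/stress,
    results clearly inconsistent with known behavior, or a rushed or
    compromised completion environment."""
    return (validity_score < 75
            or acknowledged_issue
            or inconsistent_with_behavior
            or compromised_environment)

# A survey scoring 80% with no other concerns would not need a retake:
print(retake_recommended(80, False, False, False))
```

Note that the rule is deliberately one-directional: any single criterion is enough to suggest a retake, while skipping a retake requires that none of them apply.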
How to Approach the Conversation
When a survey is flagged, approach the discussion with curiosity — not accusation.
You might ask:
“How did you feel while completing the survey?”
“Were you able to focus without interruption?”
“Did any of the wording feel confusing?”
“Were you going through anything stressful at the time?”
Validity alerts typically stem from one of two areas:
Attitude Factors
Disengagement or indifference
Impression management (responding overly positively) or exaggerating distress (responding overly negatively)
Trying to stand out as “different”
Emotional distress at the time of completion
Ability Factors
Vocabulary challenges
Misunderstanding instructions
Distraction, fatigue, or illness
External stressors impacting concentration
In some cases — especially with highly negative patterns — responses may reflect genuine emotional strain rather than distortion. Context matters.
Do not interpret flagged data as diagnostic until validity concerns have been addressed.
Best Practice
If in doubt, retake the survey in a focused environment after clarifying expectations.
Accurate data supports:
Stronger coaching conversations
Reliable growth tracking
Sound hiring decisions
Validity alerts are there to help you interpret responsibly.