Reliability analysis allows
the researcher to determine the extent to which a scale produces consistent
results if the measurements are repeated.
Reliability analysis is conducted when you have two or more questions that will be summed to form a score for a single variable.
For example, if you want to know about the construct job satisfaction, you could simply ask,
"Are you satisfied with your job?"
You can get a yes/no response, or a rating on a scale from "very satisfied" to "very dissatisfied".
But this does not tell you much about which elements of the job the participant is satisfied with. So you can ask a series of questions: "Do you get along with your coworkers?" "Does your supervisor provide adequate feedback?" "Are you satisfied with your work schedule?" Once you start asking a series of questions, you want to know whether these questions are reliably measuring the same construct.
Reliability is determined by examining the proportion of systematic variation in a scale (in other words, if a respondent tends to rate one question highly, does that respondent also rate the other questions highly?). If all the participants are consistent in the way they respond to the various questions, the scale yields consistent results and is considered reliable.
Cronbach's alpha is a statistic used to measure internal consistency: Cronbach's alpha increases as
the intercorrelations among the items included in the analysis increase.
If the questions on the survey or the items being tested have very high intercorrelations, the questions are considered to be measuring the same underlying construct.
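The relationship between item intercorrelation and alpha can be seen in the standard formula, alpha = (k/(k-1)) * (1 - sum of item variances / variance of summed scores). Below is a minimal sketch of that computation in Python; the function name and the small example data set are illustrative assumptions, not part of this course's materials.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items matrix of scores.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of questions
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each question
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: five respondents rating three job-satisfaction
# questions on a 1-5 scale, responding fairly consistently across items.
scores = [[4, 5, 4],
          [2, 2, 3],
          [5, 5, 5],
          [3, 3, 2],
          [4, 4, 5]]
print(round(cronbach_alpha(scores), 3))  # → 0.922
```

Because the respondents who rate one question highly also rate the others highly, the items intercorrelate strongly and alpha comes out high.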
In the job satisfaction example, if all participants who are satisfied with their supervisor's feedback also get along with their coworkers, the participants are all responding to the same construct: satisfaction with coworkers and supervisors.
Similar to correlation analysis and the measures r and r², the higher the alpha, the more reliable the instrument you are testing. A general rule of thumb for interpreting reliability is:
Alpha above .70
is considered reliable.
Alpha between .60 and .70 is probably reliable, but you should evaluate each question to determine whether you could raise the alpha by eliminating it from the analysis.
Alpha below .60 is considered not reliable. You should either eliminate some items from the instrument to raise reliability or revise the instrument to increase its reliability.
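The rule of thumb above can be encoded as a small helper; this is only a sketch using the thresholds stated in this section, and the function name is an illustrative assumption.

```python
def interpret_alpha(alpha):
    """Rule-of-thumb interpretation of a Cronbach's alpha value."""
    if alpha >= 0.70:
        return "reliable"
    if alpha >= 0.60:
        return "probably reliable; consider dropping weak items to raise alpha"
    return "not reliable; eliminate items or revise the instrument"

print(interpret_alpha(0.83))  # → reliable
print(interpret_alpha(0.52))  # → not reliable; eliminate items or revise the instrument
```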
For more information on how to conduct reliability analysis using Cronbach's alpha in SPSS, see the flash lecture.