Head Office Singapore: 31 Rochester Drive, Level 24, Singapore 138637
India Office: 880, Adarsh Nagar, Jogeshwari West, Mumbai 400053
Reliability: Why Are Researchers Concerned About the Reliability of Data?

Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events.

1.    We use two major processes in measurement: conceptualization and operationalization.

a.    Conceptualization refers to taking an abstract construct and refining it by giving it a conceptual or theoretical definition. Conceptualization is the process of thinking through the various possible meanings of a construct.

b.    Operationalization, on the other hand, refers to the process of moving from a construct’s conceptual definition to specific activities or measures that allow a researcher to observe it empirically.

c.     For example, the effectiveness of a campaign for a micro-credit scheme amongst a group of workers is an outcome that can vary from very effective to very ineffective. Conceptually, the effectiveness of a campaign is a function of how appealing, easy to understand, relevant, and persuasive the campaign is perceived to be. Operationally, a highly effective campaign manifests itself in high sign-ups for the programme.

Validity and reliability are central constructs that help establish the truthfulness, credibility, and believability of findings.

Reliability refers to the consistency of results, i.e., obtaining similar results when the study is repeated under similar conditions. For example, if a survey measuring the disposition to avail of a micro-credit scheme amongst a group of workers yields consistent information from more than one group, at more than one time (test-retest), and with more than one instrument/questionnaire (parallel forms), then the results can be said to be reliable.

Reliability means dependability or consistency: the same result is repeated or recurs under identical or very similar conditions, e.g., taking the same personality test twice and obtaining similar scores. The opposite of reliability is an erratic, unstable, or inconsistent result that arises from the measurement itself.

Three Types of Reliability:

Stability reliability is reliability over time, i.e., does the test deliver the same answer over different time periods? The test-retest method can verify an indicator’s degree of stability reliability. To meet this criterion when studying wellbeing, test-retest (stability) wellbeing scores should be the same or similar over different time periods.

Representative reliability is reliability across subpopulations or different types of cases: does the test deliver the same answer when applied to different groups (e.g., different classes, races, sexes, age groups)?

For example, if I ask a question about a person’s age, this checks whether both 20- and 50-year-olds give an accurate answer. A subpopulation analysis verifies whether an indicator has this type of reliability, or whether significant differences exist across segments.

Equivalence reliability applies when researchers use multiple indicators, that is, when a construct is measured with multiple specific measures (e.g., several items in a questionnaire all measure the same construct). Equivalence reliability addresses the question of whether all the measures yield consistent results across different indicators.

We verify equivalence reliability with the split-half method. This involves dividing the indicators of the same construct into two groups, usually by a random process, and determining whether both halves give the same results.

To meet this criterion when studying wellbeing, scores should be the same or similar across measures that measure the same thing, and internal reliability (Cronbach’s alpha; α) should be high.
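As an illustration, both equivalence checks can be computed directly from item-level data. The sketch below uses plain Python and invented questionnaire scores (four items, six respondents): it applies the split-half method with the Spearman-Brown correction, and computes Cronbach’s alpha from the item and total-score variances.

```python
from statistics import variance

# Invented example data: 4 questionnaire items answered by 6 respondents,
# all intended to measure the same construct (e.g., wellbeing).
items = [
    [3, 4, 5, 2, 4, 3],  # item 1
    [3, 5, 4, 2, 5, 3],  # item 2
    [2, 4, 5, 3, 4, 2],  # item 3
    [3, 4, 4, 2, 5, 3],  # item 4
]

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(items):
    """Correlate two halves of the test, then apply the Spearman-Brown
    correction to estimate reliability at full test length."""
    half1 = [sum(s) for s in zip(*items[0::2])]  # per-respondent sums, odd items
    half2 = [sum(s) for s in zip(*items[1::2])]  # per-respondent sums, even items
    r = pearson_r(half1, half2)
    return 2 * r / (1 + r)

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(s) for s in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(s) for s in items) / variance(totals))

print(round(split_half_reliability(items), 2))  # high value => halves agree
print(round(cronbach_alpha(items), 2))          # high alpha => internally consistent
```

Note that the split shown here is fixed (odd vs. even positions) for reproducibility; in practice the split is randomized, which is why alpha, the average over all possible splits, is usually preferred.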

We can improve reliability by doing the following: (1) clearly conceptualize constructs, (2) use a precise level of measurement, (3) use multiple indicators, and (4) use pilot tests.

In Sum: While both these metrics are pivotal for researchers, they are evaluated independently, so a study can be reliable without being valid. One can conceive of four scenarios: (1) high reliability and high validity; (2) high reliability and low validity; (3) low reliability and low validity; and (4) low reliability and high validity, which cannot occur in practice, because reliability is necessary for validity. Reliability is also easier to achieve than validity. Although reliability is necessary to have a valid measure of a concept, it does not guarantee that the measure will be valid; it is not a sufficient condition for validity.

Qualitative Research- Validity & Reliability

1.    First, to be considered valid, a researcher’s truth claims need to be plausible and, as Fine (1999) argued, intersubjectively “good enough” (i.e., understandable by many other people). The qualitative researcher accepts these accounts not as the one truth about the world, nor as an invention or something arbitrary; rather, they are descriptions that reveal the researcher’s experiences with the empirical data.

2.    Second, a researcher’s empirical claims gain validity when they are supported by numerous pieces of diverse empirical data, i.e., verbatims. Validity arises out of the cumulative impact of collating diverse, individually trivial pieces of information to create strategic insights and inferences.

3.    Third, validity increases as researchers search continuously through diverse data and consider the related themes. Validity grows as a researcher recognizes a dense connectivity in disparate details. It grows with the creation of a web of dynamic connections across diverse realms, not only with the number of specifics that are connected.

Reliability: The qualitative researcher does not focus on the objectivity, standardization, and replication of data, i.e., the positivist elements of quantitative research. Rather, the researcher accepts that alternative measures can translate into results that are not consistent or reproducible.

This is because in qualitative research, data collection is an interactive process in which researcher interaction plays a pivotal role in defining the context and the engagement. Thus, to establish reliability the researcher deploys many techniques (e.g., interviews, participant observation, photographs) to record observations, and attempts to be consistent and non-judgemental at all times.

These diverse measures and interactions with different researchers are beneficial because they can illuminate different facets or dimensions of the subject matter. By contrast, the objective and reliable measures of the quantitative researcher may neglect key aspects of the diversity that exists in the social world.

Different Types of Reliability

Test-retest reliability (stability reliability), e.g., wellbeing

§  Same (similar) scores over different time periods

Representative reliability, e.g., wellbeing

§  Same (similar) scores across groups

Equivalence reliability

§  Same (similar) scores across measures that are measuring the same thing

§  Internal reliability (Cronbach’s alpha; α)

Inter-rater reliability

§  Same (similar) scores across two or more raters

1.    Test-retest reliability: Correlate the scores of a person on the same test at two different points in time.

2.    Parallel forms reliability: Correlate scores derived from parallel forms of a test, which measure the same construct with different items.

3.    Split-half reliability: Correlate scores derived from two halves of a test; this method is used when no alternate form of the test is available.

4.    Internal reliability or Cronbach’s alpha (α): Correlate scores derived from splitting the test randomly in all possible ways.

5.    Inter-rater reliability: Correlate scores across two or more different raters.
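Each of the five checks above is, at heart, a correlation of two sets of scores. A minimal sketch in plain Python, with invented scores, for the test-retest and inter-rater cases:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Test-retest: the same respondents take the same wellbeing test twice.
time1 = [10, 12, 9, 15, 11]   # scores at time 1 (invented)
time2 = [11, 13, 9, 14, 12]   # scores at time 2 (invented)
test_retest = pearson_r(time1, time2)   # high r => stable over time

# Inter-rater: two raters score the same five interview transcripts.
rater_a = [4, 3, 5, 2, 4]     # invented ratings
rater_b = [4, 3, 4, 2, 5]
inter_rater = pearson_r(rater_a, rater_b)   # high r => raters agree

print(round(test_retest, 2), round(inter_rater, 2))
```

The other three checks follow the same pattern, differing only in what is correlated: scores on two parallel forms, on two halves of one test, or (for alpha) averaged over all possible splits.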

Different types of error: Quantitative research

Test-retest reliability

–      Changes over time are treated as error.

Parallel forms reliability, split-half & internal reliability (Cronbach’s alpha)

–      Content differences in test items are treated as error.

Inter-rater reliability

–      Differences between raters are treated as error.

VALIDITY