A. All scientific research, including social science research, involves observation and/or measurement of the empirical world.
B. In sociology, quantitative researchers systematically observe the social world, either directly or indirectly, and attempt to assign precise numeric scores to those observations. That is, they attempt to measure the empirical social world.
II. Measurement in Quantitative Social Research.
A. In order to determine if and how the variables in a hypothesis change and whether change in one variable causes or is associated with change in another variable, you must be able to measure the variables in question.
B. Measurement is the process of classifying or quantifying empirical observations according to precise methods and procedures.
C. What do sociologists measure?
1. Characteristics and features of individuals or groups.
2. Attitudes, opinions, perceptions, and orientations of individuals or groups.
3. Behaviors of individuals or groups.
D. Measurement in the social sciences is not always easy.
1. Measuring abstractions and intangibles.
2. Changing meanings and interpretations.
3. Assigning numbers to social phenomena.
4. Approximate nature of social science measures.
a. Random measurement error.
E. The importance of validity and reliability.
1. It is important that measurements in the social sciences be valid.
a. A measure is valid if it actually measures the concept it is supposed to be measuring.
b. Assessing validity.
i. Face validity.
ii. Content validity.
iii. Predictive validity.
iv. Concurrent validity.
v. Convergent validity.
vi. Discriminant validity.
vii. Factor analysis.
2. It is important that measurements in the social sciences be reliable.
a. A measure is reliable if it yields consistent and predictable results each time it is taken.
b. Assessing reliability.
i. Inter-rater/coder/observer reliability.
ii. Test-retest reliability.
iii. Parallel forms reliability.
iv. Internal consistency reliability.
v. Cronbach's alpha.
3. If researchers cannot be certain that they are measuring what they think they are measuring or that their measurements are consistent and reliable, they cannot place any confidence in their findings. Hence, they put a great deal of time and effort into the development of measurement instruments that are both valid and reliable.
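The internal-consistency checks above can be made concrete. Cronbach's alpha follows directly from the item variances and the variance of the total score; a minimal sketch in Python, using hypothetical responses to three 5-point Likert items:

```python
# Cronbach's alpha: internal consistency of a multi-item measure.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total score)

def cronbach_alpha(items):
    """items: list of equal-length lists, one list of scores per item."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical responses from five respondents to three items.
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 4, 2, 4, 1]
print(round(cronbach_alpha([item1, item2, item3]), 2))  # prints 0.92
```

An alpha near or above 0.7 is a common (though debated) benchmark for acceptable internal consistency.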
III. Types of variables.
A. Not all variables are the same, nor can all variables be measured the same way.
B. Categorical variables.
1. Categorical variables generally vary in quality or type.
2. Some categorical variables vary in amount or intensity such that one category has more or less of something than another category, but the exact amount of this difference is not quantified.
3. The attributes of categorical variables represent the different categories that observations may fall in.
4. The attributes of a categorical variable must be mutually exclusive, which means no observation can fall in more than one category.
5. The attributes of a categorical variable must be exhaustive, which means that every observation must fall in at least one of the categories.
6. Examples of categorical variables include gender, race, and college class.
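When categories are defined as numeric brackets, the mutual-exclusivity and exhaustiveness requirements can be checked mechanically. A small sketch (the age brackets are hypothetical, chosen to show both violations):

```python
# Verify that a set of numeric category brackets is mutually exclusive
# (no value matches two brackets) and exhaustive (every value matches one).

def check_brackets(brackets, values):
    """brackets: list of (low, high) inclusive ranges; values: observations."""
    for v in values:
        matches = [b for b in brackets if b[0] <= v <= b[1]]
        if len(matches) > 1:
            return f"not mutually exclusive: {v} falls in {matches}"
        if len(matches) == 0:
            return f"not exhaustive: {v} falls in no bracket"
    return "ok"

# The shared endpoint 25 violates mutual exclusivity, and ages above
# 64 are not covered at all, violating exhaustiveness.
brackets = [(18, 25), (25, 34), (35, 64)]
print(check_brackets(brackets, [22, 40]))  # ok
print(check_brackets(brackets, [25]))      # not mutually exclusive
print(check_brackets(brackets, [70]))      # not exhaustive
```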
C. Numerical variables.
1. Numerical variables vary in quantity or amount.
2. The attributes of numerical variables are numbers representing different amounts or quantities of the concept being measured.
3. Numerical variables measure things that can be counted and measured quantitatively.
4. Examples include age, income, number of children, and distance traveled to work.
5. Some numerical variables are discrete, meaning that they can only be measured in whole numbers. Examples include number of children at home or number of magazine subscriptions.
6. Some numerical variables are continuous, meaning they can be measured at any point along a number line, including fractional values. Examples include distance traveled to work and age.
a. Continuous variables are often measured as whole numbers.
b. Discrete variables are often treated as if they were continuous.
IV. Levels of measurement.
A. Nominal level.
1. Categorical variables that vary in quality or type with no order or rank to their attributes are called nominal variables.
2. Examples include race, gender, and religious affiliation.
B. Ordinal level.
1. Categorical variables whose attributes can be ordered or ranked such that one attribute represents more or less of some quality than another are called ordinal variables.
2. Examples include social class, degrees received, and Likert-type variables.
C. Interval/ratio level.
1. Numerical variables are interval variables. The attributes of interval variables can not only be ranked, but the exact amount of difference between any two attributes can also be quantified.
2. Some numerical variables can also be treated as ratio variables. Ratio variables have a "true zero" representing the total absence of the quality being measured. The distinction between interval and ratio variables is not critical in most sociological research.
3. Examples of interval measures include age, distance to work, number of children at home, and income.
D. Choosing the level of measurement.
1. Theoretical and substantive concerns.
2. Amount of information available.
3. Precision of available measurement devices.
4. Types of statistical analysis to be performed.
5. As a general rule, use the highest level of measurement possible.
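The rule of thumb above reflects the fact that each level of measurement permits all the statistics of the levels below it plus additional ones. An illustrative (not exhaustive) mapping in Python:

```python
# Each level of measurement supports the statistics of the levels below
# it plus more (an illustrative mapping, not an exhaustive list).

nominal = ["mode", "frequency counts"]
ordinal = nominal + ["median", "rank-order correlation"]
interval = ordinal + ["mean", "standard deviation"]
ratio = interval + ["ratios of values (e.g., twice as much)"]

LEVELS = {"nominal": nominal, "ordinal": ordinal,
          "interval": interval, "ratio": ratio}

def permissible_statistics(level):
    """Return the statistics meaningful at a given level of measurement."""
    return LEVELS[level]

print(permissible_statistics("ordinal"))
# → ['mode', 'frequency counts', 'median', 'rank-order correlation']
```

Measuring at a higher level preserves the option of computing lower-level statistics later; the reverse is not true.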
V. Composite measures.
A. Composite measures combine several distinct quantitative measures into a single value or score.
1. Measuring multidimensional concepts.
2. Increasing range and variation.
3. Increasing validity and reliability.
4. Greater efficiency in data analysis.
B. Indices.
1. Adding together the scores of multiple indicators of the same concept.
2. Choosing multiple indicators.
a. Indicators of multiple dimensions.
b. Multiple indicators of a single dimension.
3. Evaluating the index prior to data collection.
a. Consider the face validity of each indicator.
b. Consider the content validity of the complete set of indicators.
c. Consider whether the index will identify differences on the concept of interest.
d. Consider the possible range and variability of the index.
e. Consider whether any of the indicators might be unreliable.
4. Evaluating and computing the index after data have been collected.
a. Checking range and variation.
b. Checking validity using factor analysis.
c. Checking reliability using Cronbach's alpha or other statistical tests.
d. Weighting indicators and handling missing data.
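The computation steps above can be sketched as a small scoring routine. This assumes an additive index with optional weights and mean substitution for missing values, which is one common convention among several:

```python
# Compute an additive index from several indicators, with optional
# weights and a mean-substitution rule for missing values.
# (Mean substitution is one common convention; others exist.)

def index_score(responses, weights=None, max_missing=1):
    """responses: list of item scores, with None marking a missing value."""
    if weights is None:
        weights = [1.0] * len(responses)
    n_missing = sum(1 for r in responses if r is None)
    if n_missing > max_missing:
        return None  # too much missing data to score this respondent
    answered = [r for r in responses if r is not None]
    fill = sum(answered) / len(answered)  # mean of the answered items
    filled = [fill if r is None else r for r in responses]
    return sum(w * r for w, r in zip(weights, filled))

# Hypothetical 4-item index scored 1-5; the third item is unanswered.
print(index_score([4, 3, None, 5]))  # → 16.0 (4.0 substituted for the gap)
```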
C. Scales.
1. Scales assign values based on patterns of responses. They assume that some indicators indicate more of the concept than others.
2. Types of scales.
a. Bogardus social distance scales.
b. Thurstone scales.
c. Guttman scales.
D. Comparing indices and scales.
1. Scales are more efficient.
2. Indices are more easily constructed.
3. Indices are better at measuring multidimensional concepts.
4. Indices are more widely used.
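Scoring by response pattern, the defining feature of a scale, can be illustrated with a Guttman scale, where endorsing a stronger item is assumed to imply endorsing all weaker ones. A minimal sketch:

```python
# Guttman scale scoring: items are ordered from least to most intense,
# so endorsing a stronger item implies endorsing all weaker ones.
# Score = number of items endorsed; deviations from the ideal response
# pattern count as "errors" used to judge scalability.

def guttman_score(responses):
    """responses: list of 0/1 answers, ordered weakest item first."""
    score = sum(responses)
    # Ideal pattern for this score: all endorsements come first.
    ideal = [1] * score + [0] * (len(responses) - score)
    errors = sum(1 for r, i in zip(responses, ideal) if r != i)
    return score, errors

print(guttman_score([1, 1, 1, 0, 0]))  # perfect pattern → (3, 0)
print(guttman_score([1, 0, 1, 1, 0]))  # deviates from ideal → (3, 2)
```

In practice the error counts across all respondents feed into a coefficient of reproducibility, which indicates how well the items form a true Guttman scale.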
VI. Operationalization.
A. The process of expressing the concepts in a hypothesis as empirically measurable variables is called operationalization.
B. Steps in operationalizing concepts.
1. Identify the key variables in your hypothesis.
a. Identify the dependent variable.
b. Identify the independent variable(s).
c. Identify any intervening variables.
2. Drawing on academic literature, develop precise conceptual definitions of the key variables.
a. Conceptual definitions must clearly indicate what a variable means, what it represents, and how it is being used in the current study.
b. Conceptual definitions are nominal definitions. They are not the only, or even the best, definitions. They are definitions developed by the researcher that best describe the variables as he or she conceives them in the current study. However, researchers should attempt to define concepts in ways that are reasonably consistent with definitions used by other scholars.
c. In defining variables, researchers often delineate their relevant dimensions. Sometimes this is simple. At other times you have to think carefully before recognizing the important dimensions of a variable.
d. Make sure you clearly indicate what variables mean in your study and what dimensions you think are relevant.
3. Consult the academic literature to determine what indicators other researchers have used to measure your variables and if adequate measurement instruments have already been developed.
a. Many sociologically relevant variables have been measured by other researchers.
b. If other researchers have identified good indicators and/or developed adequate measurement instruments, there is no reason they cannot be used in your study.
i. Using existing measurements as they are.
ii. Adapting existing measurement instruments.
c. Advantages of using existing indicators or measurement instruments.
i. Saves time and effort.
ii. Validity and reliability may have already been demonstrated.
d. Disadvantages of using existing indicators or instruments.
i. Existing indicators or instruments may not measure your variables adequately or may not measure exactly what you are interested in.
ii. Using existing indicators and instruments limits you to seeing variables in terms of others' definitions.
iii. Using existing indicators and instruments can reduce serendipity.
e. Finding existing indicators or measurement instruments.
i. Previously published research.
ii. Edited collections of existing measurement instruments.
iii. Contacting authors.
f. Evaluating existing indicators and instruments.
i. How have the indicators and instruments been used and is this consistent with your intended use?
· To what populations and/or samples have the indicators or instruments been applied?
· What methods of data collection have been used to measure the indicators or employ the measurement instruments?
· How will the indicators or instruments have to be changed or adapted for your study?
ii. Has the instrument been shown to be valid?
· Criterion-related validity--Does the measure "concur" or agree with other known measures of the variable? Does it "work" or predict known differences as it should?
· Construct validity--Do the different indicators vary together as they should? Do they discriminate between the concept's different dimensions? Has a factor analysis been performed?
iii. Has the instrument been shown to be reliable?
· Stability reliability--Does the measure perform the same at different periods of time? Does it perform the same when repeated (test-retest reliability)?
· Representativeness reliability--Does the measure yield similar results for different groups? Does it perform the same for sub-groups in a larger population (sub-group comparisons)?
· Internal measures of equivalence--Do the indicators vary consistently and reliably? Can the instrument's indicators be divided in half and each half vary consistently with the other (split-half comparisons)? Has Cronbach's alpha or another measure of internal reliability been calculated?
· External measures of equivalence--Does the measure vary consistently with other measures of the concept that are known to be reliable? Has inter-coder reliability been demonstrated, if it is relevant?
g. Citing and crediting sources.
4. If existing measurement instruments are unavailable or inadequate, you will have to develop your own measurement instruments.
a. Identify and list relevant indicators.
i. Determine what would indicate the presence or absence of a concept.
ii. Determine what would indicate how much of a concept is present.
iii. Determine what people could say or do that would indicate the presence of the concept.
iv. Avoid using causes or consequences as indicators.
v. Some concepts are simple and may have one or two indicators, but others will be complex and multidimensional and may have numerous indicators.
vi. Consider different types of indicators.
· Direct observations--things the researcher could observe directly about the subjects' characteristics or behaviors.
· Self-reports--things the research subjects could report about their own characteristics, attitudes, or behavior.
· Secondary reports--things others could report about the subjects' characteristics, attitudes, and behaviors.
· Communication media--things that can be observed in communication media such as newspapers, radio broadcasts, and movies that would indicate something about the subjects' characteristics, attitudes, or behaviors.
· Physical traces--things that can be observed in physical objects and artifacts such as their placement and arrangement or wear and damage that would indicate something about the subjects' characteristics, attitudes, and behaviors.
b. Evaluate your list of indicators.
i. Check the face validity of each indicator by asking questions such as: Will the indicator measure what it is supposed to measure? Could something else affect or be responsible for this indicator's variation besides the variable I am trying to measure? Am I sure that variation in this indicator is due to the variable I am trying to measure? Is this indicator a true indicator of the variable I am measuring, or is it a cause or consequence of that variable?
ii. Check the content validity of your indicators by asking questions such as: Do the indicators as a group measure the variable fully and completely? Have all the relevant dimensions or aspects of the variable been measured adequately? Is there something missing, something more I should include?
iii. Consider any obvious threats to reliability by asking questions such as: Will this indicator consistently indicate the variable at all times in all places for all people in all settings and circumstances? Could the indicator change over time or differ from group to group or place to place, even though the underlying variable stays the same? Is the indicator easy to observe and document? Is there any possibility that it could be misread or misinterpreted? Is the source of the indicator reliable? Is there any reason to believe the source would want to lie, deceive, or withhold information?
c. Select indicators and/or create measurement instruments.
i. Select the best indicator or indicators.
ii. Create composite measures if necessary.
· Tests, indices, and scales.
· Tally sheets and other data recording devices.
iii. Test your measurement instruments and devices.
5. Use your measurement instrument to collect data.