Reference no: EM133735980, Length: 5
Assignment: Instrumentation & Data Collection Paper
Selecting or creating a data collection instrument can be a daunting task. One must consider how the instrument links to the purpose and variables of the study while also considering its reliability and validity. In some cases, pre-existing instruments are readily available and can be leveraged for future research. In other cases, instruments must be created and validated through statistical techniques such as confirmatory factor analysis.
Reliability and validity of data collection instruments are primary concerns of any researcher. For an instrument to be considered valid, it must be empirically evaluated and shown to have high reliability and validity. In most cases, construct validity is considered, which represents the extent to which an instrument measures the construct it is intended to measure. For example, if the instrument is supposed to measure employee motivation, construct validity reflects the extent to which the instrument is measuring employee motivation and not some other unintended variable such as employee satisfaction. Reliability is often assessed by conducting an inter-item reliability analysis that yields a Cronbach's alpha coefficient. Validity is typically assessed by conducting a confirmatory factor analysis, which yields goodness-of-fit statistics. In general, reliability coefficients above 0.80 reflect good reliability, and validity coefficients (e.g., goodness-of-fit statistics) above 0.95 reflect a high degree of validity. Sources explaining how to report and interpret reliability and validity coefficients are provided in the resources section.
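As an illustration, Cronbach's alpha can be computed directly from item-level responses; the sketch below is a minimal Python example assuming a pandas DataFrame in which each column is one numerically scored survey item (the item names and data are hypothetical). CFA goodness-of-fit statistics, by contrast, would normally come from dedicated SEM software such as lavaan in R or semopy in Python rather than a hand-rolled routine.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = items.dropna()                      # listwise deletion of incomplete responses
    k = items.shape[1]                          # number of items in the scale
    item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses: five participants answering three Likert-type items (1-5)
items = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2],
    "q2": [4, 4, 3, 5, 2],
    "q3": [5, 5, 2, 4, 3],
})
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")  # values above 0.80 suggest good reliability
```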
Data collection in quantitative research is based on results that will fit into specifically determined, numerically coded categories that can be easily entered into the desired commercial software program (Microsoft Excel, SPSS, STATA, SAS, etc.). Very popular open-source programs are also available (e.g., Python and R).
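As a small illustration of such coding, the sketch below maps text responses onto predetermined numeric codes (yes = 1, no = 2, and a 1-5 Likert scale, as described in the table that follows); the column names and responses are hypothetical.

```python
import pandas as pd

# Hypothetical raw responses as they might arrive from an online survey export
raw = pd.DataFrame({
    "returning_customer": ["yes", "no", "no", "yes"],
    "satisfaction": ["Strongly disagree", "Agree", "Neutral", "Strongly agree"],
})

# Codebooks: each text response maps to one predetermined numeric category
yes_no_codes = {"yes": 1, "no": 2}
likert_codes = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
                "Agree": 4, "Strongly agree": 5}

coded = raw.assign(
    returning_customer=raw["returning_customer"].map(yes_no_codes),
    satisfaction=raw["satisfaction"].map(likert_codes),
)
print(coded)  # numeric columns ready for import into Excel, SPSS, R, etc.
```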
Table: Means of Data Collection
Observation
Define a list of what will be observed.
The list will include defined behaviors/events that will serve as the units of data to be observed, recorded, and coded.
Specify procedures for how the behaviors/events will be observed, recorded, and coded (such as a log where tick marks are placed for every instance of a behavior or event).
Interviews
Structured interview questions with responses formulated as a word or series of words to be selected (responses are coded numerically, such as yes = 1 or no = 2).
Questionnaires/Surveys
Define area of investigation.
Formulate the questions.
Pilot test the questionnaire for validity/reliability.
Administer via face-to-face interviews, telephone interviews, paper questionnaires, online surveys such as SurveyMonkey, etc.
Scales
1. Rating scales such as Likert scales allow respondents to select the response that best matches their opinion or attitude about a behavior or event.
2. Each statement is associated with a numeric value that can be coded.
3. Random sampling and structured data collection instruments with predetermined responses provide a means of gathering data that can be easily coded and summarized in reports that can be compared. Generalization to larger populations is then possible. However, when generalizing from a research sample to the overall population, there will always be some degree of error, known as the margin of error. Larger samples yield a smaller margin of error, but the sample must be representative of the population and the research study must be valid for the population parameter estimate to be accurate. In other words, if we use valid instruments, rigorous designs, and large samples, our population parameter estimates will be more precise and more accurate.
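To make the sample-size point concrete, the sketch below computes the margin of error for a sample proportion at 95% confidence under simple random sampling; the proportion and sample sizes are hypothetical.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: 60% of sampled respondents agree with a survey statement
p_hat = 0.60
for n in (100, 400, 1600):
    print(f"n = {n:5d}  margin of error = +/- {margin_of_error(p_hat, n):.3f}")

# Quadrupling the sample size halves the margin of error, but the estimate is
# only trustworthy if the sample is representative and the instrument is valid.
```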