The continued rapid growth of online courses and programs in higher education has raised concerns about support services, learning resources, and instructional effectiveness, as well as how institutions monitor the quality of online programs. These concerns prompted questions about how participants perceive online learning and led Phipps and Merisotis (1999) to challenge the methodology of the existing body of research on online programs and to call for a process by which academics or prospective students could compare programs and institutions. Unfortunately, the concerns Phipps and Merisotis first identified continue to persist (Hannafin, Oliver, Hill, Glazer, & Sharma, 2003; Sherlock & Pike, 2004).
These issues provided the impetus for this study, the goals of which were to identify quality indicators specific to community college online programs and to determine stakeholders’ perceived importance of those indicators. A literature review identified common standards and best practices for online courses and programs developed by accrediting organizations and policy groups. The terms best practices, criteria, and standards are used interchangeably in the literature when discussing recommended practices and policies that institutions should adopt for distance learning programs (Twigg, 1999a). Because one goal of the present study was to identify a set of quality indicators, these best practices, criteria, and standards from the literature provided a starting point for identifying possible indicators.
Synthesizing these sources yielded five categories: institutional support, curriculum and instruction, faculty support, student support, and evaluation and assessment. A case was made for adding technical support as a sixth category. This information was used to guide the development of a Delphi study to identify potential indicators. Twenty distance education program administrators from community colleges and 4-year institutions agreed to participate in the study; fifteen completed the initial survey and thirteen the full process.
The potential items identified through the Delphi process were used to create a three-part stakeholder survey, which was designed to collect input on the perceived level of importance of each potential indicator using the magnitude estimation technique. Participants were also able to recommend indicators not included in the survey, and demographic data were collected. The stakeholder survey was then distributed to students, faculty, technical support staff, and program administrators participating in online courses offered by a community college system in the Midwest.
The perceptions of importance, as measured through the stakeholder survey, did not suggest that any Delphi items should be eliminated, and the relatively equal perceptions of importance reported by each stakeholder group provide validation for the results of the Delphi study.
A third research step was added to refine the results of the Delphi process, which had yielded a mix of potential indicators, factors, and other measures. A group of distance learning experts, identified through their scholarly research and professional activity, was asked to review the Delphi items and classify each as a factor or an indicator according to the following definitions: indicators are outputs that an organization can point to as signs of success, and factors are inputs consciously made by the institution in support of its program.
Results from this study identify where and how an institution might look for data when measuring the effectiveness of its online programs and services. The potential indicators and factors identified in these three studies represent parameters that support the examination of how an institution supports its programs, or how programs might compare across institutions. What these items do not address is how an institution uses the data it collects on its programs.