Pilot Testing for Feasibility in a Study of Student Retention and Attrition in Online Undergraduate Programs

Prior to undertaking a descriptive study on attrition and retention of students in two online undergraduate health administration and human service programs, a pilot test was conducted to assess the procedures for participant recruitment, the usability of the survey questionnaire, and the data collection processes. A retention model provided the conceptual framework for this investigation, serving to identify and organize the various factors that influenced students’ decisions to either discontinue or continue their educational programs. In an attempt to contribute to the body of research in this area and to enrich pedagogical practices, the authors describe the pilot testing process, the feasibility issues explored, and the improvements made to the instrument and methodology before commencing the main research study on attrition and retention.


Introduction
Retaining students is both a priority and an unrelenting challenge in higher education, whether in conventional face-to-face settings or in distance education (Tinto, 1975, 1982; Berge & Huang, 2004; Heyman, 2010; Rintala, Andersson, & Kairamo, 2011). Tinto's (1982) analyses of undergraduate degree completion rates from 1880 to 1980 prompted him to conclude that "rates of dropout from higher education have remained strikingly constant over the past 100 years" (p. 694). He observed that students were dropping out at a rate of 45% with little variation over time. More than three decades after Tinto's study, the problem of retention persists in higher education generally, and is an even greater concern in distance and distributed learning contexts. Indeed, attrition and retention became the focus of concern for an investigation in the Bachelor of Health Administration (HADM) and Human Service (HSRV) programs delivered online at a single open and online university.
As a precursor to the main descriptive study on attrition and retention, a pilot study was conducted to determine the feasibility of using a survey questionnaire and the recruitment and data collection processes. The online survey instrument was structured around a retention model that the researchers had not previously employed. It was believed that the model would provide an effective framework for organizing the factors contributing to students' decisions to either discontinue or continue their online studies.
Historically, pilot and feasibility studies were not usually reported, nor were they topics of much discussion in the research literature. While to some extent this continues to be the case in educational research, pilot and feasibility studies have recently become the focus of extensive debate in the health-related literature. It would be beneficial if similar attention were given to pilot and feasibility studies in the broader research context, including the education community. In an attempt to contribute to the body of research in this area, the authors describe the pilot testing process, the specific feasibility issues explored, and the modifications made to prepare for the main study on attrition and retention in distance education. First, some background information is provided, including a definition of terms, followed by a discussion of the purpose, differences, and similarities of pilot and feasibility studies described in the literature. The definitions and purposes proposed in the health research literature are relevant to and help inform educational research, and are therefore included in the background discussion in this paper.

Definition of Terms
In general, a pilot precedes and is closely related to a larger study (Prescott & Soeken, 1989; Lancaster, Dodd, & Williamson, 2004; Eldridge et al., 2016). A pilot is often viewed synonymously with a "feasibility study intended to guide the planning of a large scale investigation" (Thabane et al., 2010, p. 1). In effect, pilots comprise a risk mitigation strategy to reduce the chance of failure in a larger project.
The word pilot has several different meanings in the research literature; however, as Eldridge et al. (2016) point out, definitions of pilot studies usually focus on an experiment, project, or development undertaken in advance of a future wider experiment, project, or development. In other words, a pilot study facilitates decision-making, and therefore serves as "a small-scale experiment or set of observations undertaken to decide how and whether to launch a full-scale project" (Collins English Dictionary, 2014, para. 1).
An informal term often used for feasibility is doability; Eldridge et al. (2016) observed that outside of the health context, definitions of feasibility and feasibility studies focus on the likelihood of being able to do something easily or conveniently, and on the "assessment of the practicality of a proposed plan or method" (para. 16). Moore, Carter, Nietert, and Stewart (2011) noted that pilot studies imply feasibility to the extent that they are "preparatory studies designed to test the performance characteristics and capabilities of study designs, measures, procedures, recruitment criteria, and operational strategies that are under consideration for use in a subsequent, often larger, study" (p. 332).
There is no clear distinction between pilots, pilot trials, and feasibility studies in the way the terms are used (Thabane et al., 2010). van Teijlingen and Hundley (2002) argued that "[t]he term 'pilot studies' refers to mini versions of a full-scale study (also called 'feasibility' studies), as well as the specific pretesting of a particular research instrument such as a questionnaire or interview schedule" (p. 1). Bowen et al. (2009) similarly used the term feasibility study "to encompass any sort of study that can help investigators prepare for full-scale research leading to intervention" (p. 453). Arain, Campbell, Cooper, and Lancaster (2010) do not agree that the terms pilot and feasibility can be used interchangeably; these authors contend that a feasibility study is undertaken to determine important components critical to the development of the main study, whereas a pilot study is the conduct of the main study in miniature. This aligns with others who suggest that due to the specific goals of each, pilot and feasibility studies are mutually exclusive. For example, Bugge et al. (2013) noted that feasibility studies are designed to "ask questions about whether the study can be done" and they agreed that pilot trials are "a miniature version of the main trial, which aim to test aspects of study design and processes for the implementation of a larger main trial in the future" (p. 2).
The numerous and conflicting definitions and interpretations, differences in current usage, and diverse opinions in the health research community regarding the concepts of pilot and feasibility motivated Eldridge et al. (2016) to undertake extensive work to clarify the issue. They concluded that rather than viewing pilot and feasibility studies as separate entities, pilot studies are best defined as subsets of feasibility studies; feasibility is therefore conceptualized as "an overarching concept for studies assessing whether a future study, project or development can be done" (para. 23). This means that all studies aiming to assess "whether a future [randomized control trial] RCT is doable [are defined] as 'feasibility studies'" (Eldridge et al., 2016, para. 30). Hence, a systematic review or meta-analysis of the research literature could be classified as a feasibility study, but not as a pilot study. Moreover, these authors determined that although "all pilot studies are feasibility studies…not all feasibility studies are pilot studies" (Eldridge et al., 2016, para. 17). Eldridge's team (2016) proposed that even though a pilot study could ask the same questions as a feasibility study, a pilot has specific design features. Consequently, they noted that "[w]hile piloting is also concerned with whether something can be done and whether and how we should proceed with it, it has a further dimension; piloting is implementing something, or part of something, in a way you intend to do it in future to see whether it can be done in practice" (para. ).

Purpose of Pilot and Feasibility Studies
Pilot studies. In research textbooks from the 1980s, the purported purpose of pilot studies was generally only to test, on a small scale, the steps outlined in a previously-developed research plan, and then, based on the results of the pilot, revisions would subsequently be made to the plan (Ackerman & Lohnes, 1981; Brink & Wood, 1983; Burns & Grove, 1987; Lieswiadomy, 1987; Polit & Hungler, 1987). It has been suggested that many researchers had misconceptions that pilot studies required too much time and energy for the research team to bother with them, given their narrow range of purposes (Prescott & Soeken, 1989; Hinds & Gattuso, 1991). But as Cope (2015) observed, while a pilot or feasibility study could be seen as "a burden or an added step in conducting a large-scale study," researchers can realize benefits from these investigations that "outweigh the added effort and increase the likelihood of success" (p. 196), even if there is no guarantee that they will avoid all problematic issues for the main study. Pilot study results can help identify actual and potential problems that researchers can address before beginning the anticipated future study. It has long been recognized that when used this way, "pilot work serves to guide the development of a research plan instead of being a test of the already-developed plan" (Prescott & Soeken, 1989, p. 60).
Researchers have come to understand that not only can pilots help answer methodological questions that could guide the researcher toward "empirically determined non-arbitrary answers to design issues" that need to be addressed (Prescott & Soeken, 1989, p. 60); pilot studies can also serve other important purposes (Doody & Doody, 2015). An investigator might undertake a pilot in order to evaluate the execution of the methods and the feasibility of recruitment, randomization, retention, measurement, and assessment procedures; to implement new procedures and interventions (Leon, Davis, & Kraemer, 2011); to refine new and existing tools (Polit & Beck, 2004); or to widen or narrow eligibility criteria for the recruitment of participants (Conn, Algase, Rawl, Zerwic, & Wyman, 2010). For instance, Chu (2013) conducted a pilot study on teacher efficacy to evaluate the clarity of the items to be used in the formal study, in order to ensure that the measurement instruments were reliable and valid in the educational context before undertaking the formal study.
A pilot study is often performed to test the feasibility of techniques, methods, questionnaires, and interviews and how they function together in a particular context; it can also reveal ethical and practical issues that could hamper the main study (Doody & Doody, 2015). Therefore, pilot studies help researchers identify design flaws; refine data collection and analysis plans; gain experience with and train the research team; assess recruitment processes; and learn important information about participant burden prior to undertaking the larger study (Prescott & Soeken, 1989; Beebe, 2007). If participants experience difficulty in completing survey instruments, this may prompt researchers to modify item wording, change the order in which questions are presented, or alter the instrument format (Conn et al., 2010). There is strong support in the literature that pilot studies should be undertaken to identify and mitigate risks associated with future study design, sample size, sample selection, data collection, data management, and data analysis (Jairath, Hogerney, & Parsons, 2000; Moore et al., 2011).
Feasibility studies. Feasibility studies evaluate individual critical components necessary for the large-scale study, such as participant recruitment, ability to execute the intervention, and accuracy of the intervention protocol (Arain et al., 2010; Tickle-Degnen, 2013). Conducting a feasibility study can be seen as "a developmental learning process in which the study procedures and intervention can be adapted as necessary during the study to achieve the most promising outcomes" (Dobkin, 2009, p. 200). Following a feasibility study, the researchers identify strategies to address any challenges, and revise components as necessary prior to designing a pilot study to evaluate intervention outcomes in a more formal manner.
While feasibility studies may seem little different from pilots, they tend to focus on the process of developing and implementing an intervention and result in a preliminary examination of participant responses to the intervention (Gitlin, 2013; Orsmond & Cohn, 2015). Dobkin (2009) highlights that "[b]ecause adaptation is an important feature of feasibility studies, establishing fidelity to demonstrate that the intervention procedures or protocols were implemented as intended most likely occurs in the pilot stage" (p. 200). Pilot studies, on the other hand, "more clearly focus on outcomes, rather than process, and include a more controlled evaluation of participant responses to the intervention" (Orsmond & Cohn, 2015, p. 2).
Lee, Whitehead, Jacques, and Julious (2014) agreed that the purpose of pilot trials is "to provide sufficient assurance to enable a larger definitive trial to be undertaken" (p. 1), but they disagree with the order of feasibility and pilot studies described above. Instead, they support the notion put forth by Leon, Davis, and Kraemer (2011) that pilot results are meant to inform feasibility and identify modifications needed in the design of a larger, ensuing hypothesis-testing study. They argue that a pilot serves an earlier-phase developmental function that will enhance the probability of success in larger subsequent studies; through pilot studies, investigators are able to assess recruitment rates, the usability of instruments, or whether certain technologies can be implemented, and make any indicated changes. Leon et al. (2011), as well as Lee et al. (2014), caution that while a pilot study might be the first step needed when exploring new interventions or procedures, or innovative applications of an existing one, pilot studies are not used for hypothesis testing, or for evaluating safety, efficacy, and effectiveness. Therefore, feasibility and pilot studies are not expected to have the large sample sizes needed to adequately power statistical null hypothesis testing (Thabane et al., 2010). Moreover, "the outcomes of most feasibility and pilot studies should be measured with descriptive statistics, qualitative analysis, and the compilation of basic data related to administrative and physical infrastructure" (Tickle-Degnen, 2013, p. 171). Lee et al. (2014) observed that "pilot studies are more about learning than confirming: they are not designed to formally assess evidence of benefit," and as such, it is usually more informative to provide an estimate of the range of possible responses (p. 10).
Furthermore, Williams (2016) noted "that most journals do not expect to see an assessment of the effectiveness of interventions in articles reporting on feasibility or stand-alone pilot studies" (p. 8).

Publication
In the past, it was unusual to see publications of pilot or feasibility studies; reports were rarely seen of any testing of the processes, resources, and management of clinical trials (Tickle-Degnen, 2013). Although it is now much more common for pilot studies in medicine and nursing to be reported in the research literature (Thabane et al., 2010; Morin, 2013; Lancaster, 2015), it is less common in other fields and with other types of research, such as pilot studies of action research, or other qualitative methods (van Teijlingen & Hundley, 2002). Nevertheless, because of the many benefits that could be gained from the sharing of information gleaned from these studies (Arain et al., 2010; Leon et al., 2011; Morin, 2013), researchers are encouraged to publish the results of pilot and feasibility studies (Eldridge et al., 2016).
Publishing is important for a number of reasons, not the least of which is that learning from the results of other pilot projects could potentially conserve time, energy, and research resources (Hinds & Gattuso, 1991; Doody & Doody, 2015; Eldridge et al., 2016). Additionally, the publishing of pilot outcomes in one field could facilitate collaborative projects with individuals in other areas once they are informed of the researcher's interests; what is learned in one profession or disciplinary area can be applied to other fields.
For example, information from publications in health literature is relevant to and can be applied in educational research, as is the case with Bowen et al.'s (2009) suggestions from public health research about how to decide whether or not to undertake feasibility studies. Sharing key information, including pitfalls, can prevent unnecessary duplication of efforts and over-expenditure of public resources. More importantly, in research involving humans, it can minimize the impact on human subjects (Connelly, 2008; Conn et al., 2010; Wolfe, 2013; Doody & Doody, 2015) and facilitate culturally competent research (Kim, 2011). Therefore, researchers have both scientific and ethical obligations to try to publish the results of every research endeavor (Thabane et al., 2010).
Not only should investigators be encouraged to report their pilot studies, they should also report the improvements made to the study design and the research process as a result of the pilot (van Teijlingen & Hundley, 2002). In quantitative studies, in addition to stating feasibility objectives, researchers should indicate how feasibility was assessed and evaluated and how they dealt with any recruitment issues (Algase, 2009; Thabane et al., 2010; Leon et al., 2011). In qualitative studies, researchers should indicate how the effectiveness of the data collection and analysis techniques was evaluated; results should be interpreted within the context of viability and, when necessary, include what is needed to make the study viable (Arain et al., 2010; Thabane et al., 2010). O'Cathain et al. (2015) noted that reports should include a description of the methods used for both quantitative and qualitative analysis and findings.

Application to Distance Education
Although many types of feasibility and pilot studies could be applicable to research in distance education, no framework or typology has been developed specifically for research in this field. Beyond the health arena (where feasibility studies typically focus on preparing for drug trials in which a single drug or intervention is tested for specific outcomes), published feasibility and pilot study frameworks are uncommon. In educational research, there is no single factor that influences the behaviors and outcomes of students. Rather, a number of interrelated personal, circumstantial, and institutional factors (Berge & Huang, 2004) contribute to the learning and teaching experience and affect student outcomes.
Moreover, educational outcomes are often theoretical constructs (e.g., preferences related to measures of student satisfaction) rather than direct observables (e.g., remediation of symptoms or a change in microbiology or physiology), and they are generally measured along a conceptual continuum rather than as a true count (such as tumor size or laboratory values). Although course examinations and expected outcomes might be somewhat standardized, educational interventions are meant to be student-centered and highly individualized rather than highly standardized. Nevertheless, as described above, properly conducted pilot studies can greatly strengthen the outcomes of the main study regardless of the field (Doody & Doody, 2015). In the next section, we describe the process and outcomes of a pilot study conducted prior to the main study on attrition and retention in two undergraduate programs offered by distance education.

Pilot Study on Retention and Attrition
For the purposes of this paper, the definition of pilot study put forth by Doody and Doody (2015) is used, where "a pilot study is a small-scale version of a planned study conducted with a small group of participants similar to those to be recruited later in the larger scale study" (p. 1074). The objective of the pilot study was to increase the probability of success in the main study by testing the feasibility of the procedures for recruitment and retention of participants, testing for content validity and face validity of the questions, and assessing the usability (including ease of access and navigation) of the technology employed for administering the questionnaire.

Conceptual Framework
Berge and Huang's (2004) conceptual framework (Figure 1) was selected for its usefulness in organizing the data and study outcomes. In this framework, the variables identified as affecting student retention are clustered into three main categories: personal, institutional, and circumstantial (Table 1). "[B]oth students and institutions can identify specific variables in these three functional groups when making decisions to persist or when developing programs leading to persistence that is highly contextual to student, institution and event" (Snow, 2016, p. 2). Snow (2016) observed that although it "appears to provide a holistic approach to student retention, this model has neither been widely tested nor reviewed in academic circles" (p. 2). Hence, our decision to further test the model in the distance education context was based on a number of factors.
First, it has been tested elsewhere and found to be a useful model in the context of distance education (Tyler-Smith, 2006). Second, it is recognized as a flexible, "context specific" model that "allows adopters to address variables as they are deemed relevant" (Berge & Huang, 2004, para. 22), and it can be adapted to particular settings in ways that best enable researchers to examine which factors facilitate retention or contribute to attrition. For example, from an institutional perspective, researchers can identify areas where institutional support could affect retention rates, such as "curriculum and instruction, academic and social supports, and institutional management" (Berge & Huang, 2004, para. 29).

Participants
The researchers of this pilot study shared the view that any discussion of learner attrition needs to consider the factors that learners themselves cite as reasons for dropping out or not completing. The learner's perception of what constitutes a barrier to continuation, or a factor contributing to discontinuation or continuation, provides valuable insights for designing and implementing distance courses and for continuing or improving the processes, support mechanisms, and strategies that can enhance retention. Therefore, our goal was to gain the perspective of those who discontinued their studies and those who graduated from the Health Administration (HADM) and Human Service (HSRV) programs at a single open and online university.
The Office of the Registrar provided the student non-completion and graduate data from January 1, 2010 to December 31, 2014. It was originally anticipated that the pilot testing would include a sample of up to 10 students; however, members of the Ethics Review Board questioned whether there would be an adequate response rate from students who had left the two programs, especially those who left a number of years prior to the study. Therefore, the pilot sample was expanded to enable more thorough testing of the response rate, the feasibility of achieving a viable sample for the pilot testing of the survey instrument, and the research processes planned for the main study.
To test our assumption that past students would respond regardless of when they left the programs, all students from the two programs who met the completion and attrition criteria for the study period were included. Table 2 shows the composition of the 2010 sample for the pilot study.

Instrument
A comparative online survey design was employed to gain the perspectives of students from two different undergraduate programs, to better understand what factors contributed to the completion of their studies or prompted their leaving prior to completion of their degree. As no validated questionnaires relevant to the conceptual framework were found in the review of the literature, adaptations were made to the "leaver" survey questionnaire used by the university to follow up with students who discontinued their studies. As reported by the university's Department of Institutional Studies, the questions had been tested for validity and reliability; nevertheless, the pilot allowed for field-testing of this questionnaire for content and face validity in order to obtain feedback on the following: clarity, errors, readability, impartiality, and appropriateness of the type and format of questions; and the time required to complete the questionnaire.
Most of the pilot survey questions were in Likert-scale format, with a space for open-ended questions where participants could share their reflections and feelings about the courses, programs, interactions with the university, circumstances facilitating their continuing or leaving, and their thoughts about returning to this university, either to complete their undergraduate program or to enroll in graduate studies. The questionnaire also included questions on demographic data related to student personal profiles: place of residence (rural or urban), gender, age, marital status, dependents, previous experience with distance education, and institutional data: program information, the length of time as a student, reasons for leaving, factors that facilitated continuing on to graduation, etc.
As much as possible, the survey questions were organized according to personal, institutional, and circumstantial variables. Attempts were made to eliminate bias and to systematically incorporate accepted best practices into the survey (Friedman, Friedman, & Gluck, 1988; Friedman & Amoo, 1999).

Process
The pilot testing process occurred over the first four weeks of the project. Institutional data on the student populations from the two programs was reviewed and the prospective respondents who graduated were differentiated from those who discontinued their program. The research assistants consulted Constant Contact (2015) for tips on writing effective invitations, and a decision was made to use three types of emails to communicate the main messages about the survey. First, an invitation email would provide information about the study, introduce prospective participants to the survey, invite them to participate, and explain that completing the survey would signify informed consent. Three days after the first invitation, a reminder email would be sent to those who had not yet replied. Finally, an email to express appreciation for their participation would be sent to the respondents after they submitted the completed survey.
The researchers tested the survey process to ensure that the appropriate emails were sent and received, that the survey could be easily accessed and completed, and that the answers were recorded correctly in the LimeSurvey system; all of these checks were successful. Ethical approval for the study had previously been granted through the university's Research Ethics Board (REB); however, prior to beginning the pilot study, the final version of the revised questionnaire, the invitation letter, and the informed consent form were resubmitted to the REB for information and filing.
The invitation to participate was emailed to 121 potential participants, with a statement that the survey link would remain active for four days. Twenty immediate automated responses indicated that the intended recipients had not been reached due to incorrect email addresses. Those not reached included two graduates and six individuals who discontinued from the HADM group (n = 43), and seven graduates and five individuals who discontinued from the HSRV group (n = 78). Hence, 101 of the possible 121 candidates (83%) potentially received the email. Table 2 illustrates the distribution of participants we believe received the invitation to the pilot study.
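For illustration, the delivery figures reported above can be tallied with a short script (a minimal sketch; the group sizes and bounce counts are taken directly from the text, while the variable names and structure are our own):

```python
# Tally email delivery for the pilot invitation (figures from the text).
groups = {
    "HADM": {"invited": 43, "bounced": 2 + 6},  # 2 graduates + 6 discontinued
    "HSRV": {"invited": 78, "bounced": 7 + 5},  # 7 graduates + 5 discontinued
}

invited = sum(g["invited"] for g in groups.values())   # 121 invitations sent
bounced = sum(g["bounced"] for g in groups.values())   # 20 non-deliveries
delivered = invited - bounced                          # 101 potentially received
delivery_rate = delivered / invited                    # about 83%

print(f"Invited: {invited}, delivered: {delivered} ({delivery_rate:.0%})")
```

Keeping the per-group counts in one structure makes it easy to re-run the same tally for the larger main-study sample.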

Response Rate
The total responses were quite balanced between the two programs and between the graduates and those who discontinued. Table 4 shows the number of respondents compared to those invited to participate, with an adequate overall response rate of 18%.

Response Pattern
Within hours of the initial email invitation, six respondents completed the survey. No further responses came until the reminder email was sent on the third day, which suggests that participants are likely to respond soon after receiving the invitation email but tend to forget afterwards. Once the reminder email was sent, three people responded within a few hours. One respondent wrote: "Thanks for reminding me about the survey. I ended up filling it out." The research team deliberated about sending a final reminder on the evening of the fourth day, but decided against it to avoid annoying the students. By the end of that day, seven more individuals had responded, bringing the number to 16 at the deadline; two further responses arrived within the next few days, for a total of 18.
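The response pattern described above amounts to a simple cumulative tally (a sketch only; the counts come from the text, and the day labels are our approximate reading of the timeline):

```python
# Cumulative survey responses during the pilot window (counts from the text).
events = [
    ("initial invitation (day 1)", 6),  # six responses within hours
    ("reminder email (day 3)", 3),      # three more within a few hours
    ("end of day 4 (deadline)", 7),     # seven more by the deadline
    ("late arrivals", 2),               # two responses over the next few days
]

total = 0
for label, new_responses in events:
    total += new_responses
    print(f"{label}: +{new_responses}, cumulative {total}")

# Against the 101 delivered invitations, 18 responses is roughly 18%.
response_rate = total / 101
```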

Contacting Participants
Type of email address. The type of each candidate's email address was not verified before the survey link was emailed; as a result, 20 responses to the invitation email indicated non-delivery, some from automated work email addresses. The research team decided that for the main study, a careful review of the addresses was needed to avoid using government or other email addresses that appeared to be work addresses. It was also decided that those whose invitation emails bounced back would be contacted by phone.
Reminder emails. Participant response to the reminder emails indicated that sending reminders was a positive incentive in the pilot, and therefore would not have a negative effect in the main study. In fact, in the pilot, after the reminder on the third day, the number of surveys received increased from six to 18, as shown in Figure 2.

Instrument
Time to complete the survey. The survey instructions indicated that the 66 questions could be completed in less than 30 minutes. In the pilot, however, the average time to complete the survey was approximately 18 minutes. The time it takes to complete a survey affects response rates (Cook, Heath, & Thompson, 2000; Walston, Lissitz, & Rudner, 2006); the ideal duration for securing responses among college student populations is approximately 13 minutes or less (Fan & Yan, 2010). Koskey, Cain, Sondergeld, Alvim, and Slager (2015) found that students reported they would be likely to complete a survey "if it is perceived to take less than 10 minutes to complete" and would be unlikely to complete a survey "perceived to take more than 30 minutes to complete" (p. 21). The researchers decided that an average completion time of 20 minutes would garner an acceptable response rate among the targeted student population; accordingly, for the main study, the survey instructions were adjusted to indicate that it could be completed in approximately 20 minutes.
Unanswered questions. The rate of answered quantitative questions was high, and there was no sign of ambiguity; however, among the 13 open-ended questions, eight people left one or more unanswered. In reviewing each question, the lack of response did not seem to be related to the clarity of the questions. Valuable feedback was received from those who did respond to the open-ended questions; therefore, these questions were retained.
Incomplete questionnaires. Of the three incomplete surveys, one participant had completed three of nine pages, and two others completed two pages each. One had been logged into the survey for 10 minutes, another for two minutes. We could not determine whether these students encountered difficulties or deterrents to continuing; no navigation problems were apparent that might have prompted participants to stop.

Revisions to the instrument. Formal recommendations about the survey content and process were not solicited from the pilot group, although participants did offer suggestions to improve the instrument. As a result of the pilot, a few questions were reordered under personal, institutional, and circumstantial variables. The body of the survey was revised to improve clarity and ease of completion by shortening questions whenever possible; changing ranking questions (e.g., order of importance, 1-6) into rating questions (not at all important to very important); presenting and ordering questions so that subsequent questions narrowed according to prior responses and their relevance to the participant's specific circumstances (e.g., employment, program chosen, graduation, or discontinuation); and permitting multiple answers when appropriate.
In the open-ended questions, two students requested that someone from the university contact them to discuss continuing their education. Considering that others might be interested in continuing their studies, the contact information for the program directors was added for the main study. The researchers also decided that, in order to embed as much flexibility as possible into the main investigation, the following statement would be added at the bottom of the questionnaire: "You may always go back to modify any answer if desired. Make sure that you click on the 'next' button to apply the modification you just made." Additionally, it was agreed that questions would be added to elicit feedback about the survey itself, since this information could be useful for further studies.

Limitations of the Pilot
The main goal of this pilot study was to assess the feasibility of successfully recruiting participants and to evaluate the technical and navigational aspects of the online survey process and the instrument itself. The pilot provided an opportunity to improve our research processes as a precursor to the main investigation. The pilot sample was confined to two undergraduate programs at a single open and online distance university; hence, the data and findings were generated from one institution. These two aspects may limit the generalizability of the pilot findings to other populations. Nevertheless, focusing on a single institution makes the study conditions more uniform with respect to faculty, course requirements, and institutional elements than studying several student groups across institutions would, and thereby reduces the threats to internal validity (Robichaud, 2016).

Conclusion
The pilot study, undertaken to test the feasibility of the research process and the use of Berge and Huang's (2004) retention model as an organizing framework, was vital for informing the main study on attrition and retention of students in two online undergraduate programs. The planned procedures for recruitment and retention of participants, the usability of the questionnaire, and the technology employed for data collection were all tested. The positive responses and relatively good response rate in the pilot from individuals who had discontinued their studies confirmed the feasibility of a larger investigation using a slightly refined process. This was an important outcome given the concern from the Research Ethics Board that distance education students who discontinued their studies would be unlikely to respond, especially if they had left five years previously.
Furthermore, the pilot demonstrated that the open-source online survey tool was conducive to data collection, which also supported the researchers' commitment to openness in all aspects of education and research.
Finally, it was evident from the pilot that, in an open and online learning context, Berge and Huang's (2004) retention model is an effective framework for organizing the factors affecting student attrition and retention. This paper highlights the value of pilot testing in improving the design of research studies, adds to the body of knowledge on pilot studies, and contributes to the development of best practices in open and distance education.