March – 2013

Uses of Published Research: An Exploratory Case Study


Patrick J. Fahy
Athabasca University, Canada


Abstract
Academic publications are too often ignored by other researchers. There are various reasons: Researchers know that conclusions may eventually be proved wrong; publications are sometimes retracted; effects may decline when studied later; researchers occasionally don’t seem to know about papers they have allegedly authored; there are even accusations of fraud (Cohen, 2011). In this exploratory case study, 10 papers were examined to determine the various ways they were used by others, whether there were cases of reported effects declining, and whether, among those who referenced the papers, there were suggestions that anything in the papers ought to be retracted. Findings showed that all the papers had been referenced by others (337 user publications were found, containing a total of 868 references). Other findings include the following: Single references were far more common than multiple references; applications/replications were the least common type of usage (23 occurrences), followed by contrasts/elaborations (34), and quotations (65); unlike reports regarding publications in the sciences, whether the paper was solo- or co-authored did not affect usage; appearance in a non-prestige journal was actually associated with more usage of some kinds; and well over 80% of uses were in heavily scrutinized sources (journal articles or theses/dissertations). The paper concludes with recommendations to writers about how to avoid producing publications that are ignored.

Keywords: Distance education; publishing; interaction analysis

Introduction and Background

In addition to the long-lamented, generally poor state of distance education research (Keegan, 1985; Moore, 1985; Cannell, 1999; Saba, 2000; Gibson, 2003; Zawacki-Richter, Bäcker, & Vogt, 2009), there are increasing problems with published academic papers, from a range of disciplines, being ignored after publication (Lehrer, 2010), eventually being proven wrong (“Publish and be wrong,” 2008), being retracted for various reasons by their authors or publishers (Groopman, 2010), or being accused of fraud (“Liar! Liar!,” 2009). The problem even plagues summaries of research in the popular press, where readers are warned that if they do not see subsequent confirmation of research they should suspect that the original, innovative findings “may have fallen by the wayside” (“Journalistic deficit disorder,” 2012). It is also not unusual for effects observed initially to decline when studied later, the “declining verification” problem (Ioannidis, 2005; Coyne, 2010).

This paper explores these issues in specific reference to my own work. It is an exploratory case study, raising issues which appear not to have been addressed before in regard to published research in open, distance, or general education research. It is intended to illustrate cases (uses of publications by others) and to provide an initial process for evaluating publications against problems reported elsewhere with academic research. To conduct it, I consulted Google Scholar to determine how 10 of my papers (the target papers), published between 2000 and 2007, had been used by others. A similar process, also employing results found in Google Scholar, was previously reported by Rourke and Kanuka (2009). Google Scholar is a convenient and thorough way to determine when one’s research has been cited by others; in addition, it provides access to the using work which, as described below, was central to the present study. Most of the target papers were written by me alone (8), most were peer-reviewed (8), and all were old enough to have garnered attention from the field (if they were ever to do so).

References by others were found in 337 publications, the user papers, which contained a total of 868 references to something in one of the 10 target papers (I didn’t count references I made myself). I examined these uses by looking at how the target papers were used by others and what specific conclusions others may have reached about them. The major purposes of the study were: 1) to explore the various uses others had made of the target papers; 2) to determine whether any of the effects reported in the target papers were found by others to be “declining” in any way; and 3) to ascertain whether anyone had called for anything reported in a target paper to be retracted, based either on new findings or on further examination of target paper data, reported findings, processes, or conclusions.

As noted earlier, lack of use has plagued academic research publications for some time, for various reasons. Lehrer (2010, p. 56) recently reported that fully a third of published studies are not even cited, much less replicated, by others. For example, in 2011 it was noted that of the 16 papers produced by the University of Vermont’s literature department, described as “a fairly representative institution,” 11 had virtually been ignored, subsequently receiving a total of between zero and two references by other authors (“University challenge,” 2011).

In addition to the lack of replication, publications sometimes do not survive subsequent research. Ioannidis (2005) has asserted that, based on further examination and the gathering of more data, most initial research can be shown to be “false.” He also criticized the practice of declaring an issue definitively resolved on the basis of a single study, noting that this practice is likely both to inhibit replication and to result in the refutation of results in subsequent studies because emphasis is given to small discrepancies (p. 696). In the same article, he was also critical of the practice in many fields of publishing only positive results, a problem especially when robustness of results is a goal. Lehrer (2010) reported in this regard that in 1959 the statistician Theodore Sterling noticed that 97% of all published psychological studies with statistically significant data found the effect they were looking for, leading him to conclude that psychologists were either extraordinarily lucky or they published only outcomes of successful experiments (p. 55). This is an early occasion of criticism of the predilection of researchers to offer for publication, and for editors to consider, only positive results.

More recently, Weisman (2011) described a similar “bias” of editors for positive results, which might later be found false (or trivial) through further experimentation. (Patton had earlier made the same criticism [1975, p. 25].) Weisman cites “beginner’s luck” findings and regression to the mean as possible explanations for outcomes that are later rejected. Lehrer (2010) writes that Schooler, who first reported the “declining verification” phenomenon, was at a loss to explain why; he eventually blamed causes like “habituation.” Myers (2010), while listing several possible explanations (i.e., investigator bias, population variance, simple chance, para. 5 to para. 11), criticized the tendency of some industries (e.g., pharmaceuticals) to attempt deliberately to profit from outliers (para. 8).

There are other explanations for the fact that research is often later disproved, contradicted, or even retracted (Cohen, 2011). The research might not be well done: Rovai and Barnum (2003, p. 58) reported that only 5% of the research in distance education published from 1993 to 2003 was valid enough to support any conclusions about (in their case) the effectiveness of using technology in teaching. As another example, in biomedical research samples are typically small due to the nature of the field and its research; this fact, however, weakens the likelihood of subsequent corroboration (Ioannidis, 2005; “Journalistic deficit disorder,” 2012). Added to these problems is the fact that academic researchers are often not good at clearly expressing their discoveries or their thinking (Holdaway, 1986). The bias of editors toward positive results, rather than more nuanced, even “no significant difference” findings, has already been mentioned.

Academics ideally should publish in order to have their work read and used by others, their procedures and findings checked or corrected, and their ideas elaborated. While alternative views of thinking or unexpected findings should be welcome, as testing and applications by others may better approximate “the truth” about whatever is studied (Moonesinghe, Khoury, & Janssens, 2007), in practice (including distance education) use and disagreement are actually not common. For example, Schwier, Morrison, Daniel, and Koroluk (2009) examined 15 elements of the online interactions of graduate students. In their analysis, the least common type (at 3 occurrences) was argument/disagreement, while the most frequent type (40 occurrences) was agreement. Manley (2008) reported that disagreement was tied for seventh in a list of fifteen kinds of comments in an online forum he examined. When Jeong (2003) studied disagreement in online interactions among distance students, he concluded that “… statements of disagreement were rare” and that most commonly “disagreement occurred when arguments and counterarguments were exchanged” (p. 37).

The problems, then, are that the design, conduct, and reporting of academic research, and academic writing and communications generally, are often weak, and that some academic research is so faulty (or poorly written up) that it may have to be retracted, or substantially revised, after publication. The phenomenon of failed publication, for whatever reason, ironically makes reliance on the published literature, in practice or subsequent research, a risk for users.

The Study

I consulted Google Scholar regarding 10 of my papers (the target papers), published from 2000 to 2007, to determine their fate and whether problems existed: 1) whether others made use of them; 2) whether others reported any occurrences of “declining effects” in the target papers; and 3) whether there were calls for retraction of any of the works, or of any of the specific published findings. User publications (as revealed by Google Scholar) were grouped for analysis into categories according to the factors described below.

Google Scholar provides direct links to most using works, usually making the full text of user publications available (the exception is books and book chapters, which are typically not available in full-text form). In total, I was able to obtain full-text copies of all but six of the 337 user publications, through direct links or through the Athabasca University library’s subscription services.

Factors Investigated

Analysis of the use by others of the target papers focused on the following:

1. type and frequency of occurrence of

a) mentions of the target papers;

b) quotations, exact words taken from one of the target papers;

c) applications or replications, use of an instrument, procedure, process, or finding from any of the target papers;

d) contrasts/elaborations, a finding or approach that differed from, contrasted with, or diverged from something originally reported in a target paper;

e) multiple references, a reference in the form of one of several in a series (e.g., Fahy, 2010; Smith, 2002; Jones, 2003), the target paper then listed in the using paper’s bibliography;

f) single references, a sole, stand-alone reference to a single, specific publication (e.g., Fahy, 2010), the target paper then listed in the using paper’s bibliography;

g) usage by others influenced by the target paper’s

i) solo- or co-authored status;

ii) appearance in a prestige journal (one of the “gold standard of quality and utility for online educators” [Elbeck & Mandernach, 2009] or one of “the most prominent and recognized journals in the field of distance education” [Zawacki-Richter, Bäcker, & Vogt, 2009]);

iii) geographic location of the using publication;

2. reports of declining verification, as described by Lehrer (2010);

3. calls for retraction of any of the target papers or any findings, or suggestions of fraud.

Analysis of user publications was conducted using SPSS and Excel (for quantitative questions) and ATLAS.ti (generally, for qualitative questions, though summaries of some elements, such as those presented in Table 3, were also conducted with this tool).


Question 1: Usage of target papers.

Use of the target papers by others is shown in Table 1 (from Google Scholar, as of September 2011).

Table 1

As shown above, every target paper received some use, ranging from 2 to 72 references by others. The usage findings reflect an analysis of the 868 total references in the 337 publications produced by others. The findings, as shown in Table 2, include the following:

Table 2

Single references suggested focus on specific content within the target paper. Lapadat (2007) argues that reference to specific elements in another’s work may indicate a focus on or development of new theoretical models. In this case, single references could be regarded as focused on such specifics, especially in theses and dissertations. To test this idea, the data were examined to determine whether there was any preference among students (thesis and dissertation writers) for single references. There was a statistically significant difference between theses/dissertations and all other types of publications in the use of single references (p = .018). More research is clearly needed on this issue, but it appears that in this study, as the literature predicted, students were more focused than other writers on specifics of the target publications, as revealed by their use of single references.

The specifics of users’ references varied widely. Some users simply mentioned a general aspect of a target paper, but made no reference to specific content in that paper (De Wever, 2006; De Wever, Schellens, Valcke, & Van Keer, 2006); others developed their own instruments or procedures based upon the target’s models, sometimes with little detailed reference to the original (e.g., Oriogun, Ravenscroft, & Cook, 2006); some mentioned the targets’ concerns, but without citing specific instruments or procedures (Valcke, 2009); and some researchers creatively applied the target’s tools and procedures to populations not studied in the original paper (Finegold & Cooke, 2006).

Overall, analysis of applications/replications showed the following types of references, and their frequencies, in the user publications (note that some publications contained more than one application/replication).

Table 3

Examples of the above:

Neutral reference (statement of facts, without endorsement or comment on value): “To analyze message content of the discussion transcripts, the authors used the Transcript Analysis Tool (TAT) (Fahy et al., 2001), an adaptation of Zhu’s discussion content categorization” (Gibbs & Bernas, 2008).

Positive reference (suggests value of concept referenced): “We found ourselves resorting to what Fahy et al. describe as the inefficient strategy of collaborative coding: very time-consuming” (Cook & Ralston, 2003).

Positive quotation (suggests value of concept quoted): “The instrument was applied to the analysis of the OAD using the sentence as the unit of analysis, following Fahy’s (2001) observation that, ‘Sentences are, after all, what conference participants produce to convey their ideas, and are what transcripts consist of (p. 4)’” (Murphy, 2004).

Negative reference (denies or questions value of concept referenced; may offer an alternative): “While Fahy revealed that the two methods of analysis are complementary, analysis using two separate methods is time consuming and impractical for application in educational contexts” (Murphy, 2004).

None of the user applications/replications resulted in outright rejection of a finding or a process from the target papers. In terms of usage of the basic communications elements examined in Table 2, above, the user papers that employed applications/replications of material from the target papers differed from the other user papers only in their use of single references and quotations. Again, as noted earlier, this usage may be seen as consistent with the pursuit of new theoretical models (Lapadat, 2007). This finding (and interpretation) should be considered preliminary, and further study is suggested.

The analysis also considered the association of usage with other authorship and publication factors in order to further describe and analyze usage.

Solo vs. co-authored.

In the sciences, collaboration, represented by co-publication, perhaps in response to problems with the perceived integrity of existing published research (“Professor Facebook,” 2011), has increased more than 95% in the past 50 years, with the size of teams growing about 20% each decade (Lehrer, 2012). Zawacki-Richter et al. (2009) noted a trend “towards more collaboration among researchers in distance education,” as seen in an increase of over 17% in collaborations from 2000 to 2008, as compared with the period 1991 to 1996 (p. 38). Lehrer also observed that science collaborations are demonstrably related to subsequent usage by others: Science papers by multiple authors receive more than twice as many citations as those by individuals, and “home-run papers” (those that receive 100 or more citations by others) are six times more likely to come from a team of scientists than from individuals (p. 23). As an example of the general ubiquity of collaborations in the present era of social networking, Lehrer cites the fact that most Broadway plays are now constructed by teams (p. 25).

Among the 10 target papers studied here, there was collaboration: Five were co-authored and five were solo-authored. However, there was no significant difference in the type or frequency of references attributable to authorship, suggesting that, in this instance, collaboration did not produce differences in usage by, or popularity with, others.

Prestige of publication source.

Elbeck and Mandernach (2009) identified five journals that, “…[b]ased upon popularity, importance, and perceptions of prestige… represent the gold standard of quality and utility for online educators” (p. 14). They were (emboldened titles, in the following, appeared in both lists discussed below):

Zawacki-Richter et al. (2009) offered a somewhat different list of journals, with “reputations as the most prominent and recognized journals in the field of distance education”:

Seven of the 10 target papers originally appeared in one of the above-listed journals, five in journals that were in both (prestige) lists, as follows:

Table 4 shows usage differences observed between papers appearing in prestige versus non-prestige publications.

Table 4

The above shows that, on two of the six measures explored, target papers which appeared in non-prestige journals had more quotes and single references by other writers than was expected statistically (using the χ² test). These are uses, as argued earlier, that suggest reference to specific elements of the target publications and may be seen as linked to theory-building (Lapadat, 2007). Another use of target papers from non-prestige sources, contrasts/elaborations, was also more common than expected in non-prestige publications, but the difference was not statistically significant. There were no statistically significant differences that favoured target papers in prestige journals.
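The prestige versus non-prestige comparison above rests on a χ² test of observed against statistically expected counts. As a minimal sketch of how such a test works (the counts below are hypothetical, invented for illustration, and are not the study's data), a 2×2 version can be computed with nothing beyond the Python standard library:

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square statistic and p-value (1 df) for a 2x2 table.

    table = [[a, b], [c, d]] of observed counts.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts (not the study's data): quotations vs. all other
# reference types, split by the prestige of the publishing journal.
observed = [[30, 170],   # prestige journals: quotes, other references
            [35, 95]]    # non-prestige journals: quotes, other references
chi2, p = chi_square_2x2(observed)
```

With these invented counts the statistic is significant at the .05 level, which is the form of evidence the comparison above relies on; the study itself presumably used SPSS for the equivalent computation.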

Because in this study those who cited the target papers in non-prestige publications more frequently quoted from them and used more single references than did users of target papers that appeared in prestige publications, there was some evidence that usage focused on single, specific aspects of the target papers. This conclusion, of course, requires more investigation; it is offered here in the spirit of breaking new ground (Rourke, Anderson, Garrison, & Archer, 1999) and developing a “map of the territory” (Garrison, Anderson, & Archer, 2001).

Questions 2 and 3: Declining verification, and calls for retraction.

No occurrences of declining verification, the phenomenon originally reported by Dodson, Johnson, and Schooler (1997), and no calls for retraction, were found among any of the publications that referred to the 10 target papers. In light of the overall uses made, and especially in reference to specific uses that involved application, analysis, re-publication, and review of results, this suggests that readers can have confidence that the results reported in the target papers have been scrutinized and continue to be regarded as valid, both as initially published and as re-used in further work. If serious errors meriting calls for retraction had occurred in the target papers, it is the conclusion of this review that the uses made of the publications would have detected and reported them.

Further evidence for the above can be inferred from the types of uses observed here. The 10 papers were, in total, referenced (formally, that is, with APA-type citations in the using papers’ references section, and informally, that is, mentioned without formal citation) 868 times. Most of the references appeared in journal articles (566, 65.2%) or in theses and dissertations (181, 20.9%), both of which are scrutinized through a formal process of peer review or faculty oversight, a central feature of “disciplined inquiry” (Shulman, 1997). And yet, as documented in Table 3, on only three occasions did users express disagreement with anything in the target papers. The overall pattern of review is summarized below; by summing the proportions of journal articles and theses, it can be seen that well over 80% of the target publications were referenced, applied/replicated, quoted, or contrasted/elaborated in a peer-reviewed or otherwise closely monitored publication.

Table 5

Evidence for the integrity of the results in the target papers also exists in the uses made of them by subsequent authors. Analysis of applications/replications showed that most often only one target paper was cited by a user (though the single paper may be cited several times). This suggests that users focused on single sources, and specific aspects, of the target research. Further, most target papers were cited almost immediately after they appeared: Of the ten target papers, seven were cited for the first time in the same year they appeared, two in the year immediately after publication, and one in the second year after appearance. The target papers also continued to be cited over time: The mean period from publication to last (most recent) citation was 7.4 years. (The target papers were originally published from 2000 to 2007.) These findings show how users accessed, studied, and applied the target papers: frequently, soon after publication, and continuously over time.

Further analysis of uses by others showed variety in the types and sources of the publications that used the target papers in terms of geographic origin, publication type, topics, and intended audience.

Summary of Findings

The following summarizes the findings noted above.

  1. All of the target papers were used in some way by others.
  2. Thesis/dissertation writers (students) engaged more often than other users in all of the communications elements except quotations; in that area, journal articles exceeded the others. Writers of theses and dissertations were apparently testing theory and the findings of others by re-application and replication; they also more heavily documented their conclusions and analyses (through single and multiple references).
  3. Journal articles were slightly more likely to contain quotations.
  4. Usage patterns suggest immediate and ongoing focus on and use of specifics in the target papers.
  5. Least likely to contain documentation of sources were conference proceedings (on five of the six communication elements, conference proceedings had the lowest ratio of the communication elements to number of publications).
  6. Geographic location of user publications in this study matched closely the pattern reported in other research.

Discussion of Implications

The evidence presented earlier was that research in the social sciences, including distance education, is often not replicated, may not be cited by others, may contain errors that are only detected later, and may even contain fraudulent results or processes (the last two situations, when discovered, invariably resulting in retraction). The fear is that, where close examination does not occur, findings and conclusions may not be examined or verified by others, but may still eventually become part of the “literature.” The intent of this study was to assess in relation to 10 published papers of one author whether there was any evidence that any of these conditions had developed over the life of the publications.

In using the target publications, other researchers tested and, when they did not report egregious weaknesses, or when they referred to the target papers’ specific elements positively, affirmed their usefulness. There was no evidence of declining verification in the time the papers had been in circulation, nor calls for or instances of retraction. The majority of uses were in journal articles and in theses and dissertations (together, over 80%), suggesting that the target papers were applied in the context of further research. An advantage of these uses in assessing the validity of the target publications lies in the fact that students’ work is usually conducted under, and subject to scrutiny by, senior academics, and journal publications are subject to peer review. These uses could be seen, therefore, as further corroboration of the soundness of the original papers.

It is probably not surprising that conference publications contain fewer references: Some conferences are not peer-reviewed at all, and therefore including documentation in the resources posted within them is superfluous. University publications, on the other hand, are somewhat harder to explain: On four of the six criteria shown in Table 4, university publications were second last (most often to conference proceedings) in the frequency of use of the communications elements studied. Another poorly documented form was books and book chapters, again for unclear reasons. These findings merit further research and explanation.

There did not appear to be a distinct advantage to publishing in prestige journals in terms of expected versus observed usage (although in terms of raw numbers, not ratios or proportions, most of the resulting usage did pertain to papers which appeared in prestige sources). The frequency of use of material from non-prestige sources is potentially surprising, but may simply relate to these 10 target papers. Further research in this area is clearly merited. (A question that deserves exploration: what, in terms of their contents or processes, actually distinguishes prestige publications?)

Differing from the sciences, usage of these papers by others was not found to be associated with collaborative- or multiple-authorship. (There was no difference in usage by others of co-authored vs. solo-authored papers.) Again, further research should determine whether this finding is common in distance education research or the social sciences more generally.

Patterns of usage suggested that other writers were actively attempting to make use of the various contents (tools, processes, findings, conclusions) of the target papers. Certainly, the researcher who made 10 adjustments to previously published tools or procedures appeared to be diligently engaged either in finding an alternative application or in making the existing target tools and concepts fit new purposes. Statistically, those works that made some application of the target findings averaged more references (6.89 vs. 2.29) than those that did not (F = 25.25, p < .001).
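The F value reported above comes from a one-way comparison of mean reference counts between user papers that did and did not apply target material. As a sketch of how such a statistic is computed (the sample values below are invented for illustration and are not drawn from the study), in Python:

```python
def one_way_f(groups):
    """One-way ANOVA F statistic: between-group variance over within-group variance.

    groups is a list of lists of numeric observations, one list per group.
    """
    k = len(groups)                                   # number of groups
    n = sum(len(g) for g in groups)                   # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical reference counts per user paper: those applying target
# material vs. those that did not (illustrative values only).
applied = [5, 7, 8, 6, 9]
not_applied = [2, 3, 1, 2, 3]
f_stat = one_way_f([applied, not_applied])
```

A large F, as in the study's reported result, indicates that the difference between the group means is large relative to the variation within each group; the study itself presumably obtained its value through SPSS.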


Conclusion
This case study was motivated by several convictions: that publications should be used by others; that longevity is one of the tests of the validity of published research (if time is also accompanied by scrutiny, which implies, and follows from, use); and that the pattern of usage can indicate how and whether a researcher’s work has been received and used over time. It was conducted because the literature indicated that such an examination had not yet been performed in relation to published distance education research.

Publications should be used and tested; the cruelest fate any publication can endure (other than being retracted) is not to be noticed by others. Yet, as was reported earlier, the fate of many publications is just that. Reasons for neglect vary, including researchers’ lack of writing skill (Holdaway, 1986) and the preference of editors for work that shows positive results or papers that are more emphatic in their claims (Lehrer, 2010). There is no doubt that results must be relevant and timely and presented in ways that engage readers, especially in distance education, where text is the principal (often the only) means of communication (Keegan, 1985; Kaye, 1989; Willis, 1992; Khan, 1997; Simonson, Smaldino, Albright, & Zvacek, 2006, 2009; Saunders, 2008). Possible culprits for the poor reception of research (and mistakes to be avoided by distance education researchers) include failure to look to the work of others for guidance; reviewers who are insensitive to the nuances of new research or unconventional findings; and investigators who rework already well-tilled ground. The possible role of readers and practitioners in these processes also bears further examination.


References
Cannell, L. (1999). Review of [distance education] literature. Unpublished paper, Distance Education Association of Theological Schools, Winnipeg, Canada.

Cohen, J. (2011). Public mea culpas. Technology Review, 114(6), 81–82.

Cook, D., & Ralston, J. (2003). Sharpening the focus: Methodological issues in analyzing online conferences. Technology, Pedagogy, and Education, 12(3), 361–376.

Coyne, J. A. (2010). The “decline effect”: Can we demonstrate anything in science? Retrieved from

De Wever, B. (2006). The impact of structuring tools on knowledge construction in asynchronous discussion groups (Unpublished doctoral dissertation). Ghent University, Belgium.

De Wever, B., Schellens, T., Valcke, M., & Van Keer, H. (2006). Content analysis schemes to analyze transcripts of online asynchronous discussion groups: A review. Computers and Education, 46, 6–28.

Dodson, C. S., Johnson, M. K., & Schooler, J. W. (1997). The verbal overshadowing effect: Why descriptions impair face recognition. Memory and Cognition, 25(2), 129–139.

Elbeck, M., & Mandernach, E. (2009). Journals for computer-mediated learning: Publications of value for the online educator. International Review of Research in Open and Distance Learning, 10(3), 1–20.

Fahy, P. J., Crawford, G., Ally, M., Cookson, P., Keller, V., & Prosser, F. (2000). Development and testing of a tool for analysis of CMC transcripts. Alberta Journal of Educational Research, 46(1), 85–88.

Fahy, P. J. (2001). Addressing some common problems in transcript analysis. International Review of Research in Open and Distance Learning, 1(2). Retrieved from

Fahy, P. J., Crawford, G. & Ally, M. (2001). Patterns of interaction in a computer conference transcript. International Review of Research in Open and Distance Learning, 2(1). Retrieved from

Fahy, P. J. (2002a). Epistolary and expository interaction patterns in a computer conference transcript. The Journal of Distance Education, 17(1).

Fahy, P. J. (2002b). Use of linguistic qualifiers and intensifiers in a computer conference. The American Journal of Distance Education, 16(1), 5–22.

Fahy, P. J. (2003). Indicators of support in online interaction. International Review of Research in Open and Distance Learning, 4(1). Retrieved from

Fahy, P. J. (2005). Two methods for assessing critical thinking in computer-mediated communication (CMC) transcripts. International Journal of Instructional Technology and Distance Learning (March). Retrieved from

Fahy, P. J., & Ally, M. (2005). Student learning style and asynchronous computer-mediated conferencing. American Journal of Distance Education, 19(1), 5–22.

Fahy, P. J. (2006). Online and face-to-face group interaction processes compared using Bales’ Interaction Process Analysis (IPA). European Journal of Open, Distance, and E-learning, 1. Retrieved from

Fahy, P. J. (2007). The occurrence and character of stories and story-telling in a computer conference. Distance Education, 28(1), 45–63. doi:10.1080/01587910701305301

Finegold, R. D., & Cooke, L. (2006). Exploring the attitudes, experiences and dynamics of interaction in online groups. The Internet and Higher Education, 9(3), 201–215.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. The American Journal of Distance Education, 15(1), 7–23.

Gibbs, W. J., & Bernas, R. S. (2008). Interactional and structural aspects of communication and social interactions during computer-mediated communication. Journal of Computing in Higher Education, 20(1), 3–33.

Gibson, C. (2003). Learners and learning: The need for theory. In M. Moore & W. Anderson (Eds.), Handbook of distance education (pp. 147–160). Mahwah, NJ: Erlbaum Associates.

Groopman, J. (2010, May 31). The plastic panic. The New Yorker, LXXXVI(15), 26–31.

Holdaway, E. A. (1986). Making research matter. The Alberta Journal of Educational Research, XXXII(3), 249–264.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Med, 2(8), e124. doi:10.1371/journal.pmed.0020124. Retrieved from

Jeong, A. C. (2003). The sequential analysis of group interaction and critical thinking in online threaded discussions. The American Journal of Distance Education, 17(1), 25–43.

Journalistic deficit disorder. (2012). The Economist, 404(8803), 90, 92.

Kaye, A. (1989). Computer-mediated communication and distance education. In R. Mason & A. Kaye (Eds.), Mindweave (pp. 3–21). Toronto: Pergamon Press.

Keegan, D. (1985). Foundations of distance education (2nd ed.). New York: Routledge.

Khan, B. (1997). Web-based instruction (WBI): What is it and why is it? In B. Khan (Ed.), Web-based instruction (pp. 5–18). Englewood Cliffs, NJ: Educational Technology Publications.

Lapadat, J. C. (2007). Discourse devices used to establish community, increase coherence, and negotiate agreement in an online university course. Journal of Distance Education, 21(3), 59–92.

Lehrer, J. (2010, December 13). The truth wears off. The New Yorker, LXXXVI(40), 52–57.

Lehrer, J. (2012, January 30). Groupthink. The New Yorker, LXXXVII(46), 22–27.

Liar! Liar! (2009, June 6). The Economist, 391(8634), 78–79.

Manley, O. (2008). Facilitating communication in an online course. In C. Bonk et al. (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2008 (pp. 1806–1811). Chesapeake, VA: AACE. Retrieved from

Moonesinghe, R., Khoury, M.J., & Janssens, A.C.J.W. (2007). Most published research findings are false—but a little replication goes a long way. PLoS Med, 4(2): e28. doi:10.1371/journal.pmed.0040028

Moore, M. (1985). Current research in distance education. Epistolodidaktika, 10, 35–62.

Murphy, E. (2004). An instrument to support thinking critically about critical thinking in online asynchronous discussions. Australasian Journal of Educational Technology, 20(3), 295–315. Retrieved from

Myers, P. Z. (2010, December 30). Science is not dead [Web log post]. Retrieved from

Oriogun, P. K., Ravenscroft, A., & Cook, J. (2006). Towards understanding critical thinking processes in a semi-structured approach to computer-mediated communication. In E. Pearson & P. Bohman (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2006 (pp. 2390–2397). Chesapeake, VA: AACE. Retrieved from

Patton, M. Q. (1975). Alternative evaluation research paradigm. Grand Forks, ND: North Dakota Study Group on Evaluation.

Professor Facebook. (2011, February 11). The Economist, p. 80.

Publish and be wrong. (2008, October 11). The Economist, 389(8601), 109.

Rourke, L., Anderson, T., Garrison, R., & Archer, W. (1999). Assessing social presence in asynchronous text-based computer conferencing. Journal of Distance Education, 14(2), 50–71.

Rourke, L., & Kanuka, H. (2009). Learning in communities of inquiry: A review of the literature. Journal of Distance Education, 23(1), 19–48.

Rovai, A. P., & Barnum, K. T. (2003). Online course effectiveness: An analysis of student interactions and perceptions of learning. Journal of Distance Education, 18(1), 57–73.

Saba, F. (2000). Research in distance education: A status report. International Review of Research in Open and Distance Learning, 1(1). Retrieved from

Saunders, R. (2008). What’s next? Report on the forum for higher education in Canada. Retrieved from

Schwier, R., Morrison, D., Daniel, B., & Koroluk, J. (2009). Participant engagement in a non-formal, self-directed and blended learning environment. In T. Bastiaens et al. (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2009 (pp. 1948–1956). Chesapeake, VA: AACE. Retrieved from

Shulman, L. (1997). Disciplines of inquiry in education: A new overview. In R. M. Jaeger (Ed.), Complementary methods for research in education (2nd ed., pp. 3–30). Washington, DC: American Educational Research Association.

Simonson, M., Smaldino, S., Albright, M., & Zvacek, S. (2006). Teaching and learning at a distance: Foundations of distance education (3rd ed.). Upper Saddle River, NJ: Pearson Education, Inc.

Simonson, M., Smaldino, S., Albright, M., & Zvacek, S. (2009). Teaching and learning at a distance: Foundations of distance education (4th ed.). Upper Saddle River, NJ: Pearson Education, Inc.

University challenge. (2011, December 10). The Economist, p. 74.

Valcke, M. (2009). Computer supported collaborative learning in higher education: An overview of evidence based approaches. Retrieved from

Weisman, D. (2011, January 4). Jonah Lehrer “decline effect,” now in decline [Web log post]. Psychology Today. Retrieved from

Willis, B. (1992). Strategies for teaching at a distance. ERIC Clearinghouse on Information Resources, Syracuse, N.Y. ERIC Digest: ED351008.

Zawicki-Richter, O., Backer, E. M., & Vogt, S. (2009). Review of distance education research (2000 – 2008): Analysis of research areas, methods, and authorship patterns. International Review of Research in Open and Distance Learning, 10(6), 21–50.