Uses of Published Research: An Exploratory Case Study

Academic publications are too often ignored by other researchers. There are various reasons: Researchers know that conclusions may eventually be proved wrong; publications are sometimes retracted; effects may decline when studied later; researchers occasionally don't seem to know about papers they have allegedly authored; there are even accusations of fraud (Cohen, 2011). In this exploratory case study, 10 papers were examined to determine the various ways they were used by others, whether there were cases of reported effects declining, and whether, among those who referenced the papers, there were suggestions that anything in the papers ought to be retracted. Findings showed that all the papers had been referenced by others (337 user publications were found, containing a total of 868 references). Other findings include the following: Single references were far more common than multiple references; applications/replications were the least common type of usage (23 occurrences), followed by contrasts/elaborations (34), and quotations (65); unlike reports regarding publications in the sciences, whether the paper was solo- or co-authored did not affect usage; appearance in a non-prestige journal was actually associated with more usage of some kinds; and well over 80% of uses were in heavily scrutinized sources (journal articles or theses/dissertations). The paper concludes with recommendations to writers about how to avoid producing publications that are ignored.
Introduction and Background
In addition to the long-lamented, generally poor state of distance education research (Keegan, 1985; Moore, 1985; Cannell, 1999; Saba, 2000; Gibson, 2003; Zawacki-Richter, Bäcker, & Vogt, 2009), there are increasing problems with published academic papers, from a range of disciplines, being ignored after publication (Lehrer, 2010), eventually being proven wrong ("Publish and be wrong," 2008), being retracted for various reasons by their authors or publishers (Groopman, 2010), or being accused of fraud ("Liar! Liar!," 2009). The problem even plagues summaries of research in the popular press, where readers are warned that if they do not see subsequent confirmation of research they should suspect that the original, innovative findings "may have fallen by the wayside" ("Journalistic deficit disorder," 2012). It is also not unusual for effects observed initially to decline when studied later, the "declining verification" problem (Ioannidis, 2005; Coyne, 2010).
This paper explores these issues in specific reference to my own work. It is an exploratory case study, raising issues which appear not to have been addressed before in regard to published research in open, distance, or general education research. It is intended to illustrate cases (uses of publications by others) and to provide an initial process for evaluating publications against problems reported elsewhere with academic research. To conduct it, I consulted Google Scholar (http://scholar.google.ca/) to determine how 10 of my papers, the target papers, published between 2000 and 2007, had been used by others. A similar process, also employing results found in Google Scholar, was previously reported by Rourke and Kanuka (2009). Google Scholar is a convenient and thorough way to determine when one's research has been cited by others; in addition, it provides access to the using work which, as described below, was central to the present study. Of the 10 target papers, most were written by me alone (8), most were peer-reviewed (8), and all were old enough to have garnered attention from the field (if they were ever to do so).
References by others were found in 337 publications, the user papers, which contained a total of 868 references to something in one of the 10 target papers (I didn't count references I made myself). I examined these uses by looking at how the target papers were used by others and what specific conclusions others may have reached about them.
The major purposes of the study were: 1) to explore the various uses others had made of the target papers; 2) to determine whether any of the effects reported in the target papers were found by others to be "declining" in any way; and 3) to ascertain whether anyone had called for anything reported in a target paper to be retracted, based either on new findings or on further examination of target paper data, reported findings, processes, or conclusions.
As noted earlier, lack of use has plagued academic research publications for some time, for various reasons, among them the general scarcity of replication (Lehrer, 2010, p. 56). In addition to the lack of replication, publications sometimes do not survive subsequent research. Ioannidis (2005) has asserted that, based on further examination and the gathering of more data, most initial research can be shown to be "false." He also criticized the practice of declaring an issue definitively resolved on the basis of a single study, noting that this practice is likely both to inhibit replication and to result in the refutation of results in subsequent studies because emphasis is given to small discrepancies (p. 696). In the same article, he was also critical of the practice in many fields of publishing only positive results, a problem especially when robustness of results is a goal. Lehrer (2010) reported in this regard that in 1959 the statistician Theodore Sterling noticed that 97% of all published psychological studies with statistically significant data found the effect they were looking for, leading him to conclude that psychologists were either extraordinarily lucky or they published only outcomes of successful experiments (p. 55). This is an early occasion of criticism of the predilection of researchers to offer for publication, and for editors to consider, only positive results.
More recently, Weisman (2011) described a similar "bias" of editors for positive results, which might later be found false (or trivial) through further experimentation. (Patton had earlier made the same criticism [1975, p. 25].) Weisman cites "beginner's luck" findings and regression to the mean as possible explanations for outcomes that are later rejected. Lehrer (2010) writes that Schooler, who first reported the "declining verification" phenomenon, was at a loss to explain why; he eventually blamed causes like "habituation." Myers (2010), while listing several possible explanations (e.g., investigator bias, population variance, simple chance; paras. 5 to 11), criticized the tendency of some industries (e.g., pharmaceuticals) to attempt deliberately to profit from outliers (para. 8).
There are other explanations for the fact that research is often later disproved, contradicted, or even retracted (Cohen, 2011). The research might not be well done: Rovai and Barnum (2003, p. 58) reported that only 5% of the research in distance education published from 1993 to 2003 was valid enough to support any conclusions about (in their case) the effectiveness of using technology in teaching. As another example, in biomedical research samples are typically small due to the nature of the field and its research; this fact, however, weakens the likelihood of subsequent corroboration (Ioannidis, 2005; "Journalistic deficit disorder," 2012). Added to these problems is the fact that academic researchers are often not good at clearly expressing their discoveries or their thinking (Holdaway, 1986). The bias of editors toward positive results, rather than more nuanced, even "no significant difference" findings, has already been mentioned. Explicit disagreement with published work is also rare: In one reported analysis, the least common type of comment (at 3 occurrences) was argument/disagreement, while the most frequent type (40 occurrences) was agreement. Manley (2008) reported that disagreement was tied for seventh in a list of fifteen kinds of comments in an online forum he examined. When Jeong (2003) studied disagreement in online interactions among distance students, he concluded that "… statements of disagreement were rare" and that most commonly "disagreement occurred when arguments and counterarguments were exchanged" (p. 37).
The problems, then, are that the design, conduct, and reporting of academic research, and academic writing and communications generally, are often weak, and that some academic research is so faulty (or poorly written up) that it may have to be retracted. The publications using the target papers (identified through Google Scholar) were grouped for analysis into the following categories:
• conference presentations and proceedings;
• journal articles;
• theses and dissertations;
• university publications (unreviewed reports, papers, statements, summaries, and brochures);
• books, book chapters, or publications otherwise not available for download as full-text (and therefore often not fully examined in the field).
Google Scholar provides direct links to most using works, usually making the full text of user publications available (the exception is books and book chapters, which are typically not available in full-text form). Of the 337 using publications, I was able to obtain full-text copies of all but six, through direct links or through the Athabasca University library's subscription services.

Factors Investigated
Analysis of the use by others of the target papers focused on the following:
1. Type and frequency of occurrence of:
a) mentions of the target papers;
b) quotations: exact words taken from one of the target papers;
c) applications or replications: use of an instrument, procedure, process, or finding from any of the target papers;
d) contrasts/elaborations: a finding or approach that differed from, contrasted with, or diverged from something originally reported in a target paper;
e) multiple references: a reference occurring as one of several in a series (e.g., Fahy, 2010; Smith, 2002; Jones, 2003), the target paper then listed in the using paper's bibliography;
f) single references: a sole, stand-alone reference to a single, specific publication (e.g., Fahy, 2010), the target paper then listed in the using paper's bibliography;
g) usage by others as influenced by the target paper's:
i) solo- or co-authored status;
ii) appearance in a prestige journal (one of the "gold standard of quality and utility for online educators"; Elbeck & Mandernach, 2009);
iii) geographic location of the using publication.
2. Reports of declining verification, as described by Lehrer (2010).
3. Calls for retraction of any of the target papers or any findings, or suggestions of fraud. (Statistical comparisons, including those shown in Table 3, were conducted with the chi-square test.)
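The category tallying implied by the factors above can be sketched as a simple frequency count. This is a minimal illustration only: the publication identifiers and records below are invented, and the study does not describe its actual coding procedure.

```python
from collections import Counter

# Each record pairs a user publication with one observed usage category.
# All identifiers and records here are invented placeholders.
usages = [
    ("paper_A", "single_reference"),
    ("paper_A", "quotation"),
    ("paper_B", "multiple_reference"),
    ("paper_B", "single_reference"),
    ("paper_C", "application_replication"),
    ("paper_C", "single_reference"),
]

# Frequency of each usage category across all user publications
category_counts = Counter(category for _, category in usages)

# Number of distinct user publications (a publication can contribute
# several references, as the study notes)
user_publications = len({pub for pub, _ in usages})

print(dict(category_counts), user_publications)
```

With real data, the same two aggregates would yield the study's Table 2 counts (e.g., 23 applications/replications across 337 user publications).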

Findings
Question 1: Usage of target papers.
Use of the target papers by others is shown in Table 1 (from Google Scholar, as of September 2011). As shown there, every target paper received some use, ranging from 2 to 72 references by others. The usage findings reflect analysis of the 868 total references in the 337 publications produced by others. The findings, as shown in Table 2, include the following:
• Single references were more common than multiple references by a ratio of more than 2 to 1;
• Applications/replications were the least common type of usage (23 total occurrences, and no occurrences in relation to half of the target papers; more is said about applications/replications below);
• The 65 quotations included 24 (37%) by one user article, in reference to one target paper;
• All but two of the 10 target papers (the two most recently published) experienced some contrasting or elaborating use by another user.
The specifics of users' references varied widely. Some users simply mentioned a general aspect of a target paper, but made no reference to specific content in that paper (e.g., De Wever, Schellens, Valcke, & Van Keer, 2006); others developed their own instruments or procedures based upon the target's models, sometimes with little detailed reference to the original (e.g., Oriogun, Ravenscroft, & Cook, 2006); some mentioned the targets' concerns, but without citing specific instruments or procedures (Valcke, 2009); and some researchers creatively applied the target's tools and procedures to populations not studied in the original paper (Finegold & Cooke, 2006).
Overall, analysis of applications/replications showed the following types of references, and their frequencies, in the user publications (note that some publications contained more than one application/replication). Examples of these types, drawn from the user publications (e.g., Cook & Ralson, 2003; Murphy, 2004), include the following:
Positive quotation (suggests value of concept quoted): "The instrument was applied to the analysis of the OAD using the sentence as the unit of analysis, following Fahy's (2001) observation that, 'Sentences are, after all, what conference participants produce to convey their ideas, and are what transcripts consist of (p. 4)'" (Murphy, 2004).
Negative reference (denies or questions value of concept referenced; may offer an alternative): "While Fahy revealed that the two methods of analysis are complementary, analysis using two separate methods is time consuming and impractical for application in educational contexts" (Murphy, 2004).
None of the user applications/replications resulted in outright rejection of a finding or a process from the target papers. In terms of usage of the basic communications elements examined in Table 2, above, the user papers that employed applications/replications of material from the target papers differed from the other user papers only in their use of single references and quotations. Again, as noted earlier, this usage may be seen as consistent with the pursuit of new theoretical models (Lapadat, 2007). This finding (and interpretation) should be considered preliminary, and further study is suggested.
The analysis also considered the association of usage with other authorship and publication factors in order to further describe and analyze usage.

Solo vs. co-authored.
In the sciences, collaboration, represented by co-publication, perhaps in response to problems with the perceived integrity of existing published research ("Professor Facebook," 2011), has increased more than 95% in the past 50 years, with the size of teams growing about 20% each decade (Lehrer, 2012). Zawacki-Richter et al. (2009) noted a trend "towards more collaboration among researchers in distance education," as seen in an increase of over 17% in collaborations from 2000 to 2008, as compared with the period 1991 to 1996 (p. 38). Lehrer also observed that science collaborations are demonstrably related to subsequent usage by others: Science papers by multiple authors receive more than twice as many citations as those by individuals, and "home-run papers" (those that receive 100 citations or more by others) are six times more likely to come from a team of scientists than from individuals (p. 23). As an example of the general ubiquity of collaborations in the present era of social networking, Lehrer cites the fact that most Broadway plays are now constructed by teams (p. 25).
Among the 10 target papers studied here, there was collaboration: Five were co-authored and five were solo-authored. However, there was no significant difference in the type or frequency of references attributable to authorship, suggesting that, in this instance, collaboration did not produce differences in usage by, or popularity with, others.
Prestige of publication source. Elbeck and Mandernach (2009) identified five journals that, "…[b]ased upon popularity, importance, and perceptions of prestige… represent the gold standard of quality and utility for online educators" (p. 14). They were (the International Review of Research in Open and Distance Learning and the American Journal of Distance Education appeared in both lists discussed below):

• International Review of Research in Open and Distance Learning,
• Journal of Asynchronous Learning Networks,
• eLearning Papers,
• Innovate: Journal of Online Education,
• American Journal of Distance Education.

Zawacki-Richter et al. (2009) offered a somewhat different list of journals, with "reputations as the most prominent and recognized journals in the field of distance education":
• Open Learning,
• Distance Education,
• American Journal of Distance Education,
• Journal of Distance Education,
• International Review of Research in Open and Distance Learning.
Seven of the 10 target papers originally appeared in one of the above-listed journals, five in journals that were in both (prestige) lists, as follows:

• three in the International Review of Research in Open and Distance Learning,
• two in The American Journal of Distance Education,
• one in the Journal of Distance Education,
• one in Distance Education.
The above shows that, on two of the six measures explored, target papers which appeared in non-prestige journals received more quotations and single references from other writers than was expected statistically (using the chi-square test). These are uses, as argued earlier, that suggest reference to specific elements of the target publications and may be seen as linked to theory-building (Lapadat, 2007). Another use of target papers from non-prestige sources, contrasts/elaborations, was also more common than expected, but the difference was not statistically significant. There were no statistically significant differences that favoured target papers in prestige journals.
Because those who cited the target papers in non-prestige publications quoted from them more frequently, and used more single references, than did users of target papers that appeared in prestige publications, there was some evidence that usage focused on single, specific aspects of the target papers. This conclusion, of course, requires more investigation; it is offered here in the spirit of breaking new ground (Rourke, Anderson, Garrison, & Archer, 1999) and developing a "map of the territory" (Garrison, Anderson, & Archer, 2001).
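The expected-versus-observed comparison behind the chi-square (χ²) test mentioned above can be sketched in a few lines. The 2×2 layout (prestige vs. non-prestige by quotations vs. other references) is an assumption about how the test might have been framed, and every count below is a hypothetical placeholder, not the study's data:

```python
# Hypothetical 2x2 contingency table: reference type by journal prestige.
# Rows: prestige journal, non-prestige journal
# Columns: quotations, other references (placeholder counts only)
observed = [
    [30, 400],
    [35, 200],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected cell count under independence: (row total * column total) / grand total
expected = [
    [r * c / grand_total for c in col_totals]
    for r in row_totals
]

# Pearson chi-square statistic: sum of (O - E)^2 / E over all cells
chi_square = sum(
    (o - e) ** 2 / e
    for obs_row, exp_row in zip(observed, expected)
    for o, e in zip(obs_row, exp_row)
)

# With df = (2-1)*(2-1) = 1, the critical value at alpha = .05 is 3.841
print(round(chi_square, 2))
```

In this invented example the statistic exceeds 3.841, with non-prestige quotations above their expected count, which is the shape of result the paragraph above describes.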

Questions 2 and 3: Declining verification, and calls for retraction.
No occurrences of declining verification, the phenomenon originally reported by Dodson, Johnson, and Schooler (1997), and no calls for retraction, were found among any of the publications that referred to the 10 target papers. In light of the overall uses made, and especially in reference to specific uses that involved application, analysis, republication, and review of results, this suggests that readers can have confidence that the results reported in the target papers have been scrutinized and continue to be regarded as valid, both as initially published and as re-used in further work. If serious errors meriting calls for retraction had occurred in the target papers, it is the conclusion of this review that the uses made of the publications would have detected and reported them.
Further evidence for the above can be inferred from the types of uses observed here.
The 10 papers were referenced, in total, 868 times (formally, that is, with APA-style citations in the using papers' reference sections, and informally, that is, mentioned without formal citation). Most of the references appeared in journal articles (566, 65.2%) or in theses and dissertations (181, 20.9%), both of which are scrutinized through a formal process of peer review or faculty oversight, a central feature of "disciplined inquiry" (Shulman, 1997). And yet, as documented in Table 3, on only three occasions did users express disagreement with anything in the target papers. The overall pattern of review is summarized below; by summing the proportions of journal articles and theses, it can be seen that well over 80% of the target publications were referenced, applied/replicated, quoted, or contrasted/elaborated in a peer-reviewed or otherwise closely monitored publication.
Evidence for the integrity of the results in the target papers also exists in the uses made of them by subsequent authors. Analysis of applications/replications showed that most often only one target paper was cited by a user (though the single paper may have been cited several times). This suggests that users focused on single sources, and specific aspects, of the target research. Further, most target papers were cited almost immediately after they appeared: Of the ten target papers, seven were cited for the first time in the same year they appeared, two in the year immediately after publication, and one in the second year after appearance. The target papers also continued to be cited over time: The mean period from publication to last (most recent) citation was 7.4 years. (The target papers were originally published from 2000 to 2007.) These findings show how users accessed, studied, and applied the target papers: frequently, soon after publication, and continuously over time.
Further analysis of uses by others showed variety in the types and sources of the publications that used the target papers in terms of geographic origin, publication type, topics, and intended audience:
• The geographic origins of the using publications included America (40%) and Europe;
• Twenty-one of the uses were in publications from university departments, individual faculty, or university presses; among other things, it is uncertain whether these uses were peer-reviewed;
• Topic and intended audience might be seen in the titles of the articles or the publications that carried them. The three most commonly used key terms in the users' titles were computing/technology (68), higher education (14), and distance (12).

Summary of Findings
The following summarizes the findings noted above.
1. All of the target papers were used in some way by others.
2. Thesis/dissertation writers (students) engaged in all of the communications elements more often than other users, except quotations; in that area, journal articles exceeded the others. Writers of theses and dissertations were apparently testing theory and the findings of others by re-application and replication; they also more heavily documented their conclusions and analyses (through single and multiple references).
3. Journal articles were slightly more likely to contain quotations.
4. Usage patterns suggest immediate and ongoing focus on and use of specifics in the target papers.
5. Least likely to contain documentation of sources were conference proceedings (on five of the six communication elements, conference proceedings had the lowest ratio of the communication elements to number of publications).
6. Geographic location of user publications in this study matched closely the pattern reported in other research.

Discussion of Implications
The evidence presented earlier was that research in the social sciences, including distance education, is often not replicated, may not be cited by others, may contain errors that are only detected later, and may even contain fraudulent results or processes (the last two situations, when discovered, invariably resulting in retraction). The fear is that, where close examination does not occur, findings and conclusions may not be examined or verified by others, but may still eventually become part of the "literature." The intent of this study was to assess these concerns in relation to 10 published papers of one author. In using the target publications, other researchers tested and, when they did not report egregious weaknesses, or when they referred to the target papers' specific elements positively, affirmed their usefulness. There was no evidence of declining verification in the time the papers had been in circulation, nor calls for, or instances of, retraction. The majority of uses were in theses and dissertations and journal articles (together, over 80%), suggesting that the target papers were applied in the context of further research.
An advantage of these uses in assessing the validity of the target publications lies in the fact that students' work is usually conducted under, and subject to scrutiny by, senior academics, and journal publications are subject to peer review. These uses could be seen, therefore, as further corroboration of the soundness of the original papers.
It is probably not surprising that conference publications contain fewer references: Some conferences are not peer-reviewed at all, and the documentation expected elsewhere may therefore be seen as superfluous in the materials posted for them. University publications, on the other hand, are somewhat harder to explain: On four of the six criteria shown in Table 4, university publications were second last (most often to conference proceedings) in frequency of use of the communications elements studied. Another poorly documented form was books and book chapters, again for unclear reasons. These findings merit further research and explanation.
There did not appear to be a distinct advantage to publishing in prestige journals in terms of expected versus observed usage (although in terms of raw numbers, not ratios or proportions, most of the resulting usage did pertain to papers which appeared in prestige sources). The frequency of use of material from non-prestige sources is potentially surprising, but may simply relate to these 10 target papers; further research in this area is clearly merited. (A question that deserves exploration: What, in terms of their contents or processes, distinguishes prestige publications, after all?) Differing from the sciences, usage of these papers by others was not found to be associated with collaborative or multiple authorship: There was no difference in usage by others of co-authored versus solo-authored papers. Again, further research should determine whether this finding is common in distance education research or the social sciences more generally.
Patterns of usage suggested that other writers were actively attempting to make use of the various contents (tools, processes, findings, conclusions) of the target papers.
Certainly, the researcher who made 10 adjustments to previously published tools or procedures appeared to be diligently engaged in finding either an alternative application or in making the existing target tools and concepts fit new purposes. Statistically, those works that made some application of the target findings averaged more references (6.89 vs. 2.29) than those that did not (F = 25.25, p < .001). This study proceeded on the premise that publication is accompanied by scrutiny (which implies, and follows from, use), and that the pattern of usage can indicate how and whether a researcher's work has been received and used over time. It was conducted because the literature indicated that such an examination had not yet been performed in relation to published distance education research.
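The F statistic cited above comes from a comparison of group means (a one-way ANOVA between papers that applied target findings and those that did not). The sketch below shows the computation itself; the reference counts are invented for illustration and do not reproduce the study's reported values (means of 6.89 vs. 2.29, F = 25.25):

```python
# One-way ANOVA with two groups; all counts below are invented placeholders.
applied = [8, 6, 7, 9, 5]         # references in papers that applied a target finding
not_applied = [2, 3, 1, 2, 4, 2]  # references in papers that did not

def mean(xs):
    return sum(xs) / len(xs)

groups = [applied, not_applied]
all_values = applied + not_applied
grand_mean = mean(all_values)

# Between-group sum of squares: group size * (group mean - grand mean)^2
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: squared deviations from each group's own mean
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1               # 2 - 1 = 1
df_within = len(all_values) - len(groups)  # 11 - 2 = 9

# F = mean square between / mean square within
f_statistic = (ss_between / df_between) / (ss_within / df_within)
print(round(f_statistic, 2))
```

A large F, as here, indicates that the difference between the two group means is unlikely to be due to chance alone, which is the inference the reported F = 25.25 supports.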
Publications should be used and tested; the cruelest fate any publication can endure (other than being retracted) is not to be noticed by others. Yet, as was reported earlier, that is precisely the fate of many publications. Reasons for neglect vary, including researchers' lack of writing skill (Holdaway, 1986) and the preference of editors for work that shows positive results or papers that are more emphatic in their claims (Lehrer, 2010). There is no doubt that results must be relevant and timely and presented in ways that engage readers, especially where, as in distance education, text is the principal (often only) means of communication (Keegan, 1985; Kaye, 1989; Willis, 1992; Khan, 1997; Simonson, Smaldino, Albright, & Zvacek, 2006, 2009; Saunders, 2008). Possible culprits for the poor reception of research (and mistakes to be avoided by distance education researchers) include failure to look to the work of others for guidance; reviewers who are insensitive to the nuances of new research or unconventional findings; and investigators who work over already well-tilled ground. The possible role of readers and practitioners in these processes also bears further examination.