Linking theory to practice in learning technology research

Cathy Gunn a,* and Caroline Steel b

a Centre for Academic Development, University of Auckland, Auckland, New Zealand; b School of Languages and Comparative Cultural Studies, Faculty of Arts, University of Queensland, Brisbane, Queensland, Australia

(Received 31 January 2011; final version received 5 October 2011; published: 15 March 2012)

Abstract

We present a case to reposition theory so that it plays a pivotal role in learning technology research and helps to build an ecology of learning. To support the case, we present a critique of current practice based on a review of articles published in two leading international journals from 2005 to 2010. Our study reveals that theory features only incidentally or not at all in many cases. We propose theory development as a unifying theme for learning technology research study design and reporting. The use of learning design as a strategy to develop and test theories in practice is integral to our argument. We conclude by supporting other researchers who recommend educational design research as a theory focused methodology to move the field forward in productive and consistent ways. Changing common practice will be a challenge. However, the potential to raise the profile of learning technology research and improve educational outcomes justifies the effort required.

Keywords: research methodology; educational design research; educational theory; evaluation

Citation: Research in Learning Technology 2012, 20: 16148 - 10.3402/rlt.v20i0.16148

RLT 2012. © 2012 C. Gunn and C. Steel. Research in Learning Technology is the journal of the Association for Learning Technology (ALT), a UK-based professional and scholarly society and membership organisation. ALT is registered charity number 1063519. http://www.alt.ac.uk/. This is an Open Access article distributed under the terms of the Creative Commons "Attribution 3.0 Unported (CC BY 3.0)" license (http://creativecommons.org/licenses/by/3.0/) permitting use, reuse, distribution and transmission, and reproduction in any medium, provided the original work is properly cited.


*Corresponding author. Email: ca.gunn@auckland.ac.nz

 

Despite a reasonably large body of knowledge built up over recent decades, an Association for Learning Technology (ALT) sponsored report on Technology in Learning notes that research typically does not address the problem of building an ecology of learning or take important factors related to integration of learning innovations into account (ALT 2010, 5). The authors of that report believe it is impossible to separate theory from evidence in this area, reflecting a narrowing gap between educational theory and practice. The implication is that a body of reliable evidence from research into practice provides an ideal basis for theory generation and testing. Such evidence is also required to attend to theoretical questions about the nature of learning in context (Collins, Joseph, and Bielaczyc 2004). The question that remains unanswered is: what forms of evidence are sufficiently detailed, robust and reliable to be acceptable for these purposes?

Reeves, McKenney and Herrington (2010) present a plausible answer by proposing educational design research as one effective way to advance theoretical knowledge, and to provide sector-wide confidence in the outcomes of learning technology research. Other advocates of this approach concur, for example:

Historically, design in educational research has served as a way to implement theories for testing. The emerging design research paradigm treats design as a strategy for developing and refining theories. Three types of theories can be developed: domain theories, design frameworks and design methodologies. I argue for design research … because [it] engages researchers in the direct improvement of education practice. (Edelson 2002, p. 105).

Educational design research is supported by an existing knowledge base, in a methodological approach that blends theory with practice and context in a way that is sufficiently structured to present meaningful evidence to various stakeholders. The “continuous process” element has potential to address the “ecology of learning” challenge identified in a recent report published by ALT (2010, p. 5).

The proposal by Reeves, McKenney and Herrington (2010) is not new. It is a recent iteration of a call to action that has been repeated for more than a decade. Special editions promoting educational design research (with variations on the name but not the methodology) include Journal of the Learning Sciences 10(1 and 2) 2001 and 13(1) 2004; Educational Researcher 32(1) 2003; and Journal of Computing in Higher Education 16(2) 2005. Books include Educational Design Research (van den Akker et al. 2006) and The Handbook of Design Research Methods in Education (Kelly, Lesh, and Baek 2008). Sector-wide engagement with the methodology is progressing slowly however, for reasons as complex as the contexts that researchers aim to explain through its processes. Reeves, McKenney and Herrington (2010) point to weaknesses in programs designed to prepare the next generation of educational researchers and the preference of many journal editors for purely empirical studies. Equally challenging is the funding body preference for “established” methodologies and the requirement for researchers to work under pressures of time to produce results that show impact within an immediate study context rather than contributing to the generation or refinement of theories. With these influences in mind, a study of articles published in two leading international journals between 2005 and 2010 allowed us to see if the issues had changed since earlier investigations were undertaken, and whether similar or different influences were now at play.

A review of learning technology research articles: methodology

We chose AJET (Australasian Journal of Educational Technology) and ALT-J (since renamed Research in Learning Technology) as the source of articles to review because these journals represent leading professional societies and practitioner communities that aim to promote best practice in the field of learning technologies in higher education (Ascilite in Australasia and ALT in the UK). Our decision to conduct a review was motivated by concern that theory in the field was not advancing at the rate that might be expected with the increased volume of research publications and reports being produced. We believed that insufficient attention was being paid to the role of theory in this area of research and that the critical link between theory in educational design and practice was too often ignored. We also suspected that questions posed and actions proposed as a result of earlier critiques of research (e.g., Bannan-Ritland 2003; Barab and Squire 2004; Kelly 2003; Mitchell 2000; Reeves 1995) had not been widely addressed.

We began with three simple criteria for selection, which we applied to a total of 318 articles published in the two journals from 2005 to 2010. This preliminary selection process allowed us to identify studies that focused on practice in a higher education context and presented data on the impact of a technology enhanced learning design, i.e.,

  1. The research was conducted to evaluate some aspect of technology supported learning design or resources in use, or the use of technology to solve an educational problem;

  2. The research involved collection and analysis of data; and

  3. The research was conducted in a higher education context.

This screening process returned 100 articles, which we analyzed on 12 dimensions, developed to reflect the elements of educational design research and qualitative methodology in general. We refined the criteria after a trial run with a sample of articles that also served to verify common interpretation of the criteria by the two researchers. Each paper was assessed on the following dimensions:

  1. What educational problem or issue does the design/innovation attempt to address?

  2. Is the purpose of the evaluation to test or improve design of learning resources, a learning design or a technology solution?

  3. Is theoretical grounding of the educational design concept described?

  4. Does the evaluation use an appropriate methodology, and is it rigorously applied?

  5. Was evidence collected systematically from different sources and using different methods throughout the implementation of the elearning initiative? If not, [how] does this affect the scope of the findings?

  6. Are the limitations of the methods used (e.g., self-report, or sole use of objective or subjective data) noted in the paper?

  7. Was the context of implementation acknowledged in the evaluation design?

  8. Did the evidence clearly show the impact of the initiative on student learning and teacher behaviours?

  9. To what extent was the study longitudinal and what stage of development or implementation was in focus?

  10. What were the outcomes of the study?

  11. Were the findings informative for the study and possibly for other [higher education] contexts?

  12. Is this case an exemplar of any kind and why or why not?

While educational design research principles provided a guiding framework for analysis, we did not judge the overall quality of articles on their application. Rather, we considered the variety of types of research questions the articles addressed and acknowledged where the chosen methodology was suitable to the circumstances. We did, however, consider the broader implications that choosing a particular methodology would have on study findings and on authors’ claims of impact.

As well as context, integration and timeframes, two of our dimensions (i.e., items 3 and 11 in the list above) related directly to the role of theory. The first looked at whether theory was cited as a basis for learning resource or activity design, while the second considered the contribution that study findings made to production of general guidelines, theory development and testing or the general body of knowledge. While these two dimensions are the main focus in this paper, others – such as consideration of context and integration – are reported because they relate directly to the case we present. Pedagogical context (item 7 in the list) is an important aspect of integration that is often overlooked (Barab and Squire 2004; Gunn 2000). Pedagogical context is defined as “the relationship between a setting and how participants interpret that setting, including the meaning of practices” (Moschkovich and Brenner 2000, 463). Contextual detail should, therefore, be included as an integral feature of descriptions of evolving and successful innovations in published accounts. This also impacts on the question of whether findings will generalize to other educational contexts. Without knowledge of contextual influences and a strong theoretical connection, the question can prove difficult to answer. In order to address the role of theory in sufficient depth and detail, findings related to other dimensions will be reported in separate articles.

Results

We found a wide variety of study designs and purposes in the sample of 100 articles. In some cases, technology was used to solve particular educational problems. In others, the affordances of new technology were being explored. The majority were case studies of research into practice, and while investigation of authentic educational settings is generally recommended, the “one-off” nature of many studies limits their utility.

Snapshots vs longitudinal views

Figure 1 shows that more than half (n = 53) the selected articles featured studies that were snapshots at a point in time. Fewer than one quarter (n = 23) reported research that authors described as part of an ongoing study, while just 24 were longitudinal according to a fairly generous definition: a study that involved more than two iterations of an innovative learning design. Without some evidence that findings could be replicated in similar contexts and possibly others, or that they build on or will be used to inform future research, there is little scope to make any comment on underlying theory.



Figure 1.  Longitudinal study or snapshot in time?

Study objectives

Results on the dimension of explicit theory-related aims, shown in Figure 2, were similarly patchy, i.e.,

  • 21 explicitly sought to develop or extend theory, to explore its application to practice or to define design principles based on study findings;

  • 23 were exploring the affordances of technology as support for a particular pedagogical approach;

  • 17 were testing a particular, pedagogically driven approach; and

  • 6 aimed to refine a learning design to enhance student learning.



Figure 2.  Aims of research in published articles.

The remaining 33 articles, the largest group in each journal that shared a common purpose, reported studies designed to test the affordances of a technology and the impact this would have on learning. Some made broad reference to theory and how it related to the perceived affordance of the particular technology. However, many articles either failed to provide an adequate description of the pedagogical context or appeared to be so context specific that it was virtually impossible to consider the work as a contribution to theory or a body of knowledge. While such studies may be useful in the immediate context and sometimes beyond, they do little to address the need for a body of reliable evidence and generally applicable theories of learning with technology. This leads to questions about editorial policies and how the aims and scope of journals relate to broader theory generation objectives, as well as the motives of researchers in various professional practice areas.

Inconsistent reporting format

The variable levels of reporting we found in our sample did little to serve the broader goals of the learning technology research community. While many articles offered a full description of objectives, context, theoretical grounding of designs, evaluation methodology, data collection methods and findings, others included few contextual details, gave sketchy descriptions and drew conclusions without presenting any actual data. Of the 100 articles, 65 included some description of the theoretical grounding of the learning design; 19 had incomplete descriptions, and the remaining 16 provided minimal detail or none at all. Many authors made no serious attempt to reflect on or extend existing theories based on the study findings. This echoes what Kanuka (2011) and Gibbs (2010 cited in Kanuka 2011) say in their critiques of the tendency for scholarship of teaching and learning research to make scant reference to theory or previous studies and to assume, with little or no evidence, that findings will generalize to other contexts. Such articles add no value to our theoretical understanding of learning with technology, and incomplete descriptions do not reflect the principles of good practice in qualitative research design (Figure 3).



Figure 3.  Is theoretical grounding of design described?

Comments on methodology

Two key principles of qualitative research design are to gather data from different sources to support triangulation as a form of verification and to present all available evidence so readers can verify (or contest) interpretations and conclusions offered by authors. Neither principle was well addressed in many of the articles in our sample. Too many relied on one source of data and failed to acknowledge the limitations this would impose on the outcomes of the study. We found the practice of relying solely on attitude surveys far too common in situations where evidence of the impact on learning would be a real measure of success. A variable range of methodologies combined with incomplete descriptions resulted in a lack of clarity that limited the ability of readers to learn from the study or to consider broader implications of the findings. Some authors cited references without explaining, even briefly, why a particular methodology was chosen and how it worked for the study. Others provided enough detail for experienced readers to find similarities between the process applied in the study and existing methodologies. Without sufficient detail on the methodology it became necessary, but not ideal, to analyze the subtle differences between essentially similar approaches (Figure 4).



Figure 4.  Research methodology applied to study design.

The value of study findings

On the question of whether findings were useful in the study context and more generally, in worst-case scenarios, the findings were of questionable value even in the immediate context. Some articles presented insufficient data to support the conclusions drawn and were, therefore, considered to have produced no results of any real value for readers. Some were so context specific that nothing general could be claimed or derived while others revealed potential but noted that further study was required to develop this into meaningful findings. While some authors acknowledged this need they did not always indicate that further research was planned. The majority of studies did show results that could potentially be considered useful. However, in many cases, the predictive value was limited either by poor research design or the limited scope of data collected. Less than half the papers featured sound research design and sufficient evidence to support claims about general application of findings (Figure 5).



Figure 5.  Were findings useful in context and/or more generally?

While we do not wish to underrate the local value of research findings that apply only to a specific context or lack explicit theoretical linkages, we are concerned about the difficulties this presents to the learning technology research community as a whole. In a restrictive fiscal climate, this combination of issues does little to advance credibility in a field of research that already suffers from low esteem or to advance theoretical understanding of the nature of learning where technologies are involved. To address these limitations, we propose theory development as a link that must be forged to guide learning design and, as a key, unifying theme for learning technology research methodology and reporting.

Theory, methodology and educational research

Educational research methodologies have been a focus for debate for many years, and researchers such as Cobb (2001) have proposed using learning design as a strategy to develop and test theories. In this context, theory can be defined as an organizing framework that brings an additional layer of understanding to concrete experience by implying relationship, consistency and a degree of predictability and testability. It provides a way to move between concrete and abstract explanations of the same phenomenon. For theory development in the field of learning technology to progress, the challenge of balancing the predictive and innovative aspects must be addressed. In a seminal article on this theory development challenge, Brown (1992, p. 147) identified the need for new and complex methodologies to capture the systematic nature of learning and teaching. Her article put educational design research into perspective by outlining the increasingly complex task of studying a field characterized by evolving and sophisticated beliefs about learning. Unlike other approaches, educational design research provides direct and dynamic links between theory, research and practice. These links create opportunities to produce useful learning design frameworks and reliable evidence of impact on learning, thus helping researchers to meet the general criteria for what constitutes good theory.

The weight of evidence

Evidence is clearly a critical factor in the development and refinement of theory. However, our analysis of articles showed that many researchers paid insufficient attention to it in reporting and in supporting claims made as a result of their inquiry. Questions about form and weight of evidence in relation to claims are integral to theory development and testing. Brown (1992) identified a fairly common problem, which we also encountered: the “tendency to romanticize research, and base claims of success on a few engaging anecdotes or particularly exciting transcripts.” The challenge here is to find the means to convey what Brown calls the “selective and seductive” aspects, as well as the representative, reliable and repeatable findings. Mitchell (2000, p. 51) describes another angle on the same problem:

Much published research about education and the impact of technology is pseudo-scientific; it draws unwarranted conclusions based on conceptual blunders, inadequate design, so-called measuring instruments that do not measure, and/or use of inappropriate statistical tests.

While research design has clearly evolved in recent years, some studies show it is still a work in progress (e.g., Barab and Kirshner 2001; Fishman et al. 2004; Reeves 1995). Opportunities and methods to collect rich contextual data have expanded to meet the growing complexity of requirements. What remains to be addressed is the critical mindset that promotes the selection of appropriate data collection and analysis tools for the task at hand and goes beyond the immediate and interesting to contribute to theory and design principles in learning technology. This requires methodologies that are generative and transformative, directed at understanding learning and teaching processes, and that aim to both explore and confirm, using methods selected to fit the purpose and circumstances. It also requires a broad perspective, to acknowledge the importance of pedagogical context and ecologies of learning.

The case for longitudinal and theoretically grounded studies

Theory development is an organic process of exploration, discovery, confirmation (through cycles or iterations) and dissemination. The process needs to be theoretically grounded and support testing over time and in different contexts. According to our analysis, timeframe in research design remains problematic. Many studies continue to take a “snapshot at a point in time” to suit researchers’ immediate aims and funding body or other external requirements. While these are valid parameters, they should not be the only ones. The short-term nature of many inquiries meant that few authors explicitly or adequately grounded their research in relevant theory and then attempted to reflect on or extend that theory based on their study findings. This seriously limits the potential to generalize findings to other contexts.

To address the issue of adequate grounding in theory, we reiterate Edelson's (2002) advice to make design a useful part of theory development by keeping it research driven. Edelson argues that in order for design research to yield productive findings, it must be informed by prior research and linked to both research findings and research perspectives. Further, he implores researchers to draw on available theories (even if incomplete) and empirical results so that their work is “guided by an informed understanding of the gaps in current understanding in order to … make a useful contribution to understanding.” (p. 116).

The case for educational design research

The strengths of design studies lie in testing theories in the crucible of practice … in recognizing the limits of theory, and capturing the specifics of practice and the potential advantages from iteratively adapting and sharpening theory in its context. (Shavelson et al. 2003, p. 25).

As a practical way to address problems identified by other researchers and confirmed by our own study, we reiterate the proposal to adopt educational design research as a systematic and holistic methodology for research in learning technology. We believe this offers the greatest potential to reposition theory in the pivotal role it warrants. To serve the broad aims of learning technology research and theory development, Kelly (2006) and Bannan-Ritland (2003) both situate educational design research studies within a larger scientific cycle of exploratory or developmental and validation stages.

Exploratory studies can produce well-designed innovations that are worthy of scaling up. In the final chapter of Educational Design Research, Nieveen, McKenney and van den Akker (2006) note the importance of a developmental stage of learning design that is guided by “deep factual and theoretical understanding.” Edelson (2006) defines this level of understanding through context theory, which focuses on a design setting and might include, for example, an analysis of the needs of particular groups of learners, the nature of subject matter and learning in a discipline. As well as theoretically grounded designs, this stage can produce design principles for use in solving broader educational problems. For this, Edelson (2006) uses the term outcomes theory, as a way to describe the effects of interactions between design setting and design elements. Outcomes theory explains why a designer would choose certain elements for use in one context and different elements for another. It also supports a third kind of theory, which is the design framework and methodology that rests on context and outcomes theories. Our study suggests that few learning designs are grounded in such deep consideration of context or theoretical factors.

Validation studies are used to test the impact of an innovation in use and to provide input for future exploratory and development work. These studies contribute most to advancing domain specific instructional theories, echoing the ALT Report's (2010) call to focus on an ecology of learning and the claim that evidence cannot be separated from theory. Thus, a body of reliable evidence from research into practice at the validation stage provides an ideal basis for this stage of theory generation.

While the literature underlines the importance of both exploratory and validation studies, Nieveen, McKenney and van den Akker (2006) note the trend, which we observed in our own research, of emphasizing the final stage of the educational design research cycle. Thus, aims to test claims of causality through studies of learning designs in use are more common than those that focus on the grounding or rationale for designs. This trend is partly fuelled by traditions in educational research and the emergent nature of educational design research as a methodology. The parameters set by funding bodies and editorial standards define norms that both limit what researchers can do and, in the former case, set somewhat arbitrary timeframes they must work within. Another powerful driver is the tendency to focus on producing results that are immediately and practically useful in the study context. While this is understandable from the perspective of individual researchers, these factors combine to limit progress on important aspects of theory development, particularly in the rapidly expanding and high stakes field of learning technology. Educational design research offers a viable opportunity to remove these limiting parameters from the research environment.

The pivotal role of theory in educational design research

Reeves, McKenney and Herrington (2010) join a growing number of researchers (Barab and Squire 2004; Reeves, Herrington and Oliver 2005; van den Akker 2006; Wang and Hannafin 2005) in promoting educational design research as a “socially responsible” methodology.

The opportunity that design offers … is the possibility of using the lessons learned in constructing design procedures, problem analyses and design solutions to develop useful theories. (paraphrased from Edelson 2006, p. 101)

The question that remains is: what does adopting this approach, with its objectives of sustainable educational innovation and theory generation, mean in practical terms? First and foremost, it means a shift in thinking around acceptable methodology for published work, so that cases for theoretical grounding and systematic exploration become acceptable along with (well designed and thoroughly reported) validation studies. It requires longer timeframes for investigations and broader collaboration than is currently the norm. The collaborative nature of investigations involving learning designers, educational researchers, teachers, technologists and academic support staff is described in the study of elearning sustainability by Gunn (2010). Nieveen, McKenney and van den Akker (2006, p. 156) assert that distributed expertise and working relationships within interdisciplinary teams are critical factors to bring credibility to educational design research as a valid form of scientific enquiry and, thus, to broad acceptance of the theories it produces.

The iterative cycles of design, development, implementation and evaluation that are involved require long-term commitment from all parties. This can be hard to maintain in the face of competing demands and priorities, although the potential to improve learning outcomes and advance theoretical understanding of the field should be sufficient incentive to overcome any barriers.

Strong, tested and connected evidence that theory informed technology-enhanced designs can improve learning outcomes is required to further reduce the historical gap between educational research and practice. Researchers have linked this gap directly to the field of educational technology for over 20 years (e.g., Gunn, Woodgate and O'Grady 2005; Hammond et al. 1992; Rickard 1999; Zemsky and Massy 2004). During that time, the volume of learning technology research studies published in educational journals has expanded considerably. However, further steps are needed to turn the findings presented in published papers into reliable evidence that can inform design and provide plausible reports on the impact of technology on teaching and learning. While some learning technology researchers do gather solid evidence of such improvements, the challenge to produce and present evidence in a form that is sufficiently robust, standardized and acceptable to the wider educational community persists. Furthermore, theory needs to be used in an explicit role to inform research studies and learning designs, and the outcomes of studies of these designs should extend or enhance our theoretical knowledge-in-action (practice).

Conclusions

The literature reviewed and the study findings presented in this paper support our case to reposition theory to play a pivotal role in learning technology research. Without theory, there is no solid grounding for learning designs and no generally applicable aspect to findings. Yet, our analysis of articles published in two leading journals found the same situation as earlier studies of a similar nature: well-grounded designs and systematic evaluation approaches reported side by side with poorly conceived or poorly applied methodologies, limited reference to theory, weak results, incomplete descriptions, uneven presentation of data, and overblown and unsupported claims of impact and importance. While this is an extreme statement in relation to most of the articles we reviewed, the incidence remains unacceptably high and is, therefore, detrimental to advancing the field of research in learning technology.

We echo the call from other researchers who recommend educational design research, with its solid grounding and broad theory extending aims, as a suitable and holistic approach to the field of study. This is not a new approach, but one that has evolved over a number of years in response to changing conditions, beliefs and observations about learning. In some respects, it has developed in a similar way to a theory, i.e., by drawing on that which is already known, accommodating changing beliefs and conditions and proving consistent and predictive as a basis for organizing investigations in a complex and challenging professional field. Brown (1992) sums the matter up by acknowledging the importance of seeing current research goals and associated methodologies as part of the continuum they actually are. Changes in understanding of learning require changes in the methodologies used to study it. Kelly (2003) describes the discourses that come together in educational design research as those that support exploration and produce rich descriptions to illuminate arguments about process, with those that focus on confirmation and feature randomized trials of educational variables. The discourse that emerges from this combined approach is generative and transformative, directed at deep understanding of teaching and learning processes. Its aim is to support arguments constructed around the results of active innovation and interventions in teaching and learning. Both its basis and its outcomes are theories of learning with technology, hence our claim of a pivotal role for theory and educational design research as the reliable method for developing, testing and extending that theory.

If the accumulation of knowledge takes time and requires a continuous process of reasoning, supported by the dynamic interplay of methods, theories and findings, as Bloch (2004) advises, then what stage of maturity has learning technology currently reached as a field of enquiry? Synthesis across studies is needed to discover, test and explain the diversity of findings. Yet our own recent study suggests that the sector has yet to establish a set of norms that will allow researchers to reflect on that interplay, and to develop broad understanding, reliable theories and realistic expectations through that reflection.

We acknowledge also that the relationship between research and practice is complex. While sound research methods and reliable evidence can make a significant contribution, it is unrealistic to expect that these alone will create all the conditions necessary for teachers and policy makers to apply the findings of research in learning technology to practice. However, a useful step towards that goal would be to apply the “six guiding principles” that Bloch (2004, p. 99) identifies as underlying all scientific enquiry, including educational research:

  • Pose significant questions that can be investigated empirically;

  • Link research to relevant theory;

  • Use methods that permit direct investigation of the question;

  • Provide a coherent and explicit chain of reasoning;

  • Replicate and generalize across studies; and

  • Disclose research to encourage professional scrutiny and critique.

Our study suggests that there is still work to be done on all these points, except perhaps the first, if good theories and usable design frameworks are to result. A more even balance between exploratory and validation-focused studies is also implied.

References

Association for Learning Technology. (2010) Technology in Learning: A Response to Some Questions From the Department of Business Innovation and Skills, Association for Learning Technology, Oxford.

Bannan-Ritland, B. (2003) ‘The role of design in research: the integrative learning design framework’, Educational Researcher, vol. 32, no. 1, pp. 21–24.

Barab, S. & Kirshner, D. (2001) ‘Rethinking methodology in the learning sciences’, The Journal of the Learning Sciences, vol. 10, no. 1–2, pp. 5–15.

Barab, S. & Squire, K. (2004) ‘Design-based research: putting a stake in the ground’, The Journal of the Learning Sciences, vol. 13, no. 1, pp. 1–14.

Bloch, M. (2004) ‘A discourse that disciplines, governs and regulates: the national research council's report on scientific research in education’, Qualitative Inquiry, vol. 10, no. 1, pp. 96–110.

Brown, A. (1992) ‘Design experiments: theoretical and methodological challenges in creating complex interventions in classroom settings’, The Journal of the Learning Sciences, vol. 2, no. 2, pp. 141–178.

Cobb, P. (2001) ‘Supporting the improvement of learning and teaching in social and institutional contexts’, in Cognition and Instruction: 25 Years of Progress, eds. S. Carver and D. Klahr, Lawrence Erlbaum Associates, Mahwah, NJ, pp. 455–478.

Collins, A., Joseph, D. & Bielaczyc, K. (2004) ‘Design research: theoretical and methodological issues’, The Journal of the Learning Sciences, vol. 13, no. 1, pp. 15–42.

Edelson, D. C. (2002) ‘Design research: what we learn when we engage in design’, The Journal of the Learning Sciences, vol. 11, no. 1, pp. 105–121.

Edelson, D. C. (2006) ‘Balancing innovation and risk: assessing design research proposals’, in Educational Design Research, eds. J. van den Akker, K. Gravemeijer, S. McKenney and N. Nieveen, Routledge, London and New York, pp. 100–106.

Fishman, B., Marx, R. W., Blumenfeld, P. & Krajcik, J. (2004) ‘Creating a framework for research on systemic technology innovations’, The Journal of the Learning Sciences, vol. 13, no. 1, pp. 43–76.

Gunn, C. (2000) ‘CAL evaluation: future directions’, in The Changing Face of Learning Technology, eds. G. Jacobs, G. Conole and D. Squires, University of Wales Press, Cardiff, pp. 59–67.

Gunn, C. (2010) ‘Sustainability factors for elearning initiatives’, ALT-J, vol. 18, no. 2, pp. 89–103.

Gunn, C., Woodgate, S. & O'Grady, W. (2005) ‘Repurposing learning objects: a sustainable alternative?’, ALT-J, vol. 13, no. 3, pp. 189–200.

Hammond, N. et al. (1992) ‘Blocks to the effective use of information technology in higher education’, Computers and Education, vol. 18, no. 1–3, pp. 155–162.

Kanuka, H. (2011) ‘Keeping the scholarship in the scholarship of teaching and learning’, International Journal for the Scholarship of Teaching and Learning, vol. 5, no. 1.

Kelly, A. (2006) ‘Quality criteria for design research: evidence and commitments’, in Educational Design Research, eds. J. van den Akker, K. Gravemeijer, S. McKenney and N. Nieveen, Routledge, London and New York, pp. 107–118.

Kelly, A. (2003) ‘Research as design: the role of design in educational research’, Educational Researcher, vol. 32, no. 1, pp. 3–4.

Kelly, A., Lesh, R. & Baek, J. (2008) Handbook of Design Research Methods in Education, Routledge, New York and London.

Mitchell, D. (2000) ‘The impact of educational technology: a radical reappraisal of research methods’, in The Changing Face of Learning Technology, eds. D. Squires, G. Conole and G. Jacobs, University of Wales Press, Cardiff, pp. 51–58.

Moschkovich, J. & Brenner, M. (2000) ‘Integrating a naturalistic paradigm into research on mathematics and science cognition and learning’, in Handbook of Research Design in Mathematics and Science Education, eds. A. Kelly and R. Lesh, Lawrence Erlbaum Associates, Mahwah, NJ.

Nieveen, N., McKenney, S. & van den Akker, J. (2006) ‘Educational design research: the value of variety’, in Educational Design Research, eds. J. van den Akker, J. K. Gravemeijer, S. McKenney and N. Nieveen, Routledge, London and New York, pp. 151–159.

Reeves, T. C. (1995) ‘Questioning the questions of instructional technology research’, Proceedings of the Annual Conference of the Association for Educational Communications and Technology, Research and Theory Division, eds. M. Simonson and M. Anderson, Anaheim, CA, pp. 459–470.

Reeves, T. C., Herrington, J. & Oliver, R. (2005) ‘Design research: a socially responsible approach to instructional technology research in higher education’, Journal of Computing in Higher Education, vol. 16, no. 2, pp. 97–116.

Reeves, T. C., McKenney, S. & Herrington, J. (2010) ‘Publishing and perishing: the critical importance of educational design research’, Ascilite 2010, Curriculum, Technology and Transformation for an Unknown Future, eds. C. Steel, M. Keppell, P. Gerbic and S. Housego, Ascilite 2010, Sydney, pp. 787–794.

Rickard, W. (1999) ‘Technology, education, and the changing nature of resistance’, Educom Review, vol. 34, no. 1, pp. 42–45.

Shavelson, R. et al. (2003) ‘On the science of education design studies’, Educational Researcher, vol. 32, no. 1, pp. 25–28.

van den Akker, J., Gravemeijer, K., McKenney, S. & Nieveen, N. (2006) Educational Design Research, Routledge, London and New York.

Wang, F. & Hannafin, M. (2005) ‘Design-based research and technology-enhanced learning environments’, Educational Technology Research and Development, vol. 53, no. 4, pp. 5–23.

Zemsky, R. & Massy, W. (2004) Thwarted Innovation: What happened to E-Learning and Why? Final Report of The Weatherstation Project, The Learning Alliance, University of Pennsylvania.
