# Digital Technology in Capacity Development

**Enabling Learning and Supporting Change**

Cover image: Steve Johnson | unsplash

This book focuses on digital approaches to capacity development, reflecting the greater interest in how digital tools and platforms can be used for capacity development in the 'Global South'. While Covid-19 demonstrated some of the benefits of online learning, the widespread, often uncritical adoption of online tools driven by necessity has left many with an experience of 'emergency online learning'. This book aims to assist in the design of technology-enhanced capacity development by sharing evidence of practices that are principled rather than rushed; inclusive rather than creating new digital divides.

• Part 1 sets out the main thinking that informs our overall approach and the frameworks that guide our practice.

• Part 2 explores a series of assumptions about technology-enhanced learning (TEL) that are common in the literature and against which we tested our data. It brings new evidence to bear on how TEL can be used more effectively as part of learning and capacity strengthening.

• Part 3 is designed as a practical guide to walk practitioners through the steps to create relevant, inclusive and sustainable digital learning interventions.

• Part 4 offers a collection of 16 case studies that illustrate how we have put the principles into practice.

We have worked to evidence how technology can be leveraged effectively to enhance or strengthen capacities of individuals, teams or systems. We make clear that there are no magic bullets, that online approaches are not simply quicker or cheaper substitutes, and that solutions need to be selected carefully, designed well, and significant time invested if they are to work well.

We hope *Digital Technology in Capacity Development* will be of interest to researchers and practitioners in a range of institutions, whether they are directly responsible for designing, delivering or evaluating new initiatives or whether they are advising or funding those who do.

**Edited by Joanna Wild & Femi Nzegwu**
**Foreword by Professor Laura Czerniewicz**

## DIGITAL TECHNOLOGY IN CAPACITY DEVELOPMENT **Enabling learning and supporting change**

*Edited by Joanna Wild & Femi Nzegwu* 

Published in 2022 by

African Minds 4 Eccleston Place, Somerset West, 7130, Cape Town, South Africa info@africanminds.org.za www.africanminds.org.za

and

INASP The Old Music Hall, 106-108 Cowley Road, Oxford OX4 1JE, UK info@inasp.info www.inasp.info

© 2022 African Minds

All contents of this document, unless specified otherwise, are licensed under a Creative Commons Attribution 4.0 International Licence.

The views expressed in this publication are those of the authors. When quoting from any of the chapters, readers are requested to acknowledge all of the authors.

ISBN (paper): 978-1-928502-70-8
eBook edition: 978-1-928502-71-5
ePub edition: 978-1-928502-72-2

Copies of this book are available for free download at: www.africanminds.org.za

ORDERS: African Minds Email: info@africanminds.org.za

To order printed books from outside Africa, please contact: African Books Collective PO Box 721, Oxford OX1 9EN, UK Email: orders@africanbookscollective.com






## **Foreword**

There is no escaping technology. Technology has always been part of human society and of education. In the same way that higher education simultaneously shapes broader society and is moulded by it, so must the various technologies it employs be understood in relation to general social, cultural, and economic discourses. Throughout history technological trends have always been, and continue to be, entangled with power relations and contestations over control.

While technology has always been part of human society, different forms of technology afford different possibilities for individual human choice – when to use technology, how to use technology, what technology means culturally, in what ways it is possible to exert human will over inbuilt technological logics. Ironically, analogue, non-digital technologies (such as pencil and paper) and electronic non-digital technologies (such as overhead projectors) are considered old-fashioned, but they afford the most human agency. The terms of engagement are clear with the analogue, and it is possible for such technologies to be considered simply as tools. Digital technologies have more sophisticated affordances (given their microprocessors using binary numerical systems to represent data) but until they are networked, they still only offer one-to-one access to information. In that sense they are less of a threat to human authority. Language laboratories and CD-ROMs, for example, do not 'speak back' and are unchanged by interaction. The terms of engagement between human and non-networked digital technologies are also clear.

It is networked digital technologies which entangle humans and machines through the complex and invisible networks that connect machines and people. The social cost of technology as a social medium is the progressive erosion of human agency and control. Networked technologies are clearly so much more than tools. Nowadays, multifarious self-monitoring, analysis and reporting technologies (SMART) are increasingly touted as the new frontier in education. SMART technologies use some form of artificial intelligence or automation to interact, share, inform, monitor or modify users' behaviour and data. In education these may include smartboards, student tracking devices, automated attendance systems and tutor bots. Smart technologies are at the heart of datafied education systems; while there are clear opportunities for their integration, dominant extractive business models mean surveillance, lack of privacy, data selling and other risks.

Although developed sequentially, all types of technology co-exist in a differentiated higher education sector. They all need to be understood and engaged with by educators and students alike. Furthermore, all types are relevant in even the most resource-constrained environments with the most severe digital divides. In a globalised world everyone is touched by technology. Over 90% of the global population has a cell phone, which means that the vast majority of people everywhere are incorporated into technological ecosystems without their volition and often with negative effect. Digital divides and inequalities morph into new forms. It is nearly impossible for an individual to escape being a data point in a datafied world. Opportunities to opt out are shrinking, if not already near impossible, as choices about whether and how to participate are obfuscated.

At the same time, the skewed form and extent of digital encounters are unevenly spread and unequally experienced. Across and within countries, digital, educational, and digital education injustices are manifested differently: economically, politically, and culturally. While justice, access and equity are core principles for higher education, the injustices in the higher education digital system mirror those within societies at large. Thus, the dominant narrative about technology being an emancipatory force needs to be treated with caution.

It is for these reasons that TEL 'capacity development' in terrains with barriers to participation in education and society is critical. As the authors of this book note, in education, the 'online pivot' served only to emphasise this importance. As the authors also observe, the term 'capacity development' is fraught, loaded with contested assumptions. Capacity development in education in the Global South tends to happen in situations shaped by knowledge production and contribution skewed towards the Global North, usually from where the purse strings are pulled.

What to do? Yes, the whole undergirding needs to transform, but this is especially challenging when the higher education terrain is so integrally shaped by broader forces. Systemic social change cannot be achieved by a lone organisation. At the same time, educators cannot wait; they have to build the airplane as they fly it. Thus, 'capacity development', even with all its flaws, remains essential. As this book makes all too clear, 'capacity development' is not a homogeneous term: it takes numerous forms and it can be aligned with divergent values. It is particularly relevant in contexts characterised by severe resource constraints where there are capable, resilient people locked down by their circumstances even while demonstrating agency and ingenuity.

What is striking in this book is its pragmatic approach. In day-to-day language pragmatism is understood to mean practical and hands-on, a process of design within constraints. Being pragmatic can also mean being realistic, 'making do' and making decisions with what is available. In Freirean terms this means starting the learning process within the parameters of the local context for meaningful sense-making.

Pragmatism is also a philosophical tradition, associated with Dewey and others. It is premised on the principle that value and meaning are embodied in practical consequences. Thus, it is an approach that prioritises action and flexibility, where usability is an important criterion of merit.

The values and approaches which underpin the framing of 'capacity development' exemplified in this comprehensive book are what I term 'critical pragmatism'. Critical pragmatism is alert to how processes, constructions and interpretations surface issues of power, positioning, ideologies, assumptions and world views. Critical theories of technology assume that technologies themselves 'embody a certain type of rationality' that 'reproduce and construct inequities of power and stratified social and political value systems', while at the same time, by bringing 'these embodied norms to consciousness', they can be identified and challenged (Feenberg, 1996, 1999).

Critical approaches are therefore not neutral and actively seek to identify and address injustice and inequality. They assume that by understanding both the nature of the technologies and the specifics of the context, change is possible. Change and capacity development go hand-in-hand. Technology-enabled change in education is multi-faceted and requires negotiation between technology, pedagogy, policy, decision-makers, funders, infrastructures, cultures, educators and students. In resource-constrained contexts, human agency is harder to assert, yet just as essential.

This book shows the possibilities, the successes, the lessons learnt, the messiness and the authenticity of pragmatic capacity building across networks in numerous locations. Fortunately, INASP had the foresight to record its many years of activities and to take the time to analyse and learn from experience. This book makes an important contribution by providing insightful real-world cases and exemplars while the companion analyses of scholarly literature and the thoughtful frameworks offer understandings that are both theorised and practice-based. In effect, the book offers an experience-based account of what it takes to provide enabling environments for educator capacity development and student learning, as well as how to make change happen.

*Professor Laura Czerniewicz, November 2022, Cape Town*

### **References**

Feenberg, A. (1996). Marcuse or Habermas: Two Critiques of Technology. *Inquiry*, 39(1): 45-70.

Feenberg, A. (1999). *Questioning Technology*. New York: Routledge.

## **Acknowledgments**

This book is the result of a collective effort, and an even wider collective learning process, over many years amongst INASP's staff, associates and partners, past and present. There are too many individuals and organisations to list here, but in addition to extending INASP's appreciation to all of them, the authors particularly wish to thank those who have been most closely involved in the work which we present here: Julie Brittain who had the foresight to ensure INASP invested in its online learning work; Annelise Dennis for her input into developing the Learning and Capacity Development Framework and Verity Warne for designing the framework's graphic; Sian Harris and Tabitha Buchner for their valuable comments and suggestions; Love Calissendorff and Alex Barrett for supporting evaluation of so many of our courses; the many colleagues who have worked with us to develop and run AuthorAID courses and the AuthorAID community as a whole; our guest facilitators and stewards, including Bernard Appiah, Alejandra Arreola, Buna Bhandar, Dilshani Dissanayake, Funmilayo Doherty, Richard de Grijs, Felix Emeka Anyiam, Aurelia Munene, Haseeb Md Irfanullah, and Zainab Yunusa-Kaltungo; colleagues at the universities of Dodoma and Mzumbe, Uganda Martyrs and Gulu, the Association for Faculty Enrichment in Learning and Teaching in Kenya, and Ashoka East Africa – our partners in the TESCEA project; our partners in the AQHEd-SL project, particularly colleagues at the universities of Sierra Leone, Njala, Makeni; all of those who have contributed to the case studies presented in this book, including Tabitha Buchner, Jennifer Chapin, Sioux Cumming, Annelise Dennis, Josie Dryden, Harriet Mutonyi, Mai Skovgaard, as well as the teams at Thai Nguyen University, the Faculty of Medicine at the University of Colombo, and the Open University of Tanzania. 
We would also like to thank the two reviewers who generously read the full text and made many valuable comments and suggestions, which enabled us to improve it significantly. We are grateful to the funders who have supported this work over several years. They include the Swedish International Development Agency (Sida), the Foreign, Commonwealth and Development Office (FCDO) and partners who have supported and sponsored our MOOCs, in particular Organisation for Women in Science in the Developing World, Royal Society of Chemistry, and the East African Science and Technology Commission.

## **Introduction**

*Jonathan Harle, Femi Nzegwu, Joanna Wild*

### **What this book is about**

This book is about how INASP approaches capacity development – itself a problematic term that we explore further below – which includes training programmes, peer learning activities, convening groups to build knowledge and solve problems together, and mentoring programmes that match experienced with less-experienced professionals. We set out the thinking that informs our approach, the experience we have gathered and expertise we have built through years of learning (including our fair share of failure and disappointment), and how we try to put that into practice. While the ideas presented here are derived from work with Southern partners, we are clear that they may be just as relevant to those in Northern ecosystems. We have always understood learning processes to be mutual, and have learnt and shaped our practice through the knowledge and lessons that colleagues in the South have generously shared. In a world of finite resources, the wealthy North can also learn much from those who innovate daily without access to the same material or financial resources.

This book focuses specifically on digital approaches to capacity development, reflecting both the greater interest in how digital tools and platforms can be used for capacity development – particularly as a result of the shifts necessitated by the Covid pandemic – and the fact that plenty has already been written on how to design workshops and learning processes for predominantly face-to-face modes of delivery; much less on digital approaches. The book builds on several decades of learning and practice in capacity development. INASP's approach is always to blend the physical and digital, in-person and virtual forms of support and learning, which is why we favour the terms 'technology-enhanced learning' (TEL), 'blended learning,' and 'technology-enhanced capacity development' (TECD). It is also why we discuss approaches and interventions that are purely digital and those which introduce digital tools into learning processes that involve people coming together in person.

It is worth acknowledging here our position in this space. We are a UK-based non-profit, non-governmental organisation, and for the past 30 years we have worked with research, academic and policy communities across Africa, Asia and Latin America. As our original name suggests,<sup>1</sup> INASP has always thought of itself as a network of partners. Everything we have learnt and done in this time has been achieved in collaboration and close partnership with Southern colleagues. Nevertheless, questions of capacity, knowledge and learning cannot be disconnected from questions of power, position and privilege. We are conscious of being a UK-based organisation working with partners in the South, partners who are the experts in their field and who must grapple day to day with very different challenges to those we may face in the North. They have often relied on organisations like ours because of our proximity to funders and our ability to generate funding that they have been unable to generate locally (both because of local financing constraints and the preferences of Northern funders). That understanding informs all of our work and is encoded in our core values and the importance we put on partnerships.

<sup>1</sup> INASP was originally founded as the International Network for the Availability of Scientific Publications; in 2019 we became the International Network for Advancing Science and Policy.

## **Why write a book about using digital technologies in capacity development?**

The book began its life long before the pandemic pushed lives and learning online, and digital tools for learning and working were rapidly and hurriedly adopted. As well as transitioning all of our own work online, we were regularly approached by partners and peers for advice and support. This suggested that there was even more reason to collect our learning in a form that would be usable by others, and specifically those, like us, working within research and knowledge systems. While Covid-19 demonstrated some of the benefits of online learning, the widespread, often uncritical adoption of online tools driven by necessity has left many with an experience of 'emergency online learning.' The nature of the rapid, unplanned pivot has accentuated inequities in access to technologies and connectivity (Czerniewicz 2020; Czerniewicz et al. 2020; Hodges et al. 2020; Young et al. 2021). This book aims to inform the design of technology-enhanced capacity development interventions in the sector by sharing evidence of practices that are principled rather than rushed, and inclusive rather than creating new digital divides.

## **Organisation and structure of the book**

Part 1 sets out the main thinking that informs our overall approach, and the frameworks that guide our practice. We strive to be resolutely practical, and that means delivering the best work we can, guided by theory and following robust approaches, but being pragmatic too. We recognise that sometimes 'good enough' support at the point of need is better than 'near perfect' support provided much too late. Similarly, we also recognise the principle of 'more haste, less speed'. Taking the time to understand, search out with our partners and apply the 'best fit' for their needs has resulted in huge dividends for all parties. We acknowledge that sometimes we have to compromise on our ideals and ambitions to fit and tailor our work to the needs and timelines of partners and funders, or to respond to broader processes beyond the control of either. While unfolding realities often make things more complicated than hoped or envisaged, by being clear about our values, commitments and goals, we guide ourselves to do the best work we can, within the conditions available to us.

Part 2 explores a series of assumptions about technology-enhanced learning that are common in the literature and against which we tested our data. We assembled our evidence base entirely from work undertaken directly with partners in the Global South, or in the case of our large-scale online learning, designed to support learners based in Global South countries. While some assumptions are borne out by the evidence, not all are. We hope this helps to bring new evidence to bear on how TEL can be more effectively used as part of learning and capacity strengthening – both in Global South contexts and more widely.

Part 3 is the most substantial section of the book and is designed as a practical guide to walk practitioners – whether experienced TECD designers or those new to this role – through the steps to create relevant, useful and high quality digital learning interventions. It identifies the key stages and the questions to ask at each point in the design, delivery, and review process. It suggests tools, frameworks, and models to use along the way. It provides guidance on how to ensure a good learning experience at the point of delivery of a learning intervention and how to review and improve it based on participant feedback and learning from the process.

In Part 4, we have collected sixteen case studies that illustrate how we have, in practice, applied the principles of technology-enhanced capacity development in partnership with organisations in the Global South. References to the case studies are included throughout the book to enable the reader to jump ahead to the relevant section if they want to explore these as they read and to understand how we came to certain decisions when applying digital technologies to learning and capacity development.

## **Who might find this book useful?**

We hope this book will be of interest to a range of people, whether their work is local or global in reach, whether they are based in educational institutions or the public or social sector, and whether they are directly responsible for designing and delivering new initiatives, or advising, guiding and perhaps funding those who do. Capacity development or learning may be a significant part of their role or responsibilities, or they may be individuals with other specialisms and roles, whose work and that of their organisations increasingly requires them to grapple with questions of capacity and learning. Whether they have significant experience of technology and digital tools or not, they are probably curious about how digital technologies might enable them to achieve their aims better, or in new ways – and perhaps wary too, given their own experiences of poorly designed courses or workshops, technology letting them down, or of trying to work digitally over unreliable connections. We hope this book will help them see how technology can be leveraged effectively – but make clear that there are no magic bullets, that technology-enhanced approaches aren't simply quicker or cheaper substitutes, and that tools and approaches need to be selected carefully, designed well, and serious time needs to be invested if they are to work well.

## **Using this book – a guide for the reader**

We don't expect readers will read this book from start to finish in chronological order. Instead, we expect readers to dip in and out of the sections that interest them, especially while conceptualising, designing or delivering some form of digital learning intervention. Below we offer some suggestions of where to start, but the contents lists for each section will help the reader to locate the best starting point.

For readers interested in a broader discussion of how to design and deliver technology-enhanced capacity development initiatives, we suggest starting with Part 1. For those who want to find out more about the evidence base for technology-enhanced capacity development (TECD) and the evidence we have assembled from our work, we recommend starting with Part 2. For readers interested in how to design effective TECD interventions, Part 3 will be the right place to begin reading. Those interested in examples of what can be achieved using TECD approaches might enjoy getting familiar with the case studies in Part 4 in the first instance.

This book has been written by many different people, over more than a year. There are inevitable shifts in style and voice as a result of this collaborative writing process. We have opted not to flatten it to a single voice. While we have tried to ensure a clear flow through the text and help the reader navigate the ideas and evidence presented as easily as possible, we also expect readers to find their way through the text based on their specific interests. We hope that this adds to, rather than detracts from, a reader's experience of the book.

## **References**


# PART 1 **Setting the scene**

*Jonathan Harle, Femi Nzegwu, Joanna Wild*

## **Why does 'capacity' need to be 'developed'?**

While often presented as a bundle of neutral technical concepts, as a term, and as an area of practice, capacity development is inseparable from the histories of colonialism and the politics and funding of international development. This is seen, first, in the introduction of European and American science systems into countries of the Global South under colonial rule, the parallel dismantling of existing knowledge systems and epistemologies (Hall and Tandon 2017; Mormina and Istratii 2021; McCowan 2019), and their extension in the post-independence or neo-colonial era; and second, in the ways in which science has been incorporated into the development project from the mid-twentieth century.

In many cases, capacity development has been experienced as a set of practices defined by Northern expertise, financed by Northern development and research funders, directed towards building skills and knowledge in the South, and often delivered by Northern academics and professionals. In reality, our experience demonstrates that the art of developing capacity is multi-dimensional, multi-faceted and promotes the flow of knowledge in multiple directions. It is not a unilateral flow of knowledge and skills from one group of people to another. As a term, capacity development – sometimes capacity building or capacity strengthening – has been in use since the late 1980s (Clarke and Oswald 2010). It has been the subject of significant critique and analysis in the intervening period (Brinkerhoff and Morgan 2010; Morgan 2006; OECD 2006; Clarke and Oswald 2010; Mormina and Istratii 2021; Baser and Morgan 2008). It has often taken its starting point from an assumed deficit of knowledge and expertise, and taken the structures, resources and capacities of Northern science systems as its implicit benchmark, and achieving parity with them as an ultimate, if long-term, goal. This perceived deficit stems both from the privileging of Euro-American knowledge and its epistemologies, institutions and systems and a history of appropriation and destruction of Southern knowledge and its systems.

This history means that we, like other Northern organisations, find ourselves in positions of particular 'power and expertise' which we need to be aware of at all times: capacity needs to be developed, supported or strengthened because of systems, processes and structures which have prevented nations and individuals from developing their own knowledge systems, and because the current shape of many Southern knowledge systems reflects those of Europe and North America, which have come to represent global standards and norms. It means that Southern professionals must develop skills, knowledge and confidence to operate, participate and be recognised within those systems, as they also seek to define their alternatives and forms. This is why capacity development, especially where an external, Northern-based organisation is involved, must, indeed can only, be a mutual learning process – in partnership with others who are rooted in their system, who are strengthening their own capacities, or playing similar roles as our own.

In this section we describe our own understanding of and approach to capacity development. We recognise the problematic histories and the flaws of analysis and terminology, but believe that there is nevertheless a meaningful, respectful and valuable practice of 'capacity development' that can be developed, and which can support real change. In short, this is our understanding of 'how to do capacity development well'. We describe our own framework – naturally borrowing and building on the work of others – and explain how and where digital technologies fit into this; and how we seek to embed learning into this practice, to 'close the loop' and ensure that it is a practice that is progressively and systematically improved through experience and reflection. This then provides a foundation for the detailed, practical illustrations which follow in Parts 2 and 3 of the book and consolidate roughly a decade of technology-enhanced work (as part of 30 years of grappling with capacity development).

## **How we understand capacity development**

The United Nations Development Programme (UNDP) (echoing an earlier definition by the OECD's Development Assistance Committee) describes capacity development as 'the process through which individuals, organisations and societies obtain, strengthen and maintain the capabilities to set and achieve their own development objectives over time' and adds that 'if capacity is the means to plan and achieve, then capacity development describes the ways to those means' (UNDP 2009). The emphasis on local leadership ('their own development') and change 'over time' are both important and frequently overlooked.

Over the years, our understanding of change has evolved. Although INASP has long been concerned with supporting Southern research and helping to make Southern knowledge more visible, for many years we understood that to be about *strengthening* research systems. More recently we have come to appreciate that a stronger system is only part of the answer, and that systems also need to be more *equitable* if they are to produce the best research and evidence. This means at least three things.

First, that individuals are able to access opportunities whatever their gender, socio-economic background, geography or other characteristics and needs. Second, that it is possible for organisations to contribute to knowledge systems wherever they are located – so that it is not only organisations in capital cities, with long histories, well-established identities and wide networks that can participate and contribute. Finally, it means that multiple types and forms of knowledge are recognised and valued in the processes of research and evidence use (Harle 2019, 2020; Nzegwu 2019).

Achieving greater equity *in capacity development* also means the shape of our organisation and our mode of working is shifting too. While for many years INASP secured the funding to work with partners and led projects, we increasingly seek to support projects led by partners that have secured and hold the funding. We build collaborative leadership and decision-making structures in projects in which we remain the lead grant holder. In our current strategy (INASP 2020) we explain the intention to shift away from a Northern-based team with Southern partners, to becoming much more of a network, with staff, associates and partners collaborating from physical bases in many countries, and evolving our governance and organisational structures to enable that.

Throughout, our work has been values-led.<sup>2</sup> While they have been articulated in different ways, the core principles have remained constant:


These are inevitably challenging commitments to uphold day to day, and we reflect below on how we strive to do so in our work, but are frank about the difficulties, and recognise that we sometimes fall short.

## **A framework to guide our learning and capacity development work**

We have synthesised our learning and our approach to capacity development in our Learning and Capacity Development Framework (Figure 1 below). This framework gives us a common reference point and serves as a tool when working with partners. It also embodies the core values that guide our work. Below we explore each of these dimensions in turn, organising them under five key arguments – which could be taken as principles:

1. Capacity always already exists, and we learn together with our partners.
2. Technical expertise is important, but it is never sufficient.
3. Working across multiple levels of change.
4. Real partnership and mutual learning.
5. Learner-led and technology-enhanced.


<sup>2</sup> https://www.inasp.info/about/values

Any framework is only ever a starting point, and we don't imagine ours to be a fully comprehensive description of everything that might need to be incorporated and considered in a process of change and capacity development. Nor does using it mean we always get things right. It is intended as a tool to guide us, reminding us of the questions that need to be asked and the issues that need to be considered. Using it increases the likelihood that we will design better, but it doesn't escape a simple truth: change is hard and unpredictable, we need to be constantly alert to where our approach needs adjusting, and any tool is only as good as the professional judgements made by those using it.

*Figure 1 (described): the framework identifies entry points for change at three levels – individual, organisational and ecosystem – guided by five principles (enhancing existing capacity; beyond technical capacity; working across all levels of change; real partnerships and mutual learning; learner-led and technology enhanced) and by our four values. The aim is sustainable and lasting change: good capacity development enables individuals and institutions to independently and sustainably work towards their desired changes in policy and practice beyond the life of a project.*

**Figure 1: INASP's Learning and Capacity Development Framework**

In the sections that follow, we make cross-references to subsequent parts of this book, where further evidence and examples can be found. The case studies that we describe in this book, given its focus, are about using digital technologies. However, not all of INASP's work has been technology enhanced. We also provide references to other examples from our previous work with colleagues in research and policy.

## *1. Capacity always already exists, and we learn together with our partners*

Too often, capacity development starts from the premise that there are deficits – knowledge to be transferred or skills to be taught (Clarke and Oswald 2010). Many tools seek to identify gaps or assess needs – and far fewer start by identifying existing strengths. This runs contrary to everything we know about learning and development: helping individuals or groups see their strengths and build on them is far more effective than simply pointing out what they lack. More importantly, capacity *always* already exists – in a team, a group of individuals, an organisation or a wider system. Recognising that from the outset is critical – both so that we collectively focus on the right things to meet the right needs, and so that we build effective partnerships based on mutual respect.

Many efforts to 'develop capacity' assume a one‐way process, through which 'experts' develop the capacity of others. In the context of many international programmes, that expertise is often assumed to flow from North to South, or from professionals or experts in one sector or domain to those in another. As a result, many capacity development interventions still focus heavily on training. Disappointing results from many such initiatives – individuals who haven't developed new skills, or haven't changed how their organisations work – lead some to suggest that capacity development is a wasted investment (Denney 2017).

For INASP, capacity development describes a collective engagement, with partners, in a process of change, through which we learn together to strengthen or improve the way something is done or can be done in the future. Following our first value of 'in it together', this means understanding that we are *collectively* trying to achieve a set of outcomes, which themselves describe a desired change; and from there identify the appropriate entry points and opportunities to support and nurture that change. Importantly, this means that while the impetus for change may come from partners (who want to strengthen capacities in a particular area or to achieve a particular set of outcomes), it is a process of mutual learning. Our capacity as facilitators and partners is developed as we work with and alongside colleagues to realise the changes they and we, collectively, hope to see. Colleagues at the Africa Centre for Evidence have offered the term 'capacity sharing', to better reflect this understanding, and to recognise the multiple directions in which knowledge and expertise flow, between professions, disciplines and geographies (Stewart 2017; Africa Evidence Network 2021).

As external facilitators and supporters of these processes of learning and developing capacity, we are often outside the process of change that we and our partners hope to see. If we start by recognising the capacity that our partners bring, and from there identifying – with them – what additional capacity we can offer, we can then agree how best we can augment and add value to what is already there. It requires humility to do this: to acknowledge that we have much to learn, and that our capacity – our knowledge, skills and understanding – alongside that of other partners is also strengthened in the process. That's something we need to continually remind ourselves of, because in a world that rewards 'experts', it's easy to lose sight of what you don't know. Nevertheless, we also need the confidence to ask further questions, listening carefully to the answers but not being afraid to push further if we don't think we've collectively and successfully understood what is needed and why, or to explain why we don't think a solution will work.

Leveraging the capacity of all partners requires that we design our initiatives collaboratively. We refer to this later as co‐design, and it responds to our second value 'make change last' and our third 'every voice counts'. Designing together, usually across organisations and frequently across countries and continents, takes more time. While it is often energising to work collaboratively, bringing new ideas and approaches to bear, it can sometimes be frustrating for all parties. Progress can seem slow, decisions can take longer to reach, and it can be challenging to develop a shared view and a common approach, and to agree and work to shared standards. It can also be challenging within the time frames of funding calls and the other constraints that project-based and funded work can pose. Despite the difficulties, it is vital. Time and again, we have seen how patient but determined approaches build trust, and how hurrying through these stages can lead to subsequent fracturing. While it is tempting to try and take shortcuts in that process, a design that appears technically excellent can prove entirely unworkable if, in the process, it has ignored the environment in which it is due to be implemented. It can also prove unworkable if the demand from and interests of those who will participate are not met or if there is simply no collective ownership and thus no mutual commitment to its eventual success. Our partners are best placed to understand what is needed in the systems they work in, and they are the ones to drive change. For more on this theme see case studies 13, 15 and 16.

## *2. Technical expertise is important, but it is never sufficient*

## **Processes of change**

Capacity development is commonly presented as if it were a fundamentally technical process, through which expertise, knowledge and skills are combined with money and time to equip a group of people to achieve a particular goal. Conceiving it as a set of technical interventions is fundamentally limiting. Developing capacity is about engaging with and navigating a process of change – whether in individual attitudes and behaviours or organisational practices and cultures. Processes of change are social, cultural and – critically – political, in the sense that they involve changes regarding who has power in a particular situation. That is both power locally – in the organisation or ecosystem – and power in the partnership and project (Green 2016). Such processes must grapple with 'the way things are done' in specific places and organisations, and with the incentives, interests and cultures that underpin those practices and encourage or discourage change, and from there propose new ways of doing things in the future. To do so they must also recognise where power is held, and where power needs to shift, if new practices are to emerge and if new people are to be enabled to play new or different roles. If we want to 'make change last', we need to recognise this, and if 'every voice counts' we need to be alert to the voices missing from any conversation.

## **Understanding context**

When technical solutions do not work as intended, this is often attributed loosely to 'the context', and it is not uncommon to hear that 'context' matters when trying to identify the best ways forward. But context is often treated as an 'unknowable' – the repository for all the unforeseen factors that impede a project and that might be too difficult to understand, too distant to be influenced, or too hard to deal with (Weyrauch 2016). We need to understand 'context' better in order to respond to it well in the approach we take. To explore how contexts affect the use of knowledge and evidence in policy-making, and to provide a tool to diagnose it better, we collaborated with Purpose & Ideas<sup>3</sup> to develop the Context Matters framework (Weyrauch 2016; INASP 2021). We've recently explored how a 'light' political economy analysis, or a 'context and power analysis', might help us to make better decisions as we design and deliver our work (Hayter 2020a, 2020b). In our digital work, we've developed a Scoping and Design Decisions Tool (see Part 3) to help us ask the right questions when embarking on a new project, and quality assurance guides to ensure we pay attention to critical issues throughout the process. Despite the help of tools, it is rarely easy to find the resources and time for comprehensive analysis in every project or with every partnership. Sometimes we have to rely on the existing knowledge of the partnership and complement this knowledge through additional conversations. For more on this theme, see case studies 11, 12 and 16.

<sup>3</sup> Formerly Politics&Ideas.

#### **Part 1** *Setting the scene*

## **At the core – not 'bolted-on'**

INASP works predominantly with people and organisations in research and higher education systems. In many cases, we and our partners are interested in the capacity to do research. Just as a problem or need cannot be defined only in technical terms, neither is technical (or subject or disciplinary) expertise in a scientific domain sufficient to design an effective approach to developing or strengthening capacities. While it is common for research projects to include a capacity strengthening element – particularly those which bring together researchers from several countries and institutions – it is important that the 'capacity piece' is designed and resourced properly, rather than simply being 'bolted on' to the research. To have a chance of achieving results, it needs to be designed and delivered in conjunction with those who have expertise in adult learning and professional development. Nevertheless, it can be challenging to get this recognised. Academic systems are built around disciplinary expertise, and knowledge about 'capacity development' approaches is often seen as secondary to disciplinary knowledge, especially when seeking to strengthen research or teaching in those disciplines.

## **Embracing complexity**

Since change is complex, any efforts to strengthen capacity need to recognise and – as far as they are able – respond to those complexities, acknowledging the tensions between the ideal and pragmatic possibilities (Fisher 2010). In many cases this complex process of change requires groups of individuals to work together, generate new knowledge, nurture new sets of skills and competencies, rethink how things are done, create new structures, processes and policies to enable that change, support individuals as they execute their tasks, and ensure that change – and the capacity needed to ensure it – can be sustained and renewed. This complexity makes it challenging to see where to start, what to do and what not to do, and how to ensure that the right connections are made between various interventions. It also means taking a 'systems' – or 'ecosystems' – approach to change, recognising where relationships need to be built with different individuals and organisations, to influence thinking and practices, or to enable change directly. This is difficult to do when participants have limited time, authority, power or reach. Some factors may not be acknowledged, or if acknowledged, may be deemed to be beyond reach or too difficult to change.

More challenging yet is the recognition that capacity is at best a temporary state, occurring at a point in time. To really develop or strengthen capacity requires that the ongoing ability to adapt and evolve is also achieved, in order to respond to changing needs, as people change roles, and as organisations and groups are expected to respond to new challenges or external shifts. It can also be challenging when working within the structures of an organisation, especially on a project timeline – for example, formal hierarchies or lines of authority can restrict or slow change. While it is sometimes necessary to accept the limitations of change, achieving desired outcomes is possible if a team is committed to learning, adapting and navigating these constraints. It follows that capacity development can't simply be the provision of information, the delivery of training, or the provision of any other form of expertise or resources – although all of these may be part of the process. Instead, it is a collective, difficult, uncertain and complex process of understanding strengths, identifying what needs to be done to expand those, and designing a way to achieve that. In this way, new knowledge, techniques, tools, or approaches that are generated can become part of the fabric of the organisation and the routine practice of the individuals and teams. This enables them to do what they want to do, and continue doing that into the future as contexts change, demands shift, and people come and go. For more on this theme see case study 16.

## *3. Working across multiple levels of change*

It is now common to identify three levels at which change is needed, as shown in our Learning and Capacity Development Framework (Figure 1). All three need to be considered if effective capacity development is to be designed: the individual, the organisational and the systems levels (UNDP 2009; Punton 2016). Yet, despite this, many interventions focus predominantly on the individual, designing and delivering training or related programmes of support to develop the knowledge or skills of people, and perhaps small teams, but without considering the organisational environments in which those individuals work (Kunaratnam et al. 2021). No doubt this reflects the relative ease of organising training for individuals compared with the much greater complexities of organisational and system-level change, and the fact that the benefits of training are often more immediately tangible and easier for people to grasp. Support to individuals can certainly be valuable, but for it to have its greatest impact, the design of any intervention needs to consider the ways in which the environments in which individuals work enable them to perform optimally or constrain them from developing their skills and putting them into practice. That may relate to how teams work together, or to wider organisational structures, cultures, processes and, sometimes, incentives.

Whatever level partnerships seek to address, the most effective approaches come from a team that combines expertise in learning with subject or technical knowledge, to ensure that we can design and facilitate meaningful learning and problem‐solving processes. Recognising that change occurs at many levels, and has many facets, helps us to think about the most appropriate entry points in any learning or capacity development process. It also helps us consider the connections that need to be made with other levels so that we can support the progression and layering of learning – even if it is predominantly individuals who are engaged in the process. Nevertheless, it is not always possible or practical to work across all levels, because of time (whether ours or our partners'), funding, or the projectised nature of our work and that of our partners. Projects also bring their own timelines and rhythms that don't always match those of organisations, people and unanticipated events, no matter how well we seek to design a piece of work. For more on this theme see case study 16.

## **Individuals**

Any change initiative is powered by individuals, working in different constellations, and learning together or alone to develop new knowledge, skills and competencies. Individuals in any organisation have particular responsibilities, authority, knowledge, skills and interests. People are key to change: different people can effect change in different ways and require different kinds of support to do so. At the individual level we have identified four entry points for change:

• Co-defining learning
• Building foundational knowledge
• Skills strengthening
• Mastering competencies



These entry points are not clear‐cut but indicate the primary focus of a learning or capacity development intervention. For example, our research writing MOOCs support researchers in developing their writing skills by combining the introduction of new knowledge with opportunities for practice. However, researchers need to write and submit several papers to become competent in research writing. The AuthorAID mentoring community supports precisely that – practice and mastery under the supervision of a more senior researcher. This approach is much more effective in achieving lasting change than one‐off interventions that show the possibility of change but offer no follow‐through to internalise it. For more on this theme see case studies 1, 8, 9 and 10.

## **Organisations**

For change to go deeper it needs to affect the practices of groups of people working together, within and across teams, to create broader shifts in practice. It also needs to be enabled by shifts in the policies, processes and systems that organise work and organisational life. Understanding capacity at the organisational level can be difficult, because it represents the collective abilities of many individuals through many teams, structures, and processes. The European Centre for Development Policy Management (ECDPM) identifies a set of five capabilities that, together, they argue, define an organisation's capacity. These are: the capability to act and commit; the capability to deliver on development objectives; the capability to adapt and self‐renew; the capability to relate to external stakeholders; and the capability to achieve coherence (Baser and Morgan 2008). Building on these and other models, INASP identifies three entry points to support change at the organisational level:

• Co-reviewing organisational capacity
• Enhancing organisational structures
• Strengthening relationships


Providing support at this level might involve working with teams to learn and solve problems together, through a facilitated process, or identifying where change is needed and diagnosing how that can best be achieved within their circumstances and resources. It's also possible to target support primarily to individual learning needs, whilst also being cognisant of and making connections to the organisational level, for example, by developing an organisational mentoring or professional development programme that can be sustained, and developing the skills of facilitators to do that, rather than simply training a group of people in a one‐off event. Organisational capacity and change are, of course, complex fields of practice and not every initiative can hope to strengthen capacities at that level. For more on this theme see case studies 12, 13, 15 and 16. See also our discussion of approaches used to support capacity for evidence use (INASP 2016).

#### **Ecosystems**

If organisations are vital to deepening and sustaining changes in practice, they are also enabled and constrained by the wider systems or ecosystems of which they are part. They depend on the capacity and interests of other organisations to achieve change and to build enduring capacity bigger than themselves. Change usually requires individuals to work as an existing team or to form new teams across their own organisation, and with colleagues in other organisations. Systems are also essential to scaling change. We use the terms system and ecosystem interchangeably here: the literature often refers to 'systems', especially in the case of research, but we feel 'ecosystems' better describes the complex web of formal, informal, visible and less visible (or invisible) connections, conditions and contexts.

In the case of INASP's work, these are typically research and knowledge ecosystems – sometimes called the knowledge sector – which are connected to and interact with wider governance, economic and social systems (Datta 2018; Fransman et al. 2021). These systems determine how the actions of individuals and organisations are governed and guided, both by formal institutions – such as rules and regulations – and by the complex weave of established norms, practices and cultures, and the interests, incentives and disincentives that these create. Each of the actors or parts of an ecosystem influences the other parts, meaning that not only is it difficult to predict how change will happen but, as it does, the system will also change in response. This will in turn change the conditions for change elsewhere – at individual and organisational levels (Bowman et al. 2015; Brinkerhoff and Morgan 2010). Strengthening capacity at a system level is, therefore, a case of supporting individuals and organisations to 'see' their ecosystem, identify how best they can work to achieve change within it, work outwards to influence the thinking and practices of others in the ecosystem, and adapt with the system as it changes around them. This makes understanding context, and embedding learning and adaptive approaches, imperative. We explore these more fully in the section 'Closing the learning loops', along with our approach to partnership, which is introduced below.

At the ecosystem level, we identify two entry points through which we can support change:

• Strengthening relationships and enabling learning between sectors
• Strengthening relationships and enabling learning between countries


Providing support at this level can involve facilitating dialogues or processes that bring together people from many different organisations and across sectors or professional groups. It enables them to build or strengthen relationships, co-define problems, formulate collective initiatives to address those at a national or system‐scale, or learn and collaborate across countries and regions. The results may be difficult to predict, and the opportunities to engage in these processes may also be unclear at the outset: while it may be possible to anticipate or design for some, others may simply emerge along the way, and will need to be seized when they do.

It follows from the discussion of the process of change, its complexity, and the levels of change described above, that change often has a generational dimension. By that we mean that it may not be possible to achieve everything from the outset of an initiative, even if all of the areas and levels where change is needed have been clearly identified. To attempt it may introduce too much complexity, or overwhelm a project at too early a stage. For example, while we may see that organisational and even ecosystem change is needed, it can take time to build that understanding with a new group, who may be more comfortable and more familiar with thinking about change and capacity in terms of individual skills and knowledge. An initiative, or a partnership, may need to mature in its collective understanding of a problem before it is able to move towards more complex or further levels of change in a second phase. The key is, first, to recognise this and ensure it is a conscious and explicit decision (not simply something that is overlooked in favour of a simple intervention). Second, it is important to seek ecosystem connections, so that this learning process can happen, and to shift the group's thinking in the process. Third, one needs to be alert to unexpected opportunities to engage other parts of the ecosystem to influence or lead the change. For more on this theme see case studies 3, 5, 6, 14 and 16, and INASP (2016).

## *4. Real partnership and mutual learning*

Because INASP is typically an external facilitator and supporter to any process, the approach that we take to partnership is an important part of our wider approach to capacity development. There are so many principles and guidelines for partnerships, the term is in such frequent use and there are so many stories of poor practice and inequitable relationships that partnership can sometimes feel like a term that has been emptied of any real meaning. Nevertheless, partnership – the fair and equitable process of collaborative working with colleagues and partners – is central to our practice. For a more considered discussion on partnerships in research see Fransman et al. (2021), and for useful guides see Wiesman et al. (2018) and Newman et al. (2019).

For INASP, partnership is both the right and best way to work to achieve lasting change. It starts with and is founded on mutual respect, and, from there, the building of trust. It means working collaboratively and taking decisions together. It means being clear that our capacity is also developed by our partners, as we learn from and with them. Because change is complex and uncertain, strong partnerships are critical to root those change efforts in local structures and systems, under local leadership – where local simply means 'closest to the change'. Relationships, and the efforts to build them, are vital – within and between organisations, and between individuals. Partnerships are also an important extension of our values that we are 'in it together', that 'every voice counts' and that we 'do things right'.

Of course, there is also a pragmatic dimension. The structures of projects, the struggles for funding, the requirements and expectations of sponsors or funding bodies, and the need to make progress and demonstrate results to these and other stakeholders often require compromises. That might mean doing work less collaboratively to meet a pressing deadline, or that decisions are taken by a smaller group, or even an individual. However, the intention to be as collaborative as possible does at least push us to be clear when we or partners cannot work in that way, and to make sure it is mutually understood, and that roles and responsibilities are clearly agreed upon from the outset (INASP 2020).

Partnerships are also about power – an uncomfortable and often unspoken dimension of relationships, but one we have to grapple with if we want them to succeed – and, like any set of human relationships, they are vulnerable to mistakes and fallings out. For all the guidelines and principles, they are difficult and messy. They don't just happen by fiat but must be carefully and deliberately led, facilitated and nurtured. They take time, investment, determination and commitment. They take the celebration of success when things are working, the clear-eyed recognition of failure when things aren't going so well, and a willingness – and at times courage – to confront problems, find solutions and initiate the difficult conversations needed to do that. We have also used a partnership survey to invite our partners to tell us how we are doing and where we need to improve, and have used this to reflect on where we need to make changes (Harle and Barrett 2020). It helps to recognise that, to be effective, the process must be one of mutual learning. If we start by understanding the questions we wish to explore together, jointly defining the problem and understanding context, needs and existing capacities, we're in a better position to identify the most appropriate solutions. For more on this theme see case study 16.

## *5. Learner‐led and technology‐enhanced*

While digital technologies are now part of almost all the work we do, it is important that learners, or those who are working to develop capacity and effect some form of organisational or systems change, *and not the technology*, take centre stage. Technology should be used to enhance the learning and change process, rather than determine what is done and how. We fully align ourselves with what Fawns (2022) called an 'entangled pedagogy' – a model that 'encapsulates the mutual shaping of technology, teaching methods, purposes, values and context' and where 'outcomes are contingent on complex relations and cannot be determined in advance'. Our approach is based on the theory and practice of effective adult learning, which helps us understand how adult professionals learn best. To make the most effective and appropriate use of technology we draw on insights from the technology‐enhanced learning research and practice communities. We are guided by the Principles for Digital Development<sup>4</sup> to ensure that technology is deployed thoughtfully and strategically, provides a coherent learning journey, and bridges and connects interventions at different levels. Technology can help us reach new learners who might be unable to afford the time or the expense of participating in a physical learning programme, provide learning opportunities at scale, reach many individuals collectively and at a lower per‐person cost, or offer more flexible modes of support which allow individuals to fit learning around busy professional and personal lives. We have learnt this through the more than 60 online and blended courses that INASP has offered since 2011 for professionals working in higher education and research in the Global South. These courses have attracted more than 40 000 participants overall, with nearly half completing successfully, and feedback has been consistently favourable, reporting positive outcomes. For more on this theme see case studies 3 and 4.

<sup>4</sup> https://digitalprinciples.org/

## **INASP's approach to the use of technology in capacity development**

There are various terms to describe learning that is mediated through technology – e‐ learning, online learning, digital learning and technology‐enhanced learning are used most commonly and often interchangeably. Technology-enhanced learning (TEL) has been used in several ways, but here we adopt a definition that most closely reflects our practice. For INASP, technology-enhanced learning means effective and creative use of digital technology to optimise the learning experience. This definition emphasises that the main goal of using technology is to create the best learning experience possible – whether the learning environment is a traditional classroom (face‐to‐face), an online space (online learning), or a mix of both (blended learning). In this book we use the term 'technology-enhanced capacity development', or TECD, whenever we refer to the entirety of our approaches that use technology to enable the capacity in question. Still, we also employ the terms 'online learning', 'digital learning' and 'blended learning' where it is appropriate. We seek to respond to Selwyn's (2010, p. 66) call to 'develop "context rich" accounts of the often compromised and constrained social realities of technology-use "on the ground" in education settings'. We want to show how technology has been used to support and strengthen capacity development in educational environments with relatively limited digital connectivity and access to infrastructure. We offer lessons derived from practice – including in ways that might be perceived as relatively 'low-tech' but have nevertheless proven to be practically useful – rather than argue for what digital technologies might be able to do. This is even more important now that many learners have had their first and often frustrating experiences with online learning in an 'emergency mode'. 
In the process we want to be frank about the limitations and inequities of technology-mediated approaches, while also identifying the ways in which they have increased participation and learning success.

## *Four key characteristics*

Here are the four key characteristics of our approach to TECD:


## **Principled**

At the core of our work is the premise that our approaches must be sustainable, relevant and appropriate to the needs of our partners. We officially endorse the Principles for Digital Development,5 which we see as being closely interlinked and embedded throughout our work. To put these principles into practice and ensure they work within our context, we have developed an INASP‐specific approach grounded in our organisational values. The following guidelines help us decide when and how we use digital technology to achieve more significant development impact.

#### **Table 1: Digital Development Principles enacted at INASP**


<sup>5</sup> https://digitalprinciples.org/

#### **Part 1** *Setting the scene*




## **Based on systematic evaluation and learning**

INASP's approach is grounded in systematically collecting and evaluating learner feedback before, during and/or following each activity. It can be described as an iterative process designed to generate increasingly better outcomes for our approaches and activities. Feedback mechanisms are designed (with clear outcomes) at the inception of an activity, gathered at a frequency appropriate to the activity, analysed during and after the activity, and fed back into the design process either during the activity (to 'correct course') or in the design of the next similar type of activity. This has helped us identify and repeatedly validate key factors that need to be considered before designing any online intervention, which we then integrate into our scoping and design process (see Part 3). Part 2 of this book shares evidence from a decade of learning, drawing on case studies collected from various aspects of our work.

## **Grounded in an integrated approach to educational theory and principles of learning design**

Our practice of designing technology‐enhanced learning is based on recognised pedagogical approaches and frameworks derived from major perspectives on learning: the associative, constructivist/individual, constructivist/social, and situative (Mayes and de Freitas 2004; Beetham and Sharpe 2019). Beetham and Sharpe (2019) assert that 'far from competing, these theories together offer a set of complementary ideas which point to broad pedagogic principles'. Laurillard extends this by arguing that learning experiences will be improved if we harness as many perspectives on what it takes to learn as possible: 'Each of the principal theories of learning has something to contribute, and together they provide a comprehensive account of what it takes to learn' (Laurillard 2012, p. 45). Laurillard's Conversational Framework (Laurillard 2002) maps well against all four perspectives of Beetham and Sharpe (2019, p. 243) and is one of the key pedagogic frameworks we use at INASP alongside Knowles' adult learning principles (Knowles et al. 2014), Kolb's Experiential Learning Cycle (Kolb 1984), Fink's Taxonomy of Significant Learning (Fink 2013), and the Community of Inquiry framework (Garrison 2016). We discuss these frameworks in more detail in Part 3 of the book.

As we moved from offering our first online course in 2011 to fully integrating technology into our capacity development approach four years later, finding the right methodology to guide our design process was paramount. We have adopted the 'learning design' methodology, 'a formal process for planning technology‐enhanced learning activities, usually supported within a community where designs and ideas can be shared and re‐used' (Lewin et al. 2018). We describe learning design in more detail in Part 3 of the book.

## **Fully integrated into capacity development**

Earlier, we described our approach to capacity development as learner‐led and technology‐enhanced. We do not seek to replace more traditional methods such as face‐to‐face workshops with online modes of delivery. Instead, digital technology enhances and redefines what we do by:


For example, online learning enables greater participation in our capacity development initiatives, supports learning at the point of need (i.e. preparation to deliver learning can occur more rapidly), and enhances connections between participants nationally and internationally.

We acknowledge that the pedagogies for traditional face‐to‐face interventions cannot and should not be replicated in an online space. To design powerful learning experiences online, we step away from the pedagogy of a face-to-face workshop and analyse the situation for learning anew, considering both new opportunities and new barriers to learning. Therefore, there is no single type of course and no single pedagogical approach. Each of our courses is different depending on its pedagogic intent and the context of the learners. The learning pathway is purposefully mapped out – we take great care to meet every single person in their learning journey and gradually support them in mastering their skills and capabilities, until they can support others locally.

Each of our TECD activities has its place in the Learning and Capacity Development Framework (see Figure 1), which helps us to be honest and clear about what it can enable learners to achieve and where they need to go next:


## **Closing learning loops through monitoring, evaluation and learning**

## *Learning to improve*

By engaging in a continuing process of reflection and improvement, we aim to discover how capacity development can be more effective in enabling groups of people to do better research, enable learning, and effect change. We have learnt much from others – particularly from our partners, with whom we've designed and delivered learning programmes, facilitators we've worked alongside, and those coming to learn (Bailey et al. 2016). We have also had to learn and change ourselves to become – and continue to be – effective facilitators of and partners in capacity development and learning. Systematically monitoring the difference that our capacity development initiatives create, in the context in which they are delivered, is a fundamental component of the capacity development process. As we seek to understand how best to enable an iterative process of shared capacities and co‐developed approaches, we have recognised the need for an adaptive evaluative approach that continuously 'feeds', refines and sustains capacity development efforts.

The figure below illustrates this ongoing cyclical process. **Implemented** technology‐enhanced capacity development activities **generate feedback**, which is **reflected** upon and considered by the project team; new **learning occurs**, allowing for **adaptation**, and the next **improved iteration** of the activity or initiative emerges.


**Figure 2: Ongoing cyclical process of Monitoring, Evaluation and Learning**

## *Embedding monitoring, evaluation and learning in capacity development*

Our approach to the monitoring, evaluation and learning (MEL) of capacity development has its roots in an organisational commitment to equity that dates back to INASP's foundations. The idea of 'participatory' monitoring and evaluation has been foundational in the development of our approach to the 'MEL of capacity development'. Participatory MEL is not new and has its roots in the wider movement of applying participatory research to development. It is, in the words of Rossman (2000), 'fundamentally about sharing knowledge among beneficiaries of the programme, programme implementers, funders, and often, outside evaluation practitioners'. The language feels dated but the principles upon which it is based are ageless and remain as valid now as they were over 30 years ago. Participatory MEL's five principles are participation, negotiation, learning, flexibility and methodological eclecticism (Rossman 2000). Much of this work was predicated on earlier studies of participatory development of which Robert Chambers' work, most notably his 1997 book *Whose reality counts?,* was pioneering (Chambers 1997; Estrella et al. 2000).

The principles of participatory MEL are sound and crucial in addressing issues of programme coherence, vision, equity and ownership. The application of these principles has not always been successful as critics are quick to point out, especially in the interplay of power dynamics – often a factor in the relationship between the funder and the funded, then and now (Estrella and Gaventa 1998; Parkinson 2009). That the application of these principles in practice was flawed in development practice is unquestionable; that the principles themselves remain valid and authentic ones upon which MEL can be designed and implemented to support capacity development initiatives is without doubt. The application of these principles, to the extent that INASP has attempted to contextualise, adapt and implement them with its partners, has yielded many successes in our collective capacities to monitor, evaluate and learn from our capacity development initiatives.

INASP's approach has also benefitted from more contemporary work in the field including:


We aim to systematically track and measure clearly defined and observable change as it occurs throughout the learning experience – at the individual, institutional or ecosystems levels – in a manner commensurate with the scale of the project or activity; and always ensuring a minimal burden on those gathering and using the data. Our approach also seeks to ensure that data are understood within their existing context (i.e. where, how and from whom data are generated).

## *Elements of MEL*

As INASP travelled its own 'MEL in capacity development' journey, it intentionally attempted to address a number of the distorted power relations that limit the genuine application of Chambers' original principles. In so doing, we have identified three essential components that allow an embedded and adaptable monitoring, evaluation and learning 'system' to be built into the capacity development process, whether at the individual, organisational or ecosystems level. These three elements incorporate and operationalise much of what Chambers and colleagues were striving to articulate and apply within their own contexts. They are: **MEL prioritisation**; **MEL plan development**; **Learning and adaptation**.

## **MEL prioritisation**

Positioning MEL as an *a priori* component of the capacity development process needs to occur at the stage of conceptualisation and planning. Partnering with MEL staff within an individual project or across a partnership ensures that the rationale, approach and staffing for MEL are fully integrated into the capacity development activity at any level of operation. Even more important is the need to ensure that there is a mutual vision, understanding and ownership of the capacity development activity itself as well as the MEL process that underpins it. MEL is needed to learn from and improve the activity or activities which themselves are designed to bring about an identified change in capacity. Rather than participation, INASP seeks equitable engagement, ownership, and sharing of all activities and the learning emerging from these activities.

Even in 'light touch' and usually one‐off capacity development activities (that do not require full-scale MEL plans), it is essential that MEL seeks out and focuses attention on feedback or evaluative data, and on the subsequent learning about the activity. There are key questions at this stage of MEL prioritisation that need to be asked by the project. These include:

<sup>6</sup> https://www.betterevaluation.org/en/plan/approach/developmental\_evaluation

<sup>7</sup> https://www.betterevaluation.org/en/plan/approach/utilization\_focused\_evaluation

<sup>8</sup> https://www.betterevaluation.org/en/plan/approach/outcome\_mapping


While these questions are not exhaustive, they are illustrative of the level of thought and engagement that makes the difference between a project where learning is fundamental and valued and one where MEL has been grafted on simply as a requirement of the funder.

## **MEL plan development**

The development of a coherent MEL plan is fundamental to the success of any medium- to large‐scale capacity development activity or project. Development of the plan assumes that there is MEL management in place to develop and embed the structures and processes that allow for appropriate data collection, analysis, learning, sharing and use of the learning.

Developing a plan requires that we address a number of key questions:


## **Learning and adaptation**

Learning is one of the central pillars of MEL, yet it is the one that is most often forgotten, or at the very least neglected, in the focus of a capacity development activity or project. It is perhaps a truism to state that learning is fundamental to the entire capacity development objective, with an impact that can extend well beyond the lifespan of the activity. Learning requires significant reflection and decision‐making about the implications of the learning to maximise its potential. The *process* of learning also requires this same level of thinking to identify how best it can become structurally embedded within the activities of the institution (beyond the particular set of activities or project), so that learning occurs routinely, is funnelled back into the wider activity or programme context, and the learning loop is closed. This is pivotal to the entire process of capacity development. Practitioners must have in place a system that enables them to track, capture, share, reflect on, comment on, and apply their learning. Failure to anchor such a system in place is a major factor in the limited impact often attributed to capacity development activities and in their lack of sustainability.

The following key questions should be asked to identify the availability and quality of an in-built learning and adaptation function:


MEL is an essential component of all capacity development activities. Indeed, it lies at the very heart of the 'development' or 'sharing' element of the process. It is the vehicle through which the growth and embedding of knowledge and skills is assessed and collectively shared. But for it to be effective it must be fit for purpose. In other words, the complexity of the MEL component must align with that of the project. MEL evidence collection can range from a simple feedback activity to more complex and comprehensive survey tools, learning analytics, pre-/during/post-surveys, evidence of organisational culture shifts in decision‐making and policy development, etc. Wherever the capacity development activity falls on the spectrum of MEL complexity, ensuring that awareness is raised about its value and necessity at project inception is, without doubt, essential for the success of the activity or project.

## **Conclusions**

In Part 1 of this book we have reflected on INASP's own learning journey in uncovering what makes for effective capacity development, how technology-enhanced learning supports this outcome and how we have tracked and incorporated learning on the impact of our work into our ongoing and subsequent work. There is little doubt that on this journey INASP has benefitted immensely in its own understanding of TECD as well as its organisational capacity to be more effective practitioners of this art – whether in the Global South or the North. In so doing we have discovered that while a TECD approach may be tailored to a particular context, the same principles apply regardless of where in the world one is situated.

Building on this learning, Part 2 shares the results of our learning, in practice, from over a decade of working with researchers, academics, journal editors, librarians, and educators in low- and middle-income countries through a range of capacity development approaches. In Part 3 we provide guidelines for other organisations that wish to enhance their capacity development approaches with technology. To complement Parts 2 and 3, a case studies section provides some examples drawn from across INASP's work.

## **References**



and Ottawa, Canada: The MIT Press, International Development Research Centre. https://www.idrc.ca/en/book/making-open-development-inclusive-lessons-idrc-research.






## PART 2 **Examining some common assumptions about TEL in capacity development**

*Joanna Wild, Femi Nzegwu, Elizabeth Clery, Ravi Murugesan, Andy Nobes, Veronika Schaeffler*

## **Introduction**

We discussed in Part 1 that for INASP, technology‐enhanced learning in capacity development refers to the effective and creative use of digital technology to optimise the learning experience. INASP's journey with technology‐enhanced learning began over a decade ago with the launch of our first online course. Since then, its role within INASP's programmes has grown. By the time the Covid‐19 pandemic began disrupting face‐to‐face teaching and learning around the world in 2020, technology‐enhanced learning was already a key component of INASP's approaches to capacity development. As a result of the learning and capacity that we had developed, we were able to adapt our own work readily and to advise and support partners as they embarked on this journey (Wild et al. 2020).

In Part 2 we assess the degree to which INASP's approaches, at each level of capacity development, align or not with the existing literature on the use of technology-enhanced learning in developing country contexts – recognising, of course, that there is no one homogenous 'developing country' context. Our objectives are threefold:


In undertaking this study we have reviewed data gathered from a decade (2011 to 2020) of technology‐enhanced capacity development activities with partners in the Global South. While our analysis was conducted over the course of 2021, primarily drawing on data from the start of INASP's TEL work in 2011 to the end of 2020, we have also included some examples from 2021.

## **Methodology**

This study was undertaken primarily through a systematic review of a decade of INASP's quantitative and qualitative data, along with data produced by others. The data synthesis followed a number of broad principles, to ensure its quality and relevance. Both primary and secondary data were analysed.

**Primary data:** We reviewed quantitative administrative and feedback data collected from INASP's TECD activities. Relevant data, including feedback on MOOCs, specialist courses, journal clubs, self-study tutorials and mentoring data, were analysed where a sample of at least 100 participants was available for a particular activity (or across a number of iterations of an activity). To ensure the analysis was robust, the same approach was adopted for analysis of sub-groups of interest, defined by gender or country for instance. Only differences that were based on samples of at least 100 participants and that were statistically significant at the 95% level are reported in this study.
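The reporting rule described above – a minimum sub-group sample of 100 and significance at the 95% level – can be sketched as a simple check. This is an illustrative sketch only, not INASP's actual analysis code: the function name, thresholds as parameters, and the use of a two-proportion z-test are our own assumptions about how such a rule might be operationalised.

```python
import math

def reportable_difference(x1, n1, x2, n2, alpha=0.05, min_n=100):
    """Illustrative sketch (not INASP's code): apply the reporting rule
    that both sub-group samples reach min_n participants and that the
    difference in proportions is significant at the (1 - alpha) level,
    using a two-sided two-proportion z-test."""
    if n1 < min_n or n2 < min_n:
        return False  # sample too small to report under the rule
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return False  # no variation, no testable difference
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value < alpha

# Example: 52/100 vs 38/100 completions is reportable;
# 30/50 vs 20/50 is not, because the samples are below 100.
```

A pooled two-proportion z-test is only one reasonable choice here; the source does not specify which test was used, and a chi-squared or Fisher's exact test would serve the same purpose.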

Qualitative analysis was undertaken of open-ended data provided as feedback on INASP's TECD activities, as well as of semi-structured interviews undertaken with participants and INASP's partners. The analysis involved a combination of inductive and deductive approaches, depending on the extent to which existing theories were prevalent within the literature.

Additional primary qualitative data were generated through 21 semi-structured interviews with individuals in partner organisations who were recipients of the courses or led their implementation locally. These interviews captured participants' learning experiences on the courses as well as their experiences of implementing research capacity initiatives locally, the short- and long-term outcomes, and their views on generic barriers and facilitators in this area.

**Secondary data:** To locate INASP's data and learning within a broader context, we reviewed academic literature published between 2010 and 2020 which considered TEL, online learning or e-learning either exclusively or partially within the Global South – or for a specific country therein. Sources were identified through searches in JSTOR (https://about.jstor.org/) to identify items published in English between 2010 and 2020. Search terms included 'Global South' combined with 'online learning', 'technology-enhanced learning', 'e-learning', 'MOOCs', 'mentoring' and 'journal clubs'. These different searches yielded thousands of items, often including duplicates, many of which were rapidly eliminated as they were not primarily research-based. The references sections of relevant items were reviewed to identify further relevant literature (a snowballing approach). Overall, more than 30 items were identified as relevant for inclusion; for each item, a standard pro-forma was completed recording details of the study's methodology, its findings relating to our research questions of interest, and any caveats that would need to be borne in mind when incorporating them in our research synthesis.

Grey literature produced by INASP was also reviewed including published and internal reports and blogs, with more than 20 items identified as relevant for inclusion.

In drawing data from these different sources together, consideration was given to the consistency and strength of findings, the nature and extent of any divergences between data relating to INASP and the Global South more generally and where further data collection would be beneficial to strengthen or contextualise the current answers to our research questions of interest.

**Ethical considerations:** Two ethical considerations, in particular, were considered in this study:


## **An overview of INASP's technology‐enhanced capacity development approaches**

Over the past decade (2011 to 2020), INASP has developed and implemented a wide range of technology‐enhanced capacity development activities in the Global South, mainly in Africa, Asia and Latin America. These activities have occurred at the individual, organisational and ecosystem level of capacity development, albeit with a greater focus on the individual and organisational levels. Below we name case study examples from INASP's work at each level of capacity development. They have been developed to showcase examples of INASP's organisational learning on good practice within the TECD landscape. The full case studies are provided in Part 4 of this book.

## *Individual level*

In order to develop capacity at the individual level – that is, for individuals wanting to grow their skills and knowledge to perform better or to effect change within their work contexts – INASP has delivered various courses and other activities. In that process, we have engaged with the four entry points of change identified in Part 1. We have collaboratively defined the desired learning outcomes, built foundational knowledge, strengthened skills and facilitated the mastering of competencies, as demonstrated below.

• **Massive Open Online Courses (MOOCs).** INASP's MOOCs are courses lasting six to eight weeks aimed at early career researchers. They focus on research writing (either in the Sciences or Social Sciences), with the aim of helping participants overcome barriers to publishing their research. Undertaken regularly since 2015, INASP's MOOCs have now been run on 16 occasions, by INASP (in English) and in partnership with Latindex (in Spanish). You can find out more about this approach in the following case studies in Part 4 of the book:


• **Specialist courses.** INASP has delivered a wide range of specialist online courses. Courses that have been run on multiple occasions have focused on Research Writing in Environmental Health, Editorial Processes, Copyright and Licencing, Critical Thinking, and Monitoring and Evaluation of Electronic Research Use (MEERU). These courses have been aimed at specific (often pre‐selected) audiences in the Global South; for instance, the Editorial Processes course is aimed at journal editors, while the Copyright and Licencing Course and MEERU are aimed at librarians. You can find out more about this approach in the following case study in Part 4 of the book:

Case study 10: Scheduling of INASP's editorial processes for journal editors course

• **Online mentoring.** A number of INASP's TEL activities have involved an element of online mentoring, with participants either being mentored individually or in groups. Online mentoring has been undertaken with different purposes in mind. Mentoring undertaken as part of the AuthorAID project involved mentors helping mentees with specific research tasks they had identified (e.g. discussing how to best edit a research paper). Online mentoring delivered as part of the Transforming Employability for Social Change in East Africa (TESCEA) project9 focused on extending and embedding the learning from online workshops training participants to become trainers in course re‐design. You can find out more about this approach in the following case studies in Part 4 of the book:

Case study 6: Online mentoring in TESCEA: the value of peer‐to‐peer interaction

Case study 15: Online mentoring in AuthorAID: Providing facilitated and unfacilitated mentoring to a global network

• **Journal clubs.** In 2019, INASP set up a small number of online journal clubs for researchers in the Global South, organised by academic discipline. These were designed to complement the existing support offered by the AuthorAID project,10 providing early career researchers with an opportunity to meet virtually to discuss and analyse specific research papers, keep abreast of new research and

<sup>9</sup> https://www.inasp.info/project/transforming‐employability‐social‐change‐east‐africa‐tescea

<sup>10</sup> https://www.authoraid.info/en/

improve their understanding of the style and structure of leading research papers in their field. The online journal clubs interact using text‐based software such as WhatsApp, often supplemented by live video sessions. You can find out more about this approach in the following case studies in Part 4 of the book:

Case study 3: Selecting the most suitable platforms to facilitate journal club participation

Case study 5: Participants value international interaction at journal clubs

• **Self‐study tutorials.** Self‐study tutorials were set up in 2020, partly in response to the Covid‐19 pandemic, and involved modifying existing INASP courses so that they could be accessed online at any time and with no facilitation required. By the end of 2021, they covered 'Critical Thinking', 'Basics of Grant Proposal Writing', 'Search Strategies', and 'Facilitating Events and Courses in an Online World'. Self‐study tutorials are undertaken on an entirely self‐paced basis and are aimed at a broad range of researchers and academics. The learning design of these tutorials ensures a good balance of different types of engaging activities. It supports self‐reflection on one's learning, although there is no opportunity for learning exchange with peers through an online discussion forum. You can find out more about this approach in the following case studies in Part 4 of the book:

Case study 7: Critical thinking: the impact of light facilitation on outcomes

Case study 9: Self‐study tutorials give participants flexibility around timing

• **Digital snippets.** An alternative approach to improving students' critical thinking skills has been the use of digital 'snippets'. These snippets are one‐pagers provided in several formats (PDF, jpg, Word) that lecturers can share with their students to encourage critical thought and discussion. This approach was developed in Sierra Leone in 2020 as part of the Assuring Quality Higher Education in Sierra Leone (AQHEd‐SL) project11 to allow lecturers to provide students with a learning opportunity during Covid‐19-related university closures. It responded to a situation where lecturers and students could only stay connected through mobile phones while working from home, with an internet connection characterised by low bandwidth and frequent disruptions. A taskforce transferred the contents of INASP's critical thinking course into digital snippets and distributed them among lecturers. While keeping as many engaging activities as possible, some elements such as videos had to be replaced. Students received the snippets through a WhatsApp group set up for their class, and the lecturer and students could discuss the learning and ask questions through the group. You can find out more about this approach in the following case study in Part 4 of the book:

<sup>11</sup> https://www.inasp.info/project/aqhed‐sl

Case study 11: Developing teaching of critical thinking in Sierra Leone: responding to a local and changing context

## *Organisational level*

A number of INASP's technology‐enhanced capacity development activities have aimed to support the institutionalisation of training for researchers and teaching staff in higher education. These capacity development activities are characterised by a longer, targeted collaboration between INASP and organisational teams and are enabled through a blend of face‐to‐face and online approaches:

• **Workshops involving blended learning.** As part of the AuthorAID embedding initiative12 from 2013 to 2018, INASP supported partner organisations in four countries to institutionalise (or 'embed') training on research writing, for example, by taking on and running the AuthorAID online courses for their institution's researchers. The support from INASP occasionally took the form of bespoke training‐of‐trainer workshops. These workshops have involved face‐to‐face and online components, with considerable variation in the purposes and sequencing of these different elements. You can find out more about this approach in the following case studies in Part 4 of the book:

Case study 12: Embedding workshop in Vietnam: considering the cultural context when scheduling training

Case study 13: Developing a bespoke embedding programme in Colombo, Sri Lanka

Case study 15: Handing over the INASP research writing course to the Open University of Tanzania

• **Face‐to‐face workshops involving digital tools.** As part of the TESCEA programme, INASP has delivered a number of face‐to‐face workshops aimed at 'training of trainers', i.e. training 'multipliers' to train others within their institutions in a particular area. These workshops have included the use of digital tools, such as Google Classroom, Mentimeter and Learning Designer, to enable participants to undertake specific activities within a face‐to‐face setting. You can find out more about this approach in the following case study in Part 4 of the book:

Case study 4: How the use of digital tools in face‐to‐face workshops can enhance learning

<sup>12</sup> https://www.inasp.info/project/embedding‐research‐writing‐african‐and‐asian‐institutions

## *Ecosystems level*

Developing capacity and influencing change within an ecosystem usually requires a group of individuals from a range of organisations to come together, convene around a shared goal or goals, and formulate a set of shared objectives. This will typically address one or more collective challenges beyond any single organisation, requiring efforts from multiple organisations and individuals. Such efforts often require relationships to be forged and initiatives to be built that involve not only those within the research or higher education sector (e.g. researchers and academics, or even officials from their regulatory bodies), but also businesses, civil society, and officials from other policy and governmental agencies. They will often span a country and sometimes more than one.

The Transforming Employability for Social Change in East Africa (TESCEA) partnership deliberately took an ecosystems approach. It involved diverse actors in defining and driving change and developing capacity, using initiatives anchored in individual universities to try to influence outwards, and using technology to support the process in various ways.

• **The Transforming Employability for Social Change in East Africa (TESCEA) partnership.** The TESCEA project sought to achieve two overall goals. First, to effect change within individual universities, to improve the relevance and quality of undergraduate teaching and learning, and to embed a 'teaching for critical thinking' approach. Second, to use the process of change within four universities to test and refine an approach to change that could be scaled more widely, and that could be distilled and documented to support other universities and academics seeking to effect similar changes in their teaching and learning. Each university convened representatives from national employer organisations, government, local business leaders and community representatives to guide the process, engage them in discussions about higher education, and influence their thinking. The partnership brought together the expertise of academics, faculty developers, social entrepreneurs, and those with experience in designing and facilitating change processes. The first phase has generated a set of collectively developed outputs and, more importantly, a collective vision about teaching and learning, and a community of practitioners who want to take it further. Technology has been woven into the whole process: from online courses and learning spaces developed for faculty, through digital tools like Ideaflip and Mentimeter integrated into online working sessions, to universities convening stakeholders in online discussions on Zoom and other platforms. You can find out more about this approach in the following case study in Part 4 of the book:

Case study 16: The Transforming Employability for Social Change in East Africa partnership

In the next section we review common assumptions about TEL in capacity development derived from our reading of the academic literature, and explore these in relation to our own data.

## **Testing existing assumptions about TEL in the Global South**

In this section, we identify six key assumptions prevalent in the academic literature around technology‐enhanced learning in the Global South. We then review these assumptions in light of the evidence generated from INASP's activities. Much of INASP's evidence aligns with the literature. However, here and there we encounter areas of divergence. We present this not to suggest that INASP's evidence is superior or infallible. Rather, we seek to contribute to the systematic building of evidence from a range of stakeholders in differing contexts in the hope of enabling greater levels of learning and greater quality in the capacity development outcomes that we all seek. The main assumptions from the literature are that:


Against this backdrop we review the supporting evidence from INASP's technology‐enhanced capacity development interventions. Each assumption is considered in the context of the following five‐point framework:


## *Assumption 1: TEL is associated with a lower level of equity in participation and outcomes, compared to face‐to‐face learning*

## **What is the current evidence?**

There is a consensus that TEL in the Global South is not equally accessible to all sections of society and that the primary users tend to be highly educated males in urban areas, in countries with more advanced technical infrastructure and support for online learning. Groups that receive particular attention in the literature regarding the limited access they face include:


• **People with lower levels of education and technical skills:** Typical MOOC users in 'developing countries tend to be young, well‐educated males who are trying to advance in their jobs' (Christensen et al. 2013). Poor digital literacy skills also pose a barrier, even where good internet connectivity is available (Liyanagunawardena et al. 2013), although, in practice, many who do participate in TEL possess only basic or intermediate ICT skills (Garrido et al. 2016).

• **People with disabilities:** People with disabilities in the Global South also face considerable accessibility barriers (King et al. 2018). These include difficulties using online learning due to sensory impairments, and physical disabilities that make travelling to locations where the internet is available problematic.

The assumption is that the lack of equity in access experienced by these groups leads, overall, to a lack of equity in completion rates. The evidence cited to justify this view primarily relates to MOOC completion. Despite the accessibility issues outlined above, it is consistently stated that participants from developing countries are more likely to complete MOOCs than those from developed countries, for whom completion rates of at most 10% are commonly cited. However, there is little agreement about the magnitude of this difference. A MOOC completion rate of 80% in Colombia, the Philippines and South Africa (Garrido et al. 2016) is often cited in support of this argument. We should, however, treat it with caution: the 80% reflects the proportion of MOOC users who had completed at least one of all the MOOCs they had taken – it does not reflect the completion rate of a single MOOC. Meanwhile, research with medical students in Egypt found that only 18% had completed a specific MOOC (Aboshady et al. 2015). Clearly the evidence from the literature remains incomplete and is not conclusive.
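The distinction matters because the two metrics can diverge sharply even when computed from the same data. The sketch below uses invented enrolment records (the names and figures are purely illustrative, not INASP or Garrido et al. data) to compute both the per-enrolment completion rate and the share of users who completed at least one of the MOOCs they took.

```python
# Invented enrolment records: each row is one user's attempt at one MOOC,
# recording whether they completed it. Illustrative data only.
records = [
    ("ana", "mooc_a", True),
    ("ana", "mooc_b", False),
    ("ben", "mooc_a", False),
    ("ben", "mooc_b", False),
    ("cai", "mooc_b", True),
]

# Metric 1: per-enrolment completion rate (completions / enrolments).
per_enrolment = sum(done for _, _, done in records) / len(records)

# Metric 2: share of users who completed at least one of the MOOCs they
# took -- the kind of figure the 80% statistic actually reflects.
completed_any = {}
for user, _, done in records:
    completed_any[user] = completed_any.get(user, False) or done
share_any = sum(completed_any.values()) / len(completed_any)

print(f"per-enrolment completion rate:     {per_enrolment:.0%}")  # 40%
print(f"users completing at least one MOOC: {share_any:.0%}")     # 67%
```

With these five enrolments, two of five attempts are completed (40%), yet two of three users completed at least one course (67%) – the same records, two very different headline numbers.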

## **What is INASP's approach?**

One of the aims of INASP's programme over the last decade has been to use blended and online learning to increase participation from under‐represented groups in capacity development, and to provide new opportunities for them to develop desired professional skills. As we shall see in subsequent sections, each aspect of INASP's technology‐enhanced capacity development is designed to remove or minimise the barriers faced by the groups detailed above.

## **Learning from INASP's technology‐enhanced capacity development approaches**

#### *Gender*

Across most of INASP's TECD approaches, women are generally slightly under‐represented; overall, around 4 in 10 participants are typically female, but the gender split varies between activities. Women were most markedly under‐represented in the Editorial Processes for Journal Editors specialist course run in 2019 and 2020 (37%) and in journal clubs run in 2019 (38%). In contrast, in the Spanish‐language, Latin‐America‐focused Research Writing MOOCs run in collaboration with Latindex, women were consistently over‐represented, comprising between 54% and 58% of participants across the three MOOCs in this category. Inevitably, these proportions will partly reflect the proportion of women in particular job roles in the Global South or in specific regions of the South. In addition, contrary to the assumption, the gender balance in INASP's online courses is generally better than that seen in INASP's face‐to‐face workshops (Wild et al. 2016).

Once enrolled in INASP's online courses, women are nevertheless more likely to complete them compared with men. The evaluation of AuthorAID's two 'mini‐MOOCs' and the first three Research Writing MOOCs reported a high overall completion rate of 47–68%, with a marginally higher rate of 49–72% for female participants (Hrdlickova and Dooley 2017). This trend is replicated across many of INASP's TECD activities, and remains the case when INASP's partner organisations institutionalise and run these online courses themselves. For the Scientific Research Writing course run by the Open University of Tanzania (OUT) in 2016, women comprised 36% of starters and 44% of completers (Kigadye 2017). For a course run in 2017 by Thai Nguyen University (TNU) in Vietnam, women made up 69% of starters and 77% of completers (Murugesan 2017).

However, women frequently report fewer positive outcomes than men who participated in the same TEL activities, although this pattern is not universal. Data for the early MOOCs and Research Writing in Environmental Health specialist course showed that women were less likely to publish than men following an AuthorAID course (38% published, compared with 44% of men) (Hrdlickova and Dooley 2017). Female participants in online journal clubs also reported fewer positive outcomes than men when it came to publishing.

A different picture, however, emerged in self‐reported confidence in MOOCs. In 2020, AuthorAID MOOC participants were asked to rate their confidence (on a scale of 1 to 5) for the course's learning objectives, both at the start and end of the course. An analysis of the percentage change in confidence for more than 3000 participants across three MOOCs shows that women reported a slightly higher increase in confidence on average than men. The numbers of women and men were roughly equal in this set.
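For readers interested in how such an analysis can be run, the sketch below shows one minimal way to compute the average percentage change in self-rated confidence by group. The records and field names are invented for illustration; they are not INASP's survey schema or results.

```python
# Hypothetical pre/post confidence ratings on a 1-5 scale.
# Field names and values are invented, not INASP survey data.
responses = [
    {"gender": "F", "pre": 2, "post": 4},
    {"gender": "F", "pre": 3, "post": 4},
    {"gender": "M", "pre": 3, "post": 4},
    {"gender": "M", "pre": 4, "post": 5},
]

def mean_pct_change(rows):
    """Average percentage change in confidence across a group of rows."""
    changes = [(r["post"] - r["pre"]) / r["pre"] * 100 for r in rows]
    return sum(changes) / len(changes)

for gender in ("F", "M"):
    group = [r for r in responses if r["gender"] == gender]
    print(gender, f"{mean_pct_change(group):.1f}%")
```

Averaging each participant's own percentage change (rather than comparing group means of raw scores) keeps the comparison fair when the two groups start from different baseline confidence levels.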

#### *Level of education and technical skills*

Participants with limited research experience, for instance those who do not have an undergraduate degree, tend to be a minority in INASP's capacity development activities. This is unsurprising given that many of these activities are aimed at students or professionals with a certain prior level of knowledge or experience.

Reflecting the wider literature, those new to research are also consistently marginally less likely to complete INASP's MOOCs. While 31% of those who began INASP's two 2020 Scientific Research Writing MOOCs13 were new to research, this was the case for only 24% of those who completed them. This might also be explained by the fact that these MOOCs were aimed at researchers in the process of writing a research paper. Participants not directly involved in a research project may have found these courses did not meet their needs, leading to lower completion rates.

#### *People with a disability*

Between 1% and 2% of participants in INASP's TECD activities report having a disability (this characteristic has been measured since 2018). While these proportions are low, they remain stable across INASP's activities, suggesting that individuals with disabilities who access TECD opportunities may already have dealt with any accessibility issues they face; that is, they may no longer experience their disability as a barrier to their engagement in the course. The learning design and implementation of INASP's online courses consider participants' visual or hearing disabilities. For example, we include text equivalents of images with instructional content (beyond having alt text for simple images) and provide audio or video content transcripts. While we cannot claim to design for or address every type of disability, we strive to be as inclusive as possible and to understand and design for other kinds of disabilities as we become aware of our participants' needs.

<sup>13</sup> 'Research Writing in the Sciences' MOOC and 'Research and Proposal Writing in the Sciences' MOOC

#### *Country*

INASP's technology‐enhanced capacity development activities have been undertaken by individuals from a wide range of countries. Between 2015 and 2020, INASP's MOOCs were completed by participants from 147 different countries. Recent self‐study tutorials in Search Strategies, Grant Proposals and Critical Thinking attracted participants from 78, 49 and 48 countries, respectively. INASP's research indicates that its online courses, particularly MOOCs, have reached significant numbers of participants across many countries, including those affected by conflict, those that are harder to reach, as well as some refugee academics. Countries include Sierra Leone, Somalia, Yemen, Iraq, Afghanistan, Syria and Palestine (Harle and Bottomley 2018; Wild et al. 2020).

About 10 countries, mainly in Africa, consistently supply more than 50% of successful course participants across a range of activities. While the largest proportion tends to come from Nigeria, the following countries are also consistently well‐represented: Kenya, Uganda, Ghana, Ethiopia and Nepal. With the exception of Ghana, these countries are not necessarily those identified in the literature as being particularly accessible for online learning; inevitably, their prominence in INASP's technology‐enhanced capacity development interventions may also reflect INASP's pre‐existing networks and relationships.

Data on completion percentages from MOOCs indicate that the country in which a person resides is a strong determinant of completion and participation rates. Participants from Uganda are considerably more likely to complete Scientific Research Writing MOOCs (with 59% doing so on average) than starters from Sudan (43% complete on average) or Pakistan (36% complete on average), for example (see Tables 1 and 2). Overall, participants from countries in South Asia are less likely (45% on average) to complete an INASP MOOC than African participants (51% on average). The difficulty of reaching non‐completers after the course made it impossible to establish why these differences occur.
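Country-level completion rates of this kind are straightforward to derive from enrolment records. The sketch below tallies invented starter records by country; the names and numbers are illustrative only, not the figures behind Tables 1 and 2.

```python
from collections import defaultdict

# Invented starter records for illustration: (country, completed_course).
starters = [
    ("Uganda", True), ("Uganda", True), ("Uganda", False),
    ("Sudan", True), ("Sudan", False),
    ("Pakistan", True), ("Pakistan", False), ("Pakistan", False),
]

# country -> [completions, starts]
tallies = defaultdict(lambda: [0, 0])
for country, done in starters:
    tallies[country][0] += int(done)
    tallies[country][1] += 1

for country, (done, total) in tallies.items():
    print(f"{country}: {done}/{total} starters completed ({done / total:.0%})")
```

Grouping on the starter records (rather than on completers alone) is what makes the denominator right: each country's rate is completions over everyone from that country who began the course.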

Across the board, there is a clear trend for those groups who are better represented in INASP's technology‐enhanced capacity development activities (men, those with more research experience, and those from certain African countries) to complete activities and report better outcomes (although gender is an exception in this regard). However, the magnitude of differences in participation and outcomes is generally somewhat smaller in our experience than the literature implies.

When it comes to completion levels overall, INASP's experiences appear to support the assumption in the wider literature that these are comparatively higher in the Global South than elsewhere. Completion rates for INASP's different technology‐enhanced capacity development activities vary substantially, which is unsurprising given the diverse nature of the activities delivered and the varied ways in which participants enrol or are selected. However, in many instances, more than half of the participants complete an activity. INASP's nine Scientific Research Writing MOOCs from 2015 to 2020 achieved completion rates of between 40% and 61%, while those delivered in Spanish by Latindex performed slightly better (between 54% and 58%). Some of INASP's specialist online courses, delivered to much smaller groups of participants, achieved even higher completion rates (64% on average for the five rollouts of the Copyright and Licencing course and 79% for MEERU). INASP's self‐study tutorials do, however, achieve lower completion rates overall, between 25% and 50%.

For more information on this assumption, see case study 1, which discusses how in‐built flexibility in INASP's MOOCs may lead to increased participation.

## **Conclusions and implications**

The picture painted by INASP's data regarding the accessibility of technology‐enhanced capacity development in the Global South is somewhat less negative than that presented in the wider literature. Delivering courses online appears to have enabled the participation of individuals from a large range of countries and improved equity of access for women. However, while various explanations have been advanced for the latter trend (e.g. women being less inhibited in accessing online learning or having more time to do so), it is unclear what the relative contribution of each has been in practice. Moreover, differences in outcomes, with under‐represented groups tending to report fewer positive outcomes, mean that equity remains an issue of concern.

Assessing the outcomes of technology‐enhanced capacity development for women researchers is particularly complex. While attendance and completion rates in online courses are high, and changes in confidence are mixed, the most concrete outcomes (in the form of actual research publications and other outputs from the research writing courses) are significantly lower than those of men. Whether this is an outcome that good‐quality technology‐enhanced capacity development can influence is open to debate, as the low research output of women researchers is a well‐documented global problem. For example, INASP's Voice of Early Career Researchers survey found that women were less engaged in research output activities than men (a finding reflected widely in the broader literature), as well as having fewer research opportunities and feeling less certain about their future as researchers (Nobes and Warne 2021).

In an independent systematic review published in the journal *Computers & Education*, the AuthorAID research writing MOOC was said to have 'exceeded aims' in enabling 'the inclusion and development of large numbers of female, regional, Global South participants with family responsibilities who had been noticeably under-represented in previous face to face programs' (Lambert 2020).

To improve both tangible and less readily discernible outcomes from technology‐enhanced capacity development interventions, those involved in its design and delivery clearly need to focus on understanding the views and experiences of these under‐represented groups – as well as those who chose not, or were unable, to access these interventions in the first place.

Points to consider when designing technology‐enhanced capacity development interventions:


## *Assumption 2: Technical and technological barriers to online participation are widespread and reduce the efficacy of learning*

## **What is the current evidence?**

As discussed in the previous section, two factors limiting the accessibility of technology‐enhanced capacity development interventions are individual technical skills and the available technological infrastructure within countries in the Global South. These factors have been shown to limit participation in and outcomes from TEL. Research with medical students in Egypt found that the second main reason for their non‐completion of MOOCs was slow internet speed, cited by 54% (Aboshady et al. 2015). In Sri Lanka, 95% of MOOC participants indicated that they faced several challenges, with infrastructure cited by 58% and a lack of the necessary skills by 51% (Warusavitarana et al. 2014). Evaluations of online mentoring programmes in Malaysia and Kenya similarly identified problems with technological infrastructure, with slow internet connections inhibiting conversations between mentors and mentees (Ligadu and Anthony 2015) and being inadequate for two‐way video streaming (Obura et al. 2011).

## **What is INASP's approach?**

INASP's technology‐enhanced capacity development interventions are designed for low‐bandwidth environments to ensure accessibility (Harle and Bottomley 2018). Here, 'low‐bandwidth' is used in a broad sense to refer not just to low‐speed internet connections but also to connections with a low or limited internet data allowance. Low‐bandwidth internet connections are common in developing countries (Ahmed 2020). The Scientific Research Writing MOOCs – which attract the greatest number of learners – are also mobile‐optimised, recognising that, in many low‐ and middle‐income countries, mobile access is the most prevalent route to the internet (Wild et al. 2020).

INASP's choice of the Moodle LMS also reflects the popularity of, and therefore relative familiarity with, this application around the world, including in developing countries (Hill 2021). The open‐source technologies behind Moodle have been well‐established for more than 20 years. Many of our learners may have encountered a Moodle‐based site before, and we provide related technical support in our courses, which we describe in detail in Part 3 (in the section 'Delivery and sustainability').

The main unit of learning on INASP Moodle is an online course, and all of INASP's courses are characterised by a largely text-based and interactive approach to delivering content and learning activities. In the learning resources, elements such as multiple‐choice questions, interactive exercises, and 'reveal/hide' content are frequently embedded. Videos are occasionally used to enhance the content – these are often optional learning resources, and a transcript is provided for those who might struggle with connectivity. INASP has used different tools to create interactive, online learning resources, but has also provided offline versions of these resources for learners who struggle with internet connection. Learning resources are licensed under Creative Commons licence14 CC‐BY‐SA, allowing for downloading and reuse. Learning activities where the learners actively contribute something original take the form of discussion forums, individual or group assignments, written reflections, online resource sharing and curation. Learners can download their work and have full ownership of it.

In the wake of the Covid‐19 pandemic and the ensuing 'emergency remote teaching', there is perhaps a widespread assumption that online courses consist largely of live sessions using tools such as Zoom, Google Meet or Microsoft Teams. However, since 2011, INASP's courses have followed a largely asynchronous model, allowing learners to study when convenient to them. Learners who take INASP's courses come from all over the world, tend to be busy professionals (just one-fifth of participants in a recent MOOC said they could make time for the course 'during office hours'), and may have low‐bandwidth connections as they are based in the Global South. For these reasons, the asynchronous model has proved appropriate for INASP's course delivery.

## **Learning from INASP's technology‐enhanced capacity development approaches**

Across a wide range of INASP's TECD activities conducted over the past three years, between 20% and 40% of participants consistently identify 'poor internet connections' as a challenge, while around 20% identify 'unreliable electricity' as being a problem. While access to the internet is commonly perceived as a potential challenge for online learners in developing countries, electricity is key to getting online in the first place. In 2015, the electrification rate (population with electricity) in developing countries was 68.3% compared to 99.5% in the developed world, and in Africa it was only 37.8% (Table 3 in Ouedraogo 2017).

However, internet access and power outages might not be the biggest challenges for the audiences who take INASP's courses. A survey sent to those who did not complete the Research Writing in the Sciences MOOC (April to May 2021) found that the majority of respondents cited time management issues and other work issues (70%) as a central challenge in completing the course, whilst only 30% mentioned internet problems and 20% mentioned family commitments. Electricity problems were the fourth most mentioned problem, cited by 15% of respondents. This suggests that internet and electricity problems, whilst clearly an issue, are not the primary reason that learners drop out of online courses. The top suggestions to deal with this challenge included providing a self‐paced version of the course, extending deadlines, increasing the regularity of online courses, and providing offline materials.

<sup>14</sup> https://creativecommons.org/licenses/

Participants do not frequently report problems related to the specific technological platforms and tools used by INASP to deliver activities. Ninety-one per cent of participants across three iterations of the Editorial Processes course reported that 'navigation through the online platform, resources and activities' was easy; 86% and 81% respectively indicated this in relation to the second and third Spanish Research Writing MOOCs run by Latindex. While it cannot be assumed that such low levels of difficulty would be replicated among non‐completers or non‐participants, this does suggest that problems relating to an online mode of delivery stem more from broader infrastructure issues affecting some participants' internet access generally than from usage issues relating to the technology and platforms selected by INASP. Problems with specific technological tools and platforms were more common in capacity development activities aimed at institutionalising online training within partner institutions, despite participants having higher technical skills, with some being IT specialists. Twenty-one per cent of participants in the AuthorAID Online Course Toolkit Programme, designed to teach participants to deliver INASP's Research Writing course themselves, reported that navigation through the toolkit programme was difficult – suggesting that this presents problems for approximately one in five people. Meanwhile, qualitative feedback highlighted issues with downloading course materials and accessing the Moodle site as the most common specific problems.

While the different technological elements employed in the face‐to‐face TESCEA Training-of-Trainer workshops were generally viewed very positively (as discussed in the next section), technical problems with Google Classrooms in particular were identified in qualitative interviews with users. The greater prevalence of technical problems in this subset of activities may reflect the fact that more technological skills are required from participants in courses aimed at 'embedding' training at an institutional level. Participants are being trained to deliver online courses themselves – so a considerable degree of familiarisation with technical tools is required.

For more information about the evidence around this assumption, see case study 2, about designing MOOCs for a low-bandwidth environment, and case study 3, about selecting the most suitable platforms to facilitate journal club participation.

#### **Conclusions and implications**

Issues relating to an online mode of delivery remain problematic for a sizeable minority of participants in INASP's TECD activities – and may be dissuading an even greater proportion from undertaking these courses in the first place. However, these issues primarily relate to regional, national or individual infrastructure (involving internet availability and the reliability of electricity supplies), rather than the specific platforms and tools used by INASP. This means they are less easily solved at an organisational level, although the design of INASP's TECD activities for a low-bandwidth environment will hopefully have already reduced the numbers experiencing such problems. Inevitably, training individuals and organisations in the Global South to deliver online training themselves requires a reasonable degree of technical skill – and INASP's evidence suggests that achieving this may require more support than would necessarily have been anticipated.

Points to consider when designing technology‐enhanced capacity development interventions:


## *Assumption 3: Combining online and face‐to‐face approaches is more beneficial than training conducted exclusively online*

## **What is the current evidence?**

Combined with a range of other factors, the difficulties associated with an online mode of delivery in the Global South described in the previous section have led to wide support within the literature for 'blended' learning, comprising both online and face‐to‐face elements. In Ghana, for example, students were found to prefer mixed‐mode and web‐supplemented courses to web‐dependent and fully online courses (Tagoe 2012), alongside a widespread negative perception of online education (Kotoua et al. 2015). E‐mentoring specifically is often compared with face‐to‐face mentoring with the implicit assumption that it is a poor alternative to the traditional model (Tinoco‐Giraldo et al. 2020). More generally, a number of studies have shown that blended or hybrid learning models are more effective than programmes offered exclusively online (Trines 2018; Palvia et al. 2018). This has been attributed to the fact that such an approach combines the respective advantages of face‐to‐face and online learning. It has been argued that 'the blend is optimal because it combines the value of the face‐to‐face interaction with teacher and peers, which is constrained in time and place, with the online environment, which is self‐paced and less time‐constrained' (Laurillard and Kennedy 2017).

## **What is INASP's approach?**

As discussed earlier, in TECD activities aimed at enabling online training to run locally, INASP has combined face‐to‐face and online approaches. The nature, timing and sequencing of the face‐to‐face and online elements have been highly varied, informed by the specific learning outcomes as well as the needs and attitudes of participants in the relevant capacity development activities.

## **Learning from INASP's technology‐enhanced capacity development approaches**

Data from participants in INASP's TECD interventions challenge the assumption in the literature that online learning is viewed as an inferior alternative to face‐to‐face or blended learning in the Global South. Seventy-one per cent of participants in the second and third Spanish Research Writing MOOCs run by Latindex thought that online courses can be 'a good alternative' to face‐to‐face courses; a similar proportion of participants (67%) in the second and third iterations of the Copyright and Licencing course expressed this view.

Both approaches were valued for different reasons. Those individuals who had attended both face‐to‐face and online workshops as part of the TESCEA programme tended to prefer the face-to-face version in terms of the interaction which it facilitated with others, and the online version in terms of its flexibility in scheduling their learning. Qualitative data collected by INASP indicate a growing acceptance of online training (Schaeffler 2019). This appears to have accelerated as a result of the Covid‐19 crisis which compelled education and learning to move temporarily online.

INASP has found that combining face‐to‐face and online modes of training can have specific advantages that increase the effectiveness of both elements. In several instances, capacity development activities aimed at embedding online training locally involved an initial online session; this was viewed as highly valuable and increased participant motivation, focus and skills required for the subsequent face‐to‐face session. The addition of an initial online session to capacity development activities in Tanzania led to a noticeable difference in the quality of engagement in the face‐to‐face workshop which followed, with full participation and high levels of enthusiasm, compared with its previous iteration. This approach also enabled the face‐to‐face training to be completed more quickly (Murugesan 2019).

On the other hand, scheduling an online session after a face‐to‐face workshop can add value to the course by building on the knowledge and skills gained in the face‐to‐face session. This was the case for training undertaken at Thai Nguyen University (TNU) in Vietnam (Murugesan 2017). In Vietnam, face-to-face contact is considered essential for building rapport, so online learning had to follow a face-to-face session rather than precede it. This approach benefitted both participants' motivation and knowledge retention.

In a similar vein, participants in INASP's online MEERU course were encouraged to create face‐to‐face learning and support groups in their institutions – and there is some evidence that this approach was helpful in generating ideas, getting advice and contributing to their overall learning outcomes (Wild et al. 2016).

While the use of technological elements within face‐to‐face workshops was regarded as adding value, these tools were primarily viewed as beneficial because they acquainted participants with a variety of technologies that they could ultimately use in their own work. This was found to be the case in INASP's face‐to‐face workshops with TESCEA participants, which employed a variety of digital tools including Google Classrooms, Mentimeter and the open‐source Learning Designer tool.

More information about the examples discussed here can be found in case study 4, which discusses how the use of digital tools in face‐to‐face workshops can enhance learning.

## **Conclusions and implications**

INASP's data indicate a growing acceptance of TEL as an alternative to face‐to‐face learning, helped further by the Covid pandemic. In INASP's experience, combining online and face‐ to‐face elements within different capacity development activities has clear benefits, not only by allowing for the use of a wider range of learning approaches, but also by enabling these elements to complement each other and strengthen the learning gained overall. In this sense, face‐to‐face learning and TEL should not be conceptualised as being in competition with each other, as they are often depicted in the literature. When it comes to acquainting participants with new technological tools and their practical use in training, whether online or face‐to‐face, both approaches are clearly invaluable and complementary in terms of generating interest, enthusiasm and understanding. This combined approach is of particular benefit for activities aimed at institutionalising online capacity development approaches within organisations.

Points to consider when designing technology‐enhanced capacity development interventions:


## *Assumption 4: Online learning does not support participants' interactions well, which has a negative impact on learning outcomes*

## **What is the current evidence?**

Interaction (between course participants and with facilitators or trainers) is consistently presented as an aspect of learning where online learning in the Global South (and elsewhere) is outperformed by face‐to‐face and blended learning. A common criticism levelled at online learning is the fact that it is, 'an inferior, isolated, anonymous learning experience … that cannot compete with the real‐world, tangible and touchable learning environments in which it is much easier for students and teachers to interact and exchange ideas' (Trines 2018).

Such viewpoints are indeed reflected in the experiences reported by some participants in online learning in the Global South. Among medical students in Egypt who had completed online courses, 84% were satisfied with the overall experience, but reported much lower levels of satisfaction regarding student–instructor (32%) and student–student (20%) interaction. Similarly, a study comparing online and face‐to‐face versions of a short course found that students only evaluated the two courses differently with regard to the quality of instructional interactions, which were rated significantly lower for the online course (referenced in Laurillard and Kennedy 2017).

We also find that the quality of interaction in online learning can have a significant impact on its outcomes. Interaction emerged as a strong predictor of learner satisfaction in an online mentoring programme in Sri Lanka, explaining 50% of the variance (a greater proportion than was explained by any other factor), leading the author to conclude that if participants are satisfied with online interaction, they are more likely to be satisfied with the learning experience as a whole (Gunawardena et al. 2012). The degree of interaction, then, remains a key factor to be addressed when designing and delivering successful online capacity development interventions in the Global South.

## **What is INASP's approach?**

INASP's online capacity development interventions have employed a diverse range of approaches to facilitate interaction between participants, facilitators and course leaders. These draw on a range of technological tools and span both synchronous and asynchronous approaches, including:


Underpinning these approaches has been INASP's desire to use its courses to create 'communities of learners', by involving guest facilitators, developing structured and facilitated forums, and encouraging participants to interact with each other as they study (Harle and Bottomley 2018). For MOOCs specifically, selected facilitators are experts in their research field, which is intended to facilitate a higher level of discussion in the forums (Nobes and Murugesan 2017).

## **Learning from INASP's technology‐enhanced capacity development approaches**

INASP's varying approaches to encourage interaction between participants and their facilitators and peers are viewed positively by users. At least 9 in 10 participants in recent Scientific Research Writing MOOCs agreed that 'the level of support and guidance offered in this course was enough to complete the course successfully' (a similar proportion of participants in the Social Science Research Writing MOOC also stated this).

Participants were also positive about the feedback they received from facilitators. Ninety-five per cent of participants in the Editorial Processes course rated the feedback on their action plans provided by facilitators as useful. Across the four recent Scientific Research Writing MOOCs, 30–43% gave the highest rating of 5 for the usefulness of the feedback provided via peer assessments, whilst 72–80% regarded it as either 'very' or 'somewhat' useful. Participants also report that giving feedback (not only receiving it) is a useful learning experience. Clearly the presence of a facilitator on a course is appreciated and seen as valuable.

Further evidence in support of the value of facilitation is seen in the self-study tutorials. Here completion rates are around half of those reported in moderated or facilitated courses. We have found that even light levels of moderation or facilitation (e.g. sending weekly announcements about upcoming course activities and reminders about deadlines) have a positive impact on completion rates.

Participants in INASP's capacity development activities aimed at embedding online learning within partner organisations were enthusiastic about the opportunities to interact with those playing similar roles in other institutions. Participants in a capacity development workshop held face‐to‐face at a partner institution in Tanzania in 2016 were keen to keep their online community going, with one stating, that *'networking was really good for me. Now, I have a community, IT team, I don't feel like I'm alone'* (Murugesan and Wild 2016).

However, there was a demand for an even greater degree of interaction among participants in INASP's AuthorAID Online Course Toolkit Programme. This may be because participants are relatively isolated within their own institutions, often tasked to act as 'trailblazers'. As one participant in the course stated, *'I was the only person chanced to go through this training from my institution.'*

It seems that face‐to‐face workshops that bring potential trainers from different institutions together can create a sense of community and a willingness to connect further online. This may be more difficult to achieve if the training is fully online from the outset, as in the case of the AuthorAID Online Course Toolkit Programme.

Trainers trained as part of the TESCEA project also had a chance to get to know each other in a cross‐institutional face‐to‐face training context before continuing their learning fully online. Among participants in the first two rounds of an online course for institutional multipliers, 14 of the 16 identified facilitator support as one of the successes of the course, while 10 singled out the Zoom drop‐in clinics. Feedback on the successes of the workshops described how, 'the live sessions (drop-in clinics) were very helpful in further clarification', 'facilitators were so close and following up every day and supportive', and 'it was so engaging with very active mentors who were available full time'.

There is no evidence that outcomes from INASP's online courses were less positive when interaction was limited, but some evidence that the presence of interaction may secure higher completion rates, perhaps by motivating participants or sustaining their interest (for more information, see case study 7).

Participants in INASP's online courses viewed the opportunity to interact with individuals from a wide range of countries as a significant benefit of an online approach; this was particularly evident for MOOCs and online journal clubs. However, where participants faced country- or institution‐specific barriers or challenges, bespoke training with more limited participation was found to be more effective, indicating the need to carefully assess the pros and cons of wider participation on a case-by-case basis.

Moreover, INASP's research suggests that interaction should not be viewed as compulsory for every online capacity development intervention. Research in Uganda and Ethiopia concluded that, while social interaction plays a role in group learning initiatives, it could also negatively influence learners' likelihood of accessing these initiatives. Some women learners could be put off joining a learning initiative by a lack of self‐confidence and previous adverse experiences of social interaction in an online space, such as negative responses from men (Schaeffler et al. 2020). While INASP has found that participants who post on MOOC forums are statistically more likely to complete a course (Murugesan et al. 2017), there are always 'quiet' learners who engage only with the learning resources and activities such as quizzes and written assignments.

In a similar vein, research has found that learners may be accustomed to a 'knowledge transmission' approach to teaching and learning, which can lead them to shy away from expressing their opinions in discussion forums. This finding was reflected in INASP's own experience of running several online courses with Ethiopian participants, where it was challenging to make discussions (in English) lively (Schaeffler 2019). As a corollary, it has also been noted that online learning can benefit people who feel constrained in face‐to‐face interactions, whether through shyness or a need for more time to think (Nobes et al. 2018).

For more background to the discussions in this section, see case study 5, about how participants value international interaction at journal clubs; case study 6, about online mentoring in the TESCEA project and the value of peer‐to‐peer interaction; case study 7, about critical thinking and the impact of light facilitation on outcomes; and case study 8, about approaches to encouraging interaction in Research Writing MOOCs.

## **Conclusions and implications**

Participants in INASP's online courses are more positive about the opportunities available for interaction than we would have anticipated, given the wider literature. This is probably due to INASP's accumulated learning and, in turn, its conscious efforts to address known challenges around interaction through a careful design process (you can read more about this topic in Part 3 under 'Design decisions'). Opportunities to interact with peers in particular – be they participants in other institutions or countries – are highly valued.

There is some evidence that social interaction positively impacts completion rates – perhaps by motivating participants to continue engaging as a peer group. This suggests some value in identifying a 'minimum' level of interaction for online courses that satisfies participants and sustains their interest (and the likelihood of completion). However, INASP's data suggest that we should not simply view interaction as a compulsory element of every single intervention, but should try to understand the needs and preferences of individual participants better.

Points to consider when designing technology‐enhanced capacity development interventions:


## *Assumption 5: Timetabling learning online, whether synchronous or asynchronous, is challenging*

## **What is the current evidence?**

The literature on online learning in the Global South argues that its efficacy is adversely affected by time pressures. Issues with time vary depending on whether online activity is undertaken on a synchronous or asynchronous basis. When a synchronous approach is used, it is difficult for participants from different countries (with conflicting time zones) to make themselves available to participate at the designated times. Atkins et al. (2016), for example, report that the organisation of online journal clubs in the Global South was challenging as the 16 partners involved were in different time zones, university schedules differed, and other commitments of experts, students, and audiences varied.

However, when online learning is designed to be asynchronous and flexible, it is not confined to a specific block of time within a participant's work or study schedule, meaning they often end up fitting it in around other, more time‐dependent activities. This has been experienced both as an advantage and as a challenge. In Sri Lanka, 86% of participants viewed MOOCs as a great innovation, enabling learning without time zone and locality restrictions (Warusavitarana et al. 2014). Online education in Ghana is seen as having the advantage that students do not need to resign from their jobs or arrange childcare in order to take courses.

On the other hand, learning at home was often found to be hectic, requiring self‐discipline (Czerniewicz et al. 2020; Smith and Watchorn 2020; Kotoua et al. 2015). This is reported among participants in Colombia, the Philippines and South Africa, who cite a lack of time as the biggest reason for not taking MOOCs, with 50% identifying this factor (Garrido et al. 2016). Similarly, 77% of medical students in Egypt cite a 'lack of time' as their reason for not completing a MOOC (Aboshady et al. 2015). Workload was identified as a major reason for poor participation in a mentoring programme for online tutors in Sri Lanka (Gunawardena et al. 2012). In other words, whilst the flexibility regarding when to undertake asynchronous online activities is appreciated, it does not always mitigate other external pressures or commitments.

## **What is INASP's approach?**

INASP's online capacity development activities have primarily been delivered using an asynchronous format, with participants completing them in their own time (often within the context of a broader weekly timetable with associated deadlines). INASP's online courses typically involve three to five hours of learning per week. However, in some instances, looser scheduling has been used. Recent self‐study tutorials have been conducted on an entirely self‐paced basis, although participants are advised to complete each tutorial within a certain period of time – for example, within a month.

Learners get automated reminders if they are not making progress as per the recommended schedule. A minority of INASP's online capacity development activities have involved a synchronous approach; online journal clubs frequently supplemented text‐based discussions with live video, using platforms such as Zoom. Certain aspects of MOOCs and specialist courses have also been conducted synchronously, and this was also the case for live drop‐in clinics in the online training of trainers conducted as part of the TESCEA programme.

## **Learning from INASP's technology‐enhanced capacity development**

The difficulties encountered with INASP's synchronous online capacity development activities broadly reflect those reported in the literature. Time was highlighted by participants in online journal clubs as one of the major challenges facing them. Time zone differences meant sessions were not always scheduled at convenient times and could be difficult to fit around participants' work, study and family schedules. Participants identified problems such as 'the local time difference in scheduled meetings', the 'timing of meeting to accommodate those from other parts of the world', and that 'several times I missed online webinars due to busy schedules'.

For INASP's online capacity development activities undertaken asynchronously, the majority of participants appeared to appreciate the greater freedom to schedule their own learning. This seems particularly true for the self‐study tutorials, which allow participants to work fully at their own pace. For example, 86% of participants in the Search Strategies self‐study tutorial found that being able to work on the tutorial in their own time helped them, while just 14% found the lack of a schedule or deadlines a problem.

Where INASP's online capacity development activities involved tighter deadlines for completing them, the majority still viewed the given scheduling favourably, but a higher percentage of participants reported time management as a challenge.

Across three Research Writing MOOCs from 2018 to 2020, run over a slightly longer period, 62% to 68% of participants said having enough time helped them to complete the course, whilst 8% to 15% identified not having enough time as a challenge.

This appreciation of scheduling freedom reflects the finding reported in relation to preferences for online versus face‐to‐face learning discussed under Assumption 3. Those who had attended both face‐to‐face and online TESCEA workshops tended to favour the latter because of the more flexible scheduling involved. This aspect of asynchronous learning meant that participants could learn at their own pace, which was regarded as enhancing the quality of their learning. As one TESCEA trainer stated, 'the learning became flexible because I could move in my own pace. There was no one to push me and tell me … you have to finish this and that so I made sure I understood each and every process'.

This was seen as an advantage for online mentoring, with one mentee recalling, 'it became easy for me to express whatever I had in mind; or for whatever I felt might change my lesson plan I had more time to express it to my mentors'.

For some, the scheduling of online activities within the academic year was also viewed as influencing their effectiveness. Among participants in INASP's capacity development activities aimed at embedding online learning within partner organisations, there was a clear demand for these activities to be scheduled during less busy times of the academic year, namely when students are away and there are no examinations in progress.

In a similar vein, a participant in the pilot of the MEERU course stated, 'since most participants are from higher education institutions or from research institutions, I would suggest you make it available during the holidays. This can be done by grouping countries in the same region such as East and Central Africa together or Southern Africa'.

The Covid‐19 pandemic has had specific impacts on the dimension of 'time' for participants in INASP's recent technology‐enhanced capacity development activities, although the nature and direction of this impact has not been uniform.

These contrasting views as expressed by a number of participants in the MEERU and critical thinking courses are captured below. While some felt that the ability to work from home as a result of the pandemic gave them greater flexibility, others indicated that it negatively affected their concentration or that they became side‐tracked by other domestic chores.


In one of the MOOCs offered in 2020, the contrasting statements entered by two of the participants allude to the disparate effects of the pandemic on individuals:


INASP's evaluation has found that both synchronous and asynchronous communication approaches are useful in online engagement and that it is important to decide what is most appropriate for the needs of the audience (Wild et al. 2020). As we will see in the next section, the notion of trade‐offs is appropriate. For example, the problems experienced in relation to synchronous timing for online journal clubs (e.g. different time zones) are arguably outweighed by the reported benefits associated with real‐time social interaction.

For more information about the examples shared in this section, see case study 9, about self‐study tutorials giving participants flexibility around timing, and case study 10, about scheduling of INASP's Editorial Processes for Journal Editors course.

## **Conclusions and implications**

Data from INASP's online capacity development interventions confirm the time-related challenges of synchronous and asynchronous modes of delivery discussed in the wider literature. The picture in relation to INASP's online activities undertaken on an asynchronous basis is, on balance, rather more positive. While a minority would prefer tighter scheduling, the majority appreciate (and recognise the advantages of) the flexibility created by INASP's looser approach to scheduling, particularly when fewer deadlines are involved. However, this does not mean that this is always the most desirable approach to scheduling, as activities delivered on a synchronous basis have been shown to have other benefits, primarily in terms of interaction. Overall, it is important that we remain alert to the particular advantages and difficulties associated with specific approaches to scheduling in specific contexts, and balance these against other benefits when considering which approach is most appropriate. Clearly, the Covid‐19 pandemic impacted individuals' time in very different ways. It is therefore important that the types of activities initiated and evaluated in 2020 are also repeated and evaluated in a 'post‐Covid' context, particularly with regard to the value ascribed to time and scheduling.

Points to consider when designing technology‐enhanced capacity development interventions:


## *Assumption 6: The use of a 'one-size-fits-all' approach to designing TEL is inherently problematic as it does not take account of context*

## **What is the current evidence?**

The literature is highly critical of the use of a 'one-size-fits-all' approach in the development and implementation of online learning in the Global South, particularly where courses are developed in the North for participants in the Global South alongside a (majority) Northern audience. It is argued that such an approach neglects local cultural factors that may affect participation in, attitudes towards and capacity for online learning. Along similar lines, it has been argued that the adoption of western, English‐language online courses in developing countries tends to perpetuate the hegemony of western countries in global education (Trines 2018). It is further believed that the uneven flow of data from North to South has the potential to limit the development of local academic cultures (King et al. 2018).

To avoid such a scenario and to improve outcomes for online learning in the Global South, it is suggested that emphasis should be placed on understanding local contexts before designing e‐learning systems (Joshua et al. 2015). There is limited evidence of tools being developed to support such an approach. However, one such tool has been proposed for South Africa, with its 11 different languages, nine provinces and varied cultural practices (Joshua et al. 2015).

## **What is INASP's approach?**

Many of INASP's technology‐enhanced capacity development activities have been highly bespoke in design and delivery, aimed at specific audiences, sometimes within individual institutions or countries, and frequently with comparable levels of experience or seniority. Moreover, INASP has recognised the importance of ensuring that local contexts inform the design and delivery of its TECD and has developed a Scoping and Design Decisions Tool (see Part 3), to ensure these factors inform design decisions from the outset.

## **Learning from INASP's technology‐enhanced capacity development activities**

INASP's research has identified the importance of country context, social norms and values when developing and implementing TECD activities. This led INASP to develop a scoping and design tool to ensure that these factors inform design decisions. The design tool includes, for example, scoping questions around equitable and sustainable access, including barriers to learning. The underlying logic is that answers to these questions help inform the design of, or adjustments to, the TECD approach.

INASP's approach to developing the AuthorAID research writing MOOCs could arguably be framed as a 'one-size-fits-all' approach, as the courses are designed to support early-career researchers from all over the Global South. Supporting such a broad range of students across different continents without applying too rigid an approach is a unique challenge – the data on participation in AuthorAID MOOCs (see Tables A to F) reveal a wide variety of completion rates, which may be related to different levels of knowledge and experience, aptitude for online learning, level of English language, and gender differences.

INASP has attempted to level the playing field with a mixture of approaches, including a foundation of low-bandwidth, simple-language content to support those with the poorest internet infrastructure and the most basic levels of English. Additionally, the MOOCs provide different levels of support, from structured forum discussions and 'check your understanding' quizzes to more advanced peer-assessment exercises, with the aim of providing varied and equitable support to the broad range of students. The different options (levels) for course completion reflect INASP's commitment to ensuring a level of equity within what is clearly a non-homogenous context – for example, where participants are at differing ability levels, the option of a 'pass' grade is available to those who choose not to complete the peer-assessment activities (which are required for a 'merit' grade).

As noted in Assumption 4, offering capacity development activities to researchers from a wide range of countries has been viewed very positively by participants, who value opportunities to network and learn from those working and learning in different national contexts. However, for capacity development activities aimed at embedding online training within INASP's partner organisations, the opposite was found to be the case. At a capacity development workshop in Tanzania in 2016, the four institutions participating were found to be at different stages of developing online courses, with different levels of interest, technical infrastructure and experience (Murugesan and Wild 2016). Even though the opportunity to meet colleagues from other institutions was appreciated by the workshop participants, INASP's evaluation of the workshop highlighted the difficulty of delivering a 'one-size-fits-all' approach when it comes to embedding (Nzegwu 2018). As a result of this experience, later capacity development workshops were frequently delivered to participants from just one institution.

As documented in Assumption 1, INASP has taken local technical infrastructure and practices into account, in terms of the use of different platforms and applications, when deciding how to run its technology‐enhanced capacity development activities. For the MOOCs in Research Writing and the journal clubs, this involved considering the infrastructure and current practices across the Global South as a whole, and in Africa specifically. In its development of approaches to teaching critical thinking in Sierra Leone, rather than implementing a pre-existing online course, INASP undertook a scoping exercise with a local taskforce to see what was feasible in the local context (see case study 11 for further details).

For more on the examples mentioned in this section, see case study 11, about developing teaching of critical thinking in Sierra Leone and responding to a local and changing context; case study 12, about embedding a research writing workshop in Vietnam: considering the cultural context when scheduling training; and case study 13, about developing a bespoke online embedding programme in Colombo, Sri Lanka.

## **Conclusions and implications**

INASP's experiences endorse the view in the literature that a 'one-size-fits-all' approach to developing TECD activities in the Global South, or indeed anywhere else, is problematic. This is primarily because such an approach does not take account of the considerable diversity that exists between groups of participants and countries in their attitudes to and aptitude for online learning, and in its local accessibility. For activities designed to address the embedding of online training in particular, it is vital to consider the local (even institution‐specific) context and culture, to ensure that the planned learning is relevant, appropriate and addresses potential barriers. For our Research Writing MOOCs, we have trialled creating bespoke 'online classrooms' for groups of researchers from specific countries or institutions. Moving forward, it is vital that we continue to consider the local contexts when designing TECD activities, as a failure to do so will inevitably limit their effectiveness.

Points to consider when designing technology‐enhanced capacity development interventions:

• What are the attitudes to and experiences of online learning of the intended participants and what are the technical skills and infrastructure they have available to them? Is the existence of diversity in any of these areas likely to lead to problems in the delivery of technology‐enhanced capacity development activities?


## **Discussion: to what extent do these common assumptions hold?**

In this part of the book we have provided a detailed overview of INASP's technology‐enhanced capacity development approaches as they have been applied at the individual, organisational and ecosystem levels of operation. We also looked at the outcomes of INASP's work and how its experiences of delivering TECD activities in this manner compare to outcomes reported in the wider literature.

There are clear areas of alignment but also distinct areas of difference between the two. Broadly speaking, INASP's experience aligns with the literature in finding a lower level of equity in participation and outcomes in online activities compared with face‐to‐face learning. However, in INASP's experience the magnitude of these differences is generally somewhat smaller than the literature implies, and in some respects, such as gender, INASP's experience runs counter to the prevailing literature.

INASP's evidence suggests that technological barriers to online participation are widespread and reduce the efficacy of learning; however, these issues relate more to infrastructure at a regional, national or individual level (involving internet availability and the reliability of electricity supplies) than to the technology used by INASP. Indeed, we have found that INASP's learned approaches ameliorate the impact of these broader infrastructural constraints. This is not to minimise the issues of inequity that exist for many users, but we have evidence that the technological solutions advanced by INASP have improved outcomes for users.

INASP's experience also does not align with the literature's claim that TEL does not adequately support participants' interaction: participants in INASP's technology‐enhanced capacity development activities are more positive about the opportunities available for interaction than the wider literature suggests. Feedback on INASP's online TECD activities undertaken asynchronously is, on balance, more positive than the literature implies. While a minority do prefer tighter scheduling, the majority appreciate (and recognise the advantages of) the flexibility created by INASP's looser approach to scheduling, particularly when fewer deadlines are involved.
Finally, INASP's experiences clearly endorse the view in the literature that a 'one-size-fits-all' approach to developing TECD activities of any kind does not afford the levels of tailored learning and support that truly assist the development of capacity in the longer term.

In Part 3 of the book, we examine the principles that contribute to, indeed underpin, the learning that INASP has gained about what actually enables technology‐enhanced capacity development to be truly successful.

## *Data tables*

The source data for the tables below are from the 11 AuthorAID English‐language MOOCs that INASP ran between 2015 and 2020.
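Since the tables report two derived metrics per country, a minimal sketch may help clarify how they are defined: the completion rate is completers as a share of enrolments, and the women-completers figure is women as a share of completers. The field names and sample data below are illustrative assumptions, not INASP's actual data schema.

```python
# Hypothetical sketch of the two metrics in Tables A-F, computed from
# per-participant enrolment records. 'country', 'gender' and 'completed'
# are assumed field names for illustration only.
from collections import defaultdict

def completion_stats(records):
    """Return {country: (completion rate %, women's share of completers %)}."""
    enrolled = defaultdict(int)
    completed = defaultdict(int)
    women_completed = defaultdict(int)
    for r in records:
        enrolled[r["country"]] += 1
        if r["completed"]:
            completed[r["country"]] += 1
            if r["gender"] == "female":
                women_completed[r["country"]] += 1
    return {
        c: (
            round(100 * completed[c] / enrolled[c], 1),
            round(100 * women_completed[c] / completed[c], 1) if completed[c] else 0.0,
        )
        for c in enrolled
    }

sample = [
    {"country": "Kenya", "gender": "female", "completed": True},
    {"country": "Kenya", "gender": "male", "completed": False},
    {"country": "Kenya", "gender": "male", "completed": True},
    {"country": "Kenya", "gender": "female", "completed": True},
]
print(completion_stats(sample))  # {'Kenya': (75.0, 66.7)}
```

Note that the second metric is a share of completers, not of all enrolments, so a high value can coexist with a low overall completion rate.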


#### **Table A: Completion rates and % of women completers by country – Africa (top 10)**

#### **Table B: Completion rates and % of women completers by country – South Asia**



#### **Table C: Completion rates and % of women completers by country – South East Asia**

#### **Table D: Completion rates and % of women completers by country – Latin America (includes both Spanish-language and English-language courses)**


#### **Table E: Completion rates and % of women completers by country – Central and East Asia**




#### **Table F: Completion rates and % of women completers by country – Middle East**

## **References**




## PART 3 **A step-by-step guide to technology-enhanced capacity development**

*Joanna Wild, Ravi Murugesan, Veronika Schaeffler*

## **Introduction**

In Part 3, we provide a comprehensive overview of the underpinning principles and factors that contribute to successful technology-enhanced capacity development (TECD) initiatives. Details of the process of conceptualisation, planning, trialling, delivery, evaluation, learning, reflection, iteration and more learning are shared to demonstrate the thoroughness of the process that underpins such initiatives. Our specific objectives are three-fold:


Many of our TECD interventions (as outlined in Part 2) have been primarily online and blended courses delivered in various formats. Our experience suggests that, often, there is a lack of understanding and appreciation among stakeholders in a range of sectors regarding what it takes to design, develop and deliver successful online capacity development interventions in the form of online courses or e-workshops. Hence there is rarely sufficient time and funding dedicated to this task. In this part of the book, we aim to outline, step-by-step, the process involved and thereby provide organisations that wish to use technology to enhance their capacity development with the requisite guidance and tools to do it well from the start. Details of INASP's work and the specific initiatives it has helped to deliver are provided in support of the principles we identify and the key assertions we make – all from the perspective of a decade of detailed learning.

## **Quality assurance (QA)**

Before looking at the steps in developing TECD interventions, we consider the topic of quality assurance in digital and online learning. The 2016 Commonwealth of Learning report on the uptake of quality assurance policies, processes and guidelines at open universities in Commonwealth countries concluded that 'the importance of quality assurance in open and distance learning is widely understood; however, there are countries and institutions where progress has been slower than might be expected and where policy-makers, managers and practitioners are still in need of advice and support' (Latchem 2016).

Quality assurance encompasses:


Only a limited number of studies have addressed issues related to quality assurance in online learning (Yeung et al. 2019) and, until recently, no study had synthesised the various approaches to quality measurement in online learning (Esfijani 2018). Similarly, research studies have rarely questioned the prevalent double standard in teaching whereby 'online learning must be more rigorously evaluated than in-person teaching' (T. Bates 2022). Since its beginnings, online learning has been stigmatised as being of lower value and quality compared to face-to-face teaching and learning (Shelton 2011). In response, many national bodies and organisations have established standards to describe the quality of online and blended learning (Yeung et al. 2019; Perris and Mohee 2020; T. Bates 2022; Abdous 2009; Shelton 2011; Esfijani 2018). The Sloan Consortium (SC) and Quality Matters (QM) are two organisations providing specifications of standards that are frequently referenced in assessing the quality of online education (Perris and Mohee 2020), with QM covering schools, higher education and professional development (T. Bates 2022). Bates points out that all of the QA standards and frameworks were established before the Covid pandemic, for mostly asynchronous online learning, and that there is a lack of standards or frameworks for teaching delivered mostly through synchronous video-streaming sessions.

Yeung, Zhou, and Armatas (2019) conducted a comprehensive review of benchmarks for institutions in higher education and organised them into five categories: overarching, MOOCs, online programme, online course, and blended learning programme. While there are many benchmarks for online programmes and online courses, the same is not true for blended learning. There are, however, QA frameworks, guidelines and rubrics for blended learning. Recently, the Quality Assurance Rubric for Blended Learning was developed by the Commonwealth of Learning (COL) as part of the Partnership for Enhanced Blended Learning (PEBL) project. The rubric was developed in collaboration with partner universities from East Africa (Kenya, Rwanda, Tanzania, Uganda) and is available under a Creative Commons licence for others to reuse. The rubric helps to examine an institution's course design process in areas such as orientation, content, instructional design, use of technology and student experience (Perris and Mohee 2020).

Many professional bodies and institutions have developed QA frameworks, guidelines, procedures and indicators for achieving compliance, accountability and/or self-improvement (Abdous 2009) and to serve as measurement of quality in online learning (Esfijani 2018). Abdous argues that depending on institutional values, some institutions adhere to predetermined standards to ensure accountability while others follow less prescriptive guidelines for self-improvement. Nevertheless, a meta-synthesis (Esfijani 2018) of QA indicators and measurement approaches found that quality indicators and criteria were mostly developed in and for western contexts.

Overall, quality assurance tends to focus more on aspects such as inputs (e.g. learner enrolments, readiness for online learning, learner-teacher ratio) and resources (e.g. course content, course structure) than on outputs and outcomes (e.g. retention and completion rates, learner satisfaction) which are more difficult to measure. Additionally, quality is often measured from a single perspective (e.g. that of students) rather than from the perspectives of all stakeholders included (e.g. teachers, learning designers, employers, communities) (Esfijani 2018).

At INASP we align our quality assurance approach with the process-oriented lifecycle model proposed by Abdous (2009). In this model, QA is perceived as a dynamic, iterative and ongoing process that can be embedded within the online learning development process to ensure a good learning experience for learners. Rather than striving for compliance with a set of predetermined standards, we have followed less prescriptive guidelines and recommendations available in the TEL academic and practitioner literature, for example, the SECTIONS model by Bates (A.W. Bates 2015) and Wright's quality assurance factors (Wright 2011). In this process, we have systematically captured our learning and contextual evidence for what works and what doesn't, and on this basis, improved our practice and revised our set of guidelines. We focus on all the aspects of quality assurance discussed by Esfijani (inputs, resources, processes, outputs and outcomes) – as presented in Part 2 of this book – and we carefully choose our indicators to measure what is most important to us given our institutional values and principles and the context in which we work. For example, the gender and equity aspects of our work, which are written into our institutional strategy, are reflected in our indicators across these aspects of QA, as illustrated in Table 7.



We do not consider quality merely from a single perspective, for example, a student perspective only. We include other stakeholders' perspectives early on in the process (see phase 1 'Scoping') – facilitators, learning designers, critical friends, and senior management. Who we bring into the QA process depends on the context. Context matters immensely for quality assurance: no standards, guidelines and recommendations can be applied off-the-shelf without taking account of the context. What each institution measures and how it prioritises what it measures will depend on many factors: institutional values, broader environment, accountability to the funders etc. In Part 1 we demonstrated how our approach to TECD is grounded in the Principles of Digital Development, aligned with our organisational principles and values and expressed in the Learning and Capacity Development Framework that guides our practice. In Part 3 we build on existing frameworks, guidelines, and checklists by refining, expanding and complementing them with our own learning from doing TECD in 'developing country' contexts. Our guidelines and recommendations are embedded in each phase of the development of a TECD initiative, as described in the following sections of this book. At the end of each phase, we list a few high-level guidelines for TECD practitioners.

## **Five ingredients of success**

Over the course of INASP's work, we have identified five key ingredients that make TECD more successful. Ensuring staff commitment to these ingredients of success is part of our quality management. We describe them below.

## *1. Meet people where they are*

Successful TECD interventions begin by recognising what participants need and what motivates them to enrol in and complete an online learning opportunity. What is the added value of using technology from the learner perspective? How will it benefit them and allow them to accomplish things that might not be possible otherwise? Consider the relevance of gaining new knowledge and skills for the learners, assess the level of challenge (too little and they may be bored; too much and they may be overwhelmed) and identify motivational factors and opportunities for validation of new knowledge. The questions to explore should include aspects such as learning habits, access to technology and the internet, and digital literacy skills.

Creating a complete learner's profile is best done through a scoping activity at the very beginning of a capacity development project. Some scoping techniques are described in the following section.

Meeting people where they are also means designing for 'hyflex' learning, that is, providing a flexible course structure so that learners can choose how to engage in learning: online or offline, in class or remotely, in real time or asynchronously (Milman et al. 2020). Such a 'hyflex' approach mitigates the known barriers to learning: access to devices and the internet, varying proficiency in the language of instruction, and timetabling. These barriers are discussed in detail later on in Part 3.

## *2. Design for online*

Online learning is often seen as simply changing the medium of delivery without changing pedagogy. Often, there is a temptation to think, 'We've got this workshop with all the slides, resources and activities; why don't we put these materials online and scale up the learning opportunity to reach more people at a lower cost?'. The truth, however, is that classroom dynamics in online spaces are very different from those in face-to-face settings. Online learning requires more explicit and thought-through design, considering the lack of visual cues, slower responses, and delayed feedback. However, online spaces come with unique advantages: learners can learn at their own pace and re-visit recorded discussions, and activities can be designed to encourage the immediate application of what is learned in the workplace context.

There are many theoretical frameworks and pedagogical models for online learning. Beetham and Sharpe (2019) translate different learning theories to digital design principles; this is a great resource to guide the design of learning outcomes, learning activities, and feedback and assessment. Other authors, for example, Laurillard (2002; 2012) and Bates (2015), offer guidance and models for assessing media affordances and selecting digital technologies to support various types of teaching and learning activities. Laurillard's model is covered in 'Design – Guiding frameworks' in the phase 3 section.

## *3. Be inclusive*

Being inclusive is a recognition of context and the differentials in learners' experiences, their familiarity with digital technology, and their access to it. Online learning offers an opportunity to connect with harder-to-reach populations and locations. For example, INASP's AuthorAID research writing courses are accessed by people in countries and regions affected by conflict or unrest and with populations of displaced people. We have seen that online courses tend to have an encouraging gender balance, as evidenced in Part 2 under Assumption 1. People with caring responsibilities may find it easier to attend courses that don't require travel and have flexible timings. A scoping activity (see phase 1 section 'Scoping') will help to define the benefits and constraints of using technology and digital tools, which can then be addressed in the learning design. Providing offline options in an online course, for example, via downloadable resources, will benefit learners with patchy internet connections and those who may be travelling from remote locations. It is also important to consider users who are differently abled, for example, through the choice of images, colours, and fonts; compatibility with screen-reading technology; and the use of alt text for images.

## *4. Enable social interaction*

A key learning from our MOOCs is the value of social interaction for the success of online learning. Much of this interaction is catalysed by course facilitation. MOOCs often have thousands of participants. However, usually there are only a few facilitators or teachers, who mark their presence through videos and, to a lesser extent, through generic contributions to discussion spaces. We have found it possible to increase the facilitation presence in our MOOCs by drawing on a community of volunteers (Nobes and Murugesan 2017; Murugesan et al. 2017). Guest facilitators play a crucial role in our online courses, providing expert advice and moderating discussions to engage participants.

Social interaction between learners should be encouraged in online spaces so that, while they may be learning as dispersed individuals, they feel connected and part of a wider community, which can lead to better performance (Graff 2006).

Garrison et al. (1999) and Salmon (2013) offer models to guide learning designers in building social interaction opportunities in their courses. We discuss Garrison's model in 'Moderation, facilitation and technical support' in the phase 6 section. Well-planned peer-learning activities (such as providing rubrics for peer assessment of writing pieces) and forum discussions can aid social learning and knowledge co-construction.

being in a group of people, even though you don't see each other, you can ask them a question… we were given some task of writing […] and then we marked each other's papers and came up with recommendations – that was a great experience I had. (MOOC participant)

## *5. Co-design for sustainability*

Principles of sustainability and local ownership (see the Principles for Digital Development in Part 1) should underpin any TECD interventions, ensuring that we support lasting change and that capacity development can continue once the lead (foreign) organisation is no longer involved. Plans for sustainability should be developed at the start of the project, in close collaboration with partners. A rigorous scoping activity will reveal what is possible in a given context, which can then help tailor support to particular needs in a manner that enables sustainable ways of working.

For example, many of our previously facilitated courses have been converted into self-study online tutorials and can be accessed by anyone, anytime. To enable adaptation and reuse, we have licensed learning materials under a Creative Commons licence (CC-BY-SA to be specific). It is also important to package learning materials so they can be easily downloaded and customised. This will make it possible for partners to reuse, repurpose, and adapt the course content to suit the specificities of their contexts.

I wanted as many trainers and trainees to benefit [from the training] and not have to grope in the dark like I did to find their research feet. The AuthorAID online course that I implemented at my institution [in 2018] was very successful as I've been getting requests from those who couldn't make the pilot course to run another one. (Dr Zainab Yunusa-Kaltungo, online facilitation course participant)

We have learned that all five points described in this section are essential contributors to the success of online and blended capacity development interventions in the Global South, and all must be given equal attention in the course scoping, development, and delivery processes.

## **Phases of a TECD project**

The ADDIE model – Analyse, Design, Develop, Implement, Evaluate – is one of the most frequently used models to guide the design and development of online courses. The model's success is associated mainly with the high standard of courses produced due to the systematic and thorough process described in the model (A.W. Bates 2015). Bates summarises the criticisms of the model mentioned in the literature, two of which resonate with us in particular: first, it focuses heavily on content design and development, without much attention to the social interaction aspect of course delivery; second, it does not offer enough guidance at each stage on how to make choices and decisions. At INASP, we do not develop large and complex courses that might benefit from the application of prescriptive processes and procedures that the ADDIE model supports. Our TECD interventions are usually between 2 and 6 weeks long, with 3 to 4 learning hours per week. They need to be easily adaptable for a variety of our audiences and contexts, rather than the content being 'locked' in rich multimedia resources that will be difficult to adapt. While we follow six phases in any course development project – (1) scoping, (2) planning, (3) learning design, (4) implementation, (5) piloting and review, (6) delivery and sustainability – an important aspect of quality management is making sure that the phases are accomplished in a collaborative, iterative, and nimble manner, a process that has been called 'entangled pedagogy' (Fawns 2022) (see Part 1).

In the sections below, we describe each phase of a TECD project in detail, providing advice, guidance, resources and tools to support their implementation.

## *Phase 1: Scoping*

Scoping helps to put the learner at the centre of the design process and make informed decisions about key aspects of design and delivery. For example:

• Who are the prospective participants: their learning or career objectives, gender, location, and career level? What level of existing knowledge do they have?

Participants are likely to have different training needs, and the course may need to be tailored to expert as well as novice participants. This will influence the number and types of learning paths that will be designed within the course and help to determine whether a single course is appropriate.

• What equipment and infrastructure do participants have, and what are common barriers to using digital technologies?

The answers will have a bearing on delivery approaches (e.g. the use of video is helpful for learning but may make the content inaccessible to learners with low-bandwidth internet connections) and the balance between synchronous and asynchronous modes of delivery.

• What is the best time for the delivery of a capacity development intervention? Are there periods when participants are unavailable? How much time can they set aside during their working week for learning? Are they more likely to have time in the evening or weekends?

The answers will help to structure learning activities in a way that supports rather than disrupts learning.

• How do prospective participants perceive online learning and how familiar are they with learning in this way? What do they consider 'good' or 'bad' teaching? For example, do they expect to receive training from an expert? Will they be reluctant to engage in team-based problem-solving?

The answers will help balance pedagogical approaches and learning activities to keep participants engaged and give them just the right amount of challenge.

Context matters significantly for the success of any online learning initiative. It's tempting to think that an online course developed for a specific audience in one country can be delivered with no changes to the same target audience in a different country. This may be true sometimes but not always.

In the next section, we introduce scoping techniques and the Scoping and Design Decision Tool that we have developed and refined through the years to help us capture relevant contextual factors in the scoping phase of our projects and make informed design decisions.

## **Scoping techniques**

The selection of the scoping technique depends on several factors:


We describe some techniques that we use regularly during the scoping phase:

**Desk research** is a relatively inexpensive way of collating pre-existing knowledge. In some projects, we applied a context and power analysis lens to learn about the countries or regions we work in and better understand the stakeholders that need to be involved. This analysis contributes to our learning about the context relevant for TECD interventions within these projects. Desk research can help with the initial incorporation of appropriate technology in projects and informs the questions to be asked when planning other scoping techniques, such as surveys or interviews, that will provide deeper insight into contextual factors.

For example, in 2019 we undertook a study that included a meta-analysis of learner feedback from our online courses and a literature review on the learner context in Ethiopia and Uganda (Schaeffler 2020a). This review helped us to include appropriate TECD approaches for specific learner groups in the workplan of a follow-up project.

**Interviews** are a technique that can provide deeper insight into specific aspects of a project. This technique depends on identifying the right interview partners who are available and are prepared to share their insights. Our experience is that about half a day is needed per interview: for preparing and conducting the conversation and analysing the recorded data. This is a time-consuming but effective method to understand contextual factors better.

For example, we carried out interviews to prioritise activities to support the adaptation and reuse of one of our courses in a partner institution in Uganda. We interviewed the key stakeholder groups about their learning needs, attitudes to online learning, ideas to make online learning initiatives sustainable, and the existing institutional capacity for hosting a learning platform. This helped us to co-develop a realistic plan of action.

**Surveys** are helpful if you want to capture opinions or knowledge from a relatively large group of stakeholders. For example, we used this technique to define the course content (knowledge and skills) relevant to early career researchers in Uganda.

**Focus group discussions** can be used to surface knowledge about certain aspects through collective brainstorming and sharing of opinions, knowledge, and ideas. Focus group discussions need to be well-prepared and moderated; the analysis of the collected data also needs time. As with interviews, this is a time-consuming but very effective way to understand contextual factors better. For example, we discussed the preliminary scoping results with staff at our partner organisation in Uganda to verify our understanding of the context before jointly working out design steps for the learning platform. Together we discussed who had responsibility for hosting the learning platform, which stakeholders needed to be involved to make the learning initiative sustainable, and what could feasibly be accomplished within the timeframe and with the available resources.

## **Scoping and Design Decision Tool**

INASP's Scoping and Design Decision Tool consists of two components: scoping areas and design decisions. Together they provide a set of questions to explore and the aspects to consider in each focus area. We share these questions for adaptation and reuse in the 'Resources' section. Below we discuss each component in greater detail.

*Scoping areas* aim to improve understanding of the learner audience and the broader socioeconomic and cultural context as well as institutional factors that may influence learning. *Design decisions* are based on the results of a scoping activity. They will, in turn, inform and guide the learning design for the TECD initiative. The figure below illustrates these interdependencies.

**Figure 3: Steps and interdependencies in the early phases of a TECD project** 

#### *Scoping areas*

A scoping phase gives pointers for what kind of learning and capacity development intervention is most appropriate in the context of a particular project, for example, what kind of learning format (e.g. self-study or facilitated) and mode of delivery (e.g. face-to-face, online, or blended) will be most suitable. The following areas need to be considered in the scoping phase.

#### **Demographics of the target audience**

The age, gender, location (rural or urban areas, regions, countries), educational and professional background, and other relevant demographic details of the target audience should be noted down.

#### **Relevance: content, time, added value**

Any successful course, whether online or in-person, begins by recognising (a) what the participants need to learn, that is, what knowledge, skills, behaviours, interests, and attitudes they will develop through the course; (b) what motivates the participants to learn and why; and (c) what the participants might have to unlearn: the common misconceptions, habits of mind, and practice that will need to be surfaced and challenged before new attitudes and practices can be formed.

#### *Design decisions*

Design decisions are based on the results of the scoping activity. These results inform the choice of (a) course content, format, and mode of delivery; (b) the type of support provided to the learners; (c) opportunities for social interaction; and (d) choice of the learning platform. The decision-making process needs to be iterative and adaptive. With new information and learning, some decisions may need to be revised. If, for example, an early decision was to develop a self-study tutorial because of budgetary constraints, this will have pedagogical implications for the design and, ultimately, for the learners' experience and learning outcomes. It is, therefore, important that a scoping activity precedes any decision about a capacity development intervention. In the following sections we describe the design decisions to be taken after the scoping activity is completed.

First, let's turn to a different question: who needs to participate in the decision-making process? We recommend including one or more subject matter experts, a learning designer, and critical friends from the audience. One of the principles for digital development is 'design with the users' (see Part 1, section 'Four key characteristics'). To adhere to this principle, a thorough scoping activity is needed to collect information from the target audience and key stakeholders. Next, a few (3 to 5) critical friends should be involved in the course design, development, and evaluation process.

#### **Who are 'critical friends'?**

Critical friends are representatives of the target audience. These could be, for example, people involved in developing the capacity of the target group locally, in their institutions, or nationally. Among the critical friends, there should ideally be a couple of people who know the target audience very well through extensive interactions.

The learning designer, subject matter expert(s), and critical friends should work together to make the first design decisions based on the results of the scoping activity.

#### **Course scope and parameters**

The results of the scoping activity will help determine the scope of the capacity development initiative, including its overall goal (e.g. to improve participants' research writing skills) and learning outcomes: what the learners *will know* (e.g. the components of a research paper), *will be able to do* (e.g. draft a research paper adhering to publishing standards), and *will become* (e.g. confident in writing scientific research papers) at the end of the course or learning initiative.


The scoping activity should have revealed the learners' existing knowledge and skills, thus helping to define the appropriate entry point for learning (see Figure 1 in Part 1). The overall goal of a capacity development initiative should be based on what you know about the target group's motivation to learn and the perceived benefits from engaging in the learning experience. Sometimes a learning gap is clear to external stakeholders, but not to the target audience. In that case, prospective learners may first need to be sensitised about the issue. This entry point in our Learning and Capacity Development Framework is called 'building foundational knowledge'. At other times, the target audience will have already gained solid knowledge and skills in an area but require support in transferring what they have learned to a specific problem in their workplace. This entry point in our framework is called 'mastering competencies'.

The next step is to develop a course outline. We recommend thinking in terms of concepts rather than topics. Thinking in terms of topics is a content-centred approach, whereas concepts are learning-centred.

What are concepts? Concepts have a common set of features across situations and contexts. Concepts are universal, timeless, and transferable, unlike topics that are generally bound to a particular context, place, or time. A topic is smaller in scope than a concept: it's a specific instance of a concept (Omingo et al. 2021). An example of a concept could be 'Gender', while topics might include 'Introduction to Gender', 'Gender Analytical Frameworks', and 'Gender Roles'. The graphics below illustrate the difference between conceptual and traditional learning (Omingo et al. 2021: 19).

**Figure 4: Representation of traditional learning. Details are the starting point, and they ultimately lead to the big idea. (Omingo et al. 2021)**


**Figure 5: Representation of conceptual learning. A big idea is the starting point, and it is broken down to bring out the relevant details. (Omingo et al. 2021)**

For each concept in the course, draft one or two learning outcomes and estimate the time learners will likely need to achieve the learning outcomes.


#### **Course format**

The next step is to determine the format of the course. Given the overall goal, content, intended learning outcomes, and what is known about the audience, what format will be most appropriate for the learning initiative? The decision about course format should precede the decision about mode of delivery. At this stage, consider how much structure and support the learners will need to achieve their intended learning outcomes. For example, will they be motivated enough to complete a self-study tutorial, and will this learning format be sufficient to achieve learning outcomes? In Table 8 we list a few options to consider:



#### **Part 3**

#### *A step by step guide to technology-enhanced capacity development*


#### **Mode of delivery**

After establishing what course format will be most suitable for the audience, it will be possible to decide on the mode of delivery by considering the results of the scoping activity. The mode of delivery can include one or more of the following:


It can also be beneficial to offer learners more than one mode of delivery (an approach known as HyFlex learning) so that they can choose how to learn.

The Covid-19 pandemic has changed people's attitudes toward online learning (Alqudah et al. 2021; Stoehr et al. 2021). While Covid-19 has brought about a realisation of the benefits of online learning, there has been a widespread, often uncritical adoption driven by necessity (Czerniewicz 2020; Hodges et al. 2020; Young et al. 2021). For many who are new to online learning, Zoom has dominated and defined their experience. As discussed in Part 2, a blended approach (combining face-to-face and online components) can often be more beneficial to the participants than a course delivered entirely online. Initial face-to-face contact can help create a strong foundation for subsequent interactions online (Nicol et al. 2003; Conrad 2005).


Finally, would the audience benefit more from a Small Private Online Course (SPOC) or a Massive Open Online Course (MOOC)? The answers will differ depending on the demand for the course, the topic (e.g. niche or of broader relevance), the purpose (e.g. building foundational knowledge or encouraging knowledge exchange), and the target audience (e.g. language spoken, professional background).

For example, our Licensing and Negotiation course for librarians was a closed course for a small number of participants due to the topic's sensitive nature: an essential part of the course was exchanging hints and tips for successful negotiation with journal publishers. On the other hand, our research writing MOOCs are open to all due to the massive demand for training on this topic and its relevance to researchers everywhere.

• What are the possibilities, benefits, and drawbacks of offering a course to small audiences or to large audiences, and on a local level or a global level?

#### **Motivation and incentives**

The first question to answer is whose motivation we are talking about. It is important to identify the key role players in the successful delivery of a course. For example, senior management might have a great interest in developing the capacity of their employees in one area, while they might be reluctant to provide training in another area if that could threaten the existing power relations. In the first case, senior management is more likely to encourage participation in a training opportunity and even free up time for the employees to complete it during their working hours. This happened in our Monitoring and Evaluation of Electronic Resource Use (MEERU) course, where we included senior managers in the process of participant enrolment on the course. It resulted in high levels of engagement and completion.

What about the participants' motivations? What does the participant profile from the scoping activity say about what might incentivise them and make their learning worthwhile? In terms of incentives and drivers, we have found that a certificate of completion is highly appreciated as evidence of professional development and is an important driver for course completion. Criteria for receiving a certificate of completion may include passing quizzes, contributing to discussion forums, submitting a final assignment for peer review, and reviewing a course colleague's work. For courses composed of separate modules, digital badges15 can be issued to reward the completion of each module. The Moodle LMS supports the creation of badges at the course level, which can be automatically issued to learners if they meet specific criteria. The Moodle plugins database also has tools to support gamification, which can be defined as 'the application of typical elements of game playing (rules of play, point scoring, competition with others) to other areas of activity, specifically to engage users in problem-solving' (Hall 2014).

<sup>15</sup> https://openbadges.org/
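Completion criteria of the kind listed above are essentially a set of threshold checks. The sketch below is not Moodle code; it is a language-neutral illustration, with invented field names and thresholds, of how certificate eligibility could be expressed.

```python
# Illustrative only: hypothetical certificate-of-completion criteria
# (quizzes passed, forum contributions, final assignment, peer review).
from dataclasses import dataclass

@dataclass
class Participant:
    quizzes_passed: int
    forum_posts: int
    assignment_submitted: bool
    peer_reviews_done: int

def earns_certificate(p, min_quizzes=4, min_posts=3, min_reviews=1):
    """Return True only if every completion criterion is met."""
    return (
        p.quizzes_passed >= min_quizzes
        and p.forum_posts >= min_posts
        and p.assignment_submitted
        and p.peer_reviews_done >= min_reviews
    )

print(earns_certificate(Participant(5, 4, True, 1)))  # meets all criteria
print(earns_certificate(Participant(5, 2, True, 1)))  # too few forum posts
```

In an LMS such as Moodle, checks like these are configured through the platform's own completion and badge settings rather than written by hand; the point here is simply that the criteria should be explicit and checkable.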

Other motivational factors mentioned by our partners include the relevance of the course to the context of one's own country and learning from local scholars; immediate benefit, that is, being able to apply learning immediately in the work context; expectation of a positive change or anticipation of career advancement; and having dedicated time and space to learn on the course.

In some cases, asking for a small participation fee can increase learners' motivation: many of us take something more seriously if we pay for it. At INASP, however, we have made our courses – particularly our MOOCs – accessible to everyone free of charge, and we also issue certificates at no cost.

As for the main demotivating factors that lead to participant dropouts, we have identified tight deadlines (at the course level) and disruptions that individual participants may experience during the course (e.g. a sudden increase in workload, poor internet connectivity, or illness). Also, a long-standing culture of teacher-driven instruction at many institutions can be a barrier to learning in courses that require independent learning and self-discipline. It is important to address learners' motivations at the start of the course by being explicit about what will be expected of them and why.


#### **Learner support**

The next aspect to consider is the extent of learner support on the course. This decision should be informed by the pedagogical intent of the course and careful consideration of the participants' needs and constraints, for example, their skills in self-directed learning. Often, it will also be influenced by the budget or funding available. Decisions need to be made about:


For example, our self-study tutorial on critical thinking is a foundational course for undergraduate students to help them critically appraise the information they access. Learner support in this course is embedded in the content through narration, FAQs, and answer keys. By involving critical friends and students in the design process, it was possible to identify which parts of the course learners may need support with. Our choice of support was entirely pragmatic – we didn't have enough funding to offer the course in a facilitated manner and we knew that the need for the course was urgent. We made the best of the circumstances: the course is offered openly as a self-study tutorial, and we also used its contents for a project with Sierra Leone (see case study 11).

On the other hand, our learning design course offered as part of the Transforming Education for Social Change model16 is aimed at helping teachers in higher education in East Africa master competencies in developing transformative learning experiences. It is not a foundational course but a final step in a comprehensive learning pathway designed to make a lasting change in how academic teachers conceptualise, design, and deliver courses. As such, it requires a good amount of expert facilitation to be successful.


Peer support can be a very important component of an online course. It can be manifested in different forms: a simple feeling of presence – 'we're in it together'; opportunities to reach out with questions or viewpoints; and close collaboration on activities in the process of knowledge co-construction. While many learners feel a natural need to connect with fellow learners, they might not always be happy to share their experiences, knowledge, and ideas, especially if they perceive others as possible competitors. The key question is what will work for the learners: what depth of interaction will be supported, and why.

The figure below illustrates four levels of interaction in an online course. It is a simplified version of Gilly Salmon's five-stage model (Salmon 2011). Whereas Salmon's model encourages the use of all five stages within one course, our simplified version works across a variety of course formats. It helps us establish to what depth we are going to support social interaction within each TECD intervention. For example, our self-study tutorials don't offer any opportunities for interaction, not even socialising. Although the latter would benefit the learners, we do not have the capacity to regularly monitor learner activity and the content of their posts or messages. Digital safeguarding is essential in online courses – ensuring that there is no place for any abuse.

On the other hand, our MOOCs offer all levels of interaction – from socialising through sharing knowledge and ideas in a discussion forum to reviewing each other's research paper abstracts. And there are some courses in which the key aim is networking and exchanging ideas – for example, our Licensing and Negotiation course. To craft opportunities for successful interaction, we need to know what questions are likely to inspire fruitful discussion and what communication channels will best support it.

<sup>16</sup> https://www.transformhe.org/


#### **Figure 6: Depth of interaction in an online course**


#### **Choice of the learning platform**

Learners' access to the internet and devices, their levels of digital literacy, and their perceptions of online learning are all crucial to understand before designing an online learning experience.

When we started offering online courses at INASP, we chose Moodle LMS as our learning platform. The reasons were twofold: first, we are committed to using open-source and open-content approaches, and second, Moodle has been a popular choice in low- and middle-income countries. Since many of our course participants are already familiar with the platform, they can focus on learning instead of grappling with technical aspects.

However, Moodle is not always our choice for delivering technology-enhanced capacity development interventions. Sometimes, simpler tools might be more appropriate. For example, in 2020, we delivered a series of face-to-face workshops for academic teaching staff in Tanzania and Uganda. We complemented the workshops with pre- and post-workshop activities organised in Google Classrooms. For an e-workshop worth five hours of learning time, we designed a Padlet wall17 to guide and structure an initial asynchronous brainstorming activity, which we followed up with synchronous activities in groups and in plenary via Zoom.

## **Recommendations**


<sup>17</sup> https://padlet.com/

## *Phase 2: Planning*

Scoping should be followed by a planning phase, that is, using the results of the scoping activity to decide on aspects such as the course learning outcomes; mode of delivery; length of the course; whether it will be synchronous or asynchronous or both; and whether it will be facilitated or self-study. The planning phase involves:


Frequently, the first two phases – scoping and planning – are interrelated. In an ideal world, any digital learning initiative would start with a rigorous scoping activity. In practice, however, we often respond to tenders and calls for proposals with a short turnaround time and/or predefined objectives; we find ourselves having to decide how to deliver a learning initiative before we fully understand the needs and the context of our learners. Nevertheless, if a scoping activity is built into the proposal, it may be possible to adapt the plans to some extent.

If there is a pre-defined budget (financial and human resources) and a clear project timeline (from inception to delivery), consider these in the planning phase. We never have unlimited resources to do things, so compromise is a reality. For example, the scoping activity might have revealed that the learners will need more facilitation support than what the budget allows. In that case, these options can be considered: limit the number of course participants to allow for a better facilitator-to-learner ratio; consider shifting funds from a different category (e.g. video production); and explore opportunities to recruit volunteers as facilitators.
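The facilitation trade-off mentioned above can be sanity-checked with a back-of-envelope calculation. All figures below are invented for illustration; the point is simply that a fixed facilitation budget and a target facilitator-to-learner ratio together cap the number of participants.

```python
# Hypothetical budget check: how many participants can a fixed
# facilitation budget support at a target facilitator-to-learner ratio?
def max_participants(budget, facilitator_fee, learners_per_facilitator):
    """Participants supportable at the target ratio within budget."""
    affordable_facilitators = budget // facilitator_fee
    return affordable_facilitators * learners_per_facilitator

# e.g. a 6,000 facilitation budget, 1,500 per facilitator, a 1:25 ratio
print(max_participants(6000, 1500, 25))  # 100 participants
```

If the result falls short of the intended cohort size, the options listed above apply: cap enrolment, shift funds from another category, or recruit volunteer facilitators.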

## **Team and timeline**

In the scoping section, we mentioned three key roles in making early design decisions about a technology-enhanced capacity development initiative. The design and development team should include the following roles.

#### *Project leader*

This is an overall supervisory role. The primary responsibilities in this role are to:


(h) secure further institutional or client buy-in along with the budget needed if the course will be offered on an ongoing basis.

#### *Subject matter expert*

This role is about possessing strong subject expertise in one or more of the main topics that will be covered in the course and the ability to write original instructional content in an engaging manner (and/or present in video or audio form if multimedia will be part of the course content). The subject matter expert should be keenly aware of copyright and licensing, and should take care to include references to external sources. Content needs to be developed for all the topics, which could mean recruiting more than one subject matter expert.

#### *Critical friends from the target audience*

A 'critical friend' is someone from the course's target audience or someone who knows the target audience and their needs very well. Critical friends need to share the vision or goals of the course and at the same time should be capable of and willing to critique how the course is designed or delivered. Critical friends should be included in all course design and development stages.

#### *Learning designer*

This role is at the heart of online pedagogy. Teachers and trainers who are subject matter experts and experienced in facilitating face-to-face workshops may be suddenly tasked with pivoting online, but may lack the knowledge of online pedagogy and experience in managing the dynamics of online interaction. Having a learning designer work closely with subject matter experts will significantly improve learners' experience on the course. However, this isn't a role that someone can assume without training or working under the supervision of an expert. From courses to certifications to full-fledged master's degrees, there are a variety of ways to acquire skills in learning design. This role is described in detail in phase 3 'Learning design'.

#### *Technical expert*

This role is about having expertise in tools, applications, or platforms that will be used to develop and deliver the online course. If the course will be hosted on an LMS, the technical expert should have strong knowledge of this LMS, backed up by experience, certifications, or engagement with the LMS community. The technical expert in the course team need not be responsible for running the LMS on a server – an IT department or a vendor often handles this function. However, the technical expert should be confident in interfacing with the IT team to troubleshoot issues, coordinate upgrades, and make customisations.

#### *Language specialist*

This role involves editing the texts so that they are easy to understand and free of grammar or spelling mistakes. An online course generally contains a fair amount of text, even if multimedia is widely used. By 'text', we mean not just the instructional content in the course but the introductory advice for learners (that is, what the course is about, what is expected of them) and instructions for learning activities. Even if the subject matter expert and learning designer are fluent in the language of the course, it is always a good idea to have a language specialist check the text once all the content is in place.

#### *Monitoring, Evaluation and Learning (MEL) expert*

This role is about the design of methods and instruments to support the evaluation of technology-enhanced capacity development interventions and generate learning to inform future practice.

#### *Multimedia designer (if required)*

If original multimedia elements or artwork need to be created for the course, video editors or graphic designers should be part of the team. These roles have become quite specialised in recent years, and as consumers, we expect high-quality animations, video and images.

#### *Facilitators and technical support for course delivery (if required)*

These roles are described in detail in phase 6 'Delivery and sustainability'.

**Note:** Frequently, one person can contribute expertise in more than one area, for example, a learning technologist can contribute combined skills in learning design and technical development, and a learning designer may be able to contribute toward MEL functions. The important thing is to factor in these roles in the planning phase and allocate time and budget for their contributions.

## **Timeline**

The next step is to develop a timeline with realistic milestones for each phase. One frequently asked question is how long it takes to create an online course. The answer is not straightforward and will depend on the approach to learning design. In the case of a co-design approach, where a course is collaboratively developed with partners, the development will take longer but the course may be more sustainable than a course designed without much collaboration.

There are excellent online tools for developing project timelines, assigning roles, and monitoring progress. We would recommend, for example, Trello,18 ClickUp,19 and Microsoft Planner.

## **Recommendations**


## *Phase 3: Learning design*

Learning design is a term most commonly used to describe 'a formal process for planning technology-enhanced learning activities, usually supported within a community where designs and ideas can be shared and re-used' (Lewin et al. 2018). It is also referred to as 'design for learning' – the term coined by Beetham and Sharpe in 2007 for the process by which teachers or other practitioners arrive at a plan for a learning situation. Both terms overlap in that they focus on activity-centred learning and shareability of a design product between teachers and designers. Beetham and Sharpe (2007) and Laurillard (2012) prefer the term 'design for learning' as it emphasises that learning, in its contingent nature, can never be fully designed: 'But we can do our best to design for learning, in the sense that we create the environment and conditions within which the students find themselves motivated and enabled to learn.' (Laurillard 2012, p. 66)

<sup>18</sup> https://trello.com/

<sup>19</sup> https://clickup.com/

Both terms – 'learning design' and 'design for learning' – are relatively new in academic and practitioner literature and used largely within contemporary technology-enhanced learning and learning design communities. But most have probably heard of the well-established field of instructional design, so we find it important to describe how learning design and instructional design differ.

Instructional design originated in didactic approaches to learning with knowledge acquisition at the centre of the learning process. The role of an instructional designer is to follow prescriptive guidelines and models to plan and guide students through a set of instructional sequences to achieve a desired learning outcome (Oliver et al. 2002, pp. 496–497). Although instructional design has more recently taken account of constructivist and situative approaches to learning, its behaviourist roots are still evident (Conole 2015) and instructional design tends to focus more on teachers (as producers) than learners (as consumers).

In contrast, learning design puts learners centre stage in the design process and 'sees design as a dynamic process, which is ongoing and inclusive, taking account of all stakeholders involved in the learning-teaching process' (Conole 2015, p. 35). Learning design is not prescriptive but provides general concepts and principles by which teaching and learning can be planned for.

Beetham (2008) identified four stages of a learning design cycle; we have added one more stage in the process (called 'develop') to acknowledge that design and content creation are closely related yet distinct stages in the learning design process:

**Figure 7: Stages of a learning design cycle. Adapted from Beetham (2008).**

## **Recommendations**


The stages in the cycle are interrelated. For example, the design or content may need to be modified during the delivery (or 'realisation' phase) if the learners' feedback indicates that urgent changes are required.

Here we would like to clarify how these stages are related to our phase-wise approach. We focus on stages 1 and 2 (that is, design and development) in phase 3 of our work: learning design. In the context of our work, the learning design phase includes the development of tangible materials (e.g. documents and other computer files). In other words, the design in an abstract sense is already instantiated to a certain extent. Then, we use an LMS for the digital, web-based implementation of the tangible outputs of the learning design. Therefore, stage 3 of the above model (instantiation) maps to phase 4 of our approach, which we call 'implementation'. Finally, stages 4 and 5 (realisation and review) are mapped to phase 5 of our work: 'piloting and review'. Therefore, in the rest of this learning design section, we will discuss 'design' and 'development'. In keeping with this focus, we have made these two stages in the above graphic more prominent.

Learning design is important for all formats of learning and all modes of delivery including face-to-face learning; however, it is truly indispensable when planning for online and blended learning experiences. This is because, without face-to-face contact with the learners and given the asynchronous nature of much of online learning, there is limited opportunity to 'read' the learner and adjust teaching and instructions 'on the fly'. It is easier for the learners to get lost in their learning journey or distracted by the abundance of resources and links. Hence, teachers and trainers need to outline all the steps of the teaching and learning process, carefully describing instructions and ensuring follow-through from one activity to another. Articulating one's learning design enables sharing of innovation and effective practices with others. This exchange of knowledge and practices has been much needed during (and in the aftermath of) the Covid-19 pandemic, where many institutions globally have found themselves having to move to teach online without knowing how to do it. Below we list four reasons why it is worth having an experienced learning designer as a part of a course development team.

#### *Reason 1: Coherent learning journey*

Adult learners need to see the big picture of their learning journey. They need to understand where it starts, where it ends and the connection points on the way. Learning designers help to connect these dots: they create a learning journey that is clear to the learners and helps them move towards desired outcomes. Enabling a high-quality *learning experience* (for the learners for whom the course is being developed) is a learning designer's primary expertise and point of focus. In this, they complement content experts for whom the primary expertise is subject matter.

#### *Reason 2: Relevance*

Adult learners need to see how the activities they engage in help them move towards successful learning. They also need the right level of challenge – tasks that are too easy or too difficult can quickly lead to disengagement. Content experts are very close to their subject matter; everything can seem important. It might be difficult for them to map the learning outcomes to the right teaching and learning activities and measurable assessments. This is where learning designers come in. They work closely with content experts to help them prioritise the content and design relevant and engaging activities.

#### *Reason 3: Respect for time*

Adult learners are busy and often juggle competing priorities. Their time is precious to them, and they experience a strong need to use it wisely. Learning designers are experts in (online) teaching methods and can estimate the time needed to complete activities successfully. This leads to an improved learning experience.

#### *Reason 4: Choosing the right technology*

Finally, learning designers know about technology and digital tools. It's also part of their expertise. They spend time evaluating what works and what doesn't in various contexts; they stay abreast of new developments and the literature in the area. They can effectively support content experts in choosing the right tools for content presentation and activities.

So far, we have talked about learning design as a process. In addition to describing the process, learning design as a term is also used to describe (1) the result of the learning design activity – an actual learning design, and (2) an area of research and technological development.

A learning design 'represents and documents teaching and learning practice using some notational form so that it can serve as a description, model or template that can be adaptable or reused by a teacher to suit his/her context' (Agostinho 2009, p. 3). A learning design can be shared with others through a representation. In the section 'Design tools – an example of Learning Designer', we provide examples of learning designs represented in the Learning Designer tool. As mentioned, such designs can be shared with others for inspiration or adaptation.

Learning design as a field of research and practice (along with accompanying digital tools) emerged in the early 2000s as a response to the limited uptake of digital technologies to support teaching and learning processes. The aim was to better understand the process of designing for learning and arrive at tools that can help teachers (a) make informed decisions about creating learning interventions that are pedagogically effective and make appropriate use of learning technologies, and (b) enable sharing and reuse of learning design representations to spread innovative practices (Conole 2015, p. 118). As Conole described it in a presentation, 'Learning Design bridges the gap between the future offered by technologies and the limitations of our courses' (e/merge Africa 2017).

Learning design has continued to grow and diversify into many branches of research and development, including the development of design tools and workshops, for example, Learning Designer,20 ABC Learning Design,21 Carpe Diem,22 and the 7Cs of Learning Design Framework (Beetham and Sharpe 2019). At INASP, we have found the Learning Designer tool particularly useful in guiding us in the process of learning design for these reasons:


<sup>20</sup> https://www.ucl.ac.uk/learning-designer/

<sup>21</sup> https://abc-ld.org/

<sup>22</sup> https://www.gillysalmon.com/carpe-diem.html

3. The end result is a visual representation of learner experience based on a sequence of teaching and learning activities selected and described by a designer (more about this in the section 'Design tools – an example of Learning Designer').

## **Design – Guiding frameworks**

In the design phase, the subject matter expert, learning design professional, and critical friends produce a detailed outline of each module of a course by aligning intended learning outcomes, teaching and learning activities, and assessment. It's an iterative and reflective process that will define the learner experience in the course, and it should be guided by educational theory on how people learn.

#### *Conversational Framework and the six learning types*

The Conversational Framework is a well-established and widely referenced model of how adults learn in formal educational settings (Laurillard 2002). The framework is empirically based, that is, it links teaching design to empirical data about adult learning. It helps instructors and teachers to build teaching and learning based on knowledge of learners and from the learners' point of view. The framework builds on the work of educational theorist Gordon Pask and his Conversational Theory (Pask 1976) and draws on other recognised learning theories such as experiential learning, collaborative learning, constructivism, and social constructivism.

The framework characterises the teaching–learning process as 'an iterative dialogue between teacher and student focused on a topic goal' (Laurillard 2002, p. 77). It asserts that this dialogue can be facilitated by six different types of learning: acquisition, inquiry, discussion, practice, production, and collaboration. Laurillard (2002, 2012) defines the six learning types as follows:

**Acquisition**: learners develop new concepts and they learn about what others have discovered and what is already known in the field. They do it, for example, by listening to the teacher, watching a video, reading a book or going through a website.

**Inquiry**: learners use and improve their 'how to learn' skills to find things out for themselves and make meaning for themselves. They identify, analyse, compare, and critique resources that reflect the concepts being taught.

**Discussion**: learners have the opportunity to further develop concepts by articulating their viewpoints and sharing opinions with one another and with the teacher.

**Practice**: learners apply their understanding of concepts to complete a task set by the teacher. The focus of practice is on individual students and their ability to adapt their conceptual understanding to a task at hand. Learners complete a task and receive feedback to improve.

**Production**: learners use what they have learned and consolidate it by producing an output that can be assessed by the teacher and/or the peers. The focus of production is on individual students and their ability to build new knowledge using what they have learned.

**Collaboration**: learners use what they have learned and consolidate it by producing a joint output that can be assessed by the teacher and/or the peers. The focus of collaboration is on negotiating meaning, participation, and co-creation.

A high-quality learning experience requires all six types of learning, which together encompass the Conversational Framework, making the teaching–learning process discursive, adaptive, interactive, and reflective (Laurillard 2002, p. 86).


**Figure 8: Conversational Framework. Adapted from Laurillard (2012).**

Laurillard describes her framework in a short video (Kennedy and Laurillard 2019). The framework asserts that learning is an activity that develops both concepts and practices, and the two assist each other to develop over time: 'At the upper level of the framework, the teacher and learners communicate about concepts, and learners do the same with each other. And every interaction is an opportunity for concepts to develop. At the lower level, teachers and students model and share their practice through actions and feedback in a special learning environment. Again, all those interactions are an opportunity for practices to develop. If the learning environment is quite challenging, then, to get the best feedback, the learner has to integrate concepts and practices – and that's when the learning process really begins to benefit the learner for the long term' (Kennedy and Laurillard 2019).

We use the Conversational Framework to guide us in the selection of teaching and learning activities for our online courses. The balance of activity types will differ depending on the learners' entry level. For example, a course that aims at building foundational knowledge (see Learning & Capacity Development Framework in Part 1) will have a higher percentage of learning through acquisition and inquiry than other learning types, as shown below.

**Figure 9: Example of a division of learning types in a course aimed at building foundational knowledge**

On the other hand, a course that aims at helping learners master competencies in a professional area (see 'Learning & Capacity Development Framework' in Part 1) should have a higher percentage of production or collaboration, as shown below.

**Figure 10: Example of a division of learning types in a course aimed at helping learners master professional competencies**

There is no right or wrong in what the balance of learning types should look like. It depends on the learning theory the designer chooses and the audience's needs. The advantage of the pie-chart visualisation is in supporting critical reflection on whether the design represents the course's learning outcomes.

#### *Other guiding frameworks*

The Conversational Framework is not the only pedagogical framework we use to guide our design work. The other models we use extensively are Knowles' Adult Learning Principles, Kolb's Experiential Learning Cycle, Biggs' Model of Constructive Alignment, and Fink's Taxonomy of Significant Learning.

#### **The Adult Learner (Knowles, starting from the 1980s)**

Knowles identified six characteristics of an adult learner, a framework which has been explicated in various publications, for example, in Knowles et al. (2014).


We find it helpful to use these guidelines when designing for learning. For example, when promoting our courses, we clearly communicate why gaining knowledge and skills in an area is essential and what value it will bring to our target audience (characteristic 6 above). We ensure that the timing of our interventions matches the academic life cycle. For example, when working with our partners in East Africa on redesigning undergraduate courses to bring in critical thinking and problem-solving skills, we selected courses nearing the review and validation process so that the redesign process would align with teachers' needs (characteristics 3 and 4 above). We provide choices for learners in terms of content they engage with, activities they complete, communication channels they use, and where and when they choose to learn (characteristic 1 above). We recognise participation and contribution through positive feedback and encouragement (characteristic 5 above). Finally, we use Kolb's experiential learning cycle, described below, to guide the design of learning activities that bring out learners' prior experiences (characteristic 2 above).

#### **Experiential Learning Cycle (Kolb, starting from the 1980s)**

In his theory of experiential learning, Kolb asserts that 'learning is the process whereby knowledge is created through the transformation of experience' (Kolb 1984, p. 41). Kolb identified four mutually supportive stages in the learning process: concrete experience (CE), reflective observation (RO), abstract conceptualisation (AC), and active experimentation (AE).

**Figure 11: Representation of the learning cycle. Based on Kolb (1984).**

In our design practice, Kolb's cycle reminds us that learning doesn't have to (and shouldn't) start from theory and learners acquiring new concepts. Rather, learners should be actively engaged in the learning process from the beginning. Therefore, although not every session needs to follow the cycle, it is good practice to introduce a new topic by giving learners an activity or a task to get involved in (CE): they can either experience a new situation or recall and reflect on a situation from their past. This will help to bring to the fore any assumptions that learners already have, which might need to be challenged (RO). The teacher or trainer can then present new theory and facts to build on the experience (AC), finishing with an opportunity to practise the new knowledge that has now emerged (AE).

#### **Constructive alignment (Biggs 1996)**

Biggs' principle of constructive alignment (Biggs 1996) describes the relationship between three elements in a learning design:


In practice, it is best to (a) start with defining learning outcomes for each of the concepts covered in the course, (b) decide on the approaches to assessment, and (c) plan for teaching and learning activities that will guide the learners towards achieving the learning outcomes. Every learning design should also include feedback mechanisms, that is, means for the learners to check their progress in developing knowledge and skills. These mechanisms can involve self- or peer-assessed rubrics, automatically graded quizzes, and written feedback on assignments. The figure below is an example of learning design alignment for a topic in engineering.

**Figure 12: Example of learning design alignment for a topic in engineering**
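The alignment review described above can be treated as a simple consistency check: every learning outcome should be linked to at least one assessment approach and at least one teaching and learning activity. The following sketch is purely illustrative (the outcome, assessment, and activity names are invented, not taken from the figure):

```python
# Hypothetical sketch of a constructive-alignment check: each learning
# outcome should map to at least one assessment and one activity.
# All names below are invented examples for illustration.

design = {
    "Explain the principle of beam loading": {
        "assessment": ["auto-graded quiz"],
        "activities": ["watch lecture video", "worked example"],
    },
    "Calculate stress in a simple beam": {
        "assessment": [],  # gap: no way to check progress on this outcome
        "activities": ["practice problems with instant feedback"],
    },
}

def alignment_gaps(design):
    """Return outcomes that lack an assessment or learning activities."""
    gaps = {}
    for outcome, parts in design.items():
        missing = [k for k in ("assessment", "activities") if not parts[k]]
        if missing:
            gaps[outcome] = missing
    return gaps

print(alignment_gaps(design))
# flags the second outcome as missing an assessment
```

A check like this makes the review in step (c) systematic: any outcome reported by `alignment_gaps` either needs an assessment or activity added, or should be reconsidered as an outcome.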

#### **Taxonomy of Significant Learning (Fink 2003)**

Bloom's Taxonomy of Educational Objectives (Bloom et al. 1956) is well known. This taxonomy describes cognitive, affective, and psychomotor domains, but the cognitive domain has perhaps received the most attention from educators. We have been inspired by a broader taxonomy called the Taxonomy of Significant Learning (Fink 2013). Fink recognised that gaining skills such as learning how to learn, leadership, and communication, and dispositions such as resilience, ability to adapt to change, and tolerance are as important as cognitive skills. Fink's Taxonomy takes Bloom's Taxonomy further by adding three domains: 'human dimension', 'learning how to learn', and 'caring' (see Figure 13). Fink's integration, application, and foundational knowledge domains correspond with Bloom's cognitive domains, so here we will focus on the new domains added by Fink.

**Figure 13: Fink's Taxonomy of Significant Learning goals and its correspondence with Bloom's Taxonomy (Brewley et al. 2015)**

The 'human dimension' refers to a better understanding of self and others: it's about personal growth and improving interaction with others. The learning outcomes under this dimension might include, for example, developing resilience, self-regulation, confidence, teamwork, interpersonal skills, and communication skills.

The 'caring' domain is about developing feelings, interests, and values. Learning can (and often does) impact how we see something and care about it. The learning outcomes under this domain will include, for example, developing greater social awareness, respect for differences, and the ability to act as a catalyst for change.

'Learning how to learn' relates to learning about the learning process itself. The learning outcomes might include, for example, becoming a self-directed learner and developing self-motivation to learn.

Fink's Taxonomy of Significant Learning guides us in the selection of learning outcomes that go beyond the cognitive dimension of learning and toward supporting learners in developing broader skills.

For each learning outcome, the teacher or trainer needs to think about how they can assess whether the learners are making progress towards achieving it. This can be done, for example, through facilitated discussion on a forum: checking if the learners' posts indicate an understanding of a concept and giving feedback. Opportunities for self-assessment can be provided through quizzes and other interactive exercises and games. Sometimes it is possible to combine several learning outcomes in one assessment.

#### **Design tools: an example of Learning Designer**

A wide range of tools and approaches can be used to support the process of designing for learning. Office software (such as Microsoft Word and PowerPoint) and even pen and paper are certainly useful tools, but they do not come with any pedagogical base to build on.

The Learning Designer tool is a web-based application freely available to all, and it has been used by thousands of educators and trainers worldwide.23 The tool is underpinned by the Conversational Framework described earlier.

The Learning Designer tool helps to put learners' needs and experience at the centre of the design process. It facilitates an intentional design process and asks the learning design professional to think beyond 'what I need to deliver' and to carefully consider 'what is the best way for the participants to engage with and understand this content'. It shifts the focus from delivering content to dialogue between the facilitator and the learners. The tool offers an automated visualisation of the learner experience based on the choices made by the course designers, including feedback on the balance of learning types (see the section on 'Conversational Framework and the six learning types') and showing the designed versus planned learning time. Learning designs are easily sharable and editable, thus allowing for a collaborative design process and for sharing outputs with critical friends and local multipliers. At INASP, we allow for frequent feedback loops and adaptation before our designs are ready to be implemented in our LMS.

**Figure 14: Example of a learning design created in the Learning Designer tool**

The benefits of designing for learning using the Learning Designer tool include:

1. Shifting the focus of the design process from teaching to learning: from what a facilitator will be doing (e.g. delivering a presentation) to what learners will be doing (e.g. listening). Moreover, this tool nudges the user (i.e. the learning design professional) to think in detail: for example, 'group work' is often a good teaching strategy, but what exactly will students be doing in the groups – will they produce something or investigate something? When using Learning Designer it is possible to plan the learners' experience and learning activities carefully.

<sup>23</sup> The use of Learning Designer has been taught in a MOOC on Blended and Online Learning Design developed by UCL on FutureLearn.


**Figure 15: Learning experience analysis in the Learning Designer tool**

The pie chart on the left-hand side of the figure gives an overview of the balance of various activity types in the learning design. The course team – especially the learning designer – should know what this balance should be before even starting to use the Learning Designer tool. For example, if there is an intention to apply socio-constructivist pedagogies, the pie chart should have a good proportion of collaboration. The pie chart prompts the learning designer to double-check whether their pedagogical intent for the design corresponds with reality: 'I don't have any collaboration in this course – is that correct or have I missed something?'

Other pie charts shown by the Learning Designer tool indicate how much learning will take place face-to-face versus online, synchronously versus asynchronously, unfacilitated versus facilitated. Finally, the horizontal bar shows how much time the learners will spend working individually, in groups or as a whole class. The course team members can review all this information and make changes to improve the learning experience.
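The underlying calculation behind such pie charts is simply each learning type's share of the total planned learning time. A minimal sketch, with invented activity names and durations (this is not code from the Learning Designer tool itself):

```python
# Illustrative sketch: tallying planned activity minutes by learning type
# to check the balance of a design, much as the pie chart does.
# Activity names, types, and durations below are invented examples.

activities = [
    ("watch introduction video", "acquisition", 15),
    ("search for local case studies", "inquiry", 20),
    ("forum discussion on findings", "discussion", 25),
    ("auto-graded quiz", "practice", 10),
    ("draft a lesson outline", "production", 30),
    ("peer review in small groups", "collaboration", 20),
]

def learning_type_balance(activities):
    """Return each learning type's share of total planned minutes (%)."""
    total = sum(minutes for _, _, minutes in activities)
    shares = {}
    for _, learning_type, minutes in activities:
        shares[learning_type] = shares.get(learning_type, 0) + minutes
    return {t: round(100 * m / total, 1) for t, m in shares.items()}

print(learning_type_balance(activities))
# e.g. production accounts for 30 of 120 minutes, i.e. 25%
```

Reviewing the resulting percentages against the pedagogical intent (for example, expecting a substantial collaboration share in a socio-constructivist design) is exactly the kind of double-check the pie chart prompts.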

#### *Working with the Learning Designer tool*

The Learning Designer tool is popular with many learning designers because it prompts them to think of all the elements that need to be considered when designing for online or blended learning. At the same time, the tool is flexible enough to allow the user to input information in a way that is meaningful to them.

The Learning Designer tool helped me to think deeply about the courses that I was designing, not merely writing a course outline, but also to break up every bit, including the activities that students will do and how I will assess. So although it requires a lot of time, when it is done, it somehow simplifies the work to do later. (Josephine Namuli, Assistant Lecturer at Uganda Martyrs University, Uganda)

A design unit in Learning Designer can be a whole course, workshop, or module, or it can be an individual learning session. Our practice has shown that adult learners who undertake professional training alongside other commitments can spend roughly 3 to 4 hours per week on learning. Our courses usually take 12 to 24 hours of learning to complete, and we create a separate design in the Learning Designer for each week or each concept in the course.

The first step when working with Learning Designer is to specify the learning outcomes. The tool includes Bloom's Taxonomy and provides a choice of verbs to make the outcomes measurable, which is important for assessment.

The next step is to define the teaching and learning activities the learners will engage in to achieve the learning outcomes. For each teaching and learning activity, the following aspects need to be specified:


Entering these details produces a visual analysis of the learner experience, as shown in the figure above.

Some of the teaching and learning activities should provide a means of formative assessment and feedback. Learning activities that involve practice, production, or collaboration are helpful in assessing learner progress. In learning through practice, the teacher or trainer can ask learners to complete a quiz, do an interactive exercise, or use an online simulation. For such activities, digital tools can compute the results and give learners instant feedback. In learning through production and collaboration, the teacher or trainer asks students to produce an output (or artifact) that consolidates their learning. The teacher can create a rubric to guide the learners in peer assessing each other's work, or they can assess the learners' work themselves. In our MOOCs, we often use peer assessment to provide learners with feedback. The Moodle LMS comes with a tool called 'Workshop' which simplifies the peer-review process, and other Learning Management Systems offer similar functions too.
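The instant feedback described for learning through practice can be sketched in a few lines. The question, options, and feedback strings below are invented for illustration; real quizzes would typically be built with the LMS's own quiz tools:

```python
# A minimal sketch of automated formative feedback: a multiple-choice
# check that grades instantly and pairs each response with feedback.
# Question content and feedback text are invented examples.

QUIZ = [
    {
        "question": "Which learning type involves producing a joint output?",
        "options": ["acquisition", "collaboration", "practice"],
        "answer": "collaboration",
        "feedback": "Collaboration centres on negotiating meaning and co-creation.",
    },
]

def grade(quiz, responses):
    """Grade responses and pair each with instant formative feedback."""
    results = []
    for item, response in zip(quiz, responses):
        correct = response == item["answer"]
        results.append({
            "correct": correct,
            "feedback": "Well done!" if correct else item["feedback"],
        })
    score = sum(r["correct"] for r in results)
    return score, results

score, results = grade(QUIZ, ["collaboration"])
print(score, results[0]["feedback"])
```

The key design point is that an incorrect answer returns explanatory feedback, not just a mark, so the quiz supports learning rather than only measuring it.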

After creating a timeline of the teaching and learning activities and opportunities for assessment, the course team should review the design to check for alignment between the learning outcomes, assessment, and activities. The activities should build on each other and there should be a clear follow-through and logic from one activity to the next. For example, if learners are asked to produce a written assignment, they should be given feedback in some way. Follow-through means that the activities support learners in achieving all the learning outcomes step by step, culminating in an assessment activity with feedback on their progress. The Learning Designer tool has a drag-and-drop function to shift activities around and add new ones where needed. It also provides space for notes, making it easy to give and receive comments and suggestions for improvement.

#### **Development: writing content and activities, selecting tools**

#### *Addressing learners in the course materials*

In face-to-face learning settings, experienced teachers and trainers give verbal instructions, quickly assess participants' reactions and non-verbal cues, and adapt their instructions and support accordingly. They also create an atmosphere conducive to learning: building trust among the learners, encouraging participation and mutual support, telling stories and even digressing, and creating a sense of discovery and adventure in learning. This is much more difficult in online settings, even synchronous ones, where many non-verbal cues and spontaneity in conversation are lacking, and much of the instruction is in written form. That is why it is not enough in online learning to upload 'dry' learning content as a series of files into a learning management system. Rather, the content needs to be 'wrapped' in a narrative that creates a sense of a facilitator speaking to the learners and guiding them from one resource and activity to another. The tone of writing should be conversational rather than giving dry instructions.

At the start of the course, there should be an introduction or induction resource to help learners understand the big picture of the course: the structure, the flow, and the completion criteria.

When describing each activity in the course, it is helpful to follow this structure:

• Describe the **purpose** of the activity: why learners are doing the activity and how will it benefit their learning.


#### *Selecting media and tools*

Digital technologies offer new ways in which we can interact with content, communicate, collaborate, and create. However, facilitators and trainers often lack the time and skills to use digital technologies effectively and creatively. Learning management systems, for example, are often used in a simplistic way to help organise digital content (e.g. videos, graphics, digital text) and to provide opportunities for discussion (e.g. discussion forums or links to videoconferencing) without modifying or redefining the nature of learning with technology (Puentedura 2014). Puentedura's SAMR model talks about four levels of integration of digital technology in learning design:


Puentedura (2016) argues that digital technologies can make a significant change to how we learn, but this change is not intrinsic to the given tool but rather a question of a different way of practice associated with the tool.

If a teacher or trainer is new to using digital technologies, they may need to evolve their practice gradually, perhaps starting with substitution and augmentation until they get more comfortable and can start developing activities that would not be possible without digital tools. Puentedura's model is helpful in pushing us to think beyond replicating face-to-face practices in a digital environment and replacing analogue tools with digital ones. It doesn't mean that substitution and augmentation are bad practices; it's just that they shouldn't be the only practices in a digital space that offers so much opportunity for knowledge co-construction, collaboration, and immersive learning.

<sup>24</sup> https://ideaflip.com/

When making decisions about the choice of digital tools, it is also important to consider aspects other than their teaching and learning function. This is where Bates' SECTIONS framework comes in handy (A. W. Bates 2015). Every letter in SECTIONS stands for an aspect to be considered when choosing digital technology or media. We won't discuss 'T' (Teaching functions), 'I' (Interaction) and 'N' (Networking) as these refer to the nature of teaching and learning practice described above. We want to focus on the other equally important elements of the model:


As Bates (2015) argues, 'if a student cannot access or use a technology, there will be no learning from that technology, no matter how it is designed', and further, 'access (and ease of use) are stronger *discriminators* than teaching effectiveness in selecting media.' This is certainly true in many developing countries where the recent Covid-19 pandemic accentuated inequities in access to technologies. Instead of imposing digital tools and media that work well in the context of developed countries, course designers should consider local solutions and opportunities in the developing South. Equally, learners should be educated about the uncritical use of 'free' tools that come with the hidden cost of trading personal data. For example, the Padlet tool that we have frequently used in the past is not compliant with the General Data Protection Regulation (GDPR) issued by the European Parliament (Hegner 2021). This is because Padlet was developed in the United States where GDPR doesn't apply. However, it is also possible to use Padlet without creating an account, which we now recommend to our course participants.

## **Inclusivity, diversity and gender aspects**

Providing equitable opportunities for all learners, regardless of their gender, culture, ethnicity, abilities, or other personal characteristics, should underpin the development of any technology-enhanced capacity development initiative from the beginning. What does this mean for the learning design? An equitable learning design acknowledges learners' different needs, usually without knowing the individual learners who will participate in the learning opportunity; this is particularly true for MOOCs. The needs of a learner are influenced by many aspects, such as gender, cultural background, and socio-economic status. An inclusive learning design will support the learner's ownership of their learning journey so that everybody can participate according to their needs, abilities, and interests.

To achieve such an equitable and inclusive learning design, INASP follows the principle of putting people at the centre of any design process. Building a co-design team, as described earlier, can reduce the risks of biases that may lead to a learning initiative that meets the needs of only some groups of learners and excludes others. The co-design team should mirror the diversity of potential learner groups, for example, by having a good gender balance. The learning design team should understand equity concepts and be aware of their own unavoidable biases, for example, when writing content or selecting teaching and learning strategies. It's good to have someone in the co-design team who plays the role of a gender and equity champion; this person can review the learning design and course contents from a gender and equity perspective. We recommend the gender-responsive pedagogy handbook from the Forum for African Women Educationalists (Mlama et al. 2005) and the gender-responsive pedagogy framework developed by the TESCEA programme (Chapin and Warne 2020) as learning resources.

In our experience, it is essential to be mindful of inclusivity when selecting teaching and learning materials, writing content, using teaching and learning strategies, carrying out assessments, and even setting up a learning space.

#### *Teaching and learning materials*

Selecting teaching and learning materials that are interesting and relevant for all learners within the target group is not an easy task. We include a variety of materials sourced from various authors to serve the learners' needs and cover a diversity of perspectives. In interviews, our partners have mentioned that they want to learn from relatable information and experiences, so they appreciate resources from their own geographic region and culture. However, there is also an appetite to learn from global authors who can introduce the learner to unfamiliar points of view and a broader range of knowledge.

Learners can regard authors of texts and speakers in videos as role models, so the learning design team should be careful not to give the impression that only certain kinds of people can be experts in specific roles or topics. We have found that it is not always easy to find material demonstrating the variety we would like to achieve. Due to budget limitations (high-quality video production can be expensive) and to adhere to the principle of sharing and reusing educational materials (or open educational resources), we tend to look for resources with Creative Commons licensing before creating our own. However, in our experience, most CC-licensed educational videos in the research and higher education contexts tend to show protagonists from Europe and North America.

Learners can be strategically involved in addressing any bias in educational resources. For example, suppose the learning design team has not been able to find studies that include all gender groups. In that case, this gap can be pointed out during the course delivery and learners can be encouraged to reflect on the meaningfulness and validity of studies that exclude certain population groups.

#### *Language*

As mentioned before, the learning design team should include a language expert who ensures the language is correct and at the right level for the learners. The language should be easily understandable, inclusive, non-offensive, and culturally appropriate. Biases are expressed through the use of language and therefore words need to be used carefully. For example, if a text includes the expression 'confined to a wheelchair', learners who use a wheelchair may feel disrespected because of the negative meaning of the word 'confinement'.

We suggest that the learning design team consider questions such as:


Finally, providing videos and audio transcripts can help reduce language barriers.

#### *Teaching and learning strategies*

Individuals have different preferences for how to learn and interact. There could also be cultural expectations about how a person behaves that make a certain kind of interaction difficult. In our capacity development programmes on gender equity, we have observed that it can sometimes (but not always) be helpful to separate the gender groups to encourage more open discussions. You may also find that some learners hesitate to voice their opinions or ask questions when their supervisors or managers are present. Therefore, it is important to give the learners as much control as possible over how they want to interact. When peer-to-peer interaction is part of an online course, the learning design should take into account that some learners may prefer little or no interaction with other participants or would appreciate a more private space for discussions.

In INASP's research writing MOOCs, for example, we see learners from all around the world and we do not make any assumptions about how they would prefer to learn. While we encourage learners to interact with each other and the guest facilitators on the forums, interaction is not an essential component of the course. Some learners like to use the discussion forums as they go through the course materials and take part in the quizzes or other activities; others participate in the course 'quietly'. However, when interaction among learners or sharing one's work on the course forums is integral to the learning experience or learning objectives, we make this clear in the course invitation or induction resource. These courses are typically for small, pre-selected audiences, such as the university lecturers who have taken the learning design courses offered through the TESCEA project.

If any equipment is used in learning activities, there is a risk of excluding learners who don't have access to or the ability to use such equipment. Less resource-rich learners can be significantly affected by the choice of equipment or software when participating in a technology-enhanced capacity development intervention. For example, if the target audience for a course uses mobile phones as their primary digital device, the course should be compatible with small screens.

Participants should also have as much control as possible over their learning assessment. Some learning assessment methods may not be suitable for all learners. For example, if participation in a discussion forum is part of the assessment, learners who are not fluent in the language used in the course may feel anxious about sharing their views with a large group. This doesn't mean that participation in discussion forums should not be part of the assessment criteria; instead, setting up small-group forums and reassuring learners with limited language fluency may be a good solution.

#### *Accessibility*

While designing teaching and learning activities, course designers should consider learners with visual, auditory, or physical impairments. Many digital tools provide support in developing accessible activities. In many cases, it is a matter of adhering to best practices for digital accessibility and tapping into the built-in features of a digital application. We provide practical suggestions for making course materials accessible in the phase 4 'Implementation' section.
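One such best practice is ensuring every image in course content carries alternative text for screen-reader users. A small illustrative sketch using only the Python standard library (the sample HTML is invented; this is a spot check, not a full accessibility audit):

```python
# Illustrative sketch of a digital-accessibility spot check: scanning an
# HTML course page for images that lack alt text. Sample HTML is invented.

from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect <img> tags that have no (or empty) alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))

page = """
<h1>Week 1</h1>
<img src="diagram.png" alt="Kolb's four-stage learning cycle">
<img src="photo.jpg">
"""

checker = MissingAltChecker()
checker.feed(page)
print(checker.missing)  # images a screen-reader user cannot interpret
```

Checks of this kind complement, rather than replace, the built-in accessibility features of an LMS or authoring tool.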

#### *Learning space*

The learning space should be perceived as comfortable and safe by all learners. Ideally, the setting should allow learners to learn at their own pace and interact with others when and if they want, but this may not be possible in the case of highly structured courses with time-based expectations or deadlines. When we compared an entirely self-paced, self-study tutorial with a facilitated version of the same tutorial, we learnt that some learners prefer the lack of deadlines (that is, in the self-study version). In contrast, others appreciate getting more guidance and having a schedule for their learning journey (in the facilitated version) (Schaeffler 2020b).

There are significant differences in internet access in the Global South. While individuals in some regions have good internet access, those in rural areas may experience significant challenges that can hinder their participation in online capacity development opportunities. To ensure equitable access, the learning design must accommodate flexible learning pathways. For example, some learners may prefer to download the learning materials and study offline. An example of such an adaptation is our capacity development initiative in Sierra Leone, where we modified the contents of our self-study critical thinking tutorial so that they could be used for learning in WhatsApp groups while students were at home due to the Covid-19 pandemic (Schaeffler 2021). In fact, the pandemic has increased the demand for flexible learning spaces around the world (Bryant 2021; Egbo 2021; Pastore et al. 2021).

Learning design needs to ensure that any kind of harassment, be it of a sexual, racial, or other nature, is prevented or addressed without delay. INASP has a safeguarding policy that explicitly includes digital safeguarding; learners are introduced to this policy as soon as they create an account on our LMS. Some of this information is then repeated in the induction section of each online course. For peer-assessment activities, learners are given guidance on providing constructive feedback and not belittling fellow participants. In our MOOCs, guest facilitators keep an eye out for any inappropriate posts. In courses where we do not provide any moderation or facilitation, such as self-study tutorials, there is usually no discussion forum available to avoid any risk of inappropriate behaviour going unnoticed. Finally, learners should be aware of how they can contact the course moderator privately to report anything that has made them feel uncomfortable.

In summary, our advice would be to build in aspects of inclusivity, diversity, and gender responsiveness in the quality assurance process.

## **Recommendations**


## *Phase 4: Implementation*

The learning design phase concludes with a set of outputs, which may include documents, images, videos and internal notes. In the implementation phase, these outputs are converted into interactive learning materials and then put together as a coherent whole to be accessed by the prospective participants from an LMS (Learning Management System) or another digital application.


We have already discussed the importance of the right choice of tools in supporting successful interaction. It is advisable to prioritise tools that the learners are already familiar with or that are easy to learn to use. Defining activities that learners will engage in should precede the choice of tools to support the activities.

The choice of a learning platform will also be influenced by sustainability plans – who will own and deliver the course in the long run and what technology will best support it. Aspects of accessibility, usability, and familiarity play a significant role when selecting learning technologies with partners.


## **Authoring tools**

We strongly recommend using e-learning authoring tools to develop content for online learning. While PDFs are omnipresent on the internet and are the most common format for articles for which the 'version of record' is important, they don't work so well for instructional content. Learners who take a course may be motivated to learn about the topic at hand, but they may baulk at having to read long texts. In other words, the idea of learning can be motivating, but the process of learning can get tedious. How can text or multimedia content be presented in a form that engages the learners?

Many LMS applications have built-in authoring tools to present content. For example, at INASP we frequently use the Book and H5P authoring tools in our Moodle LMS to present content with text, images, videos, and interactive elements such as drag-and-drop exercises or multiple-choice questions embedded in the text.

It is also possible to use standalone authoring tools, the outputs of which can be presented on a website or brought into an LMS. At INASP we have used an open-source authoring tool called eXeLearning to develop the interactive content for our research writing MOOCs.

#### *Is an LMS needed to offer an online course?*

It is possible to run an excellent online course without using an LMS. A combination of email, digital tools such as Padlet, Ideaflip, or Slack, live video sessions, and offline work (e.g. the learner preparing something and submitting it by email to the teacher), all within a clear and reasonable timeline, can result in a positive learning experience. This kind of model can work between individual trainers and small groups of learners, but it is unlikely to be scalable. In our opinion, an LMS is essential to implement online learning at scale, or at least to allow for flexibility and growth, even if scaling up isn't a necessary criterion at the outset.

As in many other industries, the LMS market is dominated by a few organisations. Instructure Canvas, Blackboard Learn, D2L Brightspace, and Moodle are considered the Big Four in North America (IBL News 2019). Data on LMS market share on a global scale do not seem to be available, but the Moodle LMS is certainly common around the world: as of February 2022, there were 178,000 registered Moodle sites from over 200 countries.25

At INASP we opted for the Moodle LMS in 2011 when we started offering online courses, and it has served us well over the years. While we would recommend Moodle as an LMS, we would add that it's not so much about the software but how it is used. At INASP we always look for features in our Moodle LMS – or available through other tools that can be integrated with our LMS – that will best support the implementation of our learning design. For example, in 2015 we started using the Book tool in Moodle to present content in a standard e-learning format: text spread over multiple web pages with images and videos. We also wanted to include interactive elements such as interactive maps, drag-and-drop exercises or multiple-choice questions. This was not a feature provided by the Book tool, but we found an external tool called H5P to implement the interactive elements. We were able to create those elements on the H5P website and embed them in the 'Book' resources. (The H5P tool later became a Moodle plugin and then it was integrated into the core of Moodle.)

<sup>25</sup> https://stats.moodle.org/sites/

Organisations typically choose an LMS application and stick with it for the long run, as migrating to a different LMS can be a big project.

#### *More about the Moodle LMS*

The Moodle LMS has been maintained by the Moodle organisation in Australia (Moodle Pty Ltd) since 2002. The Moodle LMS – often simply called Moodle – is open source, which is one of its strengths. It has a large community of developers and users who gather on the moodle.org forums, help test the software, shape future developments, contribute code, and present at 'MoodleMoot' conferences. A new version of Moodle is usually released every six months, with long-term releases supported for three years.

Because of its long history and open-source nature, the Moodle LMS has an extensive set of features. While this is an advantage for users and organisations looking for an LMS with plenty of potential for adaptability and flexibility, the downside is that new users can feel daunted or confused. There is undoubtedly a learning curve to using Moodle, and it is not the most user-friendly of applications. However, Moodle version 4.0, released in April 2022, is a significant step forward in Moodle user experience.

Organisations often opt for Moodle because it appears to be free. It is open source, and there are no licensing or per-user costs. Still, to use Moodle well, two levels of strong IT skills are required: (1) installing and maintaining a Moodle site on a web server, which requires expertise in system administration (also called 'server-side' or 'back-end' administration), and (2) carrying out site administration tasks, which can be numerous and require in-depth knowledge of Moodle settings (also called 'front-end administration'). In small-scale setups, a single individual may be able to do both back-end and front-end administration, but often these roles are played by different people or teams, as the skills needed for each role become highly specialised as the site grows in size or complexity. Naturally, there is a cost associated with compensating the people who fill these roles, as well as the infrastructure cost of renting or buying a server.

The Moodle Academy26 is an excellent place to learn how to use Moodle LMS.

#### *Is the Moodle LMS suitable for MOOCs?*

The Moodle LMS was launched in 2002, well before MOOCs became popular. While Moodle is often used at educational institutions to complement face-to-face education, it can certainly be used to host MOOCs.27 That said, we should point out that many companies developing LMS applications claim that their LMS is suitable for MOOCs. Whether these claims hold true in practice depends on the pedagogical and technological capabilities of the organisation using the LMS.

#### *What about Google Classroom?*

Is Google Classroom an LMS? Some think it is, and some don't. In February 2021, the official Google blog shared an observation that many in education technology might agree with: 'As more teachers use Classroom as their 'hub' of learning during the pandemic, many schools are

<sup>26</sup> https://moodle.academy

<sup>27</sup> https://moodle.com/moodle-for-moocs/

treating it as their learning management system (LMS) … While we didn't set out to create an LMS, Classroom is committed to meeting the evolving needs of schools' (Lazare 2021).

Google Classroom differs from other LMS applications in its approach. In our understanding, it is geared towards teachers and students based at the same institution (especially if the institution uses the Google suite of products) and towards complementing face-to-face learning. Other LMS applications, such as Moodle, are not based on such assumptions: they can be installed and used by any organisation to reach learners worldwide. That said, at INASP we have extensive experience only with the Moodle LMS.

#### *Examples of how INASP has used its LMS*

Here we present some examples of how we have used our LMS:


## **Making course materials accessible**

The Moodle LMS has achieved a high rating for accessibility: in November 2021 Moodle version 3.11 received WCAG 2.1 Level AA accreditation.28 However, the accessibility of the materials in a particular course depends on the way the learning design is implemented. For example, at INASP we do the following to make course materials accessible:


<sup>28</sup> https://docs.moodle.org/en/VPAT

The Moodle user documentation contains specific Moodle advice on accessibility.29

Beyond Moodle-level implementation, a good source of advice is the Web Content Accessibility Guidelines (WCAG) which cover a wide range of recommendations (Caldwell et al. 2008). Helpful information is also provided on the European Union page about Web Accessibility,30 including the EU Web Accessibility Directive. It is easy to get a bit lost in the abundance of information; therefore, we want to share a few things that we have found helpful when designing for accessibility:

First, several interdependent factors influence a learner's ability to participate in an online course, such as health (mental and physical), socio-economic situation, and physical location. Designing for accessibility means making web content more usable for learners in general, not just for a defined group of learners.

Second, following the WCAG principles, designing for accessibility means enabling the learner to:
• *perceive* the content (e.g. text alternatives for images, captions for videos);
• *operate* the interface (e.g. everything is usable with a keyboard, and learners have enough time to complete tasks);
• *understand* the information (e.g. readable text, predictable navigation); and
• access the content *robustly* with a wide range of user agents, including assistive technologies.


Concrete measures for accessibility during the implementation of an online course include providing text alternatives for images and ensuring consistent navigation.
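Simple checks like these can be partly automated. As an illustration only – the page snippet and file names below are invented – this Python sketch uses the standard library to flag images that lack a text alternative:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Note: decorative images legitimately use an empty alt="",
            # so a crude check like this still needs human review.
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "(no src)"))

# Invented fragment of course content
page = """
<p>Week 1 overview</p>
<img src="diagram.png" alt="Flow chart of the peer-review process">
<img src="banner.jpg">
"""

checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # images that need a text alternative
```

A crude script like this only catches one class of problem; dedicated accessibility scanners are far more thorough.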

Third, accessibility scanners are useful to determine the extent to which web content is accessible. At INASP, we use the WP ADA Compliance Plugin31 (the basic version is free) and Wave.32

Fourth, the learning design process should be participatory so potential learners can share feedback on their requirements and ideas for improvement.

Fifth, it is better to start with simple steps for accessibility and improve the design over time to avoid getting overwhelmed by the abundance of guidelines at the outset. We admit that we are also at the beginning of our journey towards greater accessibility and still have much to learn and do.

#### **Recommendations**


<sup>29</sup> https://docs.moodle.org/dev/Accessibility

<sup>30</sup> https://digital-strategy.ec.europa.eu/en/policies/web-accessibility

<sup>31</sup> https://www.alumnionlineservices.com/accessibility/scanner/

<sup>32</sup> https://wave.webaim.org/


## *Phase 5: Piloting and Review*

Once an online capacity development initiative is designed and developed, it should be piloted to ensure that the intended learning experience matches reality. In this section, we share our experience in piloting technology-enhanced capacity development interventions, collecting and analysing feedback, and revising the course materials.

## **Initial thoughts**

INASP usually relies on volunteers when piloting its online courses. We ask the pilot participants for honest feedback on the design, including the activities, content, and overall learning experience. We stress that the course is in a beta version and that their feedback is crucial for assessing its completeness and relevance and the appropriateness of its pedagogical and technological choices. The volunteers who attend a pilot course are not fully representative of the target audience, even though we try to mirror the diversity of learners in the pilot group. Pilot volunteers for an online capacity development intervention may have higher digital literacy skills and be more curious and willing to try out new methods and experiences than other members of the target audience. Another source of bias is the tendency of research participants to 'satisfy the perceived needs of the researchers' (McCambridge et al. 2012), which means pilot participants may report their learning experience more positively than those who later enrol in the course.

Further, learner needs and the context can change slightly from one rollout of the course to the next. For example, INASP offers its research writing courses as MOOCs and also invites specific groups of researchers to collaborate in private spaces within the MOOC. Dedicated facilitators provide additional support for these groups, such as more focused discussion in the forums and personalised feedback on assignments. The characteristics of the group members can differ from the average MOOC participant. All this affects the learner experience and can lead to different feedback on the course compared to a pilot run under different conditions. External context changes can also occur: the Covid-19 pandemic and the increased demand for online learning during lockdowns is an example of a global context change. At the start of the pandemic, we added further feedback questions to our courses and interviewed our partners to find out how Covid-19 influenced the learning situation and needs of our target audience.

Finally, when under pressure to develop courses on short timelines, learning designers may skip over digital principles such as 'Design for Scale' and 'Build for Sustainability'. Designing for scale means thinking beyond the pilot from the beginning 'to overcome traditional obstacles which may be financial, technical or managerial', as Khaled Ben Driss explains in his blog (Driss 2021). He notes that 'the history of digital is littered with digital corpses, solutions abandoned at the end of the action of the funder, only because the operating and maintenance costs are disproportionate compared to the added value or not adapted to the context'. This situation can be avoided if designers consider the sustainability aspects of a course from the very beginning of a digital project, that is, from the scoping phase.

#### **Running a pilot course**

We consider the following aspects before running a pilot: (a) the sample of the pilot participants, (b) course aspects to evaluate, (c) learning questions to answer, and (d) data collection methods and instruments. We also advise considering what stakeholders, apart from the target audience, could provide helpful feedback on the course. For example, course facilitators could also provide valuable feedback on course strengths and weaknesses.

#### *Example 1: Piloting an online course in research writing*

In 2011, INASP sought to develop an e-learning component in the AuthorAID project to build on the success of AuthorAID workshops in research writing. These workshops had been in-person events in a few countries in Africa and Asia, and INASP recognised a widespread need among researchers across the Global South to improve their research writing skills. However, we were unsure whether the e-learning mode would work in our context. That year, some members of the AuthorAID team were at the University of Rwanda to run a workshop. They used this opportunity to discuss the possibility of involving university researchers in piloting an online course in research writing. The buy-in from faculty at the University of Rwanda was a key factor in motivating us to develop and pilot our first online course.

Once the course was ready, our contacts in Rwanda spread the word, which resulted in 28 Rwandan academics joining the course. The participants' positive experience in the course and the high completion rate (Murugesan 2012) encouraged us to adopt online learning as one of our capacity development approaches.

#### *Example 2: Piloting an online course in monitoring and evaluation of e-resource use*

For many years, INASP provided advisory and training support to library consortia in Africa and Asia, including Ghana, Zimbabwe, Uganda, Sri Lanka, Kenya, Malawi, Tanzania, and Ethiopia. Monitoring and evaluation of e-resource use (MEERU) was an important training topic for librarians. The goal of this training was to enable librarians to collect data about the use of electronic resources provided by their library and to analyse these data to meet the needs of their library users.

In 2015, following several face-to-face MEERU workshops and seeing interest from library consortia in online training, INASP decided to develop an online course in MEERU, which would draw on the content and experience from workshops and tap into the advantages of online learning.

We selected five of the consortia we worked with to pilot the course. Twenty-three librarians from five developing countries joined the course. Following the pilot, we conducted one-to-one video and audio interviews with some course participants, both those who had completed the course and those who hadn't, allowing us to learn from different experiences (Wild et al. 2016). The participants' feedback helped us improve the learning materials before the course was offered to wider audiences.

#### *Example 3: Piloting an online course in learning design*

From 2018 to 2021, INASP was the lead partner in the project Transforming Employability for Social Change in East Africa (TESCEA). We worked with institutional partners in Tanzania, Uganda, and Kenya to create an improved learning experience for students in higher education. TESCEA was a capacity development project with a strong training-of-trainers component. One of the main goals of TESCEA was to develop a scalable model to support universities across East Africa to produce graduates with critical thinking and problem-solving skills. From 2018 to early 2020, we held several face-to-face training workshops with academic teaching staff. An online learning component was planned for the project's final year in 2021, but because of the Covid-19 pandemic, we were compelled to rework our plans and introduce online training and mentoring much earlier.

For the pilots of previous INASP courses, we usually spent six to twelve months carrying out the first five phases of a TECD project (scoping, planning, learning design, implementation, piloting and review). But in TESCEA, we had to do this in a much shorter timeframe because (a) the Covid-19 pandemic put a stop to training-related travel, and (b) the project had firm deadlines to meet.

Thankfully, we encountered this challenge in a mature stage of our technology-enhanced capacity development work after having designed, piloted and maintained many courses in the preceding years. A core team collaborated intensively for about three months to scope out, plan, design, and implement an online course.

We piloted the course twice. The first pilot was offered to teaching staff from the partner universities who had already gone through some of the TESCEA capacity development interventions and could compare face-to-face and online approaches, giving us honest feedback on what worked in each mode and what improvements we could make. The second pilot was offered to teaching staff entirely new to the TESCEA methodology so that we could assess the relevance and clarity of the materials and activities for participants with no previous knowledge of the topic. Feedback from these pilots informed the design of the final version of the course.

## **Methods to collect data**

Part 1 of this book explains INASP's approach to monitoring, evaluation, and learning (MEL). What does it mean in practice for online courses?

First, we often use a mixed-methods approach to collect data from the pilots of online courses. We complement pre- and post-course surveys with semi-structured interviews to delve deeper into selected aspects of courses. It is often easier to draw out reflections from respondents in a one-to-one conversation; therefore, we invite a sample of respondents for a 30-minute audio or video interview.
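As a small illustration of the quantitative side of this approach, the following Python sketch compares matched pre- and post-course self-ratings. The participant IDs and ratings are invented, and real surveys would of course carry many more questions:

```python
from statistics import mean

# Hypothetical self-rated confidence (1-5 Likert scale) from matched
# pre- and post-course survey responses; all numbers are invented.
pre  = {"p01": 2, "p02": 3, "p03": 2, "p04": 4}
post = {"p01": 4, "p02": 4, "p03": 3, "p04": 5}

# Only compare participants who answered both surveys
matched = sorted(set(pre) & set(post))
shift = mean(post[p] - pre[p] for p in matched)
print(f"Average confidence shift across {len(matched)} matched respondents: {shift:+.2f}")
```

A positive shift on its own proves little – this is exactly where the interviews help interpret what the numbers mean.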

The data collected are analysed and the results are interpreted to answer the learning questions defined for the pilot, followed by a list of recommendations for course improvements. We often involve critical friends and other relevant stakeholders to help interpret the data and decide on course modifications.

#### *Feedback surveys*

Pre- and post-course questionnaires should be thought through diligently and kept as short as possible. If a survey is too long, participants are likely to rush through it rather than give considered answers. At INASP, we have developed a standard set of questions to include in the pre- and post-course surveys of our online courses (we share some of these questions in 'Resources'), which we combine with course-specific questions.

We use Moodle's Questionnaire tool to develop and administer the surveys. This tool allows multiple-choice, dropdown, free text, and matrix-type questions. The completion of the feedback survey (or post-course survey) is part of the completion criteria for any INASP course – participants should complete the feedback survey in addition to meeting the rest of the completion criteria if they want to receive a completion certificate.

Collecting feedback data only from course completers doesn't give a complete or honest picture. Therefore, we also reach out to participants who started courses but did not complete them. One major point of learning from a 2021 survey was that time constraints are the most common reason for participants not completing an online course (see Assumption 5 in Part 2 of the book).

#### *Interviews*

Interviews are an excellent way of delving deeper into learners' experiences of some aspects of a TECD intervention. We use semi-structured interviews with small numbers of participants to complement the survey data. We usually analyse the data from pre- and post-course surveys before conducting interviews. This helps us to identify areas where we need to deepen our understanding of participant experiences and formulate the right questions. We also try to make sure that we have an opportunity to speak to those participants who haven't completed a course or another learning initiative. This is an excellent way to understand barriers to engagement better.

#### *Learning analytics*

'Learning analytics' is a term frequently used in discussions about educational technology and online courses. At the First International Conference on Learning Analytics and Knowledge in 2011, learning analytics was defined as 'the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs' (Siemens and Long 2011).

The Moodle LMS platform comes with a number of reports and settings that provide learning analytics. At INASP, we implement learning analytics for all the courses we offer; below are a few examples of the kind of data we collect:

• We use the 'Activity Completion' feature in Moodle to set completion criteria for the resources and activities in the course. For example, we may set the achievement of a passing score on a multiple-choice quiz as the completion criterion for that quiz, and make an entry in a wiki-like activity as the completion criterion for that activity. At the end of the course, we download the 'Activity Completion Report' to identify learners who completed the course by meeting the course completion criteria stated at the outset.


Moodle plugins are available to analyse or visualise data, but we tend to download learning data as spreadsheets and analyse the data using Excel.
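For those who prefer scripting over Excel, a spreadsheet export can equally be processed with a few lines of code. The following Python sketch is illustrative only – the report columns and learner names are invented – and computes a completion rate from a miniature Activity Completion export:

```python
import csv
import io

# A tiny, invented extract of a Moodle Activity Completion report
# exported as CSV ("Yes"/"No" per activity per learner).
report = io.StringIO("""\
Learner,Quiz 1,Wiki entry,Feedback survey
Amina,Yes,Yes,Yes
Ben,Yes,No,No
Chen,Yes,Yes,Yes
""")

rows = list(csv.DictReader(report))
activities = [field for field in rows[0] if field != "Learner"]

# A learner completes the course only if every activity is marked "Yes"
completers = [r["Learner"] for r in rows
              if all(r[a] == "Yes" for a in activities)]
rate = len(completers) / len(rows)
print(f"Completion rate: {rate:.0%}")
```

The same script can be pointed at a real export with `open("report.csv")` in place of the inline string; the column names would depend on the activities in the course.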

The collection and analysis of learning data comprise only one part of learning analytics, and it can be thought of as the 'monitoring' phase. It should be followed by an 'evaluation' phase involving the interpretation of data: what worked well in the course, what can be improved, which improvements are critical, etc.

A key point to note here is that learning analytics are not just for after-the-fact analysis of a course. Instead, learning analytics should be employed throughout the duration of a course. If a course has a weekly schedule, the progress made by learners should ideally be checked at the end of every week. If, for example, some learners have not made a start on the course by the end of the first week, the course moderator should ideally take some action: perhaps by sending a gentle reminder and encouraging words to the non-starters.
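A weekly check like the one described above can be sketched in a few lines. This is an illustration, not built-in Moodle functionality: the names and dates are invented, and in practice the 'last accessed' information would come from the LMS logs or reports:

```python
from datetime import date

# Invented extract of course access data at the end of week 1.
# None means the learner has not yet entered the course.
last_access = {
    "Amina": date(2022, 5, 3),
    "Ben":   None,
    "Chen":  None,
    "Dana":  date(2022, 5, 6),
}

# Learners who have not started yet get a gentle, encouraging reminder
non_starters = sorted(name for name, seen in last_access.items()
                      if seen is None)
print("Week-1 reminder needed for:", non_starters)
```

The point is not the code but the routine: someone with a moderation role looks at the data at an agreed interval and acts on it while the course is still running.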

## **Revising learning materials after the pilot**

Piloting a course is a three-part process that comprises:


In practice, there may be a tendency to fast-track the third phase if the pilot appears successful based on a reasonably good completion rate. However, it is good practice to allow time for the third phase and make revisions to the course before offering it to a wider audience.

## **Recommendations**


## *Phase 6: Delivery and Sustainability*

This section discusses aspects to consider when re-running a course several times over a longer period.

## **Timetabling**

As discussed in Part 2 (Assumption 5), our findings align with other studies: timetabling is challenging for both synchronous and asynchronous learning initiatives. Time issues can affect participation and, consequently, the learning outcomes. When delivering a learning initiative, we must be aware of different time zones and academic year schedules and acknowledge that participants have other commitments. Learners may need support with their time management.

We usually finalise the dates for our MOOCs at the start of each year and announce them publicly at least one month before a MOOC is scheduled to start. Our MOOCs reach a wide audience around the world, so we do not adjust the dates for a group of learners in any particular institution or country. However, when we offer a course for a more specialised or local audience (such as the TESCEA learning design courses), we make sure the course dates are convenient for the learners.

#### *Time zones*

*Tip:* Record synchronous events so that participants who aren't able to join because of schedule conflicts can at least watch the recording at their convenience.

When running global learning initiatives, participants may live in different time zones, so it may not be possible to find a convenient time for everybody when delivering synchronous sessions. Synchronous sessions may need to be repeated a few times, especially if they are critical to the learning experience, so that most participants get the chance to join.
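When choosing a slot, it helps to tabulate the local times of a proposed session. A minimal Python sketch, in which the session date and participant time zones are invented examples:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# A hypothetical live session scheduled for 12:00 UTC
session = datetime(2022, 6, 15, 12, 0, tzinfo=ZoneInfo("UTC"))

# Illustrative participant locations
for tz in ["America/Lima", "Africa/Nairobi", "Asia/Colombo", "Asia/Manila"]:
    local = session.astimezone(ZoneInfo(tz))
    print(f"{tz:20} {local:%H:%M (%a)}")
```

A 12:00 UTC session lands in the early morning in Lima but in the evening in Manila, which is exactly the kind of spread that may justify repeating the session. Online 'world clock meeting planner' tools do the same job without code.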

#### *Academic year schedules*

*Tip:* If possible, offer a course multiple times during an academic year (instead of just once) so that prospective learners can join the course offering that works for them.

Academic year schedules can strongly influence the preparedness of potential learners to participate in a capacity development activity. For example, if such an activity coincides with exam times, researchers with teaching duties, or those who themselves need to sit exams, might not have the time or enthusiasm to join it. For global technology-enhanced capacity development interventions, we face the same dilemma as with time zones. It is good practice to explore the schedules of the participants when timetabling the intervention. Flexible deadlines can also help learners to submit their work before or after busy periods.

#### *Other commitments*

When thinking of participants' other commitments, there are pros and cons of synchronous versus asynchronous events. Synchronous events have the advantage that the learners' time is more or less protected. When joining such an event, most learners will have kept this time free of other commitments. However, synchronous events can raise equity questions since many learners cannot join such an event during work hours and people with caring or domestic duties can find it especially difficult to carve out time to join an event online.

*Tip:* When offering a learning initiative to members of staff at an organisation, explain the benefits of the initiative to the senior managers so that the members of staff are supported in their time management.

On the other hand, asynchronous capacity development interventions, while giving the learner more freedom over when to participate, can also be challenging as they require self-discipline. It is not easy for everybody to prioritise time for learning and participation when many other duties compete for their time.

#### *Time management*

*Tip:* Send automated reminders to learners if they have not completed a task within a recommended timeframe. Learning management systems like Moodle come with features to send such reminders.

Some participants may need support with time management during a technology-enhanced capacity development initiative. The kind of support they need depends on the mode of delivery. For synchronous events, consider sending out a meeting invite by email so that the event is automatically added to the learners' calendars. In asynchronous learning initiatives, learners have different preferences regarding how much freedom they want in terms of time management. Feedback on one of INASP's self-paced tutorials showed that some participants appreciate a self-study tutorial without externally imposed deadlines (Schaeffler 2020b). However, other learners may need such deadlines to feel sufficiently motivated to complete a course. Participants in facilitated courses might receive additional time management support from a course moderator who sends reminders and shares tips on how to progress in the course.

#### *Facilitators' time*

Unless a course is offered in a self-study format, support may be provided to learners in varying degrees, from basic technical support to intensive facilitation. Whoever provides support to learners should be clear about the time commitment and availability their role requires. For example, if facilitators are expected to respond to learners' questions on the course forums, response time matters even if the facilitation work itself isn't very time-consuming. In general, a course facilitator with forum responsibility should be willing to check the forums daily or according to a schedule agreed with the course leader.

## **Announcing or promoting the course**

In most cases, the prospective participants need to receive instructions on how to join the course. These should be sent out at least four weeks before the start date.

When publicising a course more widely (e.g. a MOOC), it is worth considering where most of the 'traffic' goes: a website, blog, mailing list, Facebook page, Twitter account, instant messaging group, physical notice board, etc. It is likely a combination of some of these. It might be a good idea to make the full course announcement available via a public link, for example, on a website or a Google Docs document, and write up summaries of the course for different media with links to the full announcement.

The course announcement should mention the following items:


## **Inducting participants and facilitators**

Once participants enrol on the course, they need some orientation to its structure, layout, and navigation. This can be done through an induction video or resource. The induction resource should reiterate some of the information from the course announcement: learning outcomes, learning time per week, schedule of live (synchronous) sessions, major deadlines, how to download resources to learn offline, who to turn to for what kind of support (technical vs course content), and completion criteria.

We advise providing a social forum for participants and facilitators to introduce themselves and have a space to interact about things that are not necessarily related to any of the course topics. This should be accompanied by clear guidelines on the appropriate use of this forum.

Facilitators on the course should receive a separate induction resource, including facilitation guidelines. We provide an example of facilitation guidelines in 'Resources'.

## **Moderation, facilitation and technical support**

In Part 2, we discuss the importance of social interaction for the success of online learning. Much of this interaction comes from or is catalysed by course moderation and facilitation.

Before getting to the practical aspects of online moderation and facilitation, we discuss a few models and frameworks that have inspired our work.

#### *Community of inquiry – D. Randy Garrison, Terry Anderson and Walter Archer (1999)*

In the Community of Inquiry framework, first proposed in 1999 by Garrison et al. and later expanded and reviewed (Garrison et al. 2010; Garrison and Akyol 2013; Garrison 2016), the learners' experience is shaped by three kinds of 'presence': cognitive, social, and teacher presence.

'Cognitive presence' means the learner's ability to construct knowledge and negotiate meaning by engaging with the learning resources or content. This can be enabled by designing resources at the right level of challenge – not too difficult, not too easy – and encouraging active participation rather than spoon-feeding information. It is also good practice to factor in any challenges learners might encounter when engaging with content. For example, suppose learners with English as a second language participate in a course delivered in English. What is the level of language they should have to be successful on the course, and is the content at this level?

'Social presence' means the learner connects with fellow learners and can engage in collaborative learning activities without face-to-face interaction. Discussion forums that support asynchronous, text-based interaction have been around since the early 1980s. In an online course, such forums offer valuable opportunities for social interaction, as long as the dialogue is encouraged and facilitated. The lack of such provisions in online courses is part of the reason for a widespread assumption that TEL does not support participants' interactions well (see Assumption 4 in Part 2).

**Figure 16: Community of Inquiry framework showing the three presences. Reproduced from Peacock and Cowan (2016), published CC-BY.**

Finally, 'teacher presence' refers to the role played by the teacher or facilitator, who provides instruction, facilitates activities, and offers support to the learners. While a teacher giving a lecture is one form of teacher presence, holistic teacher presence goes well beyond that and includes the teacher's or facilitator's role in guiding learners in learning activities, providing support or answering questions, and validating learners' viewpoints. We have used the Community of Inquiry Framework to guide us in developing the research writing MOOC. This framework helps create meaningful, interactive, collaborative learning experiences even when accommodating a large group of online learners in the same learning space.

#### *Online facilitation roles – Zane L. Berge (1995)*

We have often found it helpful to think of the four types of roles for online learner support identified by Berge (1995): pedagogical, social, managerial, and technical.


We often reserve managerial and technical roles for the course moderator, whereas course facilitators focus on social and pedagogical roles. At least, this tends to be the case in our bigger online courses, including MOOCs. Some of our courses are moderated but not facilitated (that is, there are no subject-matter experts for the pedagogical role). In this case, the course moderator is also responsible for the social aspect of the course.

#### *Capabilities of online facilitators – Tony Carr, Shaheeda Jaffer and Jeanne Smuts (2009)*

Carr et al. (2009) list five capabilities of online facilitators:


Each capability is described in three stages (or levels of performance): beginner, intermediate, and expert. For example, a facilitator at the expert level in the 'social skills' capability 'creates a welcoming and enabling environment with ease and builds trust easily amongst participants' (Carr et al. 2009, p. 87).

We have found this model of capabilities and stages useful to develop our own guidelines for facilitators and to recruit them for our courses.

## **Moderation – handling the managerial and technical aspects of an online course**

Course moderators are responsible for the smooth delivery of an online course. They are accountable for all stages of the course delivery: enrolment, onboarding, delivery and wrap-up. Below we describe the key tasks under each stage.

#### *Enrolment*

Learners can enrol in an online course in many ways. In a college or higher education environment, a group of learners may be enrolled on a course as part of a programme of study. In professional development, individual learners may decide to take a course and follow enrolment instructions. In Gilly Salmon's five-stage model of online learning (Salmon 2011), the first stage is 'access and motivation', in which the online learning environment is set up for easy access and the learners are welcomed into a course. We have found it useful to map our enrolment process to this stage.

#### *Onboarding*

There is often a broad variance in the digital skill levels of participants enrolling on an online course. Spending time to ensure they become familiar with the learning platform and other tools is an important part of onboarding. In addition to providing an induction resource (as discussed in the 'Inducting participants and facilitators' section above), a synchronous video session can be an effective approach to onboarding. The course moderator and facilitator(s) should ideally appear on camera so learners can put faces to their names. An onboarding session could include trust-building exercises and allow time for questions and answers about the course. Live sessions should be recorded and made available to those unable to attend.

#### *Course delivery*

One of the key roles of a moderator is to remind the participants of the course schedule and deadlines. For example, if the course has a weekly structure, the moderator can write friendly announcements at the start of each week with an overview of the topics, activities, and deadlines for that week.

If the course is facilitated, the moderator should liaise with facilitators to ensure they can access the learning space and understand their role. In all INASP courses, we also create a private facilitator forum. This creates a team space to discuss any challenges or changes in course delivery (e.g. extensions to deadlines). The course moderator will also set up synchronous sessions and support the technical side of delivering them (e.g. assigning participants to breakout rooms) so the facilitators can focus on discussing the course contents.

Finally, the moderator needs to provide technical support to the participants for the entire duration of the course. We recommend creating a dedicated forum for technical questions.

Periodically, the course moderator should follow up with those lagging behind via personalised emails.

#### *Wrap up*

LMS applications such as Moodle provide data that can be useful to analyse course participation and determine which learners have met the completion criteria. At INASP, we also combine the participation data with data from the pre-course survey to look at correlations, for example, between gender and course completion.
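The join between LMS participation data and pre-course survey data can be done in a spreadsheet or with a short script. The sketch below is a minimal illustration of the idea, assuming CSV-style exports keyed by email address; the column names (`email`, `completed`, `gender`) are hypothetical and would need to match your own LMS and survey exports.

```python
from collections import Counter

def completion_rate_by_gender(participation_rows, survey_rows):
    """Join LMS participation data with pre-course survey data on email
    address and compute the completion rate for each gender group.
    Participants missing from the survey are grouped under 'unknown'."""
    gender_by_email = {r["email"]: r["gender"] for r in survey_rows}
    totals, completed = Counter(), Counter()
    for row in participation_rows:
        gender = gender_by_email.get(row["email"], "unknown")
        totals[gender] += 1
        if row["completed"] == "yes":
            completed[gender] += 1
    return {g: completed[g] / totals[g] for g in totals}

# Example with inline data (in practice, read these from CSV exports):
participation = [
    {"email": "a@x.org", "completed": "yes"},
    {"email": "b@x.org", "completed": "no"},
    {"email": "c@x.org", "completed": "yes"},
]
survey = [
    {"email": "a@x.org", "gender": "female"},
    {"email": "b@x.org", "gender": "male"},
    {"email": "c@x.org", "gender": "female"},
]
print(completion_rate_by_gender(participation, survey))
# {'female': 1.0, 'male': 0.0}
```

The same join-then-aggregate pattern extends to other survey variables, such as country or prior online learning experience.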

Once the course ends, course completers receive completion certificates. We use the Custom Certificate plugin in the Moodle LMS to design the certificate template. Learners who have met the completion criteria can access and download their certificates from the LMS. A unique code is automatically added to each certificate so that the certificate's authenticity can be verified online.
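The Custom Certificate plugin generates and verifies these codes for you. Purely as an illustration of how such a verification scheme can work (this is not the plugin's actual method, and the secret and identifiers below are hypothetical), a code can be derived from the learner and course identifiers so that it can be recomputed server-side:

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-server-side-secret"  # hypothetical secret

def certificate_code(learner_id: str, course_id: str) -> str:
    """Derive a short, hard-to-forge verification code for a certificate."""
    msg = f"{learner_id}:{course_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:10].upper()

def verify(code: str, learner_id: str, course_id: str) -> bool:
    """Check a submitted code by recomputing it (constant-time compare)."""
    return hmac.compare_digest(code, certificate_code(learner_id, course_id))

code = certificate_code("learner42", "research-writing-2021")
assert verify(code, "learner42", "research-writing-2021")
assert not verify(code, "learner42", "another-course")
```

Because the code is an HMAC, it cannot be guessed without the server-side secret, yet verification needs no database lookup.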

Facilitators of the course are awarded certificates of acknowledgment in recognition of their contributions.

## **The art of facilitation**

We believe that good facilitation – especially in the pedagogical and social aspects – is an art and requires commitment and practice. A good online facilitator must be deeply connected to the online course they teach and master the art of building and deepening a rapport with and between the participants.

#### *Guest facilitation in INASP MOOCs*

For a small online course, for example, with 20 to 30 learners, providing facilitation is not difficult. However, in a large online course with hundreds or even thousands of learners, it can be challenging to provide adequate levels of facilitation. Facilitation is not just about making announcements or occasionally responding to queries from learners. Referring to the Community of Inquiry Framework, can adequate 'teacher presence' be provided in a MOOC?

INASP's research writing MOOCs have been offered regularly since 2015 (usually two to three times a year), attracting a few thousand participants every time. The topic of research writing lends itself to much discussion, for this is not simply a 'body of knowledge' to be transmitted or acquired; instead, it involves making sense of a constantly changing landscape, particularly in the areas of accessing research literature and identifying target journals. To address learners' need for interaction with peers and facilitators, we have developed a guest facilitation model: our MOOCs feature advice from experts who are drawn from an international team of over 50 volunteers in the AuthorAID network. We usually have 15 to 25 guest facilitators in each MOOC.

Guest facilitators play a crucial role in our online courses, providing expert advice and moderating discussions to engage participants. Many of our guest facilitators have worked across several courses, demonstrating a high commitment to strengthening the capacity of researchers, librarians and academic teaching staff in the Global South, whilst acknowledging the opportunity for learning and skills development that facilitation affords. This strong teacher presence differentiates our MOOCs from others, providing greater scope for social interaction and, we believe, contributing to completion rates significantly higher than is typical for MOOCs (Murugesan et al. 2017).

Below, we share tips on building rapport with the course participants from two of our guest facilitators:

• 'One thing [as an online facilitator] is to be friendly and informal. And also, not to show that you're the one who knows everything. And to learn from the course participants as well. If you have learned something from them, it's good to say, "OK, I have learned from you," rather than just trying to give knowledge.' (Professor Dilshani Dissanayake, Faculty of Medicine, University of Colombo)

• 'When you are addressing someone, make sure that you are addressing them by their name. That creates a kind of a connection, "Someone who knows me," you know?' (Dr Ismael Kimirei, Director General, Tanzania Fisheries Research Institute)

Dr Kimirei has also shared how he fits online facilitation into his daily schedule and what kind of forum posts he focuses on:

• 'In most cases [I do online facilitation] normally after my work hours, from 4 in the evening. But sometimes you are a little bit bored at work and I chip in sometimes during that period. I also try to respond to people who have received no response [from other facilitators].'

Course participants highly appreciate the guest facilitation aspect of our MOOCs. After the 8-week MOOC on research and proposal writing in 2021, participants were asked to rate, on a scale of 1 to 5, the usefulness of different aspects of the MOOC. The statement 'Reading the facilitators' feedback/responses in the discussion forums' received an average usefulness rating of 4 out of 5 from 1453 respondents.

#### *Recruiting facilitators*

A helpful framework for recruiting facilitators is the 'capabilities of online facilitators' by Carr et al. (2009, p. 87). This framework is mostly about how the facilitator should create and maintain an effective learning environment; it is less about subject-area expertise. Of course, subject knowledge is essential if the facilitator is expected to answer questions about the content, but a facilitator's role is much more than that. In fact, even when facilitating knowledge construction, an expert facilitator 'stimulates questioning, provides generative feedback to participants, explores ideas by stimulating debate and knows when to be silent' (Carr et al. 2009, p. 87).

When recruiting or appointing facilitators for a course, the following factors should be considered:


Facilitators will need clear guidance on facilitating a course. Over the years, we have developed and refined guidelines for facilitators in many of our courses. Some points are common in the guidelines for all our courses, whereas others are specific to the course at hand. The 'Resources' section presents facilitation guidelines that can be reused and adapted.

#### *Synchronous or asynchronous facilitation?*

INASP's online courses are developed with a sensitivity for internet bandwidth, as many of our learners live in countries where internet access can be expensive. Communication in our courses is primarily asynchronous and in written form. This format often invites more thought-through contributions that accumulate over time (usually a few days) and can be revisited by the learners. We complement this type of interaction with live 'drop-in clinics' for Q&As, peer-to-peer support, and facilitator feedback. Live video sessions are recorded and made available to those who did not or could not attend. We have found that regular live sessions help participants maintain learning momentum and encourage those lagging behind to catch up. A combination of synchronous and asynchronous facilitation is most beneficial, considering each has its strengths. A learning designer can help decide which format is most appropriate in which situation.

## **Maintaining the course and planning future rollouts**

This section focuses on maintaining an online course on a learning management system.

#### *Longevity of concluded courses*

At INASP, we tell course participants that they can access the course materials in read-only mode for at least one month after the course ends, and we give instructions on how to save the course materials. In reality, our course participants have had perpetual access to concluded courses: a participant who took an online course with us a few years ago can access those course materials today. While long-term access to completed courses is beneficial, it may not be possible due to technological or cost constraints. When using an LMS application such as Moodle, whether old courses can be retained depends on the server specifications and the size of a typical course, among other factors. The server administrator of a Moodle site can advise on the costs involved.

#### *Setting up new instances of courses*

If learners lose access to a course after it has concluded, it is possible to reset the course to offer the same learning materials to a new cohort. The Moodle LMS offers a straightforward way to reset a course, essentially removing users' data and enrolments.

On the other hand, if a concluded course should remain available to the learners, it is possible to set up a new course instance to offer the course to the next cohort. We do this at INASP using a course copy or course import feature on the Moodle platform.

After setting up a new course instance, the course team should decide whether any updates or changes are needed in the course materials. Editing can be done in the new instance of the course.

We recommend the following steps when setting up a new instance:


Once the new instance is set up, the delivery process can begin: timetabling, announcing the course, and so on, as discussed in the preceding pages.

#### *Checking external links in the course content*

If an online course has links to external websites or embedded items (such as videos from YouTube) as part of the content, it is important to watch out for 'link rot', which is when hyperlinks cease to lead to their intended web page or file.

We recommend checking external links in the course content at least once a year and, optionally, providing a way for course participants to report broken links. Links can be checked manually by going through the course content and clicking every link that points to an external resource. However, an automated checker such as the W3C Link Checker from the World Wide Web Consortium is a more efficient way to check links.
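For course teams comfortable with scripting, a basic checker can also be sketched with Python's standard library. The following is a minimal illustration (the class and function names are our own, and a production checker would need rate limiting and retries): it extracts external links from exported course HTML and probes each one with a HEAD request.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import URLError

class LinkExtractor(HTMLParser):
    """Collect the href of every external <a> tag in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def check_link(url: str, timeout: int = 10) -> bool:
    """Return True if the URL responds with a non-error HTTP status."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status < 400
    except (URLError, ValueError):
        return False

# Extract external links from a snippet of exported course HTML:
parser = LinkExtractor()
parser.feed('<p>See <a href="https://example.org/guide">the guide</a>.</p>')
print(parser.links)  # ['https://example.org/guide']
```

Each extracted URL can then be passed to `check_link`, and any that return `False` flagged for manual review, since some sites reject HEAD requests even when the page exists.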

#### *Updating the learning content with new information*

If an online course is going to be offered over a few years, some or all of the learning materials may need to be updated to incorporate the current state of knowledge or practice.

Textual content is easier to update than multimedia content. If the content in a video is no longer relevant or up to date, the entire video may need to be produced again.

At INASP, we have offered more than 20 research writing courses since 2011. In the past eleven years, we have made significant updates to the course content three times and minor updates about once a year.

## **Sustainability**

INASP's Learning and Capacity Development framework (see Figure 1, Part 1) emphasises that 'good capacity development enables individuals and institutions to independently and sustainably work towards their desired changes in policy and practice beyond the life of a project.'

We co-design TECD interventions with our partners to ensure their reuse, customisation and local ownership. Licensing course materials under one of the permissive Creative Commons licences (such as CC-BY or CC-BY-SA) is usually a pre-condition for allowing reuse. However, our experience shows that our partners in the Global South need further support if they want to integrate our courses into their capacity development programmes.

In 2020, we conducted a series of semi-structured interviews with partners who had attempted to embed research writing courses in their training programmes. The objective of this study was to find out about our partners' capacity development needs in this area and what challenges they faced when reusing our courses.

The study revealed three main areas of support needed for the successful local institutionalisation of a TECD intervention:


#### *Customisation of the TECD intervention*

We use our Scoping and Design Decision Tool (see 'Resources') to understand the context and needs of prospective partners who want to embed a TECD intervention, for example an online course, in their organisation (we call them 'embedding' partners). The aspects we explore include, among others:


#### *Organisational capacity analysis*

INASP has developed a checklist for local implementation of an existing online capacity development intervention (see 'Resources'). The checklist consists of a set of drivers and criteria presented as a series of questions to help ascertain how ready an institution is to successfully embed an online course and continue to run it in the future. It covers human and organisational drivers, finance aspects, technology criteria (such as system administration and internet connectivity), and factors that need to be in place at the end of the implementation project to make the learning initiative sustainable in the long term. Organisations tend to have solid foundations in some areas but need to strengthen their capacity in others. The checklist helps us and our partners identify areas of support.

Our experience shows that partners without experience of online provision can struggle with setting up and maintaining their learning platforms. In such situations, we offer a dedicated space on our own Moodle platform to allow our partners to develop system administration and maintenance capacity. At the end of a project, the partners can decide whether they feel comfortable setting up their own learning platform or want to keep their courses in the 'Learning Commons' section of INASP's Moodle for a fee.

For example, a significant challenge for one university was the poor reliability of the course website (or LMS) the first time they ran the course. This prevented some course participants from accessing the website, so fixing IT issues became a priority for the university.

#### *Promotion of the learning initiative*

The Triple A lens is a valuable tool at this stage. AAA stands for authority, acceptance, and ability. INASP also uses it within political economy analysis (Hayter 2021). The AAA lens is a way of starting a conversation with our partners about the key stakeholders and people who need to become involved to (i) align the TECD intervention with organisational and/or national strategies and policies, (ii) increase acceptance of change, and (iii) ensure the feasibility of the initiative during the implementation stage and its sustainability in the long run.

#### *AAA analysis example: Learning initiative led by INASP's partners in Uganda*

In this AAA analysis, INASP's partners in Uganda discussed responsibilities and ideas using the AAA framework in several areas: learning platform development, onboarding and partnership building among institutions, capacity building, awareness raising, digital access support, national policy development, and curriculum revision.


#### **Table 9: Example of AAA analysis for the area of learning platform development**

The licensing of learning materials should be agreed upon at an early stage. INASP's course materials are licensed under CC-BY-SA (Creative Commons-Attribution-ShareAlike). Anyone reusing the materials must give appropriate credit, link to the licence and indicate if changes were made. The reused or modified learning resources should be made available under the same licence.

## **Recommendations**


## **Conclusions**

In Part 3 of this book, we have provided a comprehensive overview of the underpinning principles and factors that contribute to creating successful technology-enhanced capacity development interventions. We have identified the key stages of the design and implementation process, suggesting specific approaches and tools that can be used at each stage. The implementation phases are relevant to all kinds of technology-enhanced capacity development interventions. However, when discussing the specifics, we have focused mainly on online and blended learning courses, as these require a substantial effort which, in our experience, is often underestimated and can lead to poor learning experiences and low completion rates.

In concluding Part 3, we would like to refer back to the Learning and Capacity Development Framework described in Part 1 of the book and stress the importance of any capacity development intervention being part of a coherent journey across multiple levels of change. Whatever the initial entry point of a capacity development intervention, connections need to be made with other levels so that we can support a progression of learning and change.

In Part 4, we share a collection of 16 case studies that further illustrate how we apply the principles of technology-enhanced capacity development in practice.

## **References**




# PART 4 **Case Studies**

In Part 4, we detail 16 case studies that capture, in summary form, the context, approach and results of tried and tested examples of TECD at INASP. The case studies address key elements of INASP's approach, including the design of TECD, the selection of platforms, the tools in use, the audience, the pedagogy, the engagement of learners and the learning contexts. Each case study demonstrates the outcome of over a decade of accumulated learning in each theme area.

## **Case study 1: In-built flexibility in INASP's MOOCs leads to increased participation**


## **Context**

The participation of learners in a course is influenced by various factors such as gender, qualification and level of technical skills, where people live, their countries' infrastructure and (online) education provision, and (dis)ability. This case study examines how the flexibility in the design of INASP's MOOCs has led to the increased participation of some of the groups traditionally under-represented in face-to-face training.

## **Approach**

The flexibility offered to participants in MOOCs includes the following:


## **Results**

#### *Worldwide representation of learners*

Across MOOCs delivered by INASP between 2015 and 2020, participants have come from 147 different countries. This includes countries that are affected by conflict or are harder to reach, and some refugee academics. MOOCs have included participants from Sierra Leone, Somalia, Yemen, Iraq, Afghanistan, Syria and Palestine (Harle and Bottomley 2018). The first two Scientific Research Writing MOOCs included 115 researchers from countries considered to be politically 'fragile' (Murugesan et al. 2017).

#### *Participation of women*

Women make up four in ten starters in INASP's MOOCs and sometimes more than half. This is a significant increase on the estimated global share of women researchers, which is under 30% (UNESCO 2019), and is higher than the percentage of women who register as mentees for the AuthorAID mentoring programme (37%; source: AuthorAID website data). This may suggest that women are less likely to perceive or face barriers to accessing and participating in online learning than they are in other areas of their professional lives. Moreover, women are slightly more likely than men to complete our MOOCs.

#### *Learning when it is convenient*

The responses to the feedback surveys of the three MOOCs offered in 2020 (a total of 2862 respondents with a nearly 50:50 gender balance) showed that 57% of respondents engaged with the course 'after office hours' and 72% 'during weekends or off days' (Murugesan 2021). Men were more likely than women to study in multiple time periods, especially before office hours, after office hours, and while travelling or commuting; however, overall, fewer than 20% of respondents engaged with the course in these three time periods. This data indicates that most participants are unable (or unwilling) to make online coursework a part of their regular workday: they make time for it outside work hours. A contrasting observation may be made about face-to-face training programmes, which are typically held during normal work hours, with participants often making time for their regular work before or after the daily programme.

For more information, see:


## **Case study 2: Designing MOOCs for low-bandwidth environments**


## **Context**

Participants in INASP's MOOCs predominantly come from the developing world, where the quality and affordability of internet connections can be highly variable. This case study examines how course materials have been made accessible for participants with low-bandwidth or limited-data connections.

## **Approach**

Underpinning the success of the MOOCs are choices about the most appropriate software to maximise the effectiveness of the courses, whilst ensuring they remain accessible and relevant to as many participants from low- and middle-income countries as possible:


## **Results**

Feedback from recent MOOC participants testifies to the success of these approaches. Around half of the participants consistently identify the low-bandwidth approaches as factors that helped them to successfully learn during the course.


#### **Table 10: Participant feedback on low-bandwidth aspects of the course**

For more information, see: INASP. (2017b). 'Creating a Successful MOOC for Academics in Low-resource Settings: Lessons from Running Large-scale Online Courses in Research Writing'.

## **Case study 3: Selecting the most suitable platforms to facilitate online journal club participation**


## **Context**

The 'journal club' concept is popular at institutions in the Global North: research scholars get together to discuss published papers of interest and better understand the research literature in their field. Journal clubs are relatively new at institutions in the Global South and usually meet face-to-face. In 2019 INASP established four online journal clubs to facilitate the reading, analysis and critical discussion of journal articles on a larger, more international scale than had been attempted previously.

## **Approach**

The initial four online journal clubs employed different technological tools. Two of the clubs launched on WhatsApp (with one also using LinkedIn for formal feedback), one launched on Facebook, and one began on AuthorAID's discussion forum. Most groups used video discussion to supplement the text-based discussions and activities; these featured participants presenting their thoughts on academic papers in live sessions. Three sessions where authors came to present their papers to the groups were conducted on Zoom and uploaded to YouTube so that those who could not watch the live sessions could view them later.

## **Results**

In the piloting of the online journal clubs, it was found that some platforms were more effective than others, both in terms of generating participation and encouraging engagement.


As a result of these experiences, the group which began on Facebook was moved over to WhatsApp, where engagement rapidly increased. Attempts were made to re-boot the group originally run on AuthorAID's discussion forums using the platform Slack. When the decision was taken to set up a fifth journal club, it was set up on WhatsApp, which led to immediate engagement and discussion.

Cultural factors underpin the comparative success of these different platforms. INASP's review of the initial online journal clubs concluded that, 'many researchers in Africa and Asia already communicate in groups using mobile apps like WhatsApp, Telegram, Viber and WeChat, and are much more likely to be comfortable and familiar with communicating in this way than "traditional" internet forums and email listservs (a method of communicating with a group of people via email)' (Nobes 2019).

Three journal clubs remained active in 2020, mainly on WhatsApp with occasional live sessions on Zoom. Case study 5 discusses the international networking benefits of journal clubs.

For further information, see: Nobes, A. (2019). 'Online Journal Clubs Spark Active Discussions and New Ways of Exploring the Literature'. Blog post.

## **Case study 4: How the use of digital tools in face-to-face workshops can enhance learning: experiences from TESCEA**


## **Context**

The Transforming Employability for Social Change in East Africa (TESCEA)<sup>33</sup> programme involves the training of 'multipliers' – teaching staff in universities in Tanzania, Uganda and Kenya who are trained in course re-design or gender-responsive pedagogy and who will ultimately deliver training in these areas within their own institutions. This case study examines the impact of the digital tools used in the training.

## **Approach**

INASP's workshops with TESCEA multipliers, conducted face-to-face in recent years until they were moved online in 2020, employed a variety of digital tools including Google Classroom, Mentimeter<sup>34</sup> and the open-source Learning Designer tool.<sup>35</sup>

## **Results**

Workshop participants attributed specific advantages to each technological approach (e.g. Google Classroom as a repository for course materials, and Mentimeter for enabling rapid evaluation by participants).

However, what they clearly valued most overall was the familiarity they gained with previously unfamiliar digital tools which they would later be able to utilise in their own work, both in the planning of courses (this was particularly the case with Learning Designer) and in their interactions with students (Google Classroom and Mentimeter). This advantage was emphasised in the following feedback from TESCEA multipliers who attended the face-to-face workshops:

'Some of the tools were there on the internet, but, you know, the things were there, but once we had these trainings within TESCEA, it kind of wakes me up. And yeah, these are tools which were there and I never thought of using them but here we are. I can apply them.'

(having observed his mentor using a digital tool in their mentoring sessions) 'I guess I just asked because she was sending us … reading materials using the Padlet. So then, on a one on one … I got interested … as to how I could learn how to use it. She took me through.'

<sup>33</sup> www.transformHE.org

<sup>34</sup> https://www.mentimeter.com/

<sup>35</sup> https://www.ucl.ac.uk/learning-designer

In other words, the use of digital tools in face-to-face workshops not only enhanced the quality of teaching and learning (by improving access to materials and interaction), it also prompted participants to develop skills that would ultimately equip them to lead blended or online learning within their own institutions.

## **Case study 5: Participants value international interaction in online journal clubs**


## **Context**

As mentioned in case study 3, INASP established four online journal clubs in 2019 to facilitate the reading, analysis and critical discussion of journal articles. This case study focuses on how participants valued the international nature of interaction at these journal clubs.

## **Approach**

See case study 3 for the approach used.

## **Results**

Over 800 individuals, mainly from Africa, registered for the journal clubs. The level of discussion and interaction was quite energetic, with some individuals stepping into informal co-facilitator roles, a process which was largely organic (Nobes 2019). The feedback participants provided concerning the interaction with peers from across the globe facilitated by the online journal clubs was extremely positive.

• 50% of participants agreed that they were more connected with their peers as a result of participating in the online journal clubs (just 5% disagreed).


For further information, see: Nobes, A. (2019). 'Online Journal Clubs Spark Active Discussions and New Ways of Exploring the Literature'. Blog post.

## **Case study 6: Online group mentoring in TESCEA: the value of peer-to-peer interaction**


## **Context**

The Transforming Employability for Social Change in East Africa (TESCEA) programme involves the training of 'multipliers' – teaching staff in universities in Tanzania and Uganda who receive training on course re-design or gender-responsive pedagogy and who will ultimately deliver training in these areas within their own institutions. Workshops to train multipliers were moved online in 2020 due to the Covid-19 pandemic. This case study examines the value of peer-to-peer interaction in online workshops.

## **Approach**

Two online workshops were conducted, each over a six-week period. They had a narrow focus on course redesign and involved an element of mentoring to replicate some of the learning that would have taken place in a face-to-face context. Participants were organised into small groups of five or six, each with a member of INASP or AFELT (the Association for Faculty Enrichment in Learning and Teaching, a professional association in Kenya and a TESCEA partner) staff as a mentor. Each mentoring group was assigned a separate discussion forum visible only to its members. In training, participants worked on an authentic assignment: creating a detailed learning design for a course they teach. Participants were encouraged to submit successive parts of the assignment for feedback from their mentor and the other members of their mentoring group. The discussion forum facilitated asynchronous exchanges and, once a week, a Zoom drop-in clinic was organised where the mentoring groups used breakout rooms to showcase and discuss the progress of their work.

## **Results**

INASP's evaluation of the group mentoring found that, as with any group activity, group dynamics varied from group to group and engagement between participants from different universities could have been better (an issue that was complicated by each university operating with different Covid-19 restrictions in place). However, overall, feedback was positive and participants overwhelmingly said they would like more of such engagement in the future (Buchner and Dryden 2020).

Multipliers who participated in TESCEA online group mentoring described how it specifically benefited their learning, particularly where they were mentored in groups, which enabled them to learn both from their mentor and from peers who had been given similar roles in other institutions. Individual multipliers emphasised several advantages of this approach, for example:


For further information, see:


## **Case study 7: Critical thinking: the impact of light facilitation on outcomes**


## **Context**

In 2020, INASP's Critical Thinking course was launched both as a self-study tutorial (involving no interaction) and as a course with light facilitation and a moderated forum. This case study compares the outcomes of the former and the latter.

## **Approach**

The 'light facilitation' model involved supporting participants through announcements about their expected learning progress once or twice a week, answering technical questions about the learning platform through a dedicated technical discussion forum, and encouraging participants to share their ideas and questions in a content-related discussion forum with fellow participants. Moderation was mainly restricted to keeping an eye on the posts to ensure that discussions were respectful and relevant.

## **Results**

A comparison of feedback data for the two iterations of the course enables us to identify the added value of such light-touch facilitation.

**Figure 17: Outcomes of critical thinking course (self-study versus facilitated version)**

Participants in the facilitated version of the course reported slightly more positive outcomes than those who completed the self-study version. More marked, however, was the difference in completion: those who began the facilitated course were almost twice as likely to complete it as those who undertook the self-study version. While the introduction of light facilitation reduced the proportion reporting that the lack of moderator/facilitator support was a problem (from 16% to 3%), it may have had wider impacts for the majority, particularly in encouraging them to persist with and complete the course.

For more information, see: Schaeffler, V. (2020b). 'Strengthening Critical Thinking Skills through Online Learning'.

## **Case study 8: Approaches to encouraging interaction in research-writing MOOCs**


## **Context**

INASP's MOOCs in research writing were launched in 2015 and have been run regularly since then, usually two to three times a year. The pedagogy of the MOOCs is based on Garrison's Community of Inquiry Framework (Garrison et al. 2007). This case study examines how this pedagogical approach has encouraged interaction among course participants.

## **Approach**

Garrison's Framework focuses on three 'presences': teacher presence, cognitive presence, and social presence. INASP's MOOCs aim to achieve these presences in the following ways:

#### *Friendly, open and responsive 'teacher presence'*


#### *Deeper learning of participants through 'cognitive presence'*


#### *Connected learning through 'social presence'*


## **Results**

Feedback from recent MOOCs indicates these approaches have been positively received:

#### *Forums*


#### *Teachers/facilitators*


#### *Peer assessment*

• Participants in the three 2020 MOOCs were asked to rate the usefulness of peer-feedback activities on a scale of 1 to 5. Among respondents (n=2862), the average rating for the usefulness of feedback *received* was 4.27 out of 5. Participants also found it a useful learning experience to *give* feedback to others, rating this 4.58 out of 5 on average.

For more information, see: INASP. (2017b). 'Creating a Successful MOOC for Academics in Low-resource Settings: Lessons from Running Large-scale Online Courses in Research Writing'.

## **Case study 9: Self-study tutorials give participants flexibility around timing**


## **Context**

INASP launched its first self-study tutorial, focusing on Search Strategies, in July 2019. With the onset of the Covid-19 pandemic, two more tutorials of this type, covering Grant Proposals and Critical Thinking, were launched in 2020. This case study examines how participants responded to the increased flexibility offered by the self-study tutorials.

## **Approach**

INASP's self-study tutorials are entirely self-paced, although there is a recommended schedule for each tutorial. The content is almost entirely text-based, with videos (if any) as optional resources to make the tutorials accessible to learners with low-bandwidth or limited-data connections. The content, however, is interactive, with embedded questions and prompts for reflection.

There is no direct or personalised support available to tutorial participants. To compensate for this lack of support, a detailed introduction resource addresses common technical queries the learners might have. In addition, a resource on how to prepare to learn online, geared towards first-time online learners, is made available.

While participants can learn at their own pace in these self-study tutorials, a system of automated reminders encourages participants to progress through the tutorial as per the recommended schedule. Once they meet the completion criteria, a completion certificate is automatically generated for participants.
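The reminder-and-certificate mechanism described above can be sketched in outline. This is a hypothetical illustration rather than the code of any actual platform: the milestone days, unit names and function names are all invented for the example.

```python
from datetime import date, timedelta

# Days after enrolment on which an automated nudge is sent
# (hypothetical values standing in for a tutorial's recommended schedule).
REMINDER_DAYS = {7, 14, 21}

def reminder_due(enrolled: date, today: date) -> bool:
    """True if today falls on one of the recommended-schedule milestones."""
    return (today - enrolled).days in REMINDER_DAYS

def certificate_earned(completed_units: set, required_units: set) -> bool:
    """Completion criteria: every required unit has been finished."""
    return required_units <= completed_units

# A learner enrolled on 1 March is nudged one week later, and a certificate
# is generated once all required units are complete.
start = date(2020, 3, 1)
print(reminder_due(start, start + timedelta(days=7)))  # True
print(certificate_earned({"intro", "search", "reflect"}, {"intro", "search"}))  # True
```

In practice, an LMS such as Moodle provides this behaviour through its own activity-completion and certificate features; the sketch only shows the shape of the logic.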

## **Results**

Participants in the Grant Proposal Writing and Search Strategies tutorials viewed the approach towards scheduling very positively and regarded it as having facilitated, rather than limited, their outcomes. Around nine in ten identified 'being able to work on the tutorial in their own time' as something that had helped them – this was the most common aspect identified in both instances. More than half found not having to meet deadlines helpful, while around one in ten identified this as a problem. The balance of opinion regarding scheduling is the most positive across all of INASP's TECD activities.

**Figure 18: Attitudes to time of participants in Grant Proposal Writing and Search Strategies self-study tutorials**

For further information, see:


## **Case study 10: Scheduling of INASP's Editorial Processes for Journal Editors course**


## **Context**

In 2018 INASP began to offer its Editorial Processes for Journal Editors course, aimed at journal editors and members of editorial boards of journals. This case study examines the evolution of the structure and scheduling of the course.

## **Approach**

The Editorial Processes course was initially run in 2018 as a long course covering seven modules. However, in 2019, the course was split into two parts: the first covered four modules and the second covered three. Offering all seven modules at once was found to require too great a commitment of time from the editors, who tend to hold senior positions in their institutions and therefore have many other professional and teaching commitments. When the course was run in its extended version (i.e. all seven modules in one go), there was a drop-off in participation and submission of action plans in the later modules. Additionally, the action plan assessments required a significant time commitment from INASP.

The first part of the course is typically delivered in November and the second in April/May. The course requires around three to five hours of work per week from participants and involves weekly deadlines, primarily around submitting action plans.

The two-part Editorial Processes course has been offered three times so far, in 2019, 2020 and 2021.

## **Results**

For the 2020 iteration of the Editorial Processes course, almost half of the participants (45% for Part 1 and 46% for Part 2) said that 'having enough time' helped them to learn successfully during the course. The proportions that identified 'not having enough time' as a challenge were smaller (25% for Part 1 and 34% for Part 2). Moreover, the experience of not having enough time was frequently linked to problems with internet availability, the Covid-19 pandemic and personal issues, rather than an inherently unrealistic timetable.

In our view, the division of the course into two parts has worked well, making the time commitment much more manageable for both the participants and the facilitators.

## **Case study 11: Developing teaching of critical thinking in Sierra Leone: responding to a local and changing context**


## **Context**

INASP's main role in the Assuring Quality Higher Education in Sierra Leone (AQHEd-SL)36 programme was to help lecturers in Sierra Leone identify new teaching and learning approaches around critical thinking. This case study examines the development of a critical thinking course suitable for the local context.

## **Approach**

The original plan was to use an existing online course, Questioning as we Learn: Introduction to Critical Thinking, as part of a blended learning approach. It was envisaged that the online course would be hosted on a new Learning Management System in Sierra Leone, with the

<sup>36</sup> https://www.inasp.info/project/aqhed-sl

configuration of Moodle, an open-source e-learning platform, for the seven higher education institutions involved.

However, ongoing work with partners in Sierra Leone has shown significant differences in levels of digital skills among students and teaching staff, as well as substantial problems with stable access to the internet. As a result, the approach was adapted to include the use of MoodleBox devices to enable local access to the online course from within traditional classrooms.

The onset of the Covid-19 pandemic, resulting in the closure of universities in Sierra Leone, changed the context again and meant that new solutions were required to deliver the course material electronically in a way that accommodated internet constraints and was not technically complex. INASP piloted two new approaches, reusing the online course material in small 'snippets' through the online communication tools WhatsApp and Zoom. These approaches enabled lecturers to innovate and tailor the way of teaching critical thinking skills to the environment and their students' needs.

#### **Results**

WhatsApp was identified as having two distinct advantages: (a) many students and lecturers in Sierra Leone were already familiar with the tool, and (b) its internet bandwidth impact is lower than that of an online Moodle course, reducing barriers relating to frequent internet disruptions and the high cost of internet bundles in Sierra Leone.

Given this context, it is unsurprising that early feedback regarding challenges encountered with the 'snippets' approach is dominated by difficulties relating to technical infrastructure, with individual institutions noting that 'students have been complaining on internet availability' and that 'working online is challenged by lack of good internet facility, low online usage appetites amongst tutors and learners'.

Despite these challenges, feedback provided as part of the monitoring and evaluation of this activity suggests that it was well-received by some lecturers and students and achieved its original aim of contributing to the development of critical thinking skills. One institution involved in implementing this approach remarked, 'those who have been committed have shared life experiences in applying CT [critical thinking] skills in identifying false information circulated, by judging source, content and applicability of such information. Generally, it has been an eye-opener, and people benefiting from the CT course are learning to critically analyse information and situation.'

A review of the students' WhatsApp discussions around 'snippets' points to students applying their learning and practising their critical thinking skills.

For further information, see:


## **Case study 12: Capacity development workshop in Vietnam: considering the cultural context when scheduling training**


### **Context**

In its work to embed the Scientific Research Writing course in the partner institution in Vietnam, INASP found that running an online session in combination with a face-to-face workshop is a successful strategy to motivate and engage participants. This case study examines how this form of blended training was implemented in Vietnam, considering the local context.

## **Approach**

In training programmes that use a blended approach, the online component can precede or follow the face-to-face component. When designing a capacity development programme in Vietnam in 2016, INASP learnt that participants needed to build rapport before working effectively online. This was especially the case because of the large size of the partner institution, Thai Nguyen University (TNU), which meant that the training participants, who came from various departments of the university, were unlikely to have pre-existing working relationships. Therefore, the face-to-face workshop was scheduled before the online session.

## **Results**

Twelve participants attended the initial face-to-face workshop and progressed to the online session on developing facilitation skills for online courses. Of the 12 participants, 10 completed the activities in the online session and made impressive contributions to the practice forums, with many exceeding the completion requirements for the session. Comparing Vietnam with the (opposite) scheduling approach undertaken in the RPFC<sup>37</sup> workshop in Colombo (see case study 13), INASP's evaluation concluded that the TNU group did better than the RPFC group in the online session. This was perhaps because the face-to-face workshop provided context for the online session and motivated participants (Murugesan 2017b).

TNU participants went on to develop their training programme in three blended components: (a) a half-day, face-to-face 'kick-off' session to lay out the elements of the course and have everyone get to know each other; (b) the online Scientific Research Writing course, spanning five to six weeks; and (c) a two-day workshop where participants could work on the more practical aspects of research writing, building on the knowledge they had gained in the online course.

For more information, see: INASP. (2017). 'Embedding Online Research-writing Training in Africa and Asia'.

<sup>37</sup> The Research Promotion and Facilitation Centre (RPFC) of the Faculty of Medicine, University of Colombo.

## **Case study 13: Developing a bespoke online embedding programme in Colombo, Sri Lanka**


## **Context**

The Research Promotion and Facilitation Centre (RPFC) of the Faculty of Medicine at the University of Colombo (Sri Lanka) was involved with INASP's embedding programme from 2014 to 2018. After delivering a series of short face-to-face workshops between 2014 and 2016 and implementing a mentoring-based writing club, the staff at RPFC became interested in taking on the AuthorAID online research writing course and integrating it into their existing training and support for their researchers. This case study examines how INASP supported them in achieving their goals.

## **Approach**

Given RPFC's interest in integrating the AuthorAID course into an existing research support programme, it was clear that a bespoke embedding intervention was required. INASP developed and delivered an 'implementation-oriented' programme in 2016. The programme involved a three-day face-to-face session preceded by a three-week online component (with around five hours of work required each week). It was intended that the outputs of the online session would feed into the face-to-face session and that the face-to-face session would produce detailed plans for the integration of the AuthorAID course into RPFC's existing programme.

## **Results**

As envisaged, the online workshop resulted in outputs that determined the agenda for the face-to-face session and provided the participants (all faculty members) with the opportunity to practise their online facilitation skills. The on-site, face-to-face working session then ended with a plan for a new, integrated support programme on research communication, with delivery dates. This programme was piloted in late 2016, less than two months after the conclusion of the on-site session. Thirty research scholars participated in the 'integrated research support programme' offered at RPFC, which comprised face-to-face workshops, mentor–mentee interactions, and the online research writing course. Twenty participants completed the research writing course, agreeing that it was relevant to their needs (INASP 2017).

INASP's analysis of the embedding programme in Colombo concluded that 'the session was considered a success in its collaborative style, and in leading to tangible outputs. The participants were impressed by the opportunities afforded online and they were keen to move forward' (INASP, 2017).

From INASP's perspective, this type of working session, comprising online and on-site components, proved to be a promising format for supporting partners to take on and implement a complex initiative while contextualising learning in relation to their individual situations.

For further information, see: INASP. (2017). 'Embedding Online Research-writing Training in Africa and Asia'.

## **Case study 14: Online mentoring in AuthorAID: Providing facilitated and unfacilitated mentoring to a global network**


#### **Context**

Online mentoring for researchers has been part of the AuthorAID project since its inception in 2007. The vast majority of mentoring activities have been conducted on the AuthorAID website via an online mentoring and collaboration system, which includes a Find a Researcher tool, a private messaging system, and a learning agreement facility. This case study compares the default, unfacilitated mode of mentoring (where mentees and mentors are free to connect without intervention from INASP) and the pilot of an organised, facilitated mentoring cohort.

## **Approach**

As of 2021, more than 25 000 individuals were registered on the AuthorAID website, including 13 000 mentees and 850 mentors. Mentoring covers a broad range of support, from language editing and data analysis to PhD applications and career mentoring.

#### *Unfacilitated mentoring*

The online mentoring system is 'unfacilitated' and relies on mentors and mentees seeking each other out on the website using the 'Find a Researcher' tool. Communication can take place entirely within the private online messaging system, or participants can choose to share contact details and take the discussion offline via email, phone or other tools.

Over two years, during 2019 and 2020, a total of 2728 requests were sent (asking for a mentor or offering help to a mentee), and 858 were accepted (31%). Of those, a total of 100 were reported as completed (12%).
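As a quick check, the reported percentages follow directly from the raw counts given above (a minimal calculation; no new data is introduced):

```python
requests_sent = 2728  # mentoring requests over 2019 and 2020
accepted = 858        # requests accepted
completed = 100       # mentoring relationships reported as completed

acceptance_rate = round(100 * accepted / requests_sent)  # 31 (% of requests)
completion_rate = round(100 * completed / accepted)      # 12 (% of accepted)
print(acceptance_rate, completion_rate)  # 31 12
```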

#### *Facilitated mentoring*

Because of the relatively low completion rate for unfacilitated mentoring (and the acknowledgement that some mentoring conversations may be carried on outside the system and go unreported), a pilot project was conducted in 2020/2021 for a more organised, facilitated mentoring cohort.

Ten mentors and ten mentees were selected through an application process and matched into pairs by an INASP panel. Pairs were formally introduced via email and then given the opportunity to attend an orientation session via Zoom with expert guidance on best practices in mentoring and an opportunity for Q&A. Pairs were given feedback from the INASP panel on their learning agreement and milestones. At a 'mid-point review' session, there was an opportunity to meet and report on progress. Breakout rooms were used so mentors and mentees could discuss challenges with their peers.

## **Results**

After six months, 7 out of 10 pairs in the facilitated mentoring cohort completed the programme, with 57% of respondents reporting that they had achieved all their milestone objectives. Eighty-six per cent of mentees said that the programme had equipped them with additional skills and understanding.

Challenges and preferences:


Overall, a more structured, facilitated, 'hands-on' approach, including regular support and check-ins (including at least two live sessions for all participants), resulted in a higher success rate than the unfacilitated online mentoring programme. While problems with outside pressures, time zones, electricity and internet were still apparent, the combination of a structured, supported programme with flexibility to rearrange synchronous sessions on a one-to-one basis meant that progress was still possible, even during the Covid-19 pandemic.

For further information, read the full report at: https://www.authoraid.info/en/mentoring/mentor-cohort-2020/

## **Case study 15: Handing over the INASP Research Writing course to the Open University of Tanzania**


## **Context**

Between 2013 and 2018, INASP collaborated with the Open University of Tanzania (OUT) to embed the INASP/AuthorAID course in research writing at the university so that it could be contextualised for Tanzanian researchers and reach a large number of early career researchers at OUT centres across Tanzania. This case study presents a summary of the approach and results.

## **Approach**

At the outset, two faculty members from OUT attended a training-of-trainers workshop in Dodoma, Tanzania, in 2014. A year later, the first OUT–AuthorAID online course in research writing was offered on INASP's Moodle platform with facilitation provided by faculty members at OUT. To move towards sustainability, we worked with the technical staff at OUT to migrate the course materials to OUT's LMS, which is also built with Moodle. Since then, the course has been run on OUT's LMS with no support from INASP.

## **Results**

One of the leaders of this project, Professor Emmanuel Kigadye of OUT, shared his reflections in 2018, reproduced here verbatim: 'At the Open University of Tanzania (OUT), we piloted the AuthorAID online course in 2015, with both the proposal writing and research writing components. We have been running it annually since then. We have recently incorporated the AuthorAID course within the Soft Skills Enhancement Programme at OUT as a result of staff and student demand. This is a postgraduate programme containing a series of research methodology and research communication courses which are mandatory for all postgraduate students doing Masters by thesis and PhD. A proposal was presented to the OUT Research Publication and Postgraduate Committee to include the AuthorAID course within this programme, and the OUT senate approved the proposal. I believe that the AuthorAID course is now more sustainable at OUT, as it is a formal part of an important programme that will benefit all postgraduate students. Further, as part of a capacity building exercise, we recently trained several faculty members to take on the facilitation role for future offerings of the AuthorAID course at OUT. For those who are interested in scaling up the AuthorAID course at their institution, I have the following recommendations:


## **Case study 16: The Transforming Higher Education for Social Change partnership**


## **Context**

The Transforming Higher Education for Social Change project sought to achieve two overall goals: (1) to effect change within individual universities, to improve the relevance and quality of undergraduate teaching and learning, and embed a 'teaching for critical thinking' approach; and (2) to use the process of change within four universities to test and refine an approach to change that could be scaled more widely, and that could be distilled and documented into a series of digital toolkits and courses to support other universities and academics seeking to effect similar changes in their teaching and learning. This case study presents a summary of the approach and examples of technology used.

## **Approach**

The project brought together four universities (Uganda Martyrs University and Gulu University in Uganda, and the universities of Dodoma and Mzumbe in Tanzania), a network of Kenyan faculty developers, the regional hub of an international network of social entrepreneurs, and INASP.

The partners recognised that important change was needed at the individual and organisational levels – especially relating to the teaching philosophies of academic lecturers, the learning styles of students, and closer alignment of university programmes with the needs and realities of their communities, economies and wider society. The partnership also recognised that it could only accomplish this by bringing more diverse expertise into conversation with university teams, their leadership, and their students.

Each university convened representatives from national employer organisations, government, local business leaders and community representatives to guide the process – and to engage them in discussions about higher education, to influence their own thinking. These became known as Joint Advisory Groups (Wild and Nzegwu, 2022)38 – and the partnership brought together the expertise of academics, faculty developers, social entrepreneurs, and those with experience in designing and facilitating the process of change across four countries.

<sup>38</sup> Wild, Joanna and Femi Nzegwu. 2022. 'How Joint Advisory Groups have Supported Educational Transformation in the TESCEA Project'. Learning brief. https://www.transformhe.org/_files/ugd/027d0a_7d764f41766d41379b717ac13a50708c.pdf

In doing so, the partnership was able to create a collective interest in effecting change to teaching and learning, in which not only academics and students were invested, but also partners from business and the local community.

## **Results**

The first phase has generated a set of collectively developed outputs, but more importantly a collective vision about teaching and learning, and a community of practitioners who want to advance this work.

Technology has been woven throughout the process:


Along the way, the partner universities convened stakeholders in online discussions on Zoom and other platforms. Gulu University made innovative use of Zoom to connect listeners and participants from across the world with a conversation held by several elders around an open fire one evening to discuss the value of the knowledge generated by the community.

More about the project's resources can be explored on www.transformHE.org and in the project's evaluation at https://www.inasp.info/publications/transforming-employability-social-change-east-africa-evaluation.

## **Conclusions**

*Jonathan Harle, Femi Nzegwu, Joanna Wild*

Capacity, learning and change are complex fields of practice. In the preceding pages, we have sought to show how digital technologies can help enhance these efforts, drawing on our own experiences – successes and disappointments – to show how we have done this. We have also demonstrated the limitations of learning with digital technology and discussed how digital approaches often work best when blended with physical, face-to-face interactions between people.

For individuals, there are now many opportunities to learn online, develop new skills and knowledge, or interact and learn with peers across the world. For organisations that want to develop their people, the challenge often lies in how to make the best use of tools and opportunities, and how to guide and support their staff to do so. Many individuals are learning in their own time, below the radar of formal organisational training or development programmes, and open courses like MOOCs often enable that very well.

The greatest contribution that organisations can make may simply be to recognise this mode of learning and make sufficient time available for staff to take advantage of it. For organisations that want to go further, digital tools can support institutional learning at the point of need, enabling teams to connect, learn and solve problems. Beyond single organisations, digital tools can help people come together across a wider system to think and learn, challenge each other, and formulate new interventions or ways of working, and in doing so significantly change how the whole system works to achieve and sustain change.

In all cases, and across all levels of change, all capacity goals, and all uses of digital tools, time is perhaps the most important factor determining what is possible and what can be achieved. While it may be possible to conceive a sophisticated, multi-dimensional intervention that identifies where change is needed across several levels, and how this can be enabled by a raft of approaches and a range of tools, we always have to meet our learners, and each other, where we are.

A simpler, first-generation approach might be needed, while partners and colleagues familiarise themselves with the problem or problems, and the methods and tools that might be used to address them. Once confidence and understanding are built, a second-generation, or second-phase approach, might allow more sophisticated approaches to be introduced. A complex challenge doesn't need a complex solution – or at least not from the outset.

Our partnership in East Africa has demonstrated that clearly. The challenge of transforming university teaching and learning is complex, requiring relationships to be built across sectors to bring new ideas into universities, and requiring change within individuals, within organisations, between networks of organisations and wider ecosystems. Some of that complexity could be foreseen at the outset, but some of it had to emerge in the process of learning together, as a partnership, and of discovery, as individuals started to see their roles differently and come to new understandings.

There are also occasions where wider changes – or in some cases shocks – open up radically different possibilities. These could be changes in policy and regulation – which suddenly allow or require new approaches (e.g. new qualification requirements for teaching staff or the appointment of a new senior leader in a university who sets a new agenda for the institution or encourages staff to develop new skills) or an exogenous change (e.g. the online shifts necessitated by the Covid-19 pandemic). While we and others had been adapting digital tools and methods for many years, face-to-face learning was often seen as preferable, and most workshops and exchanges necessarily involved bringing people together in the same room, often at significant cost – both financial and environmental. While it remains important to sit around the same table, and share the same space, the pandemic has created a new sense of what it is possible to do well remotely, digitally and online, while also highlighting some of the particular advantages of coming together face-to-face. In short, it has enabled us to think differently about how online and physical can be best combined – or blended – to achieve the best results.

While there were many negative impacts of the rapid, emergency pivot to working, meeting and learning online that the pandemic induced, the experiences also demonstrated what was possible, and what could be done with a level of basic connectivity, whether synchronously or asynchronously. Digital ways of learning and working, while imperfect in some ways, are now more accepted than before: their shortcomings are better understood, but so are their advantages and the possibilities they offer.

The challenge now is to build on the quick and sometimes short-term fixes of Zoom workshops or training sessions, identify how we can incorporate digital tools more effectively, and see how they enable us to do things that we couldn't before.

Of course, experimenting with new ways of doing things brings both successes and failures. We have learned from our mistakes: people excluded, voices not heard, needs overlooked, learning outcomes unmet. We have described those lessons learned in Part 2. These failures have pushed us to refine our digital approaches, and to keep thinking hard about how we can take digital learning further. We need to be alert to mistakes, whilst not allowing them to prevent us from innovating. We need to be bold in experimenting with new approaches, while keeping our eyes firmly on equity, and without abandoning a concern for careful design and high-quality experiences.

## **Resources**

This section of the book provides resources and tools useful in planning, developing and delivering an online or blended capacity development intervention. These are meant to serve as initial guidance and a source of ideas to reuse, adapt and build on.

## **Scoping areas – question bank**

*Joanna Wild and Veronika Schaeffler*

As explained in Part 3, scoping areas aim to improve understanding of the audience, the broader socio-economic and cultural context, as well as institutional factors that may influence learning. The questions below can be used to conduct a scoping exercise.





## **Design decisions – question bank**

*Joanna Wild and Veronika Schaeffler*

Design decisions are based on the results of a scoping activity. Scoping results inform the choice of course content, format and mode of delivery; the type of support provided to the learners; and opportunities for social interaction. Below is a list of decisions to be made at this stage.





## **Publicising your course: A template**

*Ravi Murugesan and Josie Dryden*

Below we share a template for publicising a capacity development intervention (here we focus on online courses) and inviting participants to enrol.


## **Guidelines for facilitators: Some points to reuse and adapt**

*Ravi Murugesan, Andy Nobes and Josie Dryden*

Over the years, we have developed and refined guidelines for facilitators on our courses. We present below some points from these guidelines that can be reused and adapted as relevant. Facilitators' responsibilities include:

	- replying to the participants' posts/questions on the forums;
	- initiating discussions on the forums on relevant topics (e.g. by presenting scenarios or examples from your own experience and asking participants to share their views); and
	- sharing news or resources related to the course topics.

## **Announcements for the participants of an online course**

*Ravi Murugesan and Josie Dryden*

When a largely asynchronous online course is run on a schedule, an important role of a course moderator is to make regular announcements or news posts in the course to keep participants informed about the current focus areas. It is a good idea to set up an announcements forum for moderator use only and to make sure that all participants receive an email copy of posts made there. This way, these announcements will reach everyone (even participants who haven't recently signed into the LMS). Here is an example of the first weekly announcement in a recent INASP course.

Hello everyone,

Welcome to the course!

My colleagues and I will use this announcements forum throughout the course to keep you updated with the course and activities. To begin, here is some information on how you can get started on the course:

Getting started: what do I need to do in week 1 (7 to 13 September)?

#### **Introduction and induction**


#### **Unit 1: Understanding previous research**


There are no deadlines for this first week but try to go through the lessons and pass the quiz by the end of the week.

#### **Guest facilitators**

We are fortunate to have a great team of more than 20 guest facilitators for this course [LINK]. They are from our network of experienced mentors and researchers trained by AuthorAID from around the world. Some of these facilitators are also previous course participants who we have 'promoted' because of their knowledgeable and friendly forum posts!

They will be active in the discussion forums to help you with advice and tips on different areas of research writing and communication.

#### **Technical support**

Because of the large number of participants on the course, we are not able to respond to individual email requests for support. Limited support will be available on the Technical Support forum [LINK] in the course.

We hope you enjoy the first week! Look out for further updates from the moderation team as the course continues.

Best wishes,

[NAME]

## **Learning about the Moodle LMS**

*Ravi Murugesan*

Below we provide a compilation of useful resources for learning how to use the Moodle Learning Management System, which is one of the most popular LMSs globally.


## **Checklist for local implementation of an online capacity development intervention**

*Ravi Murugesan, Joanna Wild, Veronika Schaeffler*

INASP has developed a checklist for local implementation of an existing online capacity development intervention. The checklist consists of a set of drivers and criteria presented as a series of questions to help ascertain how ready an institution is to successfully embed an online course and continue to run it in the future.

#### **Abbreviations**

Q – stands for 'question', Y – stands for 'yes', N – stands for 'no'

#### **1. Drivers**

A variety of 'drivers' are required to reuse and adapt an existing online capacity development intervention. These drivers are made up of needs, policies, institutional buy-in and the energy of the people who plan and implement the course. The questions below help establish whether these drivers are in place.

#### **Institutional need**

Q. Is there any existing support in your institution for capacity development in X?



capacity intervention?




#### **Budget**

Q. Will the embedded course have a continuing dedicated budget?

Y. How long will this budget continue to be allocated to the course?

Y. What kinds of costs does it cover? Salaries? Technology? Training? What else?

N. What would happen when any current funding runs out? How would the continuing costs of running the course be covered?

#### **Succession**

Q. Will the embedded course have continuity and succession planning?

Y. Who will be responsible for leadership and management of the course? What kinds of plans will be put in place to ensure that this leadership will continue in the longer term?

Y. How will core team members be recruited and refreshed over time? What kinds of plans will be put in place to engage new people? How will new team members be trained to fulfil their roles?

N. How will you keep the course running in the longer term?

## **Questions to include in the pre-course survey and post-course survey**

*Love Calissendorff, Ravi Murugesan, Femi Nzegwu, Veronika Schaeffler, Joanna Wild*

INASP's online courses typically include both pre-course and post-course surveys. The pre-course survey is part of the course induction section, and it helps us get some baseline knowledge before participants start learning on the course. The post-course survey (sometimes called a feedback survey) aims to help us understand the participants' learning experience. Here we suggest questions to include in the pre- and post-course surveys of online courses in a TECD context.

## *For both pre- and post-course surveys*

At present, what is your level of confidence in carrying out the tasks below related to <name of the course>? Rate your confidence from 1 (lowest) to 5 (highest).


*Note: The above question – if asked at the start of the course and again at the end of the course – can provide useful data on the change in confidence as a result of the course.*
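As a purely hypothetical sketch of how such paired ratings might be analysed (the task names and ratings below are invented for illustration and are not INASP survey data), the change in mean confidence per task could be computed like this:

```python
# Hypothetical pre- and post-course confidence ratings (1 = lowest, 5 = highest),
# keyed by task; each list holds one rating per respondent.
pre = {"write an abstract": [2, 3, 2, 1], "choose a target journal": [3, 2, 4, 2]}
post = {"write an abstract": [4, 4, 3, 3], "choose a target journal": [4, 3, 5, 3]}

def mean(ratings):
    """Arithmetic mean of a list of ratings."""
    return sum(ratings) / len(ratings)

for task in pre:
    change = mean(post[task]) - mean(pre[task])
    print(f"{task}: mean confidence change {change:+.2f}")
    # e.g. "write an abstract: mean confidence change +1.50"
```

A matched comparison like this is only meaningful when the same people answer both surveys, so responses would normally be linked by a participant identifier.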

What is your gender?


What is your age?

Which country do you currently live in?

Do you consider yourself to have any of the following?


*For the post-course survey, the above question could be followed up with this question:*

Did you face any challenges participating in this course because of any impairment or difficulties? If so, please share details so that we can look into making the course more accessible.

## *For post-course surveys*

On a scale of 1 (lowest) to 5 (highest), indicate to what extent you disagree or agree with the following statements.


Which of the following challenges, if any, did you face while participating in the course? You may choose multiple answers.


If you faced any challenges, how did you deal with them? (free text)

Which of these time periods best describes when you mostly worked on this course? (You can select more than one option.)


What kind of device did you primarily use to study on the course?


Can you suggest any ways to improve this online course? (free text)

Please tell us if you have any other comments on any aspect of this online course. (free text)

## **Glossary of terms**

**Technology-enhanced learning (TEL)** – effective and creative use of digital technology to optimise the learning experience. This definition emphasises that the main goal of using technology is to create the best learning experience possible – whether the learning environment is a traditional classroom (face‐to‐face), an online space (online learning), or a mix of both (blended learning).

**Technology-enhanced capacity development (TECD)** – a term coined by INASP. It refers to the entirety of approaches that use technology within a capacity development context (see Part 1 of the book for a definition of capacity development).

**Learning design** – a process for planning learning activities within a TEL or TECD initiative (described in detail in Part 3 of the book).

**Learning management system (LMS)** – a web-based application to develop and deliver online courses or online components of a TEL/TECD initiative. An LMS is usually hosted on a web server and accessed through an internet browser. LMS applications come with a variety of features to share learning resources and orchestrate learning activities. Examples of LMS applications are Moodle, Canvas, and Blackboard Learn. The term 'virtual learning environment' (VLE) is commonly used as an alternative to LMS.

**Facilitator** – someone who leads or facilitates a TECD initiative (for example, on online forums that are part of the initiative), takes an interest in how participants engage with the initiative, and provides pedagogical support to the participants (e.g. by answering questions related to the content). While this person may be called 'teacher', 'tutor', or something else, at INASP we prefer to say 'facilitator'.

**Moderator** – while frequently the terms 'moderator' and 'facilitator' are used interchangeably, at INASP we use 'moderator' to refer to the person who posts regular announcements during the course and provides technical support to the participants. When a moderator and facilitator work together to lead a course, the moderator focuses on the managerial and technical aspects of course delivery, while the facilitator takes responsibility for the pedagogical and social aspects.

**Online course** – a learning programme developed and delivered using digital and online tools for sharing learning resources, providing opportunities for interaction and collaboration, and assessing learners' progress and performance. An online course is usually hosted on an LMS, and it commonly has some structure, for example, start and end dates and a weekly schedule of activities.

**Blended course** – a learning programme with both online and in-person elements, for example, a programme that starts with a one-week in-person workshop followed by a three-month online course.

**Open educational resource (OER)** – the term 'educational resource' refers to discrete or standalone learning resources such as documents and videos, but it can also be used to refer to an online course. The 'open' descriptor means that such a resource is available via an open licence such as a Creative Commons licence.

**Massive open online course (MOOC)** – a course that is open to everyone for enrolment. There is no application or selection process. While participants may be advised to have some prerequisite knowledge, the course provider usually does not check or verify this. An open course is free of cost to participants, but sometimes the course provider may charge a small fee to issue a completion certificate, or additional learning opportunities may be available to those paying a fee. Many open online courses attract a large number of learners, and since the early 2010s it has become common to use the word 'massive' to describe such courses. While a course with fewer than 100 learners can hardly be called 'massive', there is no particular threshold above which a course becomes 'massive'. Some MOOCs have more than 10 000 learners, while others may have only a few hundred.

**Hyflex learning** – a relatively new term in TEL. The core of hyflex learning is giving learners choice in how to engage with a learning programme: online or offline, in class or remotely, in real time or asynchronously.

**Monitoring, evaluation and learning (MEL)** – processes to collect, analyse, and learn from data on how participants engage with a TEL/TECD initiative.

**Completion rate** – the number of participants in an online course who meet the completion criteria for the course, usually expressed as a percentage (that is, the number of course completers divided by the total number of participants in the course).
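The calculation in this definition is simple arithmetic; as a minimal sketch (the function name and enrolment figures are our own illustration, not from the book):

```python
def completion_rate(completers: int, participants: int) -> float:
    """Completion rate as a percentage: completers divided by all participants."""
    if participants <= 0:
        raise ValueError("participants must be a positive number")
    return 100 * completers / participants

# Illustrative figures: 150 completers out of 600 enrolled participants
print(f"{completion_rate(150, 600):.1f}%")  # prints "25.0%"
```

Note that the denominator matters: a rate computed against all enrolees will usually be much lower than one computed against those who actually started the course, so the completion criteria and the participant count should always be reported together.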

**Accessibility** – in the TEL context, accessibility is usually equated with web accessibility. According to the World Wide Web Consortium (W3C), web accessibility 'means that websites, tools, and technologies are designed and developed so that people with disabilities can use them' and it 'encompasses all difficulties that affect access to the Web, including auditory, cognitive, neurological, physical, speech, and visual'.

**Synchronous communication** – a form of online communication in which participants exchange messages or converse in real time, for example, through voice or video calls or meetings.

**Asynchronous communication** – a form of online communication in which participants provide their input in their own time without requiring others to be engaged simultaneously. Email and online discussion forums are typical examples of asynchronous communication tools. Instant messaging apps are also geared towards asynchronous communication, but they allow participants to communicate in real time if they happen to be online at the same time.

## **About the editors**

**Joanna Wild** joined INASP in March 2016 to develop the organisational strategy for online and blended learning and to introduce new approaches to using digital technology to enhance capacity development. Since joining INASP, Joanna has collaborated closely with partners from Africa, East Asia and Latin America to understand context-specific requirements and co-design equitable and sustainable digital capacity development approaches. Before joining INASP, Joanna worked in the Educational Enhancement Team at the University of Oxford, researching aspects of online course design, evaluation of digital learning experiences, communities of practice (CoPs) and open educational resources (OER). In 2014, Joanna co-founded a consultancy company advising on the design and evaluation of effective, inclusive, and sustainable online and blended learning experiences. The company has worked with clients and partners such as Tate Britain, BBC Learning and The Open University UK. Joanna has more than 20 years' experience in advising, research and practice in technology-enhanced learning in higher education in the UK, EU and Global South, and has been funded by the ESRC, EPSRC, FCDO, SIDA, GIZ and EC.

**Femi Nzegwu** is an assistant professor of monitoring, evaluation and learning (MEL) at the London School of Hygiene and Tropical Medicine, and the MEL lead for the UK Public Health Rapid Support Team (UK-PHRST). She is a social researcher, MEL and international project management specialist with more than 25 years' experience in these fields, including institutional learning, institutional capacity strengthening and strategy development. She is highly multi-disciplinary and holds degrees in post-colonial studies, public health, sociology and economics. Before joining the UK-PHRST she was head of monitoring, evaluation, research and learning at the International Network for Advancing Science and Policy (INASP) in Oxford, and before that head of research, evaluation and impact at the British Red Cross. She has worked as a regional adviser for the United Nations and as an advisor/consultant to many UK and international agencies and governments on research, MEL, policy and institutional capacity strengthening and sharing – conceptually, strategically and practically. She is the author of numerous policy research, evaluation and learning papers and books.

#### **Digital Technology in Capacity Development**

Enabling Learning and Supporting Change

This book focuses on digital approaches to capacity development, reflecting the greater interest in how digital tools and platforms can be used for capacity development in the 'Global South'. While Covid-19 demonstrated some of the benefits of online learning, the widespread, often uncritical adoption of online tools driven by necessity has left many with an experience of 'emergency online learning'. This book aims to assist in the design of technology-enhanced capacity development by sharing evidence of practices that are principled rather than rushed; inclusive rather than creating new digital divides.


We have worked to evidence how technology can be leveraged effectively to enhance or strengthen capacities of individuals, teams or systems. We make clear that there are no magic bullets, that online approaches are not simply quicker or cheaper substitutes, and that solutions need to be selected carefully, designed well, and significant time invested if they are to work well.

We hope *Digital Technology in Capacity Development* will be of interest to researchers and practitioners in a range of institutions, whether they are directly responsible for designing, delivering or evaluating new initiatives or whether they are advising or funding those who do.

Cover image: Steve Johnson | unsplash


**Edited by Joanna Wild & Femi Nzegwu**

**Foreword by Professor Laura Czerniewicz**