«Abstract One of the reasons that research is conducted is to build the evidence base to inform strategic or policy directions. In this context, the ...»
Summary of literature review
The strongest grounds for generalisability in qualitative research begin with rigorous attention to the definition of what is meant by the term itself. It is probably not necessary to seek new language. The qualitative paradigm has long since come of age; it is in a position to use terms like generalisability without apology and in its own right. Defining terms or priorities (Metcalfe 2005), however, is always a good idea.
Much of the writing surveyed in this literature review is in agreement that qualitative studies may form a basis for understanding situations other than those under investigation. The strength of this basis depends again on rigour—that of a study’s design and methods for gathering and analysing information-rich data (Yin 2003a, b);
its attention to validity, reliability, and triangulation (Patton 2002); and a well-developed theory emerging from the findings (Johnson and Christensen 2004).
Three illustrative cases
Case 1: Three converging case studies of rigorous sampling and micro-empiricism
A new and generalisable theory of learning was generated from three intensive case studies (Falk and Harrison 1998, 2000; Falk and Kilpatrick 2000). The research, funded by the Australian National Training Authority (ANTA) in 1998 (Falk and Harrison 2000), analysed community interactions to show aspects of the quality of the processes that build social capital. The research was theory-building, using the principles of grounded theory as in Glaser and Strauss (1967), Lincoln and Guba (1985) and Strauss and Corbin (1990), rather than theory-testing. The theory so developed stands as a generalisable model for interactive learning processes.
The methodology was qualitative, using a three case study structure with ethnographic techniques for data collection and a range of analytic techniques discussed below. The three sites were selected for their different features (size and nature of industry base, degree of community organisation activity), though each was a whole ‘small community’ of between 5,000 and 10,000 people. This type of multiple case study design is what Yin (2003a:47) describes as a replication design from the basis of a ‘theoretical replication’. In this way, the focus of the study, which was on the nature of the interactive outcomes between community members, could be related to the variables of the employment base and community organisational dynamic in action, while at the same time providing more solid grounds for generalisability. In each of the three sites, the sample of participants was identified through a purposeful technique checked against socio-demographic variables.
Triangulation was provided in a number of ways. There were three layers of validity checks: (a) the use of multiple theoretical and conceptual lenses to examine the issues and parameters involved before beginning the research; (b) the depth and extent of the sampling processes and feedback, member-checking and other data collection mechanisms; and (c) the multiple data analytic techniques used to align interpretations and test for consistency and categories across the data sets. All of these provided the bases for warrantable generalisability.
Case 2: A case of mixed methods
An example of a mixed methods approach is drawn from a study conducted for the Northern Territory Council of Social Service (NTCOSS) in 2004 (Northern Territory Council of Social Service 2004). The purpose of the research was to investigate how pathways to employment and training opportunities in the Northern Territory can be created and improved for employment disadvantaged groups. The research involved several components: 1) an extensive international literature review; 2) a national review of ‘what works’; 3) development of a statistical profile for each of nine employment disadvantaged groups; and 4) a series of 70 semi-structured interviews and focus groups among stakeholders across the Northern Territory. The research findings were used to develop principles of ‘what works’ in the Northern Territory and recommendations for strategic policy implementation.
In this case, generalisability was applied to the Northern Territory context and to a quite specific target audience. The integrated mixed methods approach supported and underpinned the formation of practice principles, which in turn were applied to the strategic policy context of the Northern Territory. In terms of outcomes, this project is being used as a basis for the Northern Territory Government’s Employment Disadvantaged Pathways Project (Northern Territory Department of Employment Education and Training 2006) and has helped shape further research conducted by stakeholder groups (Morton et al. 2006). The authors of this paper are also working on another project using the NTCOSS methodology to build knowledge and understanding of the role of vocational training in the Australian Government’s Welfare to Work strategy (Guenther et al. 2007). These outcomes demonstrate how applicable—indeed generalisable—the findings of projects based on this methodology are.
Case 3: A case of multiple case studies
A final example comes from the ANTA funded Role of VET research conducted by the Centre for Research and Learning in Regional Australia (CRLRA) (2001). The research involved a 10 site program of research conducted over two years, which used case studies of the role of vocational education and training to consider principles of effective delivery in regional areas of Australia. While this research could rightly be described in terms of a ‘mixed methods’ approach because it relied on triangulation with internal (quantitative surveys) and external quantitative data (site statistical profiles), the breadth and depth of the qualitative data stands out by itself. Sites for this research were selected from regional centres across Australia: two each in New South Wales and Queensland; one each in Victoria, Western Australia, South Australia, Tasmania and the Northern Territory; and one New South Wales–Victoria cross-border site. The 10 case studies involved more than 700 semi-structured interviews with identified VET stakeholders.
In the Role of VET research, interviews were transcribed, coding and initial thematic analysis were conducted using qualitative data analysis software, and detailed site-by-site analysis of the emerging themes was carried out using a standardised framework of categorisations based on an OECD (1982) set of social indicators. While the data did show the uniqueness of each site in a variety of ways, several themes appeared across all or several sites. The consistency of some of these thematic patterns gave rise to a synthesis of findings, from which generalised principles were derived. While we were careful at the time to say that these principles should only be applied to the sites concerned, it has been interesting to note that many of the findings and principles have been replicated in other more recent research, using the same framework of categorisation (e.g. Guenther 2005; Guenther et al. 2006).
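The coding-and-synthesis step described above can be sketched in a small illustrative example. This is not the CRLRA software or the OECD framework itself; the theme codes, category names and site labels are hypothetical, and the sketch only shows the general idea of tallying coded interview themes against a standardised set of categories so that themes appearing across several sites become visible.

```python
# Illustrative sketch (hypothetical codes and sites): mapping interview theme
# codes onto a standardised framework of categories, then recording which
# sites each category appears in, to surface cross-site thematic patterns.
from collections import defaultdict

# Hypothetical framework: raw theme code -> standardised category.
FRAMEWORK = {
    "shared_learning": "social_wellbeing",
    "local_networks": "social_wellbeing",
    "job_placement": "economic_wellbeing",
}

def cross_site_themes(coded_segments):
    """coded_segments: list of (site, theme_code) pairs from interview coding.

    Returns a mapping from standardised category to the sorted list of sites
    in which that category was observed.
    """
    sites_per_category = defaultdict(set)
    for site, code in coded_segments:
        category = FRAMEWORK.get(code)
        if category is not None:
            sites_per_category[category].add(site)
    return {cat: sorted(sites) for cat, sites in sites_per_category.items()}

segments = [
    ("site_a", "shared_learning"),
    ("site_b", "local_networks"),
    ("site_a", "job_placement"),
]
print(cross_site_themes(segments))
```

A category observed in all (or most) sites would then be a candidate for a synthesised, generalised principle; one observed in a single site remains site-specific.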
So what can we conclude from this discussion?
The foregoing discussion has several implications for generalisability in qualitative research, and we put these forward knowing that the field of VET research has been proactive in fostering qualitative research and using its outcomes.
First, generalisability is possible from qualitative and mixed research methods. It is possible partly because of the replicability of the findings across several populations.
So if, using the same methods, we can demonstrate the same findings in several (like or even unlike) population groups, then we can correctly assert that the findings are generalisable beyond the initial one or two cases. This process of replication is based on assumptions not too dissimilar from those used in quantitative methodologies, which rely on representative samples as the basis for extrapolation to a broader population group. The idea is akin to Yin’s (2003b:49-53) ‘literal replication’ and finds support in several examples from case study practice. Smith and Henry (1999), for example, develop a set of generalisable case study ‘protocols’ so that case study methods are duplicated to enable comparability of findings across a number of scenarios or sites. Similarly, CRLRA (2001), in the series of 10 Australian case studies discussed above, established standard methodologies for each case study site and were able to ‘quantitize’ the findings according to an agreed framework. In both these examples the ‘protocol’ or ‘framework’ is built on a set of guiding parameters that ensure the integrity and comparability of the findings and which enable a synthesis of findings based on a robust methodological design.
An extension of this sees the outcomes of a series of case studies as a result of a type of qualitative ‘hypothesis test’, not dissimilar to an empirical scientific experiment that sets out to demonstrate or prove a scientific theorem or law—we can describe this as a ‘deductive’ (as opposed to inductive or theory building) method (Johnson and Christensen 2004:18). The difference of course is that ‘proof’ of the law in scientific terms is most often associated with probabilities and repeatability of numerical results under set conditions. In qualitative research, while it is possible to ‘quantitize’ text-based findings—‘converting qualitative data into numerical codes that can be statistically analysed’ (Miles and Huberman 1994, cited in Tashakkori and Teddlie 2003:714)—this is not the same, in part because it is generally impossible to reconstruct the conditions under which the ‘experiment’ was undertaken. However, we argue that the same methodological principle applies: that is, a robust methodology allows us to test, prove and/or disprove a theorem regardless of whether the method is qualitative or quantitative.
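The ‘quantitizing’ step mentioned above can be illustrated with a minimal sketch. The coding scheme and numeric values below are invented for illustration only; they are not drawn from Miles and Huberman or from the studies discussed, and real quantitizing would sit inside a much larger coding protocol.

```python
# Minimal, hypothetical sketch of 'quantitizing': converting qualitative
# codes attached to transcript segments into numerical values that could
# then be analysed statistically (counts, proportions, cross-tabulations).

def quantitize(transcript_codes, code_values):
    """Map each qualitative code to its numeric value; unknown codes are skipped."""
    return [code_values[c] for c in transcript_codes if c in code_values]

# Hypothetical binary coding scheme: presence (1) / absence (0) of an attribute.
code_values = {"trust_expressed": 1, "trust_absent": 0, "network_use": 1}

codes = ["trust_expressed", "network_use", "trust_absent"]
numeric = quantitize(codes, code_values)
proportion_positive = sum(numeric) / len(numeric)
print(numeric, proportion_positive)
```

Even with such a conversion, as the paragraph above notes, the result is not equivalent to a repeatable experiment: the numbers inherit the interpretive judgements made during coding, and the original conditions cannot be reconstructed.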
Second, generalisability is also possible on the basis of theory building—that is, the ‘inductive’ approach. For example, as patterns of behaviour are observed across multiple and potentially contrasting research objects, conclusions may be drawn about factors that contribute to those patterns—that is, how and why the behaviour occurs. It is possible through a ‘theoretical sampling’ process (Charmaz 2000:519) to build theory so that across a range of scenarios, patterns of behaviour are predictable (and therefore generalisable). In terms of case study methodology, this could be described as a ‘theoretical replication’ (Yin 2003b:49-51). Again, this approach has a close cousin in scientific (quantitative) methods. In science this process is used when a series of observations is made to explain and predict patterns of behaviour (Johnson and Christensen 2004:19). An example of this is the development of Darwin’s theory of evolution.
Third, generalisability is possible because of the receiving audience’s perceptions.
This, on the surface, appears to be a dangerous statement to make because it challenges the notions of true, objective, scientifically valid research and may be interpreted as research that appeases the intended audience. Several counters can be made to this argument. First, much so-called scientific quantitative research can be tailored to suit the perceptions of the intended audience. Consider for a moment science based research reports on a number of issues: smoking; nuclear power; forest practices; farm nutrient discharges into environmentally sensitive areas. A ‘spin’ can be placed on any of the findings to say whatever the audience wants to hear. Second, many of the generalised findings of quantitative research, which are extrapolated to a larger population on the basis of representative sampling schemes, simply do not apply to many sub-population groups and seemingly disregard the context of these particular groups. A case in point is the recent release of the Australian Bureau of Statistics (ABS 2006) Measuring Australia’s Progress report, which highlights generalised improvements across a number of indicator bands for Australia as a whole. Because the focus in this kind of methodology is on ‘generalised’ findings, and the audience is assumed to be interested in just these, a large number of important findings which are not ‘generalised’ are disregarded. The report itself acknowledges the limitations of the findings, especially for Indigenous people. This illustration highlights the need for any research findings (qualitative or quantitative) to address the context of the receiving audience. Therefore, while we often rightly note the limitations of small-scale qualitative research studies, in some cases the relevance and generalisability of the findings from a purposefully selected sample to similar groups in an intended audience may be recognised for its credibility by researchers (who understand both the sending and receiving contexts) and the audience (who apply it to the receiving context).
Fourth, generalisability is possible through a combination of any or all of the above. In most of the examples given in this paper, including the three cases discussed in more detail, the methods are mixed. Here, let us not confuse ‘mixed methods’ as a mix of qualitative and quantitative with a mix of different techniques within a solely qualitative framework, as in triangulation; we are including both these options. In the kind of research methodologies we are concerned with here, a considerable degree of warrant for generalisability is built through the care the researchers have taken to account for detail, include variation in the sample, triangulate methods and techniques, and report and consider outliers and limitations. Readers are usually left with the impression that, even though the research is qualitative and we are not supposed to generalise from it, we are inclined to do so.
Our own principles of logic tell us that we can do so, and with a degree of confidence.