The outcomes of community legal education: a systematic review, Justice issues paper 18 (2014)

Appendix 1: Further detail on methodology

The systematic review process

Systematic reviews are a method for identifying and synthesising research evidence in the literature to answer a specific question. The process involves a systematic and transparent search for literature, and the use of clear criteria to assess the relevance and quality of the literature reviewed (Karras & Forell, forthcoming).

Over the past few years the Law and Justice Foundation of NSW has been exploring how best to conduct systematic reviews of research in the legal assistance sector, which often includes qualitative and mixed-method research. Our review process was originally based on one developed by the Joanna Briggs Institute in Adelaide to review health-related research data (Joanna Briggs Institute 2008; for previous reviews see Forell and Gray 2009; Forell et al. 2011). Refinement of the process is ongoing, given the significant challenges of conducting systematic reviews in the legal assistance sector, where, for example, there is no single repository of relevant studies and most evaluations focus on process rather than outcomes, given the difficulty of identifying and attributing outcomes.

Research protocol

Before commencing the review, we developed a research protocol that defined the parameters of the systematic review, and set out the research questions, the inclusion criteria and our search strategy. The inclusion criteria were as follows.

Target group — community members or workers

The review covered studies of CLE programs that targeted either adult community members (aged 18 years or over) who face, or are at risk of facing, legal issues (and who are usually disadvantaged), or workers who support these community members. Students (in law or related disciplines) were excluded from the review.

Interventions – face-to-face CLE

We defined CLE as education with a primary legal focus, provided in a face-to-face format (either to a group or one-on-one), such as a lecture-style or interactive workshop, role play or legal theatre, delivered either in person or via teleconference. Further, the intervention needed to be legal education, as distinct from legal advice (i.e. advice tailored to a particular person’s situation) (Gander 2003, p. 4).

The CLE could have a primary focus on one or more legal issues, for example criminal law, family law, court processes, housing, tenancy, credit/debt, fines, road laws, domestic violence, drug laws, policing or intellectual property. It could include providing legal information relating to that problem and/or guidance about how to respond to it (and/or encouragement to do so). However, the provision of legal information alone, such as pamphlets or DVDs without accompanying education (as defined here), was excluded. Financial education programs for disadvantaged community members were assumed to have a primary focus on legal issues (in terms of preventing legal problems such as debt and bankruptcy), even if this was not explicitly stated.

Types of studies

The review included published and unpublished studies in English dated from 2000 to 2012, in which the aims, research questions or topics were clearly evident. The initial search included any study design or methodology: quantitative and qualitative studies, mixed-methodology studies, economic/cost-benefit studies, and systematic reviews of literature or meta-analyses (i.e. studies which extract and analyse data across studies). However, once the research question was refined to focus on effectiveness, only studies using an experimental or quasi-experimental design were included.

Literature search

The search strategy used (including the search terms drawn from our inclusion criteria) and the databases and websites searched are set out below. We believe that our search strategy was broad enough to have identified most of the published literature that met the study criteria.

Our initial search of relevant databases and websites identified 160 potentially relevant documents, which were then entered into an EndNote database. An initial review of the documents against the scope criteria culled this down to 47 studies.

Search terms for first round of searches (2011)

The search strategy involved a broad but shallow review of academic databases (legal, socio-legal, cultural studies, health, education and social science) and key websites using pre-defined search terms. This was carried out by one researcher and two librarians. A total of 160 articles that appeared most relevant to CLE were entered into a referencing library. This was part of a broader search for any research and evaluation into public legal assistance services. The search terms used are set out in Table A1.

Search terms for second round of searches (2012)

The relevant terms used to search both databases and websites in the second round of searches are listed in Table A2.


Area          Search terms
Legal area    law, legal, rights, debt, tenancy, domestic violence, consumer, financial, divorce
Intervention  seminar, workshop, community education, education, information, public legal education, programs, awareness raising, campaigns
Evaluation    evaluation, review, assessment, effectiveness, what works, outcome*, impact*
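The report does not state exactly how the terms above were combined when querying databases. As an illustration only (a common systematic-review convention, not the Foundation's actual query syntax), terms within each area are typically joined with OR and the resulting groups joined with AND, with multi-word terms quoted as phrases:

```python
# Illustrative sketch: combine search terms by ORing within each area
# and ANDing across areas. Hypothetical -- the report does not specify
# the Boolean structure actually used.
search_terms = {
    "legal area": ["law", "legal", "rights", "debt", "tenancy",
                   "domestic violence", "consumer", "financial", "divorce"],
    "intervention": ["seminar", "workshop", "community education",
                     "education", "information", "public legal education",
                     "programs", "awareness raising", "campaigns"],
    "evaluation": ["evaluation", "review", "assessment", "effectiveness",
                   "what works", "outcome*", "impact*"],
}

def build_query(terms_by_area):
    """OR terms within each area; AND the area groups together.
    Multi-word terms are quoted so they match as phrases."""
    groups = []
    for terms in terms_by_area.values():
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

print(build_query(search_terms))
```

In practice each database has its own field tags and truncation syntax (the `*` wildcards above), so a query like this would be adapted per database.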

Databases and websites searched for literature relating to CLE

Website strategy

The following websites were all searched between March and May 2011. Those sites that were found to be most useful in identifying literature were again searched between September and October 2012.

Legal services sites

• All Australian public and non-government legal assistance service sites in each state and territory, including Legal Aid Commissions, Community Legal Centres and peak organisations, Aboriginal Legal Services and Family Violence Prevention Legal Centres

• NZ, Canadian and UK public and non-government legal assistance service sites

Government justice and legal departments and agencies

• All Australian Commonwealth, state and territory government justice departments and agencies, including Attorney-General's and Justice departments and agencies (including the Australian Securities and Investments Commission); the Human Rights Commission and state anti-discrimination boards; complaints-handling bodies and tribunals; and the Department of Families, Housing, Community Services and Indigenous Affairs (FaHCSIA)

• All NZ, Canadian and UK government justice departments and agencies

• USA Federal Department of Justice

• USA pro bono, public interest and poverty legal service sites

• Other relevant USA sites to which links were provided on the above sites

Legal service research sites

• Access to justice/legal needs research agencies/centres or faculty sites in English

• Australian law faculties

Education websites

• Websites with broader community education information (i.e. adult education, multicultural education, Indigenous education)

• Education research centres, foundations, institutes

• Social marketing websites

Research databases

English-language socio-legal databases, including databases focused on particular legal issues or interest groups:

• Australian Domestic and Family Violence Clearinghouse

• Australian Centre for the Study of Sexual Assault

• Indigenous Justice Clearinghouse

• financial/debt services related databases

• National Cannabis Prevention and Information Centre

International social policy and social science databases

• Social Science Research Network (SSRN) (US)

• Institute for Public Policy Research (UK)

• Social Services Abstracts

• Evidence Network (Kings College London)

• ASSIA (Applied social science index & abstracts)

• Australian Policy Online


• Australian Institute of Health and Welfare

• Australian Centre for Youth Studies

Other specialist research databases

• Campbell Collaboration

• EPPI Centre

Education databases/websites

• British education index

• CBCA education (Canada)

• ERIC Education Resources Information Center

• Sociology of Education Abstracts

• National Evidence in Education Portal

• National Foundation for Educational Research (UK)

• Ebscohost, education abstracts

• Canadian Research Index

• Expanded academic ASAP (GALE)

• Proquest research library


• CINAHL

• PsycINFO

(Other) grey literature databases and sources

• CrimDoc

• Greysource

• HMIC (Healthcare management information consortium)

• National Criminal Justice Reference Services database

• PAIS international database

• Rutgers University grey literature database

• Google Scholar

Searches relating to health education

Given the limited amount of literature relating to the effectiveness of CLE, and the stronger culture of evaluation in the health field, we also collected a body of literature relating to the evaluation of the effectiveness of health education. The search process for this literature is set out below. Only systematic reviews and meta-analyses were included, since these had already assessed the appropriateness of the methodology of the studies they covered. The 21 systematic reviews identified were appraised first for relevance, and then for methodological appropriateness. Cochrane reviews were assumed to be of sufficient methodological quality for inclusion. Non-Cochrane reviews were assessed using the AMSTAR assessment tool for systematic reviews. The papers included scored a minimum of 5 points out of 11 using this tool, and in all cases met what we assessed as the most critical criteria relating to methodological appropriateness. Fourteen papers were included in the study.
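The AMSTAR-based screening rule described above (at least 5 of 11 criteria met, and every criterion treated as critical met) can be sketched as a simple decision function. This is illustrative only: the actual appraisal was done manually by the reviewers, and the item names below are hypothetical.

```python
# Hypothetical sketch of the AMSTAR-based screening rule: include a
# non-Cochrane review only if it meets at least 5 of the 11 AMSTAR
# criteria AND meets every criterion treated as most critical.
# (Illustrative only -- the actual appraisal was done by hand.)

def include_review(criteria_met, critical):
    """criteria_met: dict mapping each of the 11 AMSTAR items to True/False.
    critical: set of item names treated as most critical."""
    score = sum(criteria_met.values())
    meets_critical = all(criteria_met[c] for c in critical)
    return score >= 5 and meets_critical

# Example (hypothetical items): a review meeting 6 of 11 items,
# including both items treated as critical, would be included.
items = {f"item{i}": i <= 6 for i in range(1, 12)}
print(include_review(items, {"item1", "item2"}))  # True under these assumptions
```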

Search strategy used

The titles of articles published between 2007 and 2011 in the Medline database were searched using the terms:

• education or educational or training, and

• trial or study or randomised or randomized or effectiveness or review or outcome or outcomes.

The search identified 3,963 papers.

Initially, article titles were examined for potential relevance; the abstract of any paper with a potentially relevant title was then examined.

We excluded from further consideration any reports that:

• did not include outcome data

• were about a small-scale pilot study, a preliminary evaluation, a qualitative study, or a needs analysis

• involved sample sizes which were likely to be too small to enable detection of outcome effects (<50 participants)

• were about educational or training programs which involved repeated contact over an extended time period (e.g. weekly education sessions for a semester)

• involved training of health professionals with hands-on practice of specific diagnostic or other clinical skills (e.g. with virtual reality simulators)

• involved participants who were school-age or younger children

• were about physical or exercise training programs

• were about counselling or psychotherapy processes

• were about behaviour modification interventions (e.g. health behaviour change, chronic disease management, weight loss)

• involved the training or rehabilitation of patients who were psychiatrically or intellectually disabled (e.g. following a stroke)

A further cull was conducted, so that only papers which provided a systematic review of literature were included.

Overview of the literature appraisal process

Table 1 illustrates how we filtered studies through both our search and appraisal processes. The literature appraisal process included three stages relating to scope and methodological appropriateness.

Ideally, systematic reviews involve two researchers independently assessing the relevance and methodological appropriateness of the studies and, where there are inconsistencies, discussing these to reach a shared position, in order to make the assessment process as objective and rigorous as possible (Karras & Forell, forthcoming).

For the current project:

• In the document relevance appraisal stage, the primary reviewer conducted an initial cull against the criteria, and discussed only the ‘borderline’ studies with a second reviewer to develop a shared position.

• The methodological appropriateness appraisal phase of the project included a second stage, in which a sub-sample of 15 studies that were closer to the appropriate methodology underwent a further appraisal process (discussed further below).

Literature appraisal — scope

A rigorous assessment was conducted of the 47 studies against the study inclusion criteria described above, and of these, 30 studies were assessed as meeting the inclusion criteria.

The main reasons why studies did not meet the inclusion criteria were:

• the CLE did not have a primary legal focus

• the aims of the research were not clear

• the studies did not include any data relating to the types of outcomes of relevance to this study (i.e. changes in attitudes, skills, motivation to act or behaviour).

It should be noted that at least some of the studies assessed were largely descriptive and did not aim to provide evidence of reach or outcomes, and for some studies where the CLE component was part of a larger project, there was only a small amount of data relating to the CLE component.

The fact that only 30 of the original 160 studies identified (i.e. around 19%) met the study criteria illustrates the point that a well-constructed search strategy for a systematic review may identify a very large number of potentially relevant studies; however, on closer examination, very few of those studies may prove both relevant and methodologically acceptable to include (Karras & Forell, forthcoming).

Literature appraisal — methodological appropriateness for our study questions

Stage 1

In the first stage of the process to assess the appropriateness of the studies’ methodology to answer our study questions, the 30 studies that met all of the inclusion criteria were assessed by the primary reviewer using the specified criteria.

The data appraisal form used asked three broad questions of the evidence presented:

• Was the data/evidence collection methodology appropriate given the (apparent) questions or topics?

• Was the data/evidence collection methodology appropriately executed given the (apparent) questions or topics?

• Were the data reporting and data analysis procedures appropriate for an effectiveness study and adequate to avoid significant bias?

An appendix to the data appraisal form set out specific questions that the reviewer needed to ask in order to assess whether the methodology was appropriate, depending on the specific nature of the methodology (qualitative, quantitative, economic/cost-benefit research or systematic reviews/meta-analyses).

The reviewer was also asked to provide their assessment of the overall trustworthiness of the document’s data.

As a result of this first stage of the methodological appropriateness appraisal process, studies were divided into two categories:

• 15 studies that clearly did not meet the specified criteria

• 15 studies that were closer to the appropriate methodology

The systematic reviews were assessed using the AMSTAR tool for systematic reviews (Shea et al. 2009).

Stage 2

In the second stage of the methodological appropriateness appraisal process, the sub-sample of 15 studies that were closer to the appropriate methodology for our study questions were subjected to a more detailed review process. These studies were appraised by two researchers working together, who reached a consensus on the quality. For a small number of studies, a third researcher was also involved.

As a result of this second stage, two studies were identified for inclusion in the systematic review and a total of 28 were excluded. One of the two included studies was itself a systematic review, and scored 10 on the AMSTAR tool.

Studies excluded from the systematic review

Almost all of the studies appraised against the methodological appropriateness criteria — 28 of the 30 — were excluded from the systematic review. The key reasons for exclusion are summarised in Table A3.

A separate paper by the Law and Justice Foundation of NSW will discuss options for evaluating CLE.

In considering the above limitations of the literature, it is also important to bear in mind the nature of the literature reviewed, which was a very ‘mixed bag’ of material (see Table A4).

Most studies were either:

• evaluation reports produced by external private consultants for the organisation funding the program being evaluated, typically after the program had been implemented (when there would have been very limited capacity to influence the type of data that could be collected), or

• studies produced internally by organisations running CLE programs, many of which were not formally described as evaluations or resourced to explore outcomes.

Only six studies were articles from refereed academic journals (i.e. journals in which articles to be published are selected by a panel of experts in the field).

The two studies that were assessed as being methodologically appropriate were both published in refereed journals.

Of the 30 studies reviewed, 12 (40%) were reports on a broader initiative that included one or more other elements as well as the CLE (e.g. distribution of written materials, legal advice or establishing interagency networks). These studies therefore tended to have less (and/or less rigorous) data on the CLE component than studies that had this as their sole focus.

Lastly, the selection of a very small number of ‘rigorous’ studies from a large pool is not unusual for a systematic review. The vast majority of systematic reviews (if not all) find that only a small proportion of studies reach the methodological appropriateness standard in the health sector (Young et al. 2011), and even fewer do so in the justice sector (Forell et al. 2011).

Data extraction and synthesis

For the two studies selected for inclusion in our systematic review, we identified and extracted key findings nominated by the studies’ authors, and evidence that supported those findings. Evidence is the data from which the findings are derived. The evidence for each finding was ranked according to three categories:

• ‘Supported’ — findings that are clearly and directly supported by the data in the body of the report.

• ‘Credible’ — findings that can be logically inferred from the data, but for which there are other plausible explanations.

• ‘Not supported’ — findings that are not supported by the reported data.

Any findings that were not supported were excluded from the review. Findings that were credible were only used in combination with other supported findings.


Wilczynski, A, Karras, M & Forell, S 2014, The outcomes of community legal education: a systematic review, Justice issues paper 18, Law and Justice Foundation of NSW, Sydney.