How to Tell the Difference Between Good and Poor Research

Applying research to practice is an integral part of being a learning professional. However, for a variety of reasons, there is often a lag between research and practice. To use evidence-informed learning design, we need to be aware of our own confirmation biases, and then recognize those biases in researchers and stakeholders. We also need to remember that a recommendation accompanied by a citation is not automatically accurate or effective for performance outcomes. In this article, I briefly touch on the issue of distinguishing between good research and poor research. Because…


Not all science is ‘good science’!

To tell the difference between good and poor research, you should be able to assess the credibility of a paper. Below, I share some tips for evaluating a research paper (an empirical study1) and welcome others’ ideas as well.
1. Start with the Abstract: See if it is clear, unambiguous, and describes the key research elements, such as the research design, participants, and findings or scope. See an example of a good abstract here.
2. Goal of the study: Is the purpose of the study clearly stated at the beginning? This will give you a clear understanding of the focus of the paper and the gap it intends to bridge. A reader needs to know why they should read the paper, how it benefits them, and whether the paper is coherent.
3. Check the Methods: See if the authors have clearly described the research design (such as the participants, the approach, how samples were recruited or randomly assigned, what participants were tasked to do, and how data were collected). See an excerpt of a Method section below:

[Image: excerpt of a Method section]
4. Research Questions/Hypotheses: Are the questions/hypotheses clearly stated and specific enough? Does the paper even have research questions or hypotheses, and are they linked to previous research? Very general questions that are hard to measure can also lead to bias. Check out the following examples:

Hypotheses in one study: According to the previous studies, we hypothesized that: (a) the psychological well-being of mothers of normal-functioning children is higher than that of mothers of autistic and blind children, and (b) mothers of blind and autistic children are different in terms of their psychological well-being.

Hypothesis in another study: Based on this analysis, the proposed research broadly predicts that students who view an instructor draw diagrams during a concurrent oral explanation will perform better on a transfer test than students who view the equivalent static (i.e., already-drawn) diagrams while listening to the same oral explanation.

Research Questions in one study:
When solicited during an interview, are eighth graders able to express epistemic reflection along the four dimensions identified in the literature?
Are patterns of epistemic metacognition identifiable when students’ answers are clustered for their levels of sophistication?
Is epistemic metacognition related to individual differences such as prior knowledge, study approach, and domain-specific beliefs about science?
Is learning online information from multiple sources influenced by epistemic metacognition in context and the individual differences examined?

Research Questions in another study:
Which level(s) of learner factors – epistemology (epistemological beliefs), attitudes (attitudes toward technology use), and strategies (approach to learning – deep learning and surface learning) influence higher-order thinking?
How do these factors directly or indirectly affect higher-order thinking?

5. Tally the Sample Size and Results to detect bias: Sampling error is a common issue that can lead to bias. Do the authors clearly explain how they sampled the population? Also, how large is the sample? A sample should represent its population. Read here to learn more about sampling errors.
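If you want an intuition for why sample size matters, the effect of sampling error can be illustrated with a quick simulation. This is a hypothetical sketch (the population of "scores" is made up, not taken from any study mentioned here): means of small samples scatter widely around the true population mean, while means of larger samples stay close to it.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 100,000 "test scores" with a true mean near 70
population = [random.gauss(70, 10) for _ in range(100_000)]
true_mean = statistics.mean(population)

def avg_sampling_error(sample_size, trials=500):
    """Average absolute gap between a sample mean and the population mean."""
    errors = []
    for _ in range(trials):
        sample = random.sample(population, sample_size)
        errors.append(abs(statistics.mean(sample) - true_mean))
    return statistics.mean(errors)

# Larger samples track the population mean more closely
small = avg_sampling_error(10)
large = avg_sampling_error(1000)
print(f"average error with n=10:   {small:.2f}")
print(f"average error with n=1000: {large:.2f}")
```

Running this shows the average error shrinking sharply as the sample grows, which is exactly why a tiny or unrepresentative sample should make you cautious about a paper's conclusions.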

6. Check out the Results: Do you easily understand what the authors are reporting as their findings? Are data and numbers consistent throughout the paper? Do you see complicated equations or confusing figures/tables/graphs? Are they using too many arcane technical terms that are hard to understand (poor writing style)? Did they report the effect size2?

While some knowledge of statistics helps you verify the statistical reports and the appropriateness of the analyses, you need not rely fully on your own judgment (the reviewers of a rigorous journal should catch problems). After all, a poor paper will avoid reporting statistics or will use very complicated and misleading ones (and yet such papers do get published!). See an excerpt of a good Results section below (d refers to effect size):

[Image: excerpt of a Results section]
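To make the d value above less abstract, here is a minimal sketch of how a Cohen's d is computed, using the standard pooled-standard-deviation formula. The two sets of transfer-test scores are hypothetical, invented only for illustration:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical transfer-test scores for two instructional conditions
drawn_diagrams  = [78, 82, 85, 74, 90, 81, 77, 88]
static_diagrams = [70, 75, 72, 68, 80, 73, 71, 76]
print(f"d = {cohens_d(drawn_diagrams, static_diagrams):.2f}")
```

A d around 0.2 is conventionally read as a small effect, 0.5 as medium, and 0.8 or above as large, which is why a Results section that reports only p-values and no effect size tells you little about how much the intervention actually mattered.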
7. Look for sweeping claims in the Discussion: These might hint at confirmation bias, where the authors overgeneralize their findings to match what they were aiming to find.

I admit that the above tips are not exhaustive, and I recommend reading as many papers as possible and comparing them with each other. Here are some tips to identify a poor journal article.

Some More Examples
Here are more examples of good abstracts:
https://psycnet.apa.org/record/2018-58542-001
https://psycnet.apa.org/record/2011-18448-001

Final Thoughts
Lastly, we need to admit that one of the major challenges most learning professionals face is a lack of access to online databases and journal articles. Unfortunately, many organizations do not appreciate that learning professionals should have access to current research. It would be great if companies subscribed to a few rigorous journals and made them available to their L&D staff.

Book Recommendations
Here are a couple of suggested reads if you are interested in learning more about biases and judgment:
1. “When Can You Trust the Experts?” by Daniel Willingham. I wrote an overview of it a few years ago, but it’s worth reading the book on your own. In this book, Willingham points out how persuaders might use “research” to sell their ideas and how our judgments are influenced by different factors.
2. “The Structure of Scientific Revolutions” by Thomas Kuhn. In this book, Kuhn highlights what ‘normal science’ is and how it can be affected by people’s views (or biases) and go off-track. He introduces ‘revolutionary science’, which leads to paradigm shifts and changes in the direction of scientific research.
3. “Thinking, Fast and Slow” by Daniel Kahneman. Kahneman introduces the two systems in our brain that influence our decision-making and judgments. System 1 thinking is intuitive, operating automatically with no sense of voluntary control, and System 2 thinking is complex, relating to our conscious, attentive, and reasoning side.
4. “Noise” by Daniel Kahneman (with Olivier Sibony and Cass Sunstein). The authors make a distinction between ‘noise’ and ‘bias’ and dive deep into how they occur in organizations and in our personal judgments.

I end this with a quote from Daniel Kahneman’s book Noise:
“If there is conclusion bias or prejudgment, the evidence will be selective and distorted: because of confirmation bias and desirability bias, we will tend to collect and interpret evidence selectively to favor a judgment that, respectively, we already believe or wish to be true.”

1 An empirical study is a type of research that gathers evidence through observation or experience.
2 The effect size is a measure of the magnitude of an effect, such as the size of the difference between two groups, independent of sample size.



#showyourwork - Building a Community of Practice

With the onset of the pandemic, all universities had to shift to an online format. For some, in-person courses had always been the main format, and the sudden transition to online courses posed a huge challenge to those involved, especially faculty and students. When I was asked to support faculty, I saw this as a great opportunity to put research into practice and build a community of practice, rather than being the only person trying to save them. The following is what I did to support highly anxious faculty, some of whom were intimidated by technology. Read More…

How Can we Make Failure Productive & Avoid Unproductive Success in Learning Design?

Many times, learning professionals and stakeholders tend to rely on common practices or fads rather than evidence in their design approach. One of these design approaches is providing content and then adding quizzes at the end with a pass or fail outcome. Read More…

How Can We Motivate Employees to Learn?

How can an organization motivate its staff to learn and improve their performance? This has always been a question during my career as a learning professional. Read More…

The Impact of Guided Discovery vs. Didactic Instruction on Learning

Initially posted on www.learningscientists.org

Previous research has identified didactic instruction as an effective approach for learners who lack prior knowledge. The evidence suggests that the degree of guidance should vary with the age of learners. Direct instruction can be more beneficial for younger learners (e.g., elementary and middle school children), whereas older learners gain more from non-directive guidance or guided discovery [1]. Other research findings indicate that guided discovery is more effective than lecture-based instruction in that learners develop a deeper understanding of concepts and their underlying structure.
Read More…