Critical Appraisal for Research Papers


Part I: Quantitative

Introduction

Steglitz, Warnick, Hoffman, Johnston, and Spring (2015, p. 332) define evidence-based practice (EBP) as a methodology aimed at improving the research process by facilitating the attainment of high-quality research evidence and its translation into better decisions that improve health. EBP is thus a foundational approach to healthcare professional practice (Lehane et al., 2019, 103). The Nursing Council of Hong Kong (2015) indicates that nursing should be an evidence-based and client-focused practice. A key to the delivery of quality healthcare services is the availability of competent and qualified healthcare professionals, and such competence includes the ability to conduct evidence-based research and practice. In that respect, this paper presents a critical appraisal of two research papers that applied different research methods, using the checklists provided by the Critical Appraisal Skills Programme (CASP, 2020).

Paper One: Wong and Wong (2020)

Is The Basic Study Design Valid for a Randomised Controlled Trial?

1 Did The Study Address A Clearly Focused Research Question?

No

The study states the research aim and the intervention: it aimed to evaluate the effectiveness of a nurse-led proactive self-care program on psychological outcomes. However, there is no well-constructed research question (CASP, 2020). A clearly focused research question could have been formulated using the PICOT format, for instance: would there be greater improvements in psychological outcomes (Outcome) when a nurse-led proactive self-care program is delivered (Intervention) to older adults aged 60 years and above (Population), compared with those receiving usual care (Control), over a three-month period (Time)? Eriksen and Frandsen (2018, 421) state that a well-constructed PICOT question is essential for high-quality research projects, as it supports searching for precise answers.

The research studied a population of older adults aged 60 years and above, but there is no explanation of why this age threshold was chosen rather than including those below 60. A well-described intervention was administered in stages, including a loading (intervention) dose comprising assessment, intervention, and evaluation, delivered through home visits and follow-up telephone calls (Wong and Wong, 2020, 3). The authors described what usual care involved, stating that participants received the usual community services such as regular basic health checks and health talks. The outcomes, namely life satisfaction, depression, and mental quality of life, were well described, together with the measures used to assess them.

2 Was The Assignment of Participants to Interventions Randomized?

Yes

The study indicates that 457 eligible participants agreed to take part and were randomised into intervention (n = 230) and control (n = 227) groups. Randomisation is a suitable approach for an RCT, as it helps eliminate bias at the time of allocation by ensuring that participants are randomly assigned to either arm (treatment or control) of the study (Renjith, 2017, 45). The randomisation was sufficient to prevent systematic bias: a member of the research team who assigned participants to the two groups had no knowledge of the participants' identities, and assignment was made according to computer-generated numbers (1 = intervention, 2 = control).
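
For illustration only, and not as the authors' actual procedure, the sketch below shows one way a computer could generate a 1/2 allocation sequence of the kind described above (Python is used here purely for demonstration):

```python
# Illustrative sketch only: one way to generate a computer-based 1/2 allocation
# sequence for 457 participants (1 = intervention, 2 = control).
# This is not the trial's actual allocation code.
import random

random.seed(2020)                      # fixed seed so the sequence is reproducible
n_participants = 457
sequence = [random.choice([1, 2]) for _ in range(n_participants)]

print(sequence[:10])                   # first ten assignments in the sequence
print("intervention:", sequence.count(1), "control:", sequence.count(2))
```

Simple randomisation such as this would not guarantee the exact 230/227 split reported, so the trial presumably used a related but unspecified allocation procedure.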

The allocation sequence was concealed from the participants. It is indicated that group assignments were placed in sealed envelopes and revealed sequentially during randomisation (Wong and Wong, 2020, 3). Allocation concealment prevents selection bias by protecting the randomisation sequence until participants have been allocated to the study arms. The paper also contains a CONSORT flow diagram illustrating how participants were identified and how the trial was completed (CASP, 2020).

3 Were All Participants Who Entered The Study Accounted For At Its Conclusion?

Yes

The study described the follow-up rates and losses to follow-up in both the intervention and control groups: 7.83% of participants in the intervention group declined to continue because of the program's intensive schedule, while 2.64% of participants in the control group withdrew due to time constraints. Loss-to-follow-up information at the six-month follow-up is also included. The study explains how the researchers attempted to reduce bias in the results, namely by applying intention-to-treat analysis (Wong and Wong, 2020, 4), a strategy that ensures the inclusion of all recruited participants and their data despite any subsequent deviation or withdrawal from the protocol (McCoy, 2017). However, intention-to-treat analysis can succeed only when outcome data are available for all randomised participants, making it essential to follow up withdrawn subjects (CASP, 2020). The study gives reasons for participants' withdrawal, implying that they were followed up.
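
As a purely hypothetical sketch of the intention-to-treat principle (the identifiers and data below are illustrative and not taken from the trial), participants remain in the arm to which they were randomised even if they later withdrew, whereas a per-protocol analysis would retain only completers:

```python
# Hypothetical illustration of intention-to-treat (ITT) grouping versus
# per-protocol grouping. The records below are invented for demonstration.
participants = [
    {"id": 1, "arm": "intervention", "completed": True},
    {"id": 2, "arm": "intervention", "completed": False},  # withdrew from the program
    {"id": 3, "arm": "control", "completed": True},
]

itt_groups = {"intervention": [], "control": []}
per_protocol_groups = {"intervention": [], "control": []}

for p in participants:
    itt_groups[p["arm"]].append(p["id"])                   # ITT: analysed as randomised
    if p["completed"]:
        per_protocol_groups[p["arm"]].append(p["id"])      # per-protocol: completers only

print(itt_groups)           # {'intervention': [1, 2], 'control': [3]}
print(per_protocol_groups)  # {'intervention': [1], 'control': [3]}
```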

Was the Study Methodologically Sound?

4 Were Participants, Staff, and Study Personnel ‘Blind’ to Intervention?

No

The paper indicates that the researchers used a single-blinded trial, which usually entails blinding one party, either the outcome assessors or the participants, to the intervention. Blinding means that investigators, participants, or assessors are kept unaware of the assigned intervention to prevent that knowledge from influencing them. In this paper, only participants were blinded to randomisation status (Wong and Wong, 2020, 3). However, the authors never explain why they chose a single-blinded trial rather than a double-blinded or triple-blinded one. While unblinded participants could report positive feedback even when the intervention had no positive effects, unblinded investigators could deliver co-interventions or supplemental care in the experimental arm, and unblinded outcome assessors could record more positive outcome responses for participants in the intervention arm (Renjith, 2017, 47).

5 Were the Study Groups Similar at the Start of the Randomised Controlled Trial?

Yes

The study clearly set out the demographic characteristics of each group, described in tables. Most participants were women (75.1%), the mean age was 78 years, and 88.4% reported financial sufficiency. The study reports no significant differences between participants in the control and intervention groups in demographic characteristics that could have affected the outcomes (Wong and Wong, 2020, 4). Successful randomisation and similarity of the groups at baseline are essential, as they give investigators confidence that any outcome differences observed between groups relate to the intervention rather than to confounding factors (Renjith, 2017, 45).

6 Apart from the Experimental Intervention, Did Each Study Group Receive the Same Level of Care (That Is, Were They Treated Equally)?

Can’t Tell

The authors indicate that the trial was based on a standardised protocol formulated to optimise the health-social partnership team's practice (Wong and Wong, 2020, 3). For instance, the treatment arm received four follow-ups and home visits, with each telephone follow-up call taking 20 minutes and each home visit taking an hour. However, it is not clear whether this protocol was applied to the control group: participants in the control group are reported to have received monthly social control calls, and no home visits are mentioned for this group. Additionally, the health workers in this study were not blind to subjects' treatment status, as only participants were blinded. Whether they remained objective in the care they provided cannot be determined, since it is not indicated that the same or similar follow-up schedules were used for both groups. It is essential that groups be treated entirely equally apart from the intervention.

What Are The Results?

7 Were The Effects of Intervention Reported Comprehensively?

Yes

Power analysis was used to calculate the sample size of this study. The power calculation assumed 80% power to estimate the sample size required for each group (Murray et al., 2020, 10). According to Holman et al. (2003, 391), ethical considerations require that as few subjects as possible be exposed to risk while enough subjects are included to produce reasonable power. In this respect, a minimum sample size calculation is essential for demonstrating relevant effects.
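
As an illustrative formula only (the specific effect size, variance, and resulting numbers used by Wong and Wong are not restated here), the standard sample size per group for comparing two means at significance level α and power 1 − β is:

```latex
% Standard two-group sample size formula (illustrative; not the paper's exact calculation)
n_{\text{per group}} = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\delta^{2}},
\qquad z_{1-\alpha/2} = 1.96 \ (\alpha = 0.05), \quad z_{1-\beta} = 0.84 \ (\text{power} = 80\%)
```

where σ is the outcome's standard deviation and δ is the smallest between-group difference considered worth detecting.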

The authors also indicate the measured outcomes as life satisfaction, depression, and mental quality of life. The reported modelling used quasi-likelihood estimation with a linear link function (Wong and Wong, 2020, 4). There is also a description of the analyses used, including descriptive analysis, Generalized Estimating Equations (GEE), and Little's test. Exact P values are reported in the paper. P values give the probability that differences as large as those observed between the compared groups could have occurred by chance. P values below 0.05 imply that the differences are unlikely to have arisen by chance, supporting the conclusion that a real difference exists between the groups.

8 Was The Precision of the Estimate of The Intervention or Treatment Effect Reported?

Yes

The study reported the precision of the intervention estimate by presenting confidence intervals (CIs) in a table (Wong and Wong, 2020, 8). The paper used 95% confidence intervals, which implies that if the estimation process were repeated with randomly assigned samples from the population, 95% of the calculated intervals would contain the true value (Hazra, 2017). Reporting CIs is essential, as it shows the range within which the true value lies with a given degree of probability.
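
As an illustrative expression (the paper's exact estimator is not reproduced here), a normal-approximation 95% confidence interval for an estimated treatment effect takes the form:

```latex
% Normal-approximation 95% CI for an effect estimate (illustrative)
\widehat{\theta} \;\pm\; 1.96 \times \operatorname{SE}\!\left(\widehat{\theta}\right)
```

where the estimate is the observed between-group difference and SE is its standard error; narrower intervals indicate more precise estimates.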

9 Do The Benefits of the Experimental Intervention Outweigh the Harms and Costs?

Can’t Tell

There is no information or data concerning the intervention's cost or harms. Phillips et al. (2019, 8) indicate the importance of clearly reporting any harms or adverse effects related to a trial's intervention, as this provides all relevant parties with a comprehensive picture of its safety profile. The authors should have clarified how harm-related data were handled, including the collection method, attribution, timing, and, where relevant, monitoring and cessation rules (CASP, 2020). The lack of cost data and of any reported harms or adverse effects observed during the trial makes it difficult to advise patients about the potential harms or financial burden of the program.

10 Can the Results Be Applied to Your Local Population/In Your Context?

Yes

The participants described in the study appear to meet the eligibility criteria set out, and thus the results may be applicable. Although the researchers implemented the program in one city, the issues the study addresses face older people elsewhere; the program therefore provides an evidence-based initiative for promoting the psychological wellness of older people in other settings (Wong and Wong, 2020, 9). However, the authors should have considered measuring the sustained, long-term effects of the intervention. There is also a study limitation that could affect the decision to invest in this program: the subjects did not include older adults living at home who were bedbound or not reachable by telephone. Generalisability of the study results to the entire ageing population is therefore limited.

11 Would the Experimental Intervention Provide Greater Value to the People in Your Care than Any of the Existing Interventions?

Can’t Tell

The program in this study took three months to complete, but the paper gives no information about the finances needed to implement it. It is also indicated that healthcare workers were trained before the program commenced (Wong and Wong, 2020, 9), yet the authors do not state how long the training took or what it cost. Because the paper presents inadequate information about the resources needed to implement the program, it is difficult to justify disinvesting from existing interventions in order to reinvest in this new program.

 

Part II: Qualitative

Introduction

In the rapidly growing field of medicine, the qualitative approach to informing healthcare policy and practice is widely recognised and has become increasingly valuable over time. The biggest challenge, however, is determining the methodological quality of the qualitative studies being synthesised, which has led to ongoing debate and competing suggestions about how such assessment should be done. The Critical Appraisal Skills Programme (CASP) checklist is the basic tool used in healthcare to appraise the quality of health-related qualitative evidence. The tool has been recommended by various researchers, though there is still controversy about where it should be applied. The factors to consider when using the CASP tool in the quality appraisal of synthesised qualitative evidence are its suitability and usability (CASP, 2020); attending to these factors will advance appraisal exercises framed by the tool in the future. In this article, we reflect on the application of the tool in the synthesis of qualitative evidence and in systematic review. The factors we consider are the specific properties of the tool that may affect its interpretation, the results of the appraisal, and the subsequent synthesis. The article also proposes some modifications to the tool, guidance for its users, and key areas for further methodological research.

Systematic Review

The systematic review has been regarded as a core methodological approach in healthcare practice since the 1990s. The original aim of the systematic review was to identify, appraise, and synthesise evidence in order to answer questions of effectiveness. Today, reviews are also used to synthesise evidence from dissimilar designs, including qualitative studies, to answer different kinds of questions. This shift reflects the growing recognition of the valuable contributions such evidence makes to evidence-based medicine. Quality appraisal of the evidence, however, is a stage common to all systematic reviews of qualitative research.

There has been ongoing debate about whether quality appraisal of the studies included in a qualitative evidence synthesis is possible, suitable, and applicable, and whether insights from such appraisal can inform the synthesis and, if so, how. These debates have led to a pragmatic approach to the issue of quality appraisal. The inclusion of qualitative evidence and its growing support have accordingly been welcomed by the qualitative research community (Buccheri and Sharifi, 2017, 468). If qualitative evidence is to remain a key input to decision-making, general practitioners and policymakers must have full confidence in the method.

The CASP tool has for some time been used consistently in qualitative evidence syntheses and for quality appraisal in health research. The tool is recommended for novice researchers specialising in qualitative research. However, the tool is still frequently modified, and extensive guidance on its application is lacking. Appraisal results are expected to give direction to the subsequent stages of synthesis by shaping the synthesis to give weight to findings from higher-quality studies. There are possible solutions to the challenges researchers may encounter when using this tool, including suggested modifications to the CASP tool, more comprehensive guidance for its users, and further methodological research.

Methodology

Previous research conducted a systematic review and synthesis of qualitative evidence on the experiences of women who received false results during breast screening. Data from eight papers were systematically identified, evaluated, and critically synthesised. The primary authors of those papers used semi-structured interviews to collect data on the subjective feelings, beliefs, and views the women held about their experience of breast screening and the services they had received.

The CASP tool

This is a tool for evaluating the strengths, weaknesses, and gaps of the methodology used in a qualitative study. The tool contains ten questions, each addressing a specific methodological feature of a qualitative study. The questions ask researchers about the credibility of their methods and whether the findings presented are well structured and meaningful. The CASP tool was primarily designed as an educational tool. The World Health Organization and Cochrane consider it the first choice for novice researchers because it is user-friendly (CASP, 2020), and this was the primary reason it was chosen for the qualitative evaluation of the research paper here.

The CASP tool captures the transparency and reporting standards of a study reasonably well but is limited in assessing its design and conduct. When compared with other appraisal instruments, the CASP tool reflected the procedural aspects of a study and its reported detail but produced lower agreement between and within reviewers than the other methods (Buccheri and Sharifi, 2017, 468). The comparison identified CASP as the least sensitive of the three evaluation tools with respect to interpretation, evaluation, and theoretical validity. The tool does not, however, measure intrinsic methodological quality.

Data Synthesis

The data are organised according to the results of the appraisal and then synthesised following the approach of Thomas and Harden, modified so that the synthesis is structured by study quality. The data to be synthesised in this case are the primary interpretations of the findings in the article (Chung, 2021, 97). Thomas and Harden's approach is divided into three steps that can be applied here to evaluate the article: line-by-line coding, the development of descriptive themes, and the generation of analytical themes.

Interpretations

Considering the suitability and usability of the tool's design, its questions are designed to assess generic features of a research paper. The responses to the original tool's first nine questions are 'yes', 'no', or 'can't tell', and each question has a comment box in which the response can be explained; the final question is open-ended. Each question also contains hints prompting the researcher as to why the question matters. However, key threats to evaluating quality are the categorical response options of the CASP tool and the hints given for each question, since methodological aspects often call for more descriptive answers than the short, fixed responses the tool allows. When analysing the article with CASP, it became evident that the available responses were not sufficiently varied, and a fourth response, 'somewhat', indicating partial agreement, was therefore added (Chung, 2021, 97). Responses of 'somewhat' accounted for approximately 24% of all responses across the paper.

Discussion

The Overview of the findings.

Using the CASP tool in the synthesis of qualitative evidence has helped to scrutinise several issues. An additional CASP question was added to facilitate assessment of a study's primary approach, and the hints and response options for each question were rigorously explored to determine how these factors can affect the tool's usability (Chung, 2021, 97). A few modifications have been proposed that would improve the usability of the tool. A novel approach is also proposed for analysing the findings of a synthesis that gives priority to results from higher-quality studies.

Limitations

The findings and conclusions here are drawn from the analysis of a single article in which the CASP tool was applied to interview-based methods only. Had a different synthesis method been used, different issues would probably have been encountered (Buccheri and Sharifi, 2017, 468). An alternative approach is therefore proposed that organises the synthesis by quality, which is better suited to a mixed-quality dataset such as the one considered here.

Implications

The team involved in synthesising qualitative evidence needs to be experienced and to include expertise in qualitative methodology. A robust evaluation of quality depends heavily on a technical background across qualitative research methods, together with tacit knowledge and subjective judgment (Chung, 2021, 97). Researchers without a rigorous qualitative background may find it difficult to recognise quality issues, or may believe they are using the tool to conduct a comprehensive quality check when that is not the case. Although CASP is recommended for novice researchers, they still need guidance to apply the tool appropriately. In some instances it is also unclear whether a quality issue lies in the methodology or merely in the reporting of the research; the evaluation of a paper's quality is therefore limited to what is published in the report.