Global Health Press

British Medical Journal article proposes behavioural interventions to reduce vaccine hesitancy

A report in the British Medical Journal (BMJ) says campaigns to counter vaccine hesitancy need to get more savvy.

Nine authors penned Behavioural interventions to reduce vaccine hesitancy driven by misinformation on social media, a peer-reviewed article published in The BMJ earlier this month.

“Effective population level vaccination campaigns are fundamental to public health. Counter campaigns, which are as old as the first vaccines, disrupt uptake and can threaten public health globally,” the report warns, before admitting, “crises and genuine safety concerns can also lower vaccine uptake.”

The paper uses the World Health Organization’s definition of vaccine hesitancy: a “delay in acceptance or refusal of vaccination despite availability of vaccination services.” The authors acknowledge they blur disinformation and misinformation together as that which “deliberately seeks to mislead or otherwise disrupt understanding.”

The authors said that “anti-vaccine campaigns” during the pandemic had “undeniable effects” including “illness and death,” despite “standard approaches” such as “mandatory vaccination and regulation for health care professionals, incentives, public health communication campaigns, and engaging trusted leaders.”

“Public health officials struggled to keep pace with misleading or inaccurate content online. As guidelines shifted with the emergence of new information, policy decisions were often perceived by individuals and groups who are prone to distrust or refute government messaging as a response not to evidence but to mistakes or lack of expertise,” the paper explained.

“Findings globally indicate that social media dynamics exacerbated the sharing of misinformation, reduced vaccination rates, undermined trust in reliable information, magnified polarisation, and damaged the perceived credibility of institutions. These challenges remain today.”

New tactics to fight online misinformation include “debunking,” “fact-checking,” and “pre-bunking,” a behavioural approach in which users are taught how ‘fake news’ works before they encounter it. The authors likened this to “inoculation” against falsehoods.

“It is clear, however, that providing fact based probabilistic information alone fails to meaningfully increase uptake and might even backfire. Factors such as low trust in governments and health institutions are likely to be instrumental in derailing effective immunisation programmes.”

The study’s authors call for a “robust research agenda for social media interventions” that considers “theoretical, empirical, and real world” effects.

“Over time, these would accumulate to…maximising the potential for predictive validity of applications to public policy (for example, greater confidence in anticipated effects). Doing so would exceed the criteria listed in a recent call for a gold standard for trials tackling vaccine hesitancy.”

The authors said New York was “a stronghold of anti-vaccine sentiment before 2020” but “pivoted” with COVID-19 vaccines. In general, the only social media interventions that worked were targeted messages delivered through Facebook advertising and “personalized influencer content.” Even so, such approaches only increased vaccination “among families of medium-low socioeconomic status.”

“Other intervention types include warning (‘inoculating’) people about manipulation tactics using non-harmful exposure as a tool to identify misinformation and using accuracy prompts to trigger people to consider the truthfulness of material they are about to share on social media platforms, without stopping them from posting.”

The authors said “visual imagery” and “simplified language” swayed people. They insisted new campaigns must “correct misinformation to both parents and their children — parents, especially mothers, play a major part in child vaccination.”

“Trust matters: the message, the messenger and the (vaccinated) provider,” the authors wrote.

“Ultimately, the source of the message — whether a healthcare provider, a politician, or a social media influencer — is likely to have a major role in whether individuals and communities deem information to be credible.”

The authors also believe outright censorship won’t work.

“Blanket bans can drive groups and activities underground — broad social media bans of individuals or of specific content can paradoxically result in the spread of misinformation and can galvanise problematic echo chambers by driving discussion into private social media groups or closed forums,” the authors explained.

“Such closed environments are unlikely to include different viewpoints or corrective information, so misinformation is more likely to be reinforced. Rather than rely on outright bans, policy makers and content managers should explore methods that limit the spread and influence of misinformation.”

The authors complained that social media researchers can no longer obtain behavioural data from Twitter (now “X”) as easily as they once could.

“Social media companies should be more proactive in dealing with the proliferation of misinformation on their sites. We endorse calls to make data available and to work with researchers and regulators in all countries to enable developing effective solutions,” the authors said.

Source: Western Standard News
