Algorithms and their possible threat to public health

Published: 29.09.2022 / Blog / Publication / Research

Anyone who reads the book, including a pediatrician friend of mine, has decided vaccination is not only not right for their own children but purely reckless.

Please do yourself the biggest favor of your life and purchase this book. It will not only shed light on the subject, but it will make you think twice about allowing your child to become a victim of these so-called "immunizations".

These two quotes are taken from five-star reviews of a book about vaccines in Amazon's global bookstore. The book itself was among the top 10 search results when searching for books about vaccination in September 2022.

Today, most people rely on search engines and algorithms to find health information and to make health-related decisions, including decisions about medical treatments (Juneja & Mitra, 2021). The role of algorithms is therefore not only growing rapidly; they have become a major actor in facilitating or constraining users' access to and flow of information, and consequently in shaping behaviours and altering beliefs in relation to health (Juneja & Mitra, 2021; Shin & Valente, 2020). Often powered by methods of artificial intelligence (AI), algorithms shape our interaction with online platforms. Their purpose is to improve the overall user experience and, in many cases, to improve business conversion and retention rates by increasing sales and keeping customers happy and engaged.

Algorithms serve two major functions: search and recommendation. Search algorithms find relevant items upon a user's request, for instance when searching with keywords, and rank the results in order of importance as defined by the platform in question. Recommendation algorithms, in turn, display items that users might be interested in, such as books, articles, videos, or websites, to maximize user satisfaction and interest. When producing results, these algorithms take various factors into account, including the user's own past behaviour, the behaviour of other people with similar interests, and characteristics of the items or topics themselves, such as popularity (Shin & Valente, 2020).

Most people navigate online platforms without fully understanding how these algorithms operate, how they filter information, or how they create recommendations. In fact, the internal workings, or logics, of algorithms developed by most digital platforms are typically proprietary and thus inaccessible to users, resulting in a so-called "black box". This can lead to informational asymmetries, as the individuals affected by an algorithm are unaware of how it works, or how it was developed and trained (Kaplan, 2020). Despite this, people actually tend to prefer advice coming from an algorithm over advice coming from a human being (Kaplan, 2020; Shin & Valente, 2020). The opaque nature of algorithms presents serious societal challenges, especially from a public health perspective, as platform algorithms are not designed to maximize the public good or to take the credibility and trustworthiness of content into account; they prioritise revenue (Abul-Fottouh et al., 2020; Shin & Valente, 2020). The negative impact of algorithmic opacity has mostly been studied in relation to politics and political content. Recently, however, research has expanded to public health as well, as there is a growing concern that online platforms are becoming hubs of fraudulent and dangerous misinformation (Shin & Valente, 2020). One topic of research that, with the recent COVID-19 pandemic, once again gained relevance is vaccines and vaccine hesitancy.
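The recommendation logic described above can be illustrated with a minimal sketch. This is not any platform's actual algorithm (those are proprietary, as noted); it is a toy collaborative-filtering example in which unseen items are scored by how often they co-occur in the histories of users with overlapping interests, plus an overall-popularity term. All item names, histories, and weights are invented for illustration.

```python
# Toy sketch of a recommendation algorithm (illustrative only; real
# platform algorithms are proprietary "black boxes"). Items unseen by
# the user are scored by co-occurrence with similar users' histories
# and by overall popularity.

from collections import Counter

def recommend(user_history, all_users_histories, popularity, k=2):
    """Return the top-k unseen items for a user."""
    scores = Counter()
    seen = set(user_history)
    for other in all_users_histories:
        overlap = seen & set(other)
        if not overlap:
            continue  # no shared interests with this user, skip
        for item in other:
            if item not in seen:
                # weight co-occurrence by the size of the overlap
                scores[item] += len(overlap)
    for item, pop in popularity.items():
        if item not in seen:
            scores[item] += pop  # popularity nudges the ranking
    return [item for item, _ in scores.most_common(k)]

histories = [["book_a", "book_b"], ["book_a", "book_c"], ["book_b", "book_c"]]
popularity = {"book_a": 3, "book_b": 2, "book_c": 1, "book_d": 1}
print(recommend(["book_a"], histories, popularity))  # ['book_b', 'book_c']
```

Even in this toy version, the user never sees how the weights are chosen, which is precisely the informational asymmetry the "black box" critique points at.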

Vaccine hesitancy refers to the delay in acceptance, or the refusal, of common vaccines despite the availability of vaccination services, but it also covers the more general concerns raised about decisions to vaccinate, whether adults or children. Research has shown that exposure to negative information about vaccines can directly and indirectly affect an individual's attitudes towards immunization (Shin & Valente, 2020). As more and more people turn to the internet for information about health, and vaccines specifically, the potential impact on health is substantial. In a recent case study, Shin and Valente (2020) found that on Amazon.com, over twice as many vaccine-hesitant books (63%) as vaccine-supportive books (29%) appeared on the first 10 result pages when searching for books on vaccines. They also found that the three highest-ranked books were all vaccine hesitant. Moreover, as Juneja and Mitra (2021) conclude, Amazon recommends more vaccine-hesitant books once a user shows interest in a vaccine-hesitant product by clicking on it. This aggravates the problem: the click serves as a signal for the algorithm, which then produces recommendations for yet more vaccine-hesitant books. The situation is similar on other platforms. Abul-Fottouh et al. (2020) found that on YouTube, one of the leading sources of health misinformation, over 70% of the videos that appear when searching for "vaccines" are vaccine hesitant, or even anti-vaccine. The same is true when searching for videos with the keywords "vaccine safety" and "vaccines and children", where over 65% of the videos are vaccine hesitant (Abul-Fottouh et al., 2020).
The fact that the landscape of the Amazon book market is no different from that of social media platforms like YouTube raises concerns: books are generally considered to voice expert opinions, are perceived to carry more intellectual weight, and people have traditionally turned to them when gathering in-depth knowledge about a specific topic (Shin & Valente, 2020).
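The feedback loop described above, where one click on a vaccine-hesitant item tilts all subsequent recommendations, can be sketched with a toy simulation. This is a hypothetical "more like what you clicked" ranker, not Amazon's or YouTube's actual system; the catalogue, labels, and scoring rule are invented for illustration.

```python
# Toy simulation of a recommendation feedback loop: a naive ranker
# that favours items whose label matches what the user already
# clicked. Catalogue and labels are invented for illustration.

CATALOG = [
    ("book_1", "hesitant"), ("book_2", "supportive"),
    ("book_3", "hesitant"), ("book_4", "supportive"),
    ("book_5", "hesitant"), ("book_6", "neutral"),
]

def recommend_similar(clicked_labels, k=3):
    """Rank catalogue items by how often their label matches the
    labels of items the user has already clicked (stable sort keeps
    catalogue order among ties)."""
    counts = {}
    for label in clicked_labels:
        counts[label] = counts.get(label, 0) + 1
    ranked = sorted(CATALOG, key=lambda item: counts.get(item[1], 0),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

# Before any clicks, recommendations simply follow catalogue order.
print(recommend_similar([]))            # ['book_1', 'book_2', 'book_3']
# After a single click on a hesitant book, hesitant titles dominate.
print(recommend_similar(["hesitant"]))  # ['book_1', 'book_3', 'book_5']
```

One click is enough to flip the entire recommendation slate, which mirrors the dynamic Juneja and Mitra (2021) observed: the user's own signal becomes the baseline the algorithm amplifies.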

Even if many tech giants, like Google, Pinterest, Twitter and Facebook, have acknowledged their social responsibility in ensuring that questionable health-related content is banned and only high-quality content is presented to users (Juneja & Mitra, 2021), the goal of commercial online platforms is to increase traffic and sales. This is also the driving force behind relevance in recommendations and suggestions, often irrespective of whether the product conveys credible information or not. The direct influence this can have on public health has led researchers to advocate making algorithms more accountable, as well as developing more public-interest-minded, or more specifically public-health-minded, algorithms to promote social responsibility and societal good. It also highlights the need for transparency in algorithmic systems that have serious societal impacts, for instance when recommending health-related information (Abul-Fottouh, Song & Gruzd, 2020). Moreover, it is important to give consumers a broader understanding of algorithms: how they filter information, how consumers' data are used to design them, and what the unintended consequences are. In the absence of transparency, opaque algorithmic systems regulate society and allow an unjustified transfer of power from the public to the private sector (Juneja & Mitra, 2021; Shin & Valente, 2020). Evaluating search and recommendation algorithms from a public health perspective is crucial for strengthening the practices, processes and safety related to the ethical use of algorithms, and of artificial intelligence more broadly, and by these means alleviating the effects they can have on individual and public health.

Assessing and evaluating search and recommendation algorithms on different platforms from a public health perspective requires multidisciplinary collaboration involving computer and artificial intelligence scientists, health scientists, and social scientists. Here at Arcada University of Applied Sciences, all of these disciplines are represented, and one of our core fields of research is artificial intelligence and machine learning, more specifically ethically sound and trustworthy AI development. By assessing and evaluating algorithms, we hope to bring forward some of the problems they present from a public health perspective, and to help avoid or reduce the amount of dubious health information presented on different platforms, such as the reviews quoted at the beginning of this text.

References:

Abul-Fottouh, D., Song, M. Y., & Gruzd, A. (2020). Examining algorithmic biases in YouTube’s recommendations of vaccine videos. International Journal of Medical Informatics, 140, 104175.

Juneja, P., & Mitra, T. (2021). Auditing e-commerce platforms for algorithmically curated vaccine misinformation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-27).

Kaplan, B. (2020). Seeing through health information technology: the need for transparency in software, algorithms, data privacy, and regulation. Journal of Law and the Biosciences, 7(1), lsaa062.

Shin, J., & Valente, T. (2020). Algorithms and health misinformation: a case study of vaccine books on Amazon. Journal of Health Communication, 25(5), 394-401.
