Personalisation vs. Manipulation in AI-Driven Strategic Communications

In the tapestry of our societal evolution, we categorise epochs with simple tags: 'before Google', 'pre-Covid', and now the phrase 'before ChatGPT' has started to make the rounds. As the head of communications at ARK, I'm watching these shifts and pondering their repercussions for the communications industry in general, and for the field of international development in particular. The advent of generative AI marks a watershed moment, ushering in a new era of unprecedented influence that, if it has not already, will soon permeate our day-to-day lives. While it presents an enormous opportunity, it also seeds a field of ethical conundrums that leave me with myriad questions and few clear answers. One that has been keeping me up recently is how and when we draw the line between personalisation and manipulation.

The more I integrate AI into our strategic communications interventions, the clearer it becomes that the technology offers significant opportunities to improve the effectiveness of messaging through personalisation. However, this capability also presents ethical risks, notably the potential for manipulation. Personalisation involves using AI to analyse data on individual or group behaviours, preferences and needs, and to tailor messages accordingly. This approach is not new, of course: psychological theories suggesting that individuals are more likely to engage with content that reflects their specific context (Hogg & Vaughan, 2014) have been around for years, but AI allows us to develop targeted messaging at a pace and on a scale not previously available. So where's the downside? Well, what happens when an AI-created message or messaging strategy uses its data to manipulate, selectively presenting only certain information to elicit specific emotional or behavioural responses? Susser, Roessler and Nissenbaum (2019) define manipulation as occurring when choices are not "substantially voluntary or sufficiently informed". Examples include disproportionately highlighting certain benefits of a health programme while downplaying possible risks.
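To make the idea of segment-level tailoring concrete, here is a deliberately simplified sketch. It is purely illustrative: the audience segments, concerns and wording are hypothetical, and a real deployment would use a generative model and far richer data rather than fixed frames.

```python
from dataclasses import dataclass

@dataclass
class AudienceProfile:
    name: str
    main_concern: str       # e.g. "cost", "safety", "access"
    preferred_channel: str  # e.g. "radio", "SMS", "social media"

# One core message, reframed per segment rather than rewritten from scratch.
CORE_MESSAGE = "The mobile health clinic visits your district every weekday."

CONCERN_FRAMES = {
    "cost": "All services are free of charge.",
    "safety": "Care is provided by trained nurses, and risks are explained before consent.",
    "access": "Transport support is available on request.",
}

def personalise(profile: AudienceProfile) -> str:
    """Attach the frame matching the segment's main concern to the core message."""
    frame = CONCERN_FRAMES.get(profile.main_concern, "")
    return f"[{profile.preferred_channel}] {CORE_MESSAGE} {frame}".strip()

if __name__ == "__main__":
    segments = [
        AudienceProfile("rural_women", "access", "radio"),
        AudienceProfile("urban_youth", "cost", "social media"),
    ]
    for segment in segments:
        print(personalise(segment))
```

The same mechanism that matches a message to a segment's concerns can just as easily omit whatever that segment might find off-putting, which is exactly where personalisation starts to shade into manipulation.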

The issue then becomes how reliant we become on generative AI, and how much we trust it. The main problem for me is transparency. To fully trust the system and feel confident that messaging is personalised rather than manipulative, I must understand how the AI makes decisions. Enter the problem of "black boxes", whereby AI algorithms often do not disclose their data processing and decision-making processes (Ananny & Crawford, 2018). This opacity can undermine efforts to craft strategies that adhere to the principle of 'do no harm'. It also makes genuine informed consent difficult to secure, because the same opacity prevents us, as users, from understanding how our data is being used. Informed consent is particularly challenging in diverse international settings, where understanding of and engagement with technology can vary widely. Ensuring that individuals truly know how their information is used by AI systems is a complex, yet vital, component of ethically deploying AI in strategic communications.

Also in the mix is the significant risk that AI systems perpetuate existing biases or introduce new ones, given that they often reflect the data on which they are trained. Ethical personalisation should therefore involve mechanisms to identify and mitigate these biases, ensuring that AI-driven messages do not inadvertently skew perspectives or decisions (Jobin, Ienca, & Vayena, 2019). The difficulty, of course, is that the technology is developing far more quickly than the human capability to adapt. Businesses like ours need to integrate policies and mechanisms to ensure ethical use of this technology; perhaps the answer is that until you can develop the appropriate systems to stress test AI, you shouldn't be using it.
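By way of illustration only, here is a minimal sketch of one such stress test: an automated check that required risk disclosures appear in every AI-personalised variant before anything is published. The segment names, messages and disclosure phrases are hypothetical, and a real review process would combine checks like this with human editorial oversight.

```python
# Phrases that every published variant must contain, however it is personalised.
REQUIRED_DISCLOSURES = ["possible side effects", "participation is voluntary"]

def audit_variants(variants: dict[str, str]) -> dict[str, list[str]]:
    """Return, per audience segment, any required disclosures missing from its message."""
    findings = {}
    for segment, message in variants.items():
        text = message.lower()
        missing = [phrase for phrase in REQUIRED_DISCLOSURES if phrase not in text]
        if missing:
            findings[segment] = missing
    return findings

if __name__ == "__main__":
    variants = {
        "young_urban": ("Join the programme for free check-ups. Possible side effects are "
                        "listed in the leaflet, and participation is voluntary."),
        "rural_elderly": "Join the programme for free check-ups and faster results.",
    }
    for segment, missing in audit_variants(variants).items():
        print(f"'{segment}' variant is missing disclosures: {missing}")
```

Even a check this crude makes the selective-presentation problem visible: the variant aimed at one segment quietly drops the risks that another segment is shown.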

The ethical deployment of AI in strategic communications straddles a fine line between effective personalisation and unethical manipulation. Key to navigating this balance is the establishment of robust ethical guidelines that address transparency, consent, and bias. Furthermore, involving stakeholders from diverse backgrounds in the development and implementation of AI solutions can enhance ethical outcomes and foster broader acceptance and trust in AI technologies. Ultimately, as we venture further into this AI-imbued age, the ethics and technology sectors must evolve in tandem to ensure that societal advancement does not compromise individual autonomy. Continuous dialogue and the development of refined ethical practices will be critical for responsibly integrating AI into our communications.

References

•    Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989.

•    Hogg, M. A., & Vaughan, G. M. (2014). Social Psychology (7th ed.). London: Pearson Education.

•    Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.

•    Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 4(1), 1-45.
