Bringing together 28 countries plus the EU, including both the US and China, is no mean feat. On the face of it, the Bletchley Declaration looks like a promising stride towards global cooperation on AI risks. In the immediate aftermath of the inaugural AI Safety Summit, however, questions have been raised about whether it genuinely confronted the imminent dangers of AI or was more show than substance.
Bold promises were made, yet there has been a notable absence of specifics on how they will be put into practice. Journalists in attendance were quick to point out the irony of an event intended to promote AI transparency keeping its most critical discussions behind closed doors. Among the select few privy to those conversations with policymakers were Elon Musk and other Big Tech leaders. Naturally, this prompts some to wonder who the event truly benefits and how much tangible action will follow.
Looking beyond the summit itself, President Biden’s recent executive order will require companies to share AI safety research data, while the EU is advancing its AI Act and China is following suit. Yet the UK has firmly stated it has “no plans to introduce” similar legislation for the foreseeable future. So, as much as we’d like a unified global approach, even the summit’s attendees do not, as it stands, appear to be on the same page when it comes to implementing regulatory frameworks.
The pressing question now is: where do we go from here?
The bigger picture
Formal regulation will be about more than just markets. AI comprises ground-breaking technologies with unforeseeable consequences and far-reaching impacts. Beyond mere rule-setting, legislation has the power to reshape the competitive landscape and guide the course of technological progress. Its consequences will ripple through our lives, touching everything from job security and the spread of misinformation to overall quality of life, especially in a world where AI systems play increasingly central roles in our daily interactions.
For tech companies, navigating this evolving landscape means walking a tightrope: moving fast enough to innovate and lead the market without losing sight of their responsibilities. Meanwhile, governments face the daunting task of striking a delicate balance between business interests and societal welfare, all while grappling with a technology they may not fully comprehend.
First and foremost, all stakeholders must remember that AI’s development and its influence on society are human-made, not predetermined. Our response must be equally human-centric, careful not to forfeit public trust by drifting too far from people’s everyday experiences of AI. To regulate effectively, the approach must be swift, global in scope, and protective of vulnerable populations, reflecting the real-world intricacies of AI technology. As things stand, however, that ideal balance feels aspirational.
Communications as an AI mediator
Legislation, though vital, cannot be the only path to a better AI future. Organisations should take a proactive stance and view any policy as a baseline upon which they can – and should – build more thoughtful, responsible, ethical AI practices. Support from the communications industry can significantly contribute to this effort.
The current regulatory landscape is marked by diverse efforts from a multitude of parties. However, to build public trust and ensure that all voices are heard, effective communication is critical. The AI Safety Summit represents a significant step towards open dialogue, but there remains much to be done.
Unlike fiscal policy, AI has no central point of control, and innovation is happening across the board. It is not confined to the businesses developing and marketing AI: it lives in organisations adopting and applying it in novel ways, in academic and third-sector bodies seeking to understand it better, and in individuals’ everyday interactions with it as they discover what it can and can’t do. That breadth only adds weight to journalists’ complaints that opacity was an evident shortfall of the Safety Summit.
That gives businesses an opportunity to initiate timely, consistent communication with all key stakeholders, including customers, policymakers and researchers. Comms measurement can steer not just what they say, but how they translate those words into action and reach the audiences that need to hear them.
Communications as an AI user
The communications industry serves as both a builder of trust and a potential catalyst of mistrust. As we await the establishment of global norms, communications professionals must prioritise internal AI readiness. This means a commitment to knowledge building and strategic thinking, replacing any hasty “move fast and break things” mentality with a more deliberate approach.
Consciously overseeing AI deployment and ensuring responsible usage is no longer a nice-to-have; it is fast becoming a moral imperative. Any approach should rest on three key components: transparency, caution, and accuracy. In practice, this may mean designating internal teams responsible for monitoring AI applications, assessing their ethical implications, and ensuring legal compliance. By exercising good AI citizenship from the outset and openly communicating AI’s role in decision-making, communications professionals will not only cultivate trust but in turn influence the broader AI space.
At the same time, however, the communications industry is particularly susceptible to the disruptive transformations that new developments in AI promise. If we are to steer towards good outcomes for both our clients and society at large, that calls for a proactive, exploratory mindset rather than passive acceptance of change as it comes. Transparency in our own communications should go hand in hand with bringing new opportunities to businesses and better interactions to audiences.
The next summit is set to be hosted virtually by South Korea in six months, with an in-person gathering planned for France next year. We’ll leave you to cast your own judgements on the success of the UK’s summit. In the meantime, we must all do our part to tackle the risks posed by AI.