A report by market research firm Forrester has identified the challenges that have slowed the adoption of generative AI in the public sector. The paper, entitled 'The State of Generative AI in the Public Sector 2024', examines adoption across the public sector in the US, UK and Australia.
In Australia, by contrast, the private sector is taking a more aggressive approach: around 70% of private-sector organizations would use the technology for both internal and customer-facing use cases.
According to Forrester's data, public sector leaders are well aware of the potential disruption that GenAI represents, even more so than their private-sector counterparts (38% vs. 28%).
When it comes to adoption, however, they are markedly more cautious. Their plans to introduce GenAI for internal and external use cases lag far behind those of the private sector (40-50% vs. 90%).
In other words, there is a tension between the desire to use generative AI for task-oriented outcomes and concern about the risks of doing so.
One of the key challenges highlighted in the report is that the public sector must align its outcomes with wider standards of social responsibility, ethics, transparency and accountability. Poorly implemented AI can significantly, and potentially irrevocably, damage trust.
Given that transparency and accountability are important factors for trust, public sector leaders are more likely to use GenAI for internal purposes (56%) than for delivering services to clients.
Public sector executives cited other major challenges, such as infrastructure integration, governance and risk, and a lack of technical skills. The skills shortage is far more acute than in the private sector: since the pandemic, differences in benefits between the two sectors have narrowed, making it even harder to attract talent to the public sector.
What governments can do
"Challenges have slowed the adoption of generative AI, but public sector leaders can move forward by introducing and implementing generative AI internally for employee use, increasing familiarity with the technology, and identifying potential pitfalls and mistakes in a low-risk environment," explains Sam Higgins, principal analyst at Forrester.
"Once your institution is comfortable with the technology, you can begin to slowly and incrementally roll it out to user-facing services while constantly monitoring feedback and implementing security measures," Higgins adds.
The analyst also stresses that it is crucial for public institutions that want to use generative AI effectively to build the trust of both employees and customers.