Gartner predicts more than 70% of government agencies will use artificial intelligence (AI) to enhance human administrative decision-making by 2026. Machine learning, analytics and generative AI will mature over the next two years and combine into a suite of tools that will support improved government service delivery.
Gartner warns that AI encompasses a continually evolving technological landscape in which there are trade-offs between scale, explainability, accessibility, speed, skills and cost.
“This complexity — together with the ambiguity intrinsic to AI’s predictive and generative nature — leads to a lack of clarity around AI’s reputational, business and societal impacts,” according to Gartner.
Gartner VP analyst Todd Kimbriel says government CIOs must find new ways to meet citizen demands for modern, accessible and resilient services, by focusing on sustainable and scalable technology.
Keith Nelson, senior industry strategist for the public sector at OpenText, observes that AI now commands the attention of government leaders worldwide, many of whom are freeing up budget and directing staff to add AI capabilities to their work.
“We advise our public sector customers to experiment with tools like ChatGPT so that they can understand what generative AI can and cannot do, but not to involve any sensitive government data at this stage. We encourage our customers to think about the underlying information first to fully tap into the benefits of AI,” he continues.
The more things change, the more they stay the same
Recalling his days as assistant to the CIO at a US Federal agency, Nelson described an organisation that had completely decentralised its operations. Each bureau had installed its own email system, with different technologies, brands and versions. Each also had its own contracting team and developed its own website; none of those websites looked anything like the others, and all were run independently.
“I'm hearing that something similar might be going on in various government departments with generative AI,” he observes. He cautions that every document created using generative AI should be treated like any other government document: a record subject to strict requirements and compliance obligations.
“If you don't have strong controls put in place at the start, both policy controls and technology controls, it's going to be a real problem to try to put that genie back in the bottle,” he warns. That said, he believes the benefits of AI are worth the controls around it.
“Just think about a day when you could tap into AI to develop a hiring strategy that considers your organisation's current and future hiring needs, and then layer those needs against the available talent pool who might be interested in working for the government under certain incentives. We're going to be a much smarter government, with the help of AI.”
Keith Nelson
AI governance framework building
AI is on its path to becoming mission-critical in the enterprise. Gartner says the criticality, scalability and democratization of AI all require governance to balance its value with the new risks it poses.
In 2023, Singapore and the US made their AI frameworks interoperable, mapping IMDA's AI Verify against NIST's AI Risk Management Framework. “They pledged cooperation in developing their AI going forward. This is what you need – countries working in partnership,” opined Nelson.
There is a saying that it takes years to build trust and only seconds to destroy it. In the digital world, building trust may not take years, but losing it can still happen in moments, and rebuilding it, if possible at all, takes time.
He commented that reliability matters in AI, but achieving it depends in part on the quality of the underlying data. For that, he suggested, collaboration between the public and private sectors is needed.
The rapid rise and wide adoption of GenAI in both the public and private sectors was not anticipated. This has forced countries to shift quickly from discussions around norms and best practices to establishing upfront regulatory frameworks, opined Nelson.
He cited Canada’s Artificial Intelligence and Data Act (AIDA) as a noteworthy example: a risk-based, principles-based, adaptive and agile framework.
“Adaptivity is very critical because government must keep up with technology changes,” he continued. “If you hard code anything in legislation today, you're going to be obsolete almost immediately and potentially, you could chill innovation, which nobody wants.”
Nelson conceded that the Canadian government acknowledged that passing AI legislation would take years. “So, in the interim, they set up a voluntary code of conduct for the ethical use of generative AI and asked companies to sign on,” he continued. “It's a voluntary effort that allows Canada to have some commitment on the part of industry as the legislature works through the process.”
Prioritising AI deployment
The purpose of government is to serve its people, noted Nelson. “Making citizen services better is why you're here. You should use AI to figure out who your citizens are, what they need and what their digital preferences are, and share that knowledge with other sister agencies and at the city or state levels.”
Advice for government CIOs
Like any new technology making its way into an organisation’s operational machinations, CIOs will need to solve the challenge of integrating something new into existing processes and technologies.
Nelson acknowledged that it's a new day with AI, and there's a positive tendency among governments to share with each other.
“My advice is don't try to build your own tools if you don't have to,” he started.
Nelson recalled that in the early 2000s, Google offered its search capabilities to US government websites. During the evaluation process, a very vocal contingent of government professionals opposed using Google because it did not disclose its search code. Google called it a black box.
“We can't use that now,” warned Nelson. “I hear some of the same rumblings from government today. They want to see the algorithms for each AI tool, and they want to take them and customize them themselves.”
AI is a new toy, and one for which government does not yet have the expertise, so Nelson recommends learning from the past. “Let software companies with a strong history take the lead on AI development. Government should set the requirements and put the ethical boundaries in place, and then let the private sector help deliver against those.”
Click on the PodChats player and hear Nelson’s views on how to put the smart in government services.
- What is Artificial Intelligence in the context of government/public services?
- Name some use cases among early adopters of AI in the government/public sector.
- What will be the primary challenges of government CIOs in the AI adoption journey?
- Some governments (in Asia and around the world) are drafting AI governance frameworks. Can you name three best-in-class frameworks to date?
- Gartner’s Dean Lacheca says “a lack of empathy in service delivery and a failure to meet community expectations will undermine public acceptance of GenAI’s use in citizen-facing services.” Do you agree or disagree with this assessment? How should governments go about addressing this?
- In the AI adoption journey, should governments focus on deploying AI on backend processes first or citizen services?
- What is your advice for government CIOs as they look to adapt/adopt AI-centric solutions and practices in the delivery of government services?