Homeland Security Secretary Alejandro Mayorkas speaks during the third annual Axios What’s Next Summit at the Planet Word Museum on March 19, 2024 in Washington, DC. With a focus on the future of the workplace and how artificial intelligence will play a role in policy, news and other disciplines, the summit heard from members of Congress, Biden cabinet secretaries and media executives.
Chip Somodevilla | Getty Images
When the White House issued an executive order last October to promote the safe and responsible development and use of artificial intelligence in the federal government, one of the first agencies to say it was on board the AI train was the Department of Homeland Security. The agency is making good on that commitment, recently announcing a roadmap detailing its AI plans for this year and launching three pilot projects to test the technology.
The roadmap offers visibility into DHS’s approach to AI and its aim to foster relationships with the private sector, academia, other government entities, and other partners to accelerate the development and deployment of AI solutions tailored to the unique challenges it faces.
The three pilot programs are aimed at enhancing immigration officer training; helping communities build resilience and reduce the burden of applying for disaster relief grants; and improving the efficiency of law enforcement investigations.
In one pilot, Homeland Security Investigations (HSI) will test AI to help in investigative processes focused on detecting fentanyl and combating child exploitation. The agency will strengthen its investigative processes by introducing a large language model-based system designed to enhance the efficiency and accuracy of the summaries investigators rely on, the department said. The LLM-based system will use open-source technologies to allow investigators to more quickly summarize and search for contextually relevant information within investigative reports.
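DHS has not published technical details of the HSI system, but the description — open-source tooling that lets investigators search reports for contextually relevant passages and then summarize them — maps onto a common retrieval pattern. The minimal sketch below illustrates only the semantic-search half of that pattern; the embedding model, the sample "report" snippets, and the search function are illustrative assumptions, not DHS's implementation.

```python
# Hypothetical sketch of contextual search over investigative reports.
# Not DHS/HSI code: the model choice, data, and structure are illustrative assumptions.
from sentence_transformers import SentenceTransformer
import numpy as np

# Open-source embedding model (an assumption; any comparable model would work).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical snippets standing in for investigative report text.
reports = [
    "Subject observed receiving a parcel flagged for suspected synthetic opioids.",
    "Financial records show repeated small transfers to an overseas account.",
    "Vehicle registered to the subject crossed the border twice in one week.",
]

# Embed the reports once; embed each query at search time.
report_vecs = model.encode(reports, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    """Return the report snippets most semantically similar to the query."""
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = report_vecs @ query_vec  # cosine similarity on normalized vectors
    ranked = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), reports[i]) for i in ranked]

for score, text in search("drug trafficking shipments"):
    print(f"{score:.2f}  {text}")
```

In a full system of the kind DHS describes, an LLM would then summarize the retrieved passages for the investigator; the retrieval step above simply narrows vast report collections down to the contextually relevant material.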
In another pilot, the Federal Emergency Management Agency (FEMA) will deploy generative AI to create efficiencies for the hazard mitigation planning process for local governments, including underserved communities. The pilot will support state, local, tribal, and territorial governments’ understanding of how to craft a plan that identifies risks and mitigation strategies.
And in the third pilot, United States Citizenship and Immigration Services (USCIS) will develop an interactive application that uses gen AI to improve the way the agency trains its immigration officers.
The unprecedented speed and potential of AI’s development and adoption present both opportunities and risks, according to Alejandro Mayorkas, Secretary of Homeland Security.
“The DHS AI roadmap and pilots will guide our efforts this year to strengthen our national security, improve our operations, and provide more efficient services to the American people, while upholding our commitment to protect civil rights, civil liberties, and privacy,” Mayorkas said in a statement.
What DHS learns from the pilot projects will be beneficial in shaping how it can effectively and responsibly use AI across homeland security moving forward, he said.
A few of the key priorities that DHS has set for AI, such as immigration and fentanyl issues at the border, are also at the center of Republican impeachment efforts — seen by many as highly politicized and lacking constitutional merit — against Mayorkas. Republican lawmakers in the House, which voted to impeach in February, delivered the articles of impeachment to the Senate on Tuesday.
AI talent search
In the meantime, the agency — like many other organizations — is on the hunt for AI talent. Earlier this year, it launched its first-ever hiring sprint to recruit 50 AI technology experts to create teams that help DHS better leverage AI through its pilot programs and other initiatives.
DHS said its goal is to build an “AI Corps” to bolster its workforce with experts in AI and machine learning technologies, models, and applications. The AI experts will be part of the DHS Office of the Chief Information Officer and will work on a variety of projects advancing AI innovation and use. They will provide expertise in AI/ML, data science, data engineering, program management, product management, software engineering, and cybersecurity.
“We are recruiting faster than ever because the need is urgent,” DHS CIO and chief AI officer Eric Hysen said in a statement. “We are prioritizing recruiting talent who are technologically proficient and eager to leverage recent innovations in AI to transform the way people interact with the government.”
The department’s foray into AI “is a good and necessary step, especially given DHS’s public visibility, reach, and mission, which makes it a perfect candidate to be an early AI adopter within the federal government,” said Alla Valente, senior analyst at Forrester Research.
“AI solves a data science problem, and DHS needs to mine vast amounts of information and data to be able to gain insights to help it proactively and cost-effectively deliver on its mission,” Valente said. “They need AI to help scale, be more competitive, and more proactive in their efforts. DHS has the leadership, the accountability. And with this plan, they also have laser focus to succeed and pave the way for other federal agencies.”
Prioritizing risk management over compliance
To realize the value of AI safely and without sacrificing privacy and liberties, DHS will need to prioritize risk management over compliance, Valente said.
Most federal agencies are in a “wait-and-see period” with AI, Valente said. “Many don’t have the appetite for risk that comes with change,” she said. “Others don’t have the literacy, expertise, and skills to jump in just yet.”
This could significantly change, Valente said, with agency appointments of chief artificial intelligence officers as required by a memo from the U.S. Office of Management and Budget. The memo prioritizes AI expertise and skills over agency experience.
“The Departments of Energy and Health and Human Services likely have more incentive to move quickly on AI,” Valente said. “These two agencies function as critical infrastructure, are public facing, experience major cybersecurity threats and operational disruptions, and are deeply connected with the private sector, much of which is already using AI,” she said.
As the private sector’s adoption of AI “continues to skyrocket, agencies will have no choice but to adopt it as well,” Valente said.
Agencies will face some daunting barriers to AI adoption, according to a recent report by consulting firm EY and market research firm Market Connections.
Among the key challenges: a lack of personnel dedicated to data governance (cited by 59% of respondents), budget constraints (58%) and data security (57%). Only 27% of the respondents think the federal government will become a quick adopter of AI technologies in the next year.
On the positive side, agency IT leaders are confident in their organization’s ability to maintain privacy controls and use up-to-date data to make better decisions. And 70% of the agencies are exploring, developing, or using AI for data analysis, document analysis, and predictive analytics.
“Federal agencies are proactively and comprehensively harnessing the power of AI across various components,” said Joe Baptiste, leader of digital transformation projects for EY’s government and public sector practice.
Establishing new AI programs and leadership positions, such as the CAIO role, underscores the federal government’s commitment to leveraging AI, Baptiste said. “For federal agencies to maintain their trajectory in AI utilization, they must foster environments that encourage exploration and adoption, particularly as the dynamics of homeland security evolve,” he said.