For decades, much of the federal government’s process for granting security clearances has relied on techniques that emerged in the mid-20th century.
“It’s very manual,” said Evan Lesser, president of ClearanceJobs, a job-posting, news, and advice site for careers that require security clearances. “Driving cars to meet people. It’s very old-fashioned and takes up a lot of time.”
A federal initiative started in 2018 called Trusted Workforce 2.0 formally introduced semi-automated analysis of federal employees that takes place in near real time. The program will allow the government to use artificial intelligence to subject employees who are seeking or already hold security clearances to “continuous verification and evaluation” – essentially, continuous evaluation that ingests information constantly, raises red flags, and incorporates self-reporting and human analysis.
“Can we build a system that verifies someone and continues to verify and is aware of that person’s disposition as it exists in legal systems and public record systems on an ongoing basis?” said Chris Grijalva, senior technical director at Peraton, a company that does internal analytics work for the government. “And out of that idea was born the notion of ongoing assessments.”
Such efforts have been used in government in more ad hoc ways since the 1980s. But the 2018 announcement aimed to modernize government policies, under which officials are typically reevaluated only every five to 10 years. The change in policy and practice was motivated, in part, by a backlog of required investigations and by the recognition that circumstances and people change.
“That’s why it’s so appealing to keep people under some kind of constant, ever-evolving surveillance process,” said Martha Louise Deutscher, author of “Screening the System: Exposing Security Clearance Dangers.” She added that “every day you’re going to do the credit check, and every day you’re going to do the criminal check – and the bank accounts, the marital status – and make sure people haven’t run into circumstances that would make them a risk today when they weren’t one yesterday.”
The first phase of the program, a transition period before full implementation, ended in the fall of 2021. In December, the US Government Accountability Office recommended that the effectiveness of the automation be evaluated (although not, you know, on an ongoing basis).