IARPA Logo (US government agency image in public domain)

The intelligence community, reeling from stories that continue to be published based on documents from former NSA contractor and whistleblower Edward Snowden, is undoubtedly searching for ways to predict who the next leaker or whistleblower will be and to stop that person before anything related to United States intelligence agencies is revealed to the public. That, perhaps, is why the Office of the Director of National Intelligence, headed by James Clapper, announced a “challenge contest” to help those in the intelligence community better understand “human interactions that involve trust and trustworthiness.”

The Intelligence Advanced Research Projects Activity (IARPA) describes itself as an agency that “invests in high-risk, high-payoff research programs that have the potential to provide the United States with an overwhelming intelligence advantage over future adversaries.” The approved challenge, which offers monetary rewards to winners, is called “INSTINCT,” or the Investigating Novel Techniques to Identify Neurophysiological Correlates of Trustworthiness challenge. It is part of IARPA’s TRUST, or Tools for Recognizing Useful Signals of Trustworthiness, program, which “seeks to significantly advance the IC’s capabilities to assess whom can be trusted under certain conditions and in contexts relevant to the [intelligence community], potentially even in the presence of stress and/or deception.”

The challenge is described in a posting on Innocentive. It is like something a protagonist in a dystopian story might be told before being pulled further into the darkness of some secret society:

Whom do you trust?  Why do you trust them? How do you know whether to trust someone you’ve just met? The answers to these questions are essential in everyday interactions but particularly so in the Intelligence Community, where knowing whom to trust is often vital. The Intelligence Advanced Research Project Activity (IARPA) TRUST program seeks ways to detect one’s own neural, psychological, physiological, and behavioral signals that reflect a partner’s trustworthiness.  The goal of this Challenge is to develop an algorithm that identifies and extracts such signals from data recorded while volunteers engaged in various types of trust activities.  Cross-disciplinary teaming is encouraged in order to bring together expertise from diverse fields (such as neurophysiology and data analytics) to solve this complex problem.

Innocentive, by the way, describes itself as a “global leader in crowdsourcing innovation problems to the world’s smartest people who compete to provide ideas and solutions to important business, social, policy, scientific, and technical challenges.” It has partnered with “AARP Foundation, Air Force Research Labs, Booz Allen Hamilton, Cleveland Clinic, Eli Lilly & Company, EMC Corporation, NASA, Nature Publishing Group, Procter & Gamble, Scientific American, Syngenta, The Economist, Thomson Reuters, and several government agencies in the US and Europe” and claims to have “rapidly” generated “innovative new ideas” to “solve problems faster, more cost effectively, and with less risk than ever before.”

The goal of this challenge is to “develop capabilities to detect, measure, and validate one’s own ‘useful’ signals in order to more accurately assess another’s trustworthiness in a particular context,” according to IARPA. And the challenge builds on prior research:

In a series of recent research studies funded by IARPA, voluntary participants interacted with other volunteers while undertaking a number of tasks that required each of them to assess the other’s trustworthiness.  In turn, each participant had to decide whether they would act in a trustworthy fashion towards their partner.  Importantly, both participants could gain or lose stakes based on the combined consequences of each person’s willingness to trust and each person’s willingness and ability to keep specific promises made to the other. Neural, psychological, and physiological data were collected in parallel with these tasks, with participants’ behavior serving as ground truth (i.e., partners did or did not keep their promises).  The Air Force Research Laboratory (AFRL) has conducted preliminary analyses of these data, and now joins IARPA in inviting Solvers to explore the data in greater depth.

While not specified, let’s presume that one of the key scenarios played out involves whether one would divulge state secrets or classified information.

The US intelligence community already has fifteen agencies that use polygraph testing to determine whether someone should have a security clearance. Six of the agencies, according to a McClatchy News investigation, are supposed to focus only on “national security questions, such as whether someone has leaked classified information or has inappropriate relationships with foreigners.” But nine of the agencies find it critical to use polygraph tests to find “applicants or employees who are hiding crimes or deviant or unstable behavior that should bar them from certain jobs. Those agencies ask questions about prior drug use, undisclosed crimes and lying on the security-clearance application form.”

What the US intelligence community views as deviant behavior that bears on trustworthiness likely factors into efforts to spot potential insider threats.

McClatchy has further reported that many of these screenings involve ethical or legal abuses. At the National Reconnaissance Office, which is a US spy satellite agency, polygraphers said they earned bonuses for pushing “ethical and legal boundaries.” An NRO polygrapher said he had “felt pressured to interrogate an employee about her sexual abuse as a child during a screening. Separately, a CIA job applicant said polygraphers had asked her about her reported rape and miscarriage.”

Following leaks on cyber warfare against Iran and a CIA underwear bomb plot sting operation in Yemen in 2012, Clapper announced anti-leaks measures to require intelligence employees to answer a question on whether they have leaked “restricted information” to journalists or the news media. Another measure was adopted, according to The New York Times, to allow “investigators” to “call in anyone for a polygraph test about a particular leak, apart from a criminal leak investigation by the Federal Bureau of Investigation.”

Yet another measure granted agencies the ability to issue letters of reprimand or fire suspects in “cases where a suspect is identified but not prosecuted.” And the national intelligence director could now have the person in the newly established position of inspector general for the intelligence community conduct administrative investigations to “ensure that selected unauthorized disclosure cases” were not “closed prematurely.”

According to Clapper, these measures were intended to “reinforce our professional values by sending a strong message that intelligence personnel always have, and always will, hold ourselves to the highest standard of professionalism.” He could just as easily have said the “highest standard” of secrecy.

Furthermore, Clapper has presided over an intelligence community that has implemented an “Insider Threat” program, which McClatchy also reported. As Marisa Taylor and Jonathan Landay detailed, the “unprecedented initiative” was “sweeping in its reach.” It “extends beyond the US national security bureaucracies to most federal departments and agencies nationwide, including the Peace Corps, the Social Security Administration and the Education and Agriculture departments. It emphasizes leaks of classified material, but catchall definitions of ‘insider threat’ give agencies latitude to pursue and penalize a range of other conduct.”

“Millions of federal employees and contractors,” according to Taylor and Landay, “must watch for ‘high-risk persons or behaviors’ among co-workers and could face penalties, including criminal charges, for failing to report them.” And, “Leaks to the media are equated with espionage.”

“Experts and current and former officials” that Taylor and Landay spoke to for the story suggested the “Insider Threat” program would likely “make it easier for the government to stifle the flow of unclassified and potentially vital information to the public, while creating toxic work environments poisoned by unfounded suspicions and spurious investigations of loyal Americans.”

Essentially, leaking to the press or public would be treated in a manner similar to how agencies had reacted to leaks to America’s enemies.

Encouraging intelligence employees to identify potential insider threats and report them or else face penalties depends on being able to tell who can and cannot be trusted. There needs to be some reasonable understanding of what to look for. After all, they missed Snowden.

One might wonder how the intelligence community is using the word “trust.” Would it be more appropriate to use the word loyalty? Obedience? Submission?

Training employees to spot who is and is not “trustworthy” is pre-crime enforcement. No algorithm will ever be flawless in its predictions, and those false positives, which amount to witch-hunts against suspected insiders, may be frequent or infrequent depending on how the algorithm is applied or understood.
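The false-positive problem is a matter of simple base-rate arithmetic. The sketch below uses entirely assumed numbers (IARPA has published no such figures): a large screened workforce, a tiny fraction of genuine insider threats, and a detector that is right 99 percent of the time. Even under those generous assumptions, the overwhelming majority of people flagged would be innocent.

```python
# Illustrative base-rate arithmetic. Every number here is an assumption
# for the sake of the example, not an actual government or IARPA figure.
population = 5_000_000        # assumed: cleared employees and contractors screened
threat_rate = 0.0001          # assumed: 1 in 10,000 is a genuine insider threat
sensitivity = 0.99            # assumed: detector flags 99% of true threats
false_positive_rate = 0.01    # assumed: detector wrongly flags 1% of innocents

true_threats = population * threat_rate          # 500 actual threats
innocents = population - true_threats            # everyone else

true_positives = true_threats * sensitivity      # threats correctly flagged
false_positives = innocents * false_positive_rate  # innocents wrongly flagged

flagged = true_positives + false_positives
precision = true_positives / flagged             # share of flags that are real

print(f"People flagged: {flagged:,.0f}")
print(f"Actual threats among them: {true_positives:,.0f} ({precision:.1%})")
```

Under these assumed inputs, roughly fifty thousand people are flagged and about ninety-nine percent of them are innocent, which is why even a highly accurate screening algorithm applied to a rare behavior produces suspicion pools dominated by false positives.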