There Is Simply No Scientific Backing For TSA’s Behavioral Detection Program

Techdirt – by Tim Cushing

The Government Accountability Office has taken a run at the TSA’s Behavioral Detection program in the past. Its findings were far from complimentary. Specially-trained “Behavior Detection Officers” (BDOs) were basically human coin flips. Deciding whether or not someone was a threat came down to a lot of subjective readings of human behavior, rather than proven principles.

In response to this report, the TSA started trimming back the number of BDOs it deployed, converting about 500 of them back into regular TSA officers. But the TSA still believed there was something to its pseudoscience patchwork, so it’s still sending out 2,600 BDOs to covertly stare at travelers’ throats and eyes (no, really) until terrorism reveals itself. 

The GAO’s second report focuses on the TSA’s stubborn insistence that the Behavior Detection program is worth what we’re paying for it. The TSA has repeatedly told oversight bodies that the program is scientifically backed. The GAO’s investigation [PDF] finds almost nothing to support that assertion.

We reviewed and categorized all 178 sources that, as of April 2017, TSA cited as providing support for specific indicators in its revised list of behavioral indicators to identify the extent to which they present valid evidence. We defined valid evidence as original research that meets generally accepted research standards and presents evidence that is applicable in supporting TSA’s specific behavioral indicators.
Of the 178 total sources TSA cited, we determined that 137 are news or opinion sources, and we took no further action because these do not meet our definition of valid evidence. We determined that 21 of the 178 sources are reviews of studies—sources that rely on the source author’s assertion of support for the indicator rather than original analysis, methods, or data that can be independently used as valid evidence.

Of the 178 sources cited by the TSA (with great confidence, one imagines), only 20 met the GAO’s definition of valid evidence.

The TSA claims its BDOs have the ability to sniff out terrorism by looking for behavioral indicators. There are long lists of indicators to look for but, again, the TSA has nothing in the way of evidence to back these up.

In our review of all 178 sources TSA cited in support of its revised list, we found that 98 percent (175 of 178) of the sources do not provide valid evidence applicable to the specific indicators that TSA identified them as supporting. In total, we found that TSA does not have valid evidence to support 28 of its 36 revised behavioral indicators, has one source of valid evidence to support each of 7 indicators, and has 2 sources of valid evidence to support 1 indicator.

In fact, it appears the TSA may just be Googling for indicators, as its supporting “evidence” is extremely scattershot.

For example, among other sources, TSA cited four separate news articles that provide eyewitness accounts of a single 2005 suicide bombing incident at an Israeli bus station as support for 2 behavioral indicators. In another example, TSA cited a 2004 newsletter published by the New Hampshire Police Standards and Training Council as support for 11 behavioral indicators.

A partial snapshot of the chart created by the GAO shows the utter failure of the TSA’s behavioral “science.”

The individual breakdown of the scientific “support” for behavioral indicators is even more damning.

As a result of our analysis, we determined that TSA does not have valid evidence supporting 28 of its 36 revised behavioral indicators. Specifically, every source TSA cited in support of these 28 indicators is a news or opinion source, a review of studies source, or an original research source that either does not meet generally accepted research standards or is not applicable to the behavioral indicator it was provided to support. For example, TSA cited 105 sources to support the use of indicator number 11, which involves BDOs identifying individuals who seem to be attempting to conceal their normal appearance. However, we found that none of the 105 sources present original research that meets generally accepted research standards. In another example, TSA cited 63 sources to support the use of indicator number 5, which involves BDOs identifying individuals who seem to be sweating heavily. While we found that one of the 63 sources cited is original research that meets generally accepted research standards, we also found that this source does not present evidence that is applicable as support for this indicator.

To which is appended this footnote:

Some reviews of studies sources TSA cited as support for behavioral indicators present conflicting information on whether the use of behavioral indicators and behavior detection more generally is useful in identifying individuals who may pose a threat to security. For example, one article states that indicators of suicide bombers “are drawn from multiple sources and have not been formally or empirically validated.” Another states that indicators used by law enforcement to identify potential suicide bombers are “vague, contradictory, and so broad as to be useless.”

The DHS responded to the GAO’s draft by claiming it still had plenty of research and science justifying the deployment of BDOs. More “evidence” was submitted and found to be just as useless as everything the GAO had already examined.

[I]n its letter, DHS cited passages from a 2013 RAND report as providing support for TSA’s use of behavior detection. However, the report DHS cited does not clearly support TSA’s use of behavior detection. As DHS noted, the RAND report states that there is value and unrealized potential for using behavioral indicators as part of a system to detect attacks. However, the indicators reviewed in the RAND study could not be used in real time in an airport environment, as we reported in 2013. Further, the RAND report refers to behavioral indicators that are defined and used significantly more broadly than those used by TSA. For example, the RAND report includes indicators such as mobile device tracking, monitoring online activity, and changes in lifestyle patterns.

The DHS also argued the program was justified because some indicators are so obvious that even untrained observers could spot them.

DHS also stated that there are certain common-sense indicators that TSA cannot reasonably ignore, with or without valid evidence. Specifically, DHS highlighted the indicator “unusual exposed wires or electrical switches on a person.”

Of course, these could (maybe) be detected by normal screeners, rather than by specially-trained officers costing taxpayers millions each year. The GAO agrees BDOs shouldn’t ignore common sense. But much of what the DHS considers to be “common sense indicators” amounts to highly subjective readings of normal human behavior.

We recognize DHS’s position that certain common-sense indicators of mal-intent should not be ignored. However, TSA’s revised list of 36 behavioral indicators also includes indicators that are subjective, such as assessing the way an individual swallows or evaluating the degree to which an individual’s eyes are open.

Finally, the DHS protests that it should be allowed to use behavioral detection because other agencies (CBP, FBI) and other countries use it. The GAO patiently responds that behavioral detection may work for others, but the TSA has yet to prove it works in the TSA’s hands and can offer no solid evidence to back its claims.

The TSA’s Behavioral Detection program isn’t dead yet, but a few more nails have been driven into the coffin. If the regular TSA screening procedures are security theater, the BDOs are the mimes. A whole lot of moving around, but in the end, there’s really nothing there.
