This is the third installment in a four-part investigative series about U.S. Defense-funded programs to spy on activists and Muslims worldwide. The series runs Monday through Thursday this week on Occupy.com. Read the first installment here and the second installment here.
The U.S. Department of Defense's multimillion-dollar university research program, the Minerva Research Initiative, is developing new data-mining and analysis tools for the U.S. military intelligence community to capture and analyze social media posts. The new tools provide unprecedented techniques to identify individuals engaged in political radicalism around the world, while mapping their behavioral patterns and their social and organizational connections and affiliations.
The range of research projects undertaken by Arizona State University (ASU), a National Security Agency (NSA)-designated university, includes the development of algorithms that leading intelligence experts agree could feed directly into the notorious "kill lists" – enhancing the intelligence community's ability to identify groups suspected of terrorist activity for potential targeting via the CIA's extrajudicial "signature" drone strikes.
Through the Social Media Looking Glass
One Pentagon-sponsored ASU project, whose findings were published in the journal Social Network Analysis and Mining in 2012, involved downloading and cataloging 37,000 articles published between 2005 and 2011 on the websites of 23 Indonesian religious organizations to "profile their ideology and activity patterns along a hypothesized radical/counter-radical scale."
The study found that its automated threat-classification model ranked the organizations with "expert-level accuracy." Related research has focused on developing data-mining and analytical tools to track political trends and social movements via social media.
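To give a concrete sense of what ranking organizations along such a scale involves, below is a minimal sketch of a supervised text-classification pipeline of the general kind described in the published literature. The example texts, labels and model choice are invented for illustration; the ASU study's actual data, features and methods are only partially public.

```python
# Hypothetical sketch: scoring documents along a "radical/counter-radical"
# scale with a supervised text classifier. All texts and labels below are
# invented; the ASU study's actual model and corpus are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training corpus: 1 = "radical" end of the scale, 0 = "counter-radical".
docs = [
    "calls for violent struggle against the state",
    "urges armed resistance and martyrdom",
    "promotes interfaith dialogue and tolerance",
    "advocates peaceful civic participation",
]
labels = [1, 1, 0, 0]

# Bag-of-words TF-IDF features, a common baseline in this literature.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

model = LogisticRegression()
model.fit(X, labels)

# Rank unseen organizations' texts along the hypothesized scale by the
# model's predicted probability of the "radical" class.
new_docs = ["encourages community charity work",
            "demands holy war against unbelievers"]
scores = model.predict_proba(vectorizer.transform(new_docs))[:, 1]
for text, score in sorted(zip(new_docs, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {text}")
```

The point of the sketch is that once such a model is trained, scoring an entire sector of civil society is just a batch job: every organization's output can be assigned a number on the scale automatically.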
According to Minerva chief Erin Fitzgerald, such research is about minimizing conflict. “Insights generated from Minerva research are intended to inform more effective strategic and operational policy decisions by defense leadership," said Fitzgerald. "The end goal is always to prevent future conflict and – if the U.S. must play a role in conflict elsewhere – to help DoD understand how to most effectively engage with partners to mitigate that conflict.”
Long before Edward Snowden's revelations about NSA surveillance programs, it was already known that the CIA and other U.S. intelligence agencies had actively sought to analyze social media, from blog posts to tweets and from Amazon reviews to YouTube clips and Flickr photos. The NSA's Open Source Indicators Program, for instance, involves "academics who work at a research branch of the NSA" developing automated analytical tools that mine open source information on Facebook, Twitter, Google and elsewhere to predict future events such as protests, pandemics, resource shortages, mass migrations and economic crises.
Such open source information is already being used as "enrichment" data, integrated with phone and email metadata to create sophisticated graphs identifying the social connections, associates, locations, traveling companions and other behavioral patterns of individuals seen as potential "radicals" or terrorists – whether American or foreign.
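As a rough illustration of how such "enrichment" could work in principle, the sketch below merges invented phone-metadata edges and social-media edges into a single contact graph and runs a simple two-hop neighborhood query. The names, records and query are hypothetical; the actual systems and schemas are classified.

```python
# Hypothetical sketch of "enrichment": merging communications metadata with
# open-source social media links into one contact graph. All records here
# are invented for illustration.
import networkx as nx

phone_metadata = [("alice", "bob"), ("bob", "carol")]    # who called whom
social_links   = [("alice", "dave"), ("carol", "dave")]  # follows/friends

G = nx.Graph()
G.add_edges_from(phone_metadata, source="phone")
G.add_edges_from(social_links, source="social")

# Simple pattern-of-life query: everyone within two hops of a person
# of interest, regardless of which data source supplied the link.
target = "alice"
nearby = nx.single_source_shortest_path_length(G, target, cutoff=2)
print(sorted(n for n, d in nearby.items() if 0 < d <= 2))
```

Note what the merge buys the analyst: "carol" never called or followed "alice," yet appears in her two-hop neighborhood once the two data sources are fused – which is precisely why guilt-by-association concerns attach to this kind of graph.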
This analytical capability is mobilized not just to identify potential extremists and their associations, but also to pursue and sanction "high-value targets." In one case in 2012, according to the Washington Post, "a user account on a social media Web site provided an instant portal to an al-Qaeda operative's hard drive." A leaked NSA document confirmed: "Within minutes, we successfully exploited the target."
The National Counterterrorism Center (NCTC), which generates the CIA’s kill lists, draws its information from databases across the U.S. intelligence community, including the FBI, CIA, NSA and Department of Homeland Security (DHS), among others, encompassing and integrating both metadata from private electronic communications and associated data across an individual’s online social media networks.
DHS fusion centers, working closely with private sector corporations, routinely data-mine social media posts of American citizens including, for instance, Occupy activists in efforts to detect threat trends that could constitute a “hazard” to the public.
According to classified criteria, names of potential "radicals" outside the U.S. linked to terrorist groups or activity are canvassed by the NCTC for potential extrajudicial assassination, then assessed by a White House interagency commission that narrows the list before a presidential decision on whether each individual lives or dies.
The problem, according to ASU researchers writing in a separate 2013 paper, is that current technology “cannot find the proverbial ‘needles in a haystack’ corresponding to those individuals with radical or extremist ideas, connect the dots to identify their relationships, and their socio-cultural, political, economic drivers.”
This has led the Pentagon to fund ASU to create a technology dubbed "LookingGlass" – "a visual intelligence platform for tracking the diffusion of online social movements." Algorithms are applied to "large amounts of text collected from a wide variety of organizations' media outlets to discover their hotly debated topics, and their discriminative perspectives voiced by opposing camps organized into multiple scales."
These are then used to “classify and map individual Tweeter’s message content to social movements based on the perspectives expressed in their weekly tweets.” The LookingGlass platform is able “to track the geographical footprint, shifting positions and flows of individuals, topics and perspectives between groups.”
Unlike previous systems, LookingGlass can provide "real-time contextual analysis of complex socio-political situations that are rife with volatility and uncertainty. It is able to rapidly recognize radical hot-spots of networks, narratives and activities, and their socio-cultural, economic, political drivers." It can also identify and track specific "radical" and "non-radical" individuals, along with shifts in their beliefs and affiliations to "radical" and "non-radical" movements and organizations.
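The LookingGlass papers do not publish code, but the kind of weekly tweet-to-movement mapping they describe can be sketched in outline. Everything below – the movements, keyword profiles, users and tweets – is invented; a real system would use trained classifiers and geolocation rather than simple keyword matching.

```python
# Hypothetical sketch of weekly affiliation tracking: score each user's
# tweets per week against movement keyword profiles and watch how their
# best-matching movement shifts over time. All data here is invented.
from collections import Counter, defaultdict

movement_keywords = {
    "movement_a": {"strike", "boycott", "solidarity"},
    "movement_b": {"reform", "ballot", "petition"},
}

tweets = [  # (user, week, text)
    ("user1", 1, "join the boycott in solidarity"),
    ("user1", 2, "time for reform, sign the petition"),
    ("user2", 1, "ballot access reform now"),
]

weekly = defaultdict(Counter)  # (user, week) -> movement match counts
for user, week, text in tweets:
    words = set(text.lower().split())
    for movement, keywords in movement_keywords.items():
        weekly[(user, week)][movement] += len(words & keywords)

# A user's weekly affiliation = best-matching movement; week-to-week
# changes are exactly the "shifting positions" the platform tracks.
for (user, week), counts in sorted(weekly.items()):
    movement, _ = counts.most_common(1)[0]
    print(user, f"week {week} ->", movement)
```

In this toy run, "user1" shifts from movement_a to movement_b between weeks one and two – the kind of individual-level transition that, at scale and with location data attached, becomes a surveillance product.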
The Entire Physical and Virtual World is a Militarized Battlefield
U.S. intelligence experts assessing the Minerva initiative strongly disagreed with Fitzgerald’s claims that these Pentagon-funded research projects would contribute to minimizing conflict.
In my earlier report on Minerva, I disclosed an internal Minerva staff email relating directly to ASU's radicalization discourse project, which confirmed that the program is geared toward producing "capabilities that are deliverable quickly" for application to field operations. Senior Pentagon officials had told ASU staff to develop "models and tools that can be integrated with operations."
The analytical tools developed for the Pentagon by ASU researchers are directly applicable to the extensive data-mining programs of the National Security Agency revealed by whistleblowers Edward Snowden, Russell Tice, William Binney and Thomas Drake, among others. Billions of pieces of data in the form of phone calls, emails, photos and videos from major communication giants like Google, Facebook, Twitter and Microsoft, among others, are collected and then analyzed to identify national security threats.
Technologies like LookingGlass could dramatically advance the NSA's capacity to track and analyze metadata in the context of the "open source" online data it mines so comprehensively. Former senior NSA executive and whistleblower Thomas Drake told me, regarding the Minerva-funded data-mining projects: "We must remember that the entire world, physical and virtual, is considered by the Pentagon as fair game and a militarized battlefield."
LookingGlass, along with the other data-mining tools being developed by universities with Pentagon funding, fits neatly into the parameters of earlier intelligence structures such as the Pentagon’s "Total Information Awareness" (TIA) program launched by the Bush administration, described by the New York Times as “the most sweeping effort to monitor the activity of Americans since the 1960’s.” TIA’s function was to use data-mining “to create risk profiles for millions of visitors and American citizens in its quest for suspicious patterns of behavior.”
Under Obama, this has evolved into the global “disposition matrix,” a “next-generation targeting list” which “contains the names of terrorism suspects arrayed against an accounting of the resources being marshaled” to kill them, including the ability to map “plans for the ‘disposition’ of suspects beyond the reach of American drones.” The kill lists, reported the Washington Post, are part of “a single, continually evolving database in which biographies, locations, known associates and affiliated organizations are all catalogued. So are strategies for taking targets down, including extradition requests, capture operations and drone patrols.”
It is therefore no coincidence that Lisa Troy – the Pentagon's Minerva supervisor for the University of Washington's project, which aims to "fingerprint" the configuration of "mass political movements" to gauge the determinants of "social change" – is an employee of Bowman Systems Management, a leading U.S. defense contractor working on drone warfare technology.
Yet as Harvard security technologist Prof. Bruce Schneier has pointed out, the inherently murky and fluid categories used to profile terrorists and potential terrorists mean that the more data is fed into a data-mining system, the higher the risk of seeing terrorists where there are none.
“Depending on how you ‘tune’ your detection algorithms, you can err on one side or the other,” Schneier writes. “You can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots. To reduce both those numbers, you need a well-defined profile.”
The problem is that for terrorism, he writes, “There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.”
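Schneier's point is ultimately arithmetic: when the thing you are hunting is vanishingly rare, even an implausibly accurate detector buries its true hits under false alarms. The numbers below are assumptions chosen for illustration, not his figures.

```python
# Worked version of Schneier's base-rate argument. All rates and counts
# are illustrative assumptions, not real surveillance statistics.
population      = 300_000_000   # people under surveillance
real_plotters   = 1_000         # actual conspirators (a generous guess)
true_pos_rate   = 0.99          # detector catches 99% of real plotters
false_pos_rate  = 0.001         # flags only 0.1% of innocent people

true_alarms  = real_plotters * true_pos_rate                  # ~990
false_alarms = (population - real_plotters) * false_pos_rate  # ~300,000

precision = true_alarms / (true_alarms + false_alarms)
print(f"{false_alarms:,.0f} false alarms per {true_alarms:,.0f} real hits")
print(f"chance a flagged person is an actual plotter: {precision:.2%}")
```

Even with a false-positive rate of one in a thousand – far better than any real behavioral profile achieves – fewer than one flagged person in three hundred would be an actual plotter.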
This is why the NSA’s eavesdropping program “spat out thousands of tips per month,” said Schneier, and “every one of them turned out to be a false alarm.” Although “useless for finding terrorists,” the NSA’s data-mining is “useful for monitoring political opposition and stymieing the activities of those who do not believe the government's propaganda.”
“What this is really about is furthering the militarization of domestic law enforcement and going further down the rabbit hole into the belief that it is possible to control the 99% if you just have enough surveillance and enough armed force,” said ex-CIA official Robert Steele, speaking about ASU’s LookingGlass platform.
Indeed, although the NCTC's criteria for generating kill lists remain secret, a leaked NCTC document obtained by The Intercept in July revealed that the criteria for adding individuals to watch lists of "known or suspected terrorists" in the first instance were vague, requiring only "reasonable suspicion" rather than "concrete facts." Supporting evidence could be as thin as a single, uncorroborated social media post – which perhaps explains why 40% of suspects on the main watch list are not linked to any "recognized terrorist group."
Extrajudicial assassinations known as "signature" strikes, which target groups of terrorism suspects whose identities are not known, reportedly apply far more fluid criteria in determining whether those targeted are plotting or engaged in terrorist activity – so fluid that State Department officials complained to the White House that the CIA's criteria are "too lax." They joked that if the CIA saw "three people doing jumping jacks" in a hostile territory, it would designate the site a terrorist training camp. Hence, drone warfare in Pakistan and Yemen has frequently targeted "terrorist suspects" of whose presence the CIA was "not certain beforehand" and "whose names they do not know."
“The algorithms being developed at ASU remind me of the algorithms used as the basis for signature strikes with drones,” said former senior NSA executive Thomas Drake. In 2006, Drake leaked information about the NSA’s data-mining project Trailblazer to the press. Although the U.S. government attempted to prosecute him under the Espionage Act in 2010, the case collapsed.
I asked Drake whether ASU's algorithms could be applied to fine-tuning the generation of "kill lists" for drone strikes. "Your hunch is right," he said. "Having the U.S. government and Department of Defense fund this kind of research at the university level will bias the results by default. This is a fall-out of big data research of this type, using algorithms to detect patterns when the patterns themselves are an effect – and mixing up correlation with causality. Under this flawed approach, many false positives are possible and these results can create an ends of profiling justifying the means of data-mining."
Dr. Nafeez Ahmed is a bestselling author, international security scholar, investigative journalist and regular Guardian contributor on the geopolitics of interconnected environmental, energy and economic crises. This is the third in a four-part investigation being published on Occupy.com throughout this week. The first part can be read here and the second part here. The final installment appears Thursday.