Siri can understand what you say. Google can take dictation. Even your new smart TV is taking verbal orders.
So is there any doubt the National Security Agency has the ability to translate spoken words into text?
But precisely when the NSA does it, with which calls, and how often, is a well-guarded secret.
It’s not surprising that the NSA isn’t talking about it. But oddly enough, neither is anyone else: Over the years, there’s been almost no public discussion of the NSA’s use of automated speech recognition.
One minor exception was in 1999, when a young Australian cryptographer named Julian Assange stumbled across an NSA patent that mentioned “machine transcribed speech.”
Assange, who went on to found WikiLeaks, said at the time: “This patent should worry people. Everyone’s overseas phone calls are or may soon be tapped, transcribed and archived in the bowels of an unaccountable foreign spy agency.”
The most comprehensive post-Snowden descriptions of the NSA’s surveillance programs are strangely silent when it comes to speech recognition. The report from the President’s Review Group on Intelligence and Communications Technologies doesn’t mention it, and neither does the October 2011 FISA Court ruling, nor the detailed reports from the Privacy and Civil Liberties Oversight Board.