Microsoft echoed tech ethicists and employees in its call for restrictions. As Microsoft president Brad Smith put it, “We must ensure that the year 2024 doesn’t look like a page from the novel 1984.”
By Maya Kosoff @ The Hive @ VanityFair.com, Dec. 7
[....] The St. Thomas system is just one of a number of data points that AI Now—a group composed of tech employees from companies including Microsoft and Google, and affiliated with New York University—says exemplify the need for stricter regulation of artificial intelligence.

The group’s report, published Thursday, underscores the inherent dangers in using A.I. to do things like amplify surveillance in fields including finance and policing, and argues that accountability and oversight are necessities where this type of nascent technology is concerned.

Crucially, they argue, people should be able to opt out of facial-recognition systems altogether. “Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance,” the organization writes.

“These tools are very suspect and based on faulty science,” Kate Crawford, one of the group’s co-founders, who works for Microsoft Research, told Bloomberg. “You cannot have black-box systems in core social services.”

Equally important, the group argues, is internal governance at tech companies—taking steps like installing rank-and-file employees on a company’s board of directors, say, and allowing third-party experts to audit and publish reports about A.I. systems. [....]