Many people are discussing the dangers of Artificial Intelligence (AI), but many of these discussions focus on what I believe are the wrong issues.
I began my formal work with AI while a graduate student at NYU in the mid-1990s. The world of AI has obviously advanced quite a bit since then, but many of the fundamental issues that those of us in the field began recognizing almost three decades ago not only remain unaddressed, but continue to pose increasingly large dangers. (I should note that, in some ways, I have been involved in the field of artificial intelligence since I was a child: by the time I was 7, I was playing checkers against a specialized checkers-playing computer, trying to figure out both why the device sometimes lost and how to improve its performance.)
While I will describe in another article why many of the concerns with AI commonly discussed in the media should actually not be of grave concern to anyone, I will first publish a series of pieces discussing what I DO consider to be the biggest dangers of AI.
So, in no particular order, here is the first:
One of the great powers of AI is its ability to automate translation, a capability that will, in the not-so-distant future, enable any two people on this planet to communicate with one another. AI is already well on its way toward effectively establishing the utopian level of communication envisioned by the Bible in Genesis 11: "Now the whole world had one language and a common speech."
There is little doubt that AI translation technology is already starting to have a dramatic, transformative impact on human society – and that the magnitude of that impact will only grow with time.
As is always the case with new technologies, however, the ability to communicate universally can be used for good or for bad; in our world, the power to do good always comes with a trade-off.
By offering human beings the ability to communicate unbounded by language and culture, AI is already enabling criminals who might otherwise be constrained by their knowledge of a particular language, or set of languages, to social engineer people who speak other languages. In the past, criminals used translation tools to create phishing emails, which, naturally, were far from perfectly crafted. Today, however, we already see voice and video translators that can quickly, sometimes in real time, transform oral and visual communications from one language to another, enabling social engineering attacks by phone or even by video call.
To see the power, and the danger, of AI-based language conversion as it already exists, please watch the following one-minute video; it was generated in just a few minutes by the team at GoHuman.AI, using only the video below it as input.
The original video (unadulterated by AI modification) follows: