
(from left) Ning Wang, Yigal Arens, Wout Brusselaers, Ramsay Brown
Artificial intelligence is on the brink of permeating every major industry, from healthcare to transportation. AI is also becoming a larger part of everyday life, with virtual assistants such as Apple’s Siri already in the hands of millions around the world and self-driving cars on the rise. In a panel discussion hosted by the LAWAC Young Professionals on November 1, four experts in machine learning debated the ethics of AI and what these rapid developments mean for humanity.
One of the greatest concerns surrounding AI is how it will affect employment. But Yigal Arens, Director of the Intelligent Systems Division of the Information Sciences Institute, pointed out that “this isn’t the first time technology has caused massive job loss.” He cited the Industrial Revolution, when the advent of hundreds of innovative machines created mass unemployment. Concerning the invention of the automobile, he quipped, “Those horses never got their jobs back.” Ramsay Brown, founder and CEO of Dopamine Labs, agreed with Arens and suggested that this could lead to an amazing period of development. “What we do after AI takes over these jobs is more interesting to me,” Brown said. “AI continues to break all the barriers we set. This can propel us to pursue even greater advances in technology that can help humanity.”
Wout Brusselaers, CEO and co-founder of Deep 6 AI, thought this might be too optimistic. He warned that we should be wary of a time when AI starts taking over jobs in politics or when human lives are at stake. Brusselaers pointed out that AI has been used to take human lives for decades, since the advent of landmines, and is still being used today in drone strikes: “Weaponized AI is okay for localized situations, but on a large scale it will lead to all-out war.”
At this point in the AI revolution, there is still an owner, developer, or company that is responsible for the robots, and the panelists suggested that governments could play a role as well. Arens mentioned that a guaranteed minimum income, funded in part by taxes on AI companies, has been proposed as a solution to AI-driven job loss. If an AI causes an accident, the company or the government regulating it would cover the health bills.
Digging deeper into the ethics behind AI, the panel discussed the “Trolley Problem,” a scenario commonly used to illustrate the ethical implications of enabling AI to make life-or-death decisions. In the “Trolley Problem,” a trolley approaches a fork in the track: one person is tied to the left branch, and a group of people is tied to the right. The AI must decide whether the trolley goes left or right. As Ning Wang, a research scientist at the USC Institute for Creative Technologies, pointed out, humans typically want robots to make utilitarian decisions. Brown argued that as we create robots, we tend to build our own implicit ethics into them: “You can’t expect less of AI than we expect of humans; we should hold them to the same standards.” And while Brusselaers believes that “humans should always be involved in these decisions [involving human life], because we have empathy,” AI can more rapidly work through all the ramifications and permutations of an action. In this example, the AI could determine who is a criminal and who is a scientist who could cure cancer, and steer the trolley accordingly.
The future of AI is promising but also presents serious ethical questions that must be addressed. Overall, Wang, Brusselaers, Brown, and Arens agreed that with proper government regulation and corporate responsibility, AI can be incredibly beneficial to humanity. As Brusselaers concluded, “The core attribute of AI is the ability to learn. There’s no reason we can’t eventually develop AI to circumvent almost all human problems.”