Will emotionless programmers create emotionless networks which destroy civilizations emotionlessly?
Are brogrammers too lacking in humanity to save humanity?
Japanese roboticist Hiroshi Ishiguro displayed one of his androids on Sunday at the SXSW Interactive Festival. The android, which is modeled after Ishiguro, held an autonomous conversation in Japanese on stage with an Ishiguro associate. USA TODAY
Marco della Cava, USA TODAY
(Photo: Getty Images)
AUSTIN – Technological advances have always cut with double-edged swords, capable of both propelling humanity to new achievements and threatening us with potential catastrophe.
That chilling theme was explored by two leading technologists at SXSW Interactive, a festival that has seen its share of humans rising up against the machines.
Unlike last year, no protests rallying to “Stop the Robots” were in evidence. Still, the tech idea conference was rife with provocative sessions such as Can AI Systems Really Think? and others exploring AI and the future of life.
In separate talks, the promise and pitfalls of both DNA sequencing and artificial intelligence were laid out by Riccardo Sabatini, a quantum physicist-turned-human-genome expert, and Dag Kittlaus, a telecom veteran-turned-entrepreneur. Kittlaus developed the virtual personal assistant Siri and sold it to Apple in 2010.
“It is important to prevent the bad side,” Kittlaus, 49, said during his cheerfully titled talk, Will AI Augment or Destroy Humanity? “It’s a good idea to keep an eye on this.”
When the moderator, tech author Steven Levy, asked Kittlaus whether supercomputers might one day take over for entrepreneurs, using their digital brains to create things faster than humans, Kittlaus nodded.
“Yes, it will happen,” he said. “It’s just a matter of when.”
Kittlaus, it can be argued, is hastening the arrival of that day. Later this year, he will unveil Viv, an open source and cloud-based personal assistant that will allow humans “to talk to the Internet” and have the Internet talk back.
“The more you ask of Viv, the more it will get to know you,” he said. “Siri was chapter one, and now it’s almost like a new Internet age is coming. Viv will be a giant brain in the sky.”
Kittlaus said Viv would differ from Siri, Microsoft’s Cortana and Amazon’s Echo by being able to make mental leaps.
For example, asking Viv “What’s the weather near the Super Bowl?” would cause it to “write its own program to find the answer, one that first determines where the Super Bowl is, and then what the weather will be in that city,” he said.
Levy laughed. “So,” he said, “if I stumble out of a bar and just say ‘I’m drunk,’ will it call me an Uber?”
Kittlaus smiled. “It might, or it might order you another drink.”
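Viv’s internals are not public, but the “mental leap” Kittlaus describes — decomposing one question into chained lookups — can be sketched with hypothetical stand-in data sources (the event and weather tables below are invented for illustration):

```python
# Illustrative sketch only: models an assistant chaining two lookups --
# first resolving an event to a city, then fetching that city's weather.
# EVENT_LOCATIONS and WEATHER are hypothetical stand-ins for live services.

EVENT_LOCATIONS = {"super bowl": "Santa Clara"}  # stand-in event database
WEATHER = {"Santa Clara": "sunny, 68F"}          # stand-in weather service


def answer(question: str) -> str:
    """Chain two lookups: question -> event -> city -> weather."""
    q = question.lower()
    for event, city in EVENT_LOCATIONS.items():
        if event in q and "weather" in q:
            return f"The weather in {city} is {WEATHER[city]}."
    return "Sorry, I can't answer that yet."


print(answer("What's the weather near the Super Bowl?"))
```

A production assistant would plan such lookup chains dynamically rather than hard-code them, but the two-step structure — resolve the implicit fact first, then answer the explicit question — is the point of Kittlaus’ example.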
PRIVACY ISSUES LOOM FOR SMART MACHINES
Such levity aside, privacy and security issues pop to mind when considering a cloud-based system that’s gobbling up data to create a digitized picture of our lives.
Apple’s current battle with the FBI over providing code to crack open a killer’s iPhone is one matter; granting access to a thinking machine that is privy to a person’s smallest details would be quite another.
Pepper the robot looks on during a session about smart future tech at SXSW. (Photo: Rick Jervis, USA TODAY)
Kittlaus’ answer to a question about secure data was less than convincing: “It will be up to you to tell it what you want to tell it.”
The issue of machine learning outgunning human brainpower is currently on bold display in South Korea, where an AI program called AlphaGo has thrashed champion Go player Lee Se-dol in three straight games. AlphaGo was created by DeepMind, a British company that Google bought two years ago.
Ironically, Kittlaus is working on a novel that features dangerous AI.
“It’s a Siri out-of-control scenario,” he explained with a smile as the packed room laughed. “The machine seems to be right all the time in its predictions, so the question becomes, how do you trust that machine when you don’t know how it’s making its decisions?”
THE DILEMMA OF CREATING SUPERBABIES
On the topic of DNA sequencing, humans will have to bear the responsibility of ethically handling the coming leaps, said Sabatini, 34, a researcher who captivated TED 2016 last month with a lecture that found him hauling 175 thick books on stage – the full genetic make-up of DNA-sequencing pioneer J. Craig Venter. Sabatini works for Venter’s company, Human Longevity Inc.
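A quick back-of-the-envelope check shows why the genome fills that many volumes (assuming the widely cited figure of roughly 3 billion base pairs in a human genome; the book count comes from the talk):

```python
# Rough arithmetic: letters of genetic code per printed volume,
# assuming ~3 billion base pairs spread across 175 books.
base_pairs = 3_000_000_000
books = 175

letters_per_book = base_pairs // books
print(letters_per_book)  # roughly 17 million letters in each book
```

At around 17 million characters per volume, each of those 175 books is packed far denser than ordinary prose, which is the visual point Sabatini was making.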
“We should as a species get informed, because this is a controversial topic,” said the Italian scientist. “We need to come to an ethical understanding, or we might get to an unhealthy story.”
DNA sequencing pioneer J. Craig Venter, left, shown here with Nobel Laureate Hamilton Smith. (Photo: JCVI)
Sabatini said that as we understand more about our genetic makeup – of which “only about 1% is clear to us” – there will be the opportunity not only to check for potential diseases before they ravage the body, but also to genetically modify a future human to have more appealing traits. Call it man-made evolution.
Specifically, Sabatini said that current genetic sequencing makes it possible to see which lines of our human code correspond not just to physical features, but also to so-called superpowers, including the ability to sleep just three hours a night and to see well in the dark.
Pressed by moderator and entrepreneur Loic Le Meur about a rogue scientist or state manipulating the genes in fertilized eggs to create a race of superbabies, Sabatini demurred.
“Sure, these are the worst ideas we can have,” he said. “One thing is reading the genome, another is changing it. That is not genetics, it’s selection.”
Not exactly a reassuring answer. But maybe the ethical issues raised by technology granting us access to our genetic make-up aren’t that clear cut.
Italian DNA expert Riccardo Sabatini, right, told a crowd at SXSW that it would be up to humanity to decide how to best handle the gift of full genomic sequencing. He was interviewed by entrepreneur Loic Le Meur. (Photo: Marco della Cava, USA TODAY)
Sabatini made this point matter-of-factly when he told the packed room that a family history of Alzheimer’s made him check his own DNA for the possibility of the disease. And indeed, the results indicated that he is at risk.
“My mother, she did not want to know the answer, because she told me she would feel guilty,” he said. “But that is not the point. The point here is, if she had this information about me when she was pregnant, my mother would have made a difficult choice out of love, and that would have ruled me out of existence.”
You could hear a pin drop as the anecdote sank in.
“I don’t have the answers,” said Sabatini. “I just know that the decisions must be made collectively as humans.”
Follow USA TODAY tech reporter Marco della Cava on Twitter: @marcodellacava