VR and Chatbots


Thanks to Ted Hall for a fascinating – if slightly nauseating – visit to the MIDEN lab. In the week that Oculus Rift launched, it was great to have a chance to try it out. The NYT’s conclusion was that it’s worth waiting before buying: “With about 30 games and a few apps available at Rift’s introduction, there isn’t much to do with the system yet. Oculus will eventually need a larger, more diverse set of content to transcend its initial audience of gamer geeks.”

The physical after-effects are still an issue: “The Rift has other consequences for the mind and body. I felt mentally drained after 20-minute sessions. My eyes felt strained after half an hour, and over a week I developed a nervous eye twitch.”


The 23-year-old whizzkid who invented Oculus Rift, Palmer Luckey, believes that Virtual Reality will transform communications, and in particular office meetings. But he told NPR that ethical use of VR in journalism is key: “People understand the limitations of videos, of pictures and of text. They understand that they’re subjective takes, and they understand that they don’t capture the entire big picture. With virtual reality, people are naturally inclined to take something they experienced as if it was real, as if it was something that actually happened in the way that they saw it happen… it really induces that sense that you were an eyewitness, first hand, seeing this thing happen. We’re going to have to be very careful to make sure that we use the technology responsibly in journalism, to make sure that it’s not used to push propaganda or warped depictions of the facts.”

The most darkly symbolic story of the week comes courtesy of Microsoft’s new Artificial Intelligence chatbot, Tay, which was supposed to learn “casual and playful” conversation by talking to tweeters. The problem, as The Verge puts it, is that “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day”, with the bot spewing anti-Semitic, anti-feminist hate speech almost immediately.


Microsoft pulled Tay down and scrubbed its timeline, apologising that “it had no way of knowing that people would attempt to trick the robot into tweeting the offensive words.” Tay later briefly resurfaced, and it didn’t go well again. This time Tay swore “a lot and boasted about smoking weed”, as Mashable put it, or in CNET’s words “has a druggy Twitter meltdown, dozes off again”. So much for Artificial Intelligence.

READING

Why Isis is Winning the Social Media War (Wired)

How Isis games Twitter (Atlantic)

At Front Lines, Bearing Witness in Real Time (NYT)