Tetyana Karpenko interviews Sviatlana Höhn, who holds a PhD in computer science and works on building chatbots.
Background:
Well, this is not exactly how one would like to imagine the future: full of offensive, neo-Nazi chatbots. But that is exactly how the future knocked on our door recently, in the form of a nasty, dubious bot called Tay.
With great pomp, Microsoft introduced its chatbot, which was supposed to imitate a tweeting teenager. Everybody was invited to chat with Tay, and many people took up the remarkable opportunity. But at a certain point the chatbot started to tweet bizarre sexist and neo-Nazi statements. Microsoft took Tay offline and deleted all the offensive tweets. But after Tay was switched on again, the same thing happened.
Microsoft's explanation: Tay's interlocutors misused a so-called "repeat after me" function. They simply made the chatbot repeat offensive remarks over and over, and she adopted them. Now Tay has been silenced again. But despite the embarrassment, Microsoft proclaimed that soon we will all be using personal assistants, chatbots like Tay (hopefully, an upgraded version).
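Microsoft has not published Tay's internals, so the exact mechanism can only be guessed at. As a purely hypothetical illustration, the following Python sketch shows how a naive "repeat after me" feature combined with unfiltered learning can poison a bot's pool of replies; every name in it is invented for the example.

    import random

    class NaiveChatbot:
        """Hypothetical toy bot that learns by storing every phrase users feed it."""

        def __init__(self):
            # Seed vocabulary the bot starts with.
            self.learned_phrases = ["hello!", "tell me more"]

        def handle(self, message: str) -> str:
            # A naive "repeat after me" feature: echo the phrase AND store it,
            # with no filtering of what gets learned.
            if message.lower().startswith("repeat after me:"):
                phrase = message.split(":", 1)[1].strip()
                self.learned_phrases.append(phrase)  # the poisoning step
                return phrase
            # Otherwise reply with a randomly chosen learned phrase.
            return random.choice(self.learned_phrases)

    bot = NaiveChatbot()
    bot.handle("repeat after me: <offensive remark>")
    # From now on the offensive phrase can surface in replies to anyone:
    print(bot.handle("hi there"))  # may print "<offensive remark>"

In a real system the learned material would feed a statistical model rather than a simple list, but the failure mode is the same: whatever users teach the bot, the bot can say back to everyone.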
Audio recording of Part II of the interview (unabridged version).
Sviatlana answers the following questions:
- Speaking about that bot, Tay: it was supposed to learn from others, and we know it ended up quite ridiculous. Teenagers misused it by telling it to repeat things; they simply ordered "repeat after me". In your opinion, did the researchers really think this was a good way to learn?
- There is a technique in artificial intelligence called "deep learning". MIT Technology Review writes that this technique is also used in Facebook's new Messenger bots. Facebook bots are actually the topic of the third part of our program, but speaking of deep learning: how can it affect today's technologies?