dc.contributor.author: Zemčík, Tomáš
dc.date.accessioned: 2020-10-19T11:38:44Z
dc.date.available: 2020-10-19T11:38:44Z
dc.date.issued: 2020
dc.identifier.citation: AI & Society. 2020.
dc.identifier.issn: 0951-5666
dc.identifier.issn: 1435-5655
dc.identifier.uri: http://hdl.handle.net/10084/142338
dc.description.abstract: This study deals with the failure of one of the most advanced chatbots, Tay, created by Microsoft. Many users, commentators and experts strongly anthropomorphised this chatbot in their assessment of the case around Tay. This view is so widespread that we can identify it as a typical cognitive distortion or bias. This study presents a summary of facts concerning the Tay case and collaborative perspectives from eminent experts: (1) Tay did not mean anything by its morally objectionable statements because, in principle, it was not able to think; (2) the controversial content spread by this AI was interpreted incorrectly: not as a mere compilation of meaning (parroting), but as its disclosure; (3) even though chatbots are not members of the symbolic order of spatiotemporal relations of the human world, we treat them in this way in many respects.
dc.language.iso: en
dc.publisher: Springer Nature
dc.relation.ispartofseries: AI & Society
dc.relation.uri: http://doi.org/10.1007/s00146-020-01053-4
dc.rights: Copyright © 2020, Springer Nature
dc.subject: Tay
dc.subject: chatbot
dc.subject: artificial intelligence
dc.subject: cognitive distortion
dc.title: Failure of chatbot Tay was evil, ugliness and uselessness in its nature or do we judge it through cognitive shortcuts and biases?
dc.type: article
dc.identifier.doi: 10.1007/s00146-020-01053-4
dc.type.status: Peer-reviewed
dc.description.source: Web of Science
dc.identifier.wos: 000565477900001


Files in this item

There are no files associated with this item.
