Monday, January 29, 2018

More than a Book Review: Artificial Intelligence in Conversation

      Sometimes I read a book and it feels like it needs more than a book review. A book review tells what you thought of the book; it does not include all of the other things you found yourself thinking while you were reading it. I love books that make me think, and sometimes those books deserve more than just a simple review. The book I am currently reading is such a book.
     The Most Human Human by Brian Christian is an interesting take on what it means to communicate as a human. Mr. Christian was selected to be one of the human participants in an annual Turing test. The Turing test is used to judge artificial intelligence (AI): a panel sits at computers and "talks" to someone who responds back. Responding to the panel are both real humans and computer programs, and the aim is to see whether a program can fool the panel into thinking it is a real person. From this contest has grown another award, "The Most Human Human," given, of course, to the responder who seems least like a computer. Our author's goal is to win the Most Human Human award.
       It seems that AI conversation is usually assembled from bits taken from many different conversations. For example, "Hello" is quite often answered by, "Hi, how are you?" When the computer sees how frequently this reply occurs, it can assume the reply is acceptable and usable. It is easy to see how a program might fake being human on this basis. One of the problems with this type of answer is that by always choosing the most popular reply the program has no stance. In other words, the computer is not pretending to be a person; it is pretending to be humanity. To quote the book, "Fragmented humanity is not humanity." The bot is not offering itself as a character, so it is easy for it to stumble when asked personal questions. For example, you might ask a bot its gender and it might reply that it is male; when you ask if it has a significant other it might reply that, yes, it plans on marrying him next year; but when asked if it is gay, the answer might be no. Each time a bot "speaks," it is responding only to the last thing that was said. It does not track the conversation as a whole, so general themes can be missed.
       It is interesting that so much of what we think of as relationship-building conversation is really just response. While reading this book I came across a new word: fungible. It means that you can substitute one individual for another without noticing the difference. Much of our conversation is fungible. Another point I found interesting is that in negative conversation it is much easier for a bot to take the place of a human, because when we argue our responses are reactionary; they do not depend as much on the whole conversation, just on the last statement. Apparently someone programmed a bot to give more negative, argumentative responses, and it carried on a conversation with someone for an hour and a half, effectively passing the Turing test.
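       Both of those points, picking the most popular reply and reacting only to the last statement, are easy to see in a toy example. The sketch below is entirely my own illustration, not code from the book or from any actual contest entry, and the little "corpus" of prompt-and-reply pairs is made up. The bot simply returns the most common reply it has seen for the last message, with no memory of anything it said earlier, which is exactly why it can contradict itself about its own life.

    from collections import Counter, defaultdict

    # Toy "conversation corpus": (prompt, reply) pairs the bot has "overheard".
    # These are invented examples, not data from the book or the contest.
    corpus = [
        ("Hello", "Hi, how are you?"),
        ("Hello", "Hi, how are you?"),
        ("Hello", "Hey there."),
        ("Are you male or female?", "I'm male."),
        ("Do you have a significant other?", "Yes, we're getting married next year. I love him."),
        ("Are you gay?", "No, of course not."),
    ]

    # Count how often each reply follows each prompt.
    replies = defaultdict(Counter)
    for prompt, reply in corpus:
        replies[prompt][reply] += 1

    def respond(last_message):
        """Return the most common reply to the last message only.
        No memory of earlier turns, so no consistent persona."""
        if last_message in replies:
            return replies[last_message].most_common(1)[0][0]
        return "Interesting. Tell me more."  # fallback when nothing matches

    # Each answer is locally plausible, but taken together they contradict
    # each other, because the bot never tracks what it already claimed.
    for question in ["Hello",
                     "Are you male or female?",
                     "Do you have a significant other?",
                     "Are you gay?"]:
        print(question, "->", respond(question))

       Run it and the bot cheerfully claims to be male, engaged to a man, and not gay, all within four turns, which is more or less the kind of failure the book describes.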
      While I was reading this book I found these ideas credible, and I will try to be more present in my conversations and less fungible. Also, the next time I argue with someone, I will try to keep the argument about the original topic instead of just reacting to the last comment.
