(Reading time: 1 - 2 minutes)

I'm a big fan of voice technology. I think it's awesome to add something to the shopping list by yelling from across the room, and I love the potential voice technology has for helping our elderly stay independent. But here is what I don't like, and I really hope this doesn't sound too shallow: I don't enjoy the voices that smart speakers use. I find them unappealing, and I struggle with Alexa's missed intonations and lack of pauses.

But, and it's a huge but, what if the voices were not robotic? Did you see Google's demonstration of their Artificial Intelligence? (Google I/O 2018: A Google Assistant that will even make calls for you) The voice was so good that it apparently fooled the person it called on the phone. And what a storm that unleashed! People either thought it was pretty cool, or they cried foul because an Artificial Intelligence had just fooled a human. It's that second reaction I want to discuss. Is the ability of an Artificial Intelligence to fool a human a moral issue?

Obviously, the technology is available for smart speakers to have nearly any voice they want, but the companies are currently choosing robotic ones. Is there any morality at all tied to the voices of voice technology? Could it actually be wrong for a robot to sound convincingly human? After the reactions to the demonstration, Google now says it will have its Artificial Intelligence identify itself as non-human so that no humans are fooled. What do you think? Let's start a conversation on Twitter. Use the hashtag #TheVoicesOfVoice, or send a tweet to @CreateMyVoice. I'd like to hear your thoughts. I'll look for you on Twitter!

Let "Justin (US)" read the post to you :