There’s nothing exciting about a digital blackjack table. All the casinos have it. Even Finland’s casinos have it. Sad.
But what if… there was a robot acting as the croupier?
Yes. You heard me right. We made a digital blackjack table. With a robot as croupier. And we brought it to Slush to show it off to other people who enjoy tech magic.
Attribution: Futurice, CC BY 4.0
Does this sound like the greatest possible use of a robot? It sure was not the greatest possible use of a robot. The future of robot-human interaction (hopefully) isn’t in gambling, an activity arguably harmful to humans, or in simple interactions like facilitating a game, where the social interaction is based solely on a small set of rules. But the experiment was very useful for exploring the possibilities of human-robot interaction in a controlled setting, and Slush gave us a great opportunity to test how this format of physical-digital interface would work.
Our blackjack game ran on a laptop, connected to a TV mounted in a custom wooden blackjack table, and to our robot. The laptop sent commands to the robot and the TV as the game progressed. A player could either stay or hit. Splitting, insurance, and surrender were excluded from this blackjack game to keep it easy for newbies to learn. Stay was signaled by moving a hand over a LeapMotion sensor integrated into the table; hit was signaled by knocking on the table.
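For the curious, that stripped-down rule set fits in a few lines. This is a hypothetical sketch, not our actual game code: the function names are mine, and card ranks are assumed to run from 1 (ace) to 13 (king).

```python
# Hypothetical sketch of the simplified rules: only hit/stay, no split,
# insurance, or surrender. Ranks run 1 (ace) to 13 (king).

def hand_value(ranks):
    """Best blackjack value of a hand; one ace may count as 11."""
    total = sum(1 if r == 1 else min(r, 10) for r in ranks)  # aces as 1, face cards as 10
    if 1 in ranks and total + 10 <= 21:
        total += 10  # promote one ace from 1 to 11 if it doesn't bust
    return total

def resolve_gesture(knock_detected, hand_over_sensor):
    """Map the table's two sensor events to a player action."""
    if knock_detected:        # knocking on the table means "hit"
        return "hit"
    if hand_over_sensor:      # a hand over the LeapMotion sensor means "stay"
        return "stay"
    return None               # no input this frame
```

With only two gestures and no edge-case rules, even first-time players could learn the interface in one round.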
There were no sensors integrated into the robot itself. It did not collect input from its surroundings; it only did what the game running on the laptop told it to do. You could say the robot’s “brain” was in the laptop and its “senses” were in the blackjack table. This is important to point out, because people (unconsciously) see humanoid robots as being like themselves, with the brain and senses integrated into the body. People imagine robots as self-contained input-output systems and assume they have certain skills, like speech recognition or machine vision. These assumptions affect people’s behavior toward the robot, and thus how the interaction flow must be designed.
Testing human-robot interaction in a familiar situation like playing blackjack proved to be a good platform for research. The role of the human croupier in this situation is one of stoic silence. Unless they’re looking for a tip, croupiers are unlikely to engage in lively flirtation. This made the role of croupier perfect for our robot, who is sadly unable to imitate creative thought. Pre-written lines worked well for this use case.
The venue we were at worked for us. A busy atmosphere with lots of people made it forgivable that the robot’s voice was barely audible over the crowd’s noise. The people didn’t expect to converse with it, because they, quite simply, couldn’t hear it.
Four things I learned when fiddling with this contraption (and observing how people reacted to it and interacted with it at Slush):
Robot gaze direction and hand motion are very effective in directing users’ attention
While this may sound obvious, it was surprising to see how well it worked in practice. Our robot’s head rested in a downward-facing position, so it always seemed to be looking at the blackjack table. This was effective in directing people’s attention toward the game, as became clear whenever people interacted with the robot: they looked at the robot only while listening to it, and looked at the blackjack table when the robot looked or pointed at it. Effectively, people interacted with the robot as if it were a human.
Use people’s assumptions about robots to your advantage
People were impressed by how smooth the user experience was and how skillfully the robot directed the game. They enjoyed how nicely the robot’s hand motions and speech synced with the movements of the digital cards. Users seemed to perceive the robot as directing the game the way a human croupier would, through physical interaction with the cards. In reality, the robot’s brain (the laptop) was directing both the physical robot body and the game running on the TV. The effect relied solely on how people perceived the robot, not on what the robot was actually doing. We had created an illusion.
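The trick of one brain driving two “bodies” can be sketched as a single game event fanning out to both outputs. All class and method names below are hypothetical, for illustration only; the point is that the gesture, the card animation, and the speech are fired from the same event, so they stay in sync.

```python
# Illustrative only: one game event drives both the robot's gesture and the
# TV animation, so the robot appears to be moving the cards itself.

class GameDirector:
    def __init__(self, robot, display):
        self.robot = robot      # controller for the physical robot body
        self.display = display  # renderer for the digital cards on the TV

    def deal_card(self, player, card):
        # Gesture, animation, and speech come from the same event, which
        # keeps them synced and sustains the illusion of a physical croupier.
        self.robot.point_at_table()
        self.display.animate_card(card, to=player)
        self.robot.say(f"A card for {player}.")
```

Because no timing information ever has to pass between the robot and the TV, they cannot drift out of sync: the laptop is the single source of truth for both.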
The boundaries of a robot are not clearly defined
Unlike humans, a robot does not need to be contained in one body. It can have external brains and external senses, and with the help of wireless communication, those senses and brains don’t even need to be located nearby. This opens up huge capabilities for a robot: it could sense the temperature of the water surrounding an oil rig in Norway, or make difficult calculations by consulting a supercomputer in China. Does this make those sensors and supercomputers part of the robot? Where are its boundaries? What defines a robot? Interesting questions that will be explored in the years to come.
The (near) future of robotics may be in integrated solutions where robot capabilities are synced with mobile phones or tablets
As I detailed above, our robot was not autonomous in the way a human is – it didn’t have its own brain or senses. Wouldn’t it be cool if robots were autonomous, mobile, social beings? It would, but those are probably not the robots of tomorrow. Probably not of the next few years either, I’m sorry to say. What we may see instead is an increase in screen-based solutions integrating with physical interaction interfaces. Maybe your voice assistant will live in a small mascot on your nightstand that shakes its mechanical bum whenever someone booty calls you. Maybe you’ll be able to play your video games against an AI, with a physical manifestation of the AI sitting next to you on the sofa, screaming swears. Maybe your Roomba will read you the morning news from your news app (though I’m not sure you’d hear it over the hoovering). Who knows. But I am convinced that in the next few years, our screens will start reaching out to us in the physical world.