For centuries, sign language has served as a means of communication for people who are deaf or hard of hearing. Ben Saunders at the University of Surrey, UK, and his colleagues recently built SignGAN, a system that translates spoken language into sign language. It uses a neural network to map signs onto a 3D model of the human skeleton. AI that can generate photo-realistic videos of sign language interpreters directly from speech could boost accessibility by reducing dependence on human interpreters, and it is one more example of AI reshaping what is possible in communication.
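The core idea of the pipeline (spoken words mapped to a sequence of 3D skeletal poses, which a generative model would then render as video) can be sketched in toy form. Everything below is illustrative: the vocabulary, joint count, and pose data are invented stand-ins, not SignGAN's actual model or parameters.

```python
import numpy as np

NUM_JOINTS = 50  # assumed skeleton size; real systems vary

# Toy stand-in for a learned text-to-pose model: each known word maps
# to a short pose sequence of shape (frames, NUM_JOINTS, 3).
rng = np.random.default_rng(0)
POSE_DICT = {
    "hello": rng.standard_normal((8, NUM_JOINTS, 3)),
    "world": rng.standard_normal((10, NUM_JOINTS, 3)),
}

def text_to_pose(text: str) -> np.ndarray:
    """Concatenate per-word pose sequences; unknown words are skipped."""
    frames = [POSE_DICT[w] for w in text.lower().split() if w in POSE_DICT]
    if not frames:
        return np.empty((0, NUM_JOINTS, 3))
    return np.concatenate(frames, axis=0)

poses = text_to_pose("Hello world")
print(poses.shape)  # (18, 50, 3): 18 frames of 50 joints in 3D
```

In the real system a GAN conditioned on such pose sequences would synthesize the photo-realistic video frames; here the pose sequence is the end product.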