India, with a population of more than one billion, inevitably has a large number of people with disabilities in both rural and urban areas. Speech impairment is common among people who have had hearing loss since birth, and about 27% of the disabled population have movement constraints that confine them to wheelchairs. This paper proposes an AI-based assistive device for this population. The device applies standard Internet of Things (IoT) principles: it fetches data from g-sensors mounted on hand gloves worn by the user and converts the recognized gestures into speech output. The same pipeline can also assist the movement-impaired by driving a pick-and-place bot. The core model behind the device is an artificial neural network trained with the Learning Vector Quantization (LVQ) algorithm, which performs multi-class classification. The basic objective is to map a set of g-sensor input vectors to a specific label, which in turn initiates a bot action or produces a speech response.
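To make the classification step concrete, the following is a minimal sketch of an LVQ1-style classifier in Python that maps windowed g-sensor feature vectors to gesture labels. The feature layout, gesture labels, and hyperparameters here are illustrative assumptions, not values from the paper.

```python
# Minimal LVQ1 sketch: maps g-sensor feature vectors to gesture labels.
# Feature layout, gesture labels, and hyperparameters are illustrative
# assumptions, not taken from the paper.
import numpy as np

class LVQ1:
    def __init__(self, n_prototypes_per_class=2, lr=0.05, epochs=50, seed=0):
        self.k = n_prototypes_per_class
        self.lr = lr
        self.epochs = epochs
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        # Initialise prototypes from random training samples of each class.
        protos, labels = [], []
        for c in self.classes_:
            idx = self.rng.choice(np.flatnonzero(y == c), self.k, replace=False)
            protos.append(X[idx])
            labels.extend([c] * self.k)
        self.protos_ = np.vstack(protos)
        self.proto_labels_ = np.array(labels)
        for epoch in range(self.epochs):
            lr = self.lr * (1 - epoch / self.epochs)  # decaying learning rate
            for xi, yi in zip(X, y):
                j = np.argmin(np.linalg.norm(self.protos_ - xi, axis=1))
                # Move the winning prototype toward the sample if the labels
                # match, otherwise push it away (classic LVQ1 update rule).
                sign = 1.0 if self.proto_labels_[j] == yi else -1.0
                self.protos_[j] += sign * lr * (xi - self.protos_[j])
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        d = np.linalg.norm(X[:, None, :] - self.protos_[None, :, :], axis=2)
        return self.proto_labels_[np.argmin(d, axis=1)]

# Usage: each row is a hypothetical windowed g-sensor feature vector,
# e.g. mean tilt readings on the x/y/z axes for one glove gesture.
X_train = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1],   # gesture "water"
                    [0.1, 0.9, 0.1], [0.0, 0.8, 0.2]])  # gesture "help"
y_train = np.array(["water", "water", "help", "help"])
model = LVQ1().fit(X_train, y_train)
print(model.predict([[0.85, 0.15, 0.05]]))  # -> ["water"]
```

In a deployment along the lines described above, the predicted label would then be routed either to a text-to-speech module or to the pick-and-place bot controller.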