In our project, we first run the program and show a sample image of a child with Down syndrome.
The camera then captures the image, and this is the captured image.
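As a minimal sketch of this capture step (assuming OpenCV is used for the camera, which is not stated in this walkthrough), grabbing a frame could look like this:

import cv2

# Open the default camera (index 0) and grab a single frame.
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
if ok:
    # Save the captured image so it can be shown and analyzed later.
    cv2.imwrite("captured_face.png", frame)
camera.release()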
The image is then analyzed by the CNN model, and the child's emotion is detected.
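A minimal sketch of this analysis step, assuming a Keras CNN saved as "emotion_cnn.h5" with 48x48 grayscale input and the five labels below (all assumptions; the actual model is not specified here), could look like:

import cv2
import numpy as np
from tensorflow.keras.models import load_model

# The model file name and label order are illustrative assumptions.
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]
model = load_model("emotion_cnn.h5")

def detect_emotion(image_path):
    # Preprocess: grayscale, resize to the assumed 48x48 input, scale to [0, 1].
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    face = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
    batch = face.reshape(1, 48, 48, 1)  # add batch and channel axes
    probabilities = model.predict(batch)[0]
    return EMOTIONS[int(np.argmax(probabilities))]

print(detect_emotion("captured_face.png"))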
The detected emotion is passed to our model to display the appropriate emotion layout.
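A minimal sketch of how the detected emotion might be mapped to a layout, with hypothetical layout names that are not taken from this walkthrough:

# Hypothetical emotion-to-layout mapping; the real layout names are part
# of the app's UI and are not given here.
EMOTION_LAYOUTS = {
    "happy": "layout_activities",   # e.g. puzzles and songs
    "sad": "layout_comfort",        # e.g. soothing content, chatbot
    "angry": "layout_calming",
    "neutral": "layout_home",
    "surprised": "layout_home",
}

def select_layout(emotion):
    # Fall back to the home layout for any unrecognized emotion.
    return EMOTION_LAYOUTS.get(emotion, "layout_home")

print(select_layout("happy"))  # -> layout_activities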
When the app is started, the camera captures the face, and the layout for the appropriate emotion is displayed.
The child or individual can access any of the options, like puzzles, songs, interactions, and so on.
The chatbot can also be accessed by the individual.