Sign Language Recognition

MIT License

Recognize American Sign Language (ASL) using Machine Learning.

Currently, the following algorithms are supported: svm (Support Vector Machine), knn (k-Nearest Neighbours), and logistic (Logistic Regression).

The training images were retrieved from a video, filmed at 640x480 resolution using a smartphone camera.


  • Install Python 3 (last tested on Python 3.7).
  • Install pipenv.
  • In the project root directory, execute pipenv sync.


You can directly start classifying new images using the pre-trained models (the .pkl files in data/generated/output/<model_name>/), which were trained on the above dataset:

  python <model-name>

Note that the pre-generated model files do not include the one for knn due to its large size.

If you want to use knn, download it separately from here and place it in data/generated/output/knn/.

The models available by default are svm and logistic.
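Since the models are stored as plain .pkl files, classifying a new image essentially reduces to unpickling a classifier and calling predict() on a feature vector. The sketch below illustrates that round trip under stated assumptions: it trains a throwaway scikit-learn logistic model on synthetic vectors and pickles it to a temporary file, which stands in for a real file under data/generated/output/logistic/ (the actual file names and feature pipeline in this repository are not shown here).

```python
import pickle
import tempfile
from pathlib import Path

import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a throwaway model on synthetic data; the vectors stand in for
# flattened image features and the 0/1 labels for gesture classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression(max_iter=200).fit(X, y)

# Stand-in for a real file such as data/generated/output/logistic/<...>.pkl.
pkl_path = Path(tempfile.mkdtemp()) / "model.pkl"
with open(pkl_path, "wb") as f:
    pickle.dump(model, f)

# Classifying a "new image" is then just unpickle-and-predict.
with open(pkl_path, "rb") as f:
    clf = pickle.load(f)
prediction = clf.predict(X[:1])
print(prediction)
```

The same pattern applies to the svm and knn models; only the directory under data/generated/output/ changes.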

The above workflow can be executed using

However, if you wish to use your own dataset, perform the following steps:

  1. Put all the training and testing images in a directory and update their paths in the config file code/common/

    (Or skip this step to use the default paths, which should also work.)

    Optionally, you can generate the images in real time from the webcam - python
  2. Generate image-vs-label mappings for all the training images - python train.
  3. Apply the image-transformation algorithms to the training images - python
  4. Train the model - python <model-name>. Model names can be svm/knn/logistic.
  5. Generate image-vs-label mapping for all the test images - python test.
  6. Test the model - python <model-name>.

    Optionally, you can test the model on a live video stream from a webcam - python

    (If recording, make sure to have the same background and hand alignment as in the training images.)

All the Python commands above must be executed from the code/ directory.
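The steps above can be sketched end to end with scikit-learn. Everything concrete below is an assumption made for illustration: small random 16x16 arrays stand in for the real camera frames, a simple flatten stands in for the project's image-transformation step, and the label names are invented.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Steps 2 and 5: image-vs-label mappings, here a list of (image, label)
# pairs with 16x16 arrays standing in for the real frames.
labels = ["A", "B"]
images = [(rng.normal(loc=i, size=(16, 16)), lab)
          for i, lab in enumerate(labels) for _ in range(30)]

# Step 3: image transformation, reduced here to flattening each image
# into a feature vector (the project's actual transforms are not shown).
X = np.array([img.reshape(-1) for img, _ in images])
y = np.array([lab for _, lab in images])

# Steps 4 and 6: train a model (svm in this sketch) and test it on
# held-out images.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"test accuracy: {accuracy:.2f}")
```

Swapping SVC for KNeighborsClassifier or LogisticRegression gives the knn and logistic variants with the same surrounding workflow.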

The above workflow can be executed using


  • Improve the command-line-arguments input mechanism.
  • Add a progress bar while transforming images.
  • Add a logger.
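For the first item, one possible shape is a shared argparse parser that validates the model name against the three models this README names. The flag names and defaults below are illustrative, not part of the current code.

```python
import argparse

# The three model names documented in this README.
SUPPORTED_MODELS = ("svm", "knn", "logistic")


def build_parser() -> argparse.ArgumentParser:
    """Build a parser that could be shared by the train/test scripts."""
    parser = argparse.ArgumentParser(
        description="Train or test a sign-language recognition model.")
    parser.add_argument("model", choices=SUPPORTED_MODELS,
                        help="which classifier to use")
    parser.add_argument("--data-dir", default="data/generated",
                        help="root directory for generated data "
                             "(illustrative default)")
    return parser


# Parsing a sample command line; an invalid model name would exit with
# a usage message instead of failing deep inside the script.
args = build_parser().parse_args(["svm", "--data-dir", "data/generated"])
print(args.model, args.data_dir)
```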