Sevue

Sevue is a desktop application that turns live sign input into subtitle-like text over video and publishes the result to a virtual camera.

The app is built with PySide6, OpenCV, MediaPipe, and pyvirtualcam. It runs locally on your machine and is designed for low-latency, always-on desktop use.
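The recognition-to-subtitle step can be sketched in isolation. The helper below is a hypothetical illustration (none of these names come from Sevue's code): it turns a stream of per-frame gesture labels into caption text by debouncing, appending a word only once its label has been stable for several consecutive frames.

```python
class CaptionBuilder:
    """Turn noisy per-frame gesture labels into stable caption text.

    A label is appended to the caption only after it has been seen on
    `stable_frames` consecutive frames, which filters out single-frame
    recognition flicker. "none" frames reset the streak.
    """

    def __init__(self, stable_frames=5, max_words=8):
        self.stable_frames = stable_frames
        self.max_words = max_words      # keep the overlay subtitle-sized
        self._candidate = None          # label currently being confirmed
        self._streak = 0
        self._words = []

    def feed(self, label):
        """Process one frame's label; return the current caption string."""
        if label in (None, "none"):
            self._candidate, self._streak = None, 0
            return self.caption()
        if label == self._candidate:
            self._streak += 1
        else:
            self._candidate, self._streak = label, 1
        # Append exactly once per stable run, skipping immediate repeats.
        if self._streak == self.stable_frames and (
            not self._words or self._words[-1] != label
        ):
            self._words.append(label)
            self._words = self._words[-self.max_words :]
        return self.caption()

    def caption(self):
        return " ".join(self._words)
```

In a real pipeline this would sit between the gesture recognizer and the text overlay: each camera frame's label is fed in, and the returned string is drawn over the frame before it is sent to the virtual camera.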

What the App Does

Quick Install

  1. Download the file for your OS from the Releases page.
  2. Run the installer as you normally would on your OS (usually a double-click).
  3. That's it: Sevue is installed like any other application and appears in your dock / Start menu.

Install From Source

Requirements

Steps

Clone the project:

git clone https://github.com/codeweevers/sevue.git
cd sevue

Create and activate a virtual environment, for example with conda:

conda create -n sevue python=3.12
conda activate sevue

Install the requirements:

pip install --upgrade pip
pip install -r requirements.txt

Run

python sevue.pyw

Basic Usage

  1. Launch Sevue.
  2. Click Start Sevue.
  3. In your conferencing/recording app, choose Sevue-VirtualCam (or your configured virtual cam target).

Runtime Architecture

Configuration and Persistence

Sevue saves:

Config location:

Models

Keyboard Shortcuts

Default configurable shortcuts:

Also supported:

Packaging

PyInstaller spec is included at:

Windows installer-related assets/scripts are in:

Training (Optional)

Training can only be done on Linux and must use Python 3.10 or 3.11. This process is separate from the runtime app and is intended for creating and exporting custom gesture models.

  1. Set up a compatible environment (Linux, Python 3.10 or 3.11) and install the required package:
    pip install mediapipe-model-maker
    
  2. Use the training script at train_installer_gen/train.py to start model training. Note: edit the script to point it at your dataset folder. The dataset should be structured as folders of images, where each folder name is the gesture label (e.g., thumbs_up/, wave/, etc.).
  3. Before or after training, run make_none.py to generate the required “none” files; edit that script as well to point it at your own dataset folder.
  4. When training completes, the exported model is written to exported_model/gesture_recognizer.task.
  5. Import the resulting .task file via the Choose Model option in the app’s settings.
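The folders-of-images layout from steps 2 and 3 can be sanity-checked before starting a long training run. This is a hypothetical stdlib helper, not part of Sevue or the training script; it assumes each immediate subfolder is one gesture label and that the “none” class exists.

```python
import os


def dataset_labels(dataset_dir):
    """Return the gesture labels implied by a folders-of-images dataset.

    Each immediate subfolder of `dataset_dir` is treated as one gesture
    label (e.g. thumbs_up/, wave/). Raises if the required "none" class
    is missing, since the training setup described above expects it.
    """
    labels = sorted(
        entry
        for entry in os.listdir(dataset_dir)
        if os.path.isdir(os.path.join(dataset_dir, entry))
    )
    if "none" not in labels:
        raise ValueError('dataset is missing the required "none" folder')
    return labels
```

Running this against your dataset folder before editing train.py makes it easy to confirm the label set the model will be trained on.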


Troubleshooting

License

See LICENSE.