Sevue is a desktop application that turns live sign input into subtitle-like text over video and publishes the result to a virtual camera.
The app is built with PySide6, OpenCV, MediaPipe, and pyvirtualcam. It runs locally on your machine and is designed for low-latency, always-on desktop use.
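At a high level, the app loops over capture, gesture inference, subtitle overlay, and virtual-camera output. The sketch below is illustrative only: the three callables are placeholders for what the real app does with OpenCV, MediaPipe, and pyvirtualcam, and Sevue itself splits this work across threads rather than running one loop.

```python
def run_pipeline(capture_frame, recognize_gesture, publish_frame):
    """Schematic single-threaded version of the capture -> infer -> publish loop.

    capture_frame()            -> frame, or None when the camera stops
    recognize_gesture(frame)   -> recognized text, or "" when nothing is seen
    publish_frame(frame, text) -> sends the overlaid frame to the virtual cam
    """
    subtitle = ""  # last recognized text persists until replaced
    while True:
        frame = capture_frame()
        if frame is None:
            break
        text = recognize_gesture(frame)
        if text:
            subtitle = text
        publish_frame(frame, subtitle)
    return subtitle
```

Keeping the last subtitle on screen between recognitions is what makes the output read like captions instead of flickering per-frame labels.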
Clone the project:

```shell
git clone https://github.com/codeweevers/sevue.git
cd sevue
```
Create and activate a virtual environment, for example with conda:

```shell
conda create -n sevue python=3.12
conda activate sevue
```
Install the requirements:

```shell
pip install --upgrade pip
pip install -r requirements.txt
```
Run the app:

```shell
python sevue.pyw
```
Start Sevue. In your video application, select Sevue-VirtualCam (or your configured virtual cam target).

Project layout:

- `sevue.pyw`: app entrypoint and single-instance lock/activation server
- `controllers/main_window_controller.py`: main orchestration (UI state, workers, tray, camera/model selection)
- `models/state_model.py`: persisted app state, settings, shortcuts, model registry, selected camera/model
- `models/frame_buffer.py`: shared frame buffer between the camera and AI workers
- `workers/threads.py`:
  - `CameraThread`: capture, overlay rendering, virtual camera output
  - `AIThread`: gesture inference and subtitle text generation
- `workers/camera_utils.py`: cross-platform camera discovery and metadata
- `services/model_registry_service.py`: model import/validation/registry management
- `services/startup_service.py`: start-on-login integration (Windows/Linux)
- `views/`: Home/Settings pages and UI widgets
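The shared frame buffer in `models/frame_buffer.py` hands frames from `CameraThread` to `AIThread`. A minimal sketch of that pattern (not the actual implementation) keeps only the most recent frame, so inference always runs on fresh input instead of a growing backlog:

```python
import threading

class FrameBuffer:
    """Latest-frame exchange between a producer and a consumer thread.

    Illustrative sketch: the camera thread overwrites the single slot on
    every capture; the AI thread waits for a frame newer than the one it
    last processed. Stale frames are dropped, keeping latency low.
    """

    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None
        self._seq = 0  # incremented on every write

    def put(self, frame):
        """Called by the camera thread for each captured frame."""
        with self._cond:
            self._frame = frame
            self._seq += 1
            self._cond.notify_all()

    def get_latest(self, last_seen=0, timeout=1.0):
        """Block until a frame newer than `last_seen` arrives.

        Returns (seq, frame), or (last_seen, None) on timeout.
        """
        with self._cond:
            if not self._cond.wait_for(lambda: self._seq > last_seen, timeout):
                return last_seen, None
            return self._seq, self._frame
```

The AI worker would loop on `get_latest()` with the sequence number it last saw, while the camera worker calls `put()` unconditionally.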
Sevue saves its state (settings, shortcuts, model registry, selected camera/model) to a config file.

Config location:

- OS-specific user config directory (via `platformdirs`)
- `data/config.json` in the project root

Models are managed by `services/model_registry_service.py`; custom `.task` models can be imported from Settings.

Default configurable shortcuts:
- `Ctrl+Shift+S`: start/stop camera
- `Ctrl+Shift+M`: hide/show window
- `Ctrl+Shift+C`: flip camera
- `Ctrl+Shift+O`: flip subtitles
- `Ctrl+Shift+H`: flip hand labels
- `Ctrl+Shift+D`: toggle hand debug

Also supported:
- `Esc`: window hide/show behavior

A PyInstaller spec is included at:
- `train_installer_gen/sevue.spec`

Windows installer-related assets/scripts are in:
- `train_installer_gen/Install_SevueCam.bat`
- `train_installer_gen/Uninstall_SevueCam.bat`
- `train_installer_gen/installer scrypt.iss`

Training can only be done on Linux and requires Python 3.10 or 3.11. This process is separate from the runtime app and is intended for creating and exporting custom gesture models.
Install the model maker:

```shell
pip install mediapipe-model-maker
```
Run `train_installer_gen/train.py` to start model training.
Note: edit the script to set the path to your dataset folder. The dataset should be structured as folders of images, where each folder name is the gesture label (e.g., `thumbs_up/`, `wave/`).

Run `make_none.py` to generate the required "none" files; you will need to edit that script to point the dataset folder path at your own data.

Training exports the model to `exported_model/gesture_recognizer.task`. Import the `.task` file using the choose model option in the app's Settings.
See LICENSE.