Open Source Whisper Assistant

The latest release is live.

Whisper-fast dictation under a starry sky.

OSWispa turns your voice into text instantly with a privacy-first, offline pipeline. Hold your configured hotkey, speak, release, and keep writing.

Local-first

Runs offline by default, with optional VPS routing.

GPU ready

CUDA, ROCm, and Metal support.

Push-to-talk

Fast capture without leaving your app.

Listening (local session)

Transcript

"Ship the update, then open the editor. Add a quick note about the new hotkey."

Auto paste enabled. Clipboard ready.

  • Latency: 0.7 s (median on a mid-tier GPU)
  • Model: distil-large-v3 (best speed-to-accuracy pick)
  • Release: latest (reliable paste and audio capture on GNOME Wayland)

Release

The latest release is published.

New: first-run setup wizard detects your GPU, recommends the right model, and downloads it automatically. Available as .deb, .rpm, macOS .zip, or build from source. No terminal skills needed — just download, install, and run.

  • Status: Alpha
  • Primary OS: Linux & macOS
  • Backend: Local + Remote (opt-in)
  • Hotkey: Configurable

Why OSWispa

Voice to text without the cloud tax.

Built for builders who want speed, privacy, and a simple hotkey workflow.

Local by design

Everything runs on your machine. No accounts, no telemetry, no network dependency.

  • Works offline
  • Clipboard safe
  • MIT licensed

Speed you can feel

Whisper.cpp plus optional GPU acceleration keeps transcription snappy.

  • CUDA, ROCm, Metal
  • Fast hotkey capture
  • Optimized models

Hands stay on the keyboard

Push-to-talk, release, and the text lands right where you are typing.

  • Global hotkey
  • Auto paste ready
  • Multilingual options

Workflow

Record in seconds, edit in context.

OSWispa is made for daily writing, coding, and note capture. It respects your focus and your system.

1. Hold the hotkey: press your configured hotkey to start recording.
2. Speak naturally: use your normal voice, and switch models to trade speed for accuracy.
3. Release to paste: text is typed into your active window or copied to the clipboard.

System status: Ready

  • Platform: Linux & macOS
  • Backend: ROCm (gfx1100)
  • Model cache: 1.6 GB
  • Hotkey: Ctrl + Super

Supported platforms: Ubuntu/Debian, Fedora/Arch, and macOS.

Install

Get running in minutes.

Download a package for your OS, or build from source. First launch guides you through model setup.

Download a package

Grab the right package for your system from GitHub Releases. On first launch, OSWispa detects your GPU and walks you through model setup.

# Ubuntu/Debian
sudo apt install ./oswispa_amd64.deb

# Fedora/RHEL
sudo dnf install ./oswispa_x86_64.rpm

# macOS: unzip and copy to PATH
sudo cp oswispa /usr/local/bin/

# Or build from source
git clone https://github.com/tylerbuilds/OSWispa.git
cd OSWispa
./install.sh

All packages are CPU-only. For GPU acceleration, build from source with ./install.sh.
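After installing by any of the methods above, a quick sanity check confirms the binary is reachable from your shell:

```shell
# Check whether the oswispa binary is on PATH (works for any install method)
OSWISPA_BIN=$(command -v oswispa || true)
if [ -n "$OSWISPA_BIN" ]; then
  echo "oswispa found at: $OSWISPA_BIN"
else
  echo "oswispa not on PATH; check /usr/local/bin or your package manager's install prefix"
fi
```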

First-run setup wizard

On first launch, OSWispa detects your GPU and VRAM, recommends the best Whisper model, and downloads it with a progress bar. No manual setup needed.

# Just run it — the wizard handles everything
oswispa
# Detects GPU, recommends model, downloads automatically

Works on NVIDIA, AMD, Apple Silicon, and CPU-only systems.
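The wizard's exact detection logic lives inside the app, but you can get a rough preview of what it is likely to find with a check like this (illustrative only, not OSWispa's actual code):

```shell
# Rough preview of GPU backend detection -- illustrative, not OSWispa's actual logic
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_HINT="NVIDIA (CUDA)"
elif command -v rocminfo >/dev/null 2>&1; then
  GPU_HINT="AMD (ROCm)"
elif [ "$(uname -s)" = "Darwin" ]; then
  GPU_HINT="Apple Silicon (Metal)"
else
  GPU_HINT="CPU-only"
fi
echo "Detected backend hint: $GPU_HINT"
```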

Models

Pick the model that fits your day.

The setup wizard recommends the best model for your hardware. You can also switch models later in Settings.

Model            Size     Speed      Best for
base.en          142 MB   Fast       Quick dictation
medium.en        1.5 GB   Balanced   Everyday writing
distil-large-v3  1.5 GB   Fast       Best all-around
large-v3         2.9 GB   Slow       Highest accuracy
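Larger models need room in the cache. Before picking large-v3 (2.9 GB), it is worth checking free space in the directory where models are stored (~/.local/share/oswispa/models/):

```shell
# Check free space in the model cache before downloading large-v3 (2.9 GB)
MODELS_DIR="$HOME/.local/share/oswispa/models"
mkdir -p "$MODELS_DIR"   # safe if it already exists
df -h "$MODELS_DIR"
```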

Need help choosing?

See the model guide for hardware limits and download tips.

Open model guide

FAQ

Common questions answered.

Troubleshooting tips and setup guidance for OSWispa.

OSWispa doesn't start on login

The most common cause is a missing or unreachable model file. Check your config at ~/.config/oswispa/config.json and verify the model_path exists. Models should be stored locally at ~/.local/share/oswispa/models/.
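Assuming the config stores the path under a "model_path" key as described above, a quick check looks like this:

```shell
# Verify the configured model file actually exists
CFG="$HOME/.config/oswispa/config.json"
if [ -f "$CFG" ]; then
  MODEL_PATH=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1])).get('model_path',''))" "$CFG")
  if [ -f "$MODEL_PATH" ]; then
    echo "model OK: $MODEL_PATH"
  else
    echo "model missing or unset: '$MODEL_PATH'"
  fi
else
  echo "no config found at $CFG"
fi
```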

For reliable autostart, use a systemd user service instead of the desktop file:

# Create the service
cat > ~/.config/systemd/user/oswispa.service <<EOF
[Unit]
Description=OSWispa Voice-to-Text
After=graphical-session.target

[Service]
Type=simple
ExecStart=/usr/local/bin/oswispa
Restart=on-failure

[Install]
WantedBy=default.target
EOF

# Enable and start
systemctl --user daemon-reload
systemctl --user enable oswispa
systemctl --user start oswispa
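If the service still fails to start, the user journal usually shows why (this requires a systemd user session):

```shell
# Show the most recent oswispa service logs (requires a systemd user session)
journalctl --user -u oswispa -n 20 --no-pager 2>/dev/null \
  || echo "no user journal available; check that a systemd user session is running"
```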

Model not found error

OSWispa exits immediately if the configured model file is missing. If your models are on a network drive or external mount, they may not be available at login time.

Fix: Copy models to local storage:

mkdir -p ~/.local/share/oswispa/models
cp /path/to/your/model.bin ~/.local/share/oswispa/models/

Then update ~/.config/oswispa/config.json to point to the local path.
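If you would rather not hand-edit the JSON, a small script can rewrite model_path for you. This is a sketch: "model.bin" is a placeholder for your actual model filename, and the script creates a minimal config if none exists yet.

```shell
# Point model_path at the local copy (creates a minimal config if none exists)
CFG="$HOME/.config/oswispa/config.json"
mkdir -p "$(dirname "$CFG")"
[ -f "$CFG" ] || echo '{}' > "$CFG"
python3 - "$CFG" <<'PY'
import json, os, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
# "model.bin" is a placeholder; substitute your actual model filename
cfg["model_path"] = os.path.expanduser("~/.local/share/oswispa/models/model.bin")
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
print("model_path set to", cfg["model_path"])
PY
```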

App closes when terminal closes

If you're running oswispa from a terminal, closing that terminal kills the app. Use the systemd service approach above, or launch with nohup:

nohup oswispa & disown

The systemd service is recommended for production use.

Text doesn't paste automatically

OSWispa types text directly via ydotool. For best reliability, enable the service:

sudo systemctl enable ydotool
sudo systemctl start ydotool

Text is also copied to the clipboard as backup. Use Ctrl + V if needed.

How do I change the hotkey?

Open Settings from the tray icon, then configure modifiers and an optional trigger key. You can also edit ~/.config/oswispa/config.json directly:

"hotkey": {
  "ctrl": true,
  "alt": false,
  "shift": true,
  "super_key": false,
  "trigger_key": "space"
}

Changes apply immediately.
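A malformed edit (a stray comma, a missing quote) will break the config, so it is worth validating the JSON after hand-editing:

```shell
# Validate the config file after hand-editing the hotkey block
CFG="$HOME/.config/oswispa/config.json"
if python3 -m json.tool "$CFG" >/dev/null 2>&1; then
  echo "config is valid JSON"
else
  echo "config is missing or has a JSON syntax error: $CFG"
fi
```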

How do I check if OSWispa is running?

Check the systemd service status or look for the process:

systemctl --user status oswispa
# or
pgrep -f oswispa

The system tray icon also indicates active status.

Can I use a VPS instead of local models?

Yes. In Settings, switch Backend mode to Remote VPS, set your endpoint, and optionally store an API key securely. HTTPS is required by default.

OSWispa remains local-first and can fall back to local models when available.
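Before switching the backend, a quick reachability check against your endpoint can save debugging time. The hostname below is a placeholder; substitute your server's actual URL:

```shell
# Check that the remote endpoint answers over HTTPS (hostname is a placeholder)
ENDPOINT="https://your-vps.example.com"
if curl -fsS --max-time 5 "$ENDPOINT" >/dev/null 2>&1; then
  echo "endpoint reachable: $ENDPOINT"
else
  echo "endpoint unreachable: $ENDPOINT; check DNS, TLS certificate, and firewall"
fi
```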

Contribute

Download, run, done. No dev skills needed.

Pre-built packages are available for Ubuntu, Fedora, and macOS, and the first-run wizard handles model setup. Help is most needed on Windows support and UI polish.