openai-whisper-api
clawdbot
Transcribe audio via OpenAI Audio Transcriptions API (Whisper).
Quick Install
bunx add-skill clawdbot/clawdbot -s openai-whisper-api

Tags: ai, assistant, crustacean, molty, openclaw, own-your-data
Instructions
Transcribe an audio file via OpenAI’s /v1/audio/transcriptions endpoint.
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
Defaults:

- Model: whisper-1
- Output: <input>.txt

Override either with --model and --out:

{baseDir}/scripts/transcribe.sh /path/to/audio.ogg --model whisper-1 --out /tmp/transcript.txt
Specify the audio language:

{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --language en

Pass a prompt to guide the transcription, e.g. speaker names or domain terms:

{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --prompt "Speaker names: Peter, Daniel"

Request the full JSON response and write it to a file:

{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --json --out /tmp/transcript.json
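For reference, a roughly equivalent raw call to the endpoint with curl (a sketch; the exact form fields the script sends are an assumption):

# Sketch of the underlying request, assuming the standard
# multipart fields of the OpenAI Audio Transcriptions API
curl -s https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F model=whisper-1 \
  -F file=@/path/to/audio.m4a \
  -F response_format=text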
Set OPENAI_API_KEY, or configure it in ~/.clawdbot/moltbot.json:
{
  skills: {
    "openai-whisper-api": {
      apiKey: "OPENAI_KEY_HERE"
    }
  }
}
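Or export the key in your shell before running the script (the key value below is a placeholder):

# Assumed placeholder; substitute your real OpenAI API key
export OPENAI_API_KEY="sk-..."
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a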