openai-whisper-api
moltbot
Transcribe audio via OpenAI Audio Transcriptions API (Whisper).
Quick Install
bunx add-skill moltbot/skills -s openai-whisper-api

Tags: archive, backup, clawdbot, clawdhub, skill
Instructions
Transcribe an audio file via OpenAI’s /v1/audio/transcriptions endpoint.
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
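Under the hood this presumably wraps a multipart upload to the endpoint, roughly like the curl sketch below (an illustration, not the script's actual implementation; assumes OPENAI_API_KEY is set in the environment):

# Minimal sketch of the underlying API call: multipart form upload
# of the audio file plus the model name. The real script may differ.
curl -sS https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F file=@/path/to/audio.m4a \
  -F model=whisper-1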
Defaults:
- Model: whisper-1
- Output: <input>.txt

{baseDir}/scripts/transcribe.sh /path/to/audio.ogg --model whisper-1 --out /tmp/transcript.txt
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --language en
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --prompt "Speaker names: Peter, Daniel"
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --json --out /tmp/transcript.json
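With --json, the script writes the API's JSON response instead of plain text. Assuming the transcript sits in a top-level "text" field (as the API's standard json response format returns), it can be pulled out with jq:

# Assumption: the JSON output has a top-level "text" field
jq -r '.text' /tmp/transcript.json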
Set OPENAI_API_KEY, or configure it in ~/.clawdbot/clawdbot.json:
{
  "skills": {
    "openai-whisper-api": {
      "apiKey": "OPENAI_KEY_HERE"
    }
  }
}
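Alternatively, export the key for the current shell session before running the script (the key value below is a placeholder):

export OPENAI_API_KEY="sk-your-key-here"   # placeholder, not a real key
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a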