# MEDIOCRE-MUSIC
An AI tool for generating unique, cutting-edge musical compositions via hybridization.
Haydn crossed with Merzbow. Messiaen crossed with Burial. Babbitt crossed with Muslimgauze.
mediocre-music runs a coordinated pipeline of AI agents -- composer, QA critic, orchestrator, drum arranger, soundfont selector, ornament specialist -- that iteratively generate and refine compositions in ABC notation, then render them to MIDI and WAV. You control how long it runs and how weird it gets.
It's built as a training data generator for audio ML, but the output is genuinely interesting on its own.
## Hear It

- **Stochastic Voltage** -- Xenakis x Lightning Bolt
  (`stochastic-voltage-1767426307965.webm`)
- **Ionisation Infinitum: Noise Architecture for Orchestral Machines** -- Varese x Merzbow
  (`ionisation-infinitum-noise-architecture-for-orches-1771959392556.webm`)
- **Viennese Glitch Waltz** -- Strauss x Oneohtrix Point Never
  (`viennese-glitch-waltz-1768854783329.webm`)
- **Ride of the Hypercore Valkyries** -- Wagner x Speedcore
  (`ride-of-the-hypercore-valkyries-1767583620798.webm`)
- **Partchcore Genesis** -- Harry Partch x Happy Hardcore
  (`partchcore-genesis-1767859136656.webm`)
Full gallery with PDF scores and analysis available.
## Try It

```bash
export ANTHROPIC_API_KEY=your_key_here
mediocre generate \
  -C "Messiaen,Varese,Spectralism" \
  -M "Merzbow,Burial,Oneohtrix Point Never" \
  -s "Excessively Experimental" \
  --sequential --max-iterations 8 --stream-text
```
That's it. Walk away. Come back to rendered audio.
## What It Does
- Genre fusion at scale -- give it any classical composers and modern artists, and it figures out how to merge them
- Multi-agent pipeline -- 10+ specialized agents (composer, QA critic, orchestrator, drum arranger, soundfont selector, ornamentation, MIDI expression, title guard, genre researcher) run in coordination
- Iterative refinement -- the orchestrator scores each iteration and directs targeted improvements until the piece passes QA or hits your iteration limit
- Human-in-the-loop -- `--interactive` mode lets you approve, reject, redirect, listen, or branch at every iteration
- Full render pipeline -- ABC → MIDI → WAV → WebM, with PDF scores and structured JSON analysis for every piece
- Soundfont-aware -- custom TiMidity configs per composition, soundfont selected per genre
- Structured output -- composition agent uses Zod schemas to build valid ABC deterministically, not by hoping the LLM gets the syntax right
- Dataset-ready -- everything outputs to structured JSON alongside the audio for ML training
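The structured-output idea above can be sketched roughly as follows. This is a hypothetical, dependency-free TypeScript illustration of building a valid ABC tune header from typed fields; the project itself uses Zod schemas, and the names here (`AbcHeader`, `renderHeader`) are illustrative assumptions, not the project's actual API:

```typescript
// Hypothetical sketch: an ABC tune header assembled from typed fields, so the
// output is syntactically valid by construction rather than free-form LLM text.
interface AbcHeader {
  index: number;    // X: tune number
  title: string;    // T: title
  meter: string;    // M: time signature, e.g. "3/4"
  unitNote: string; // L: default note length, e.g. "1/8"
  key: string;      // K: key signature, e.g. "Amin"
}

function renderHeader(h: AbcHeader): string {
  return [
    `X:${h.index}`,
    `T:${h.title}`,
    `M:${h.meter}`,
    `L:${h.unitNote}`,
    `K:${h.key}`,
  ].join("\n");
}

const header = renderHeader({
  index: 1,
  title: "Viennese Glitch Waltz",
  meter: "3/4",
  unitNote: "1/8",
  key: "Amin",
});
console.log(header);
```

The same principle extends to note bodies: each field the model fills is validated against a schema before the ABC text is emitted, so malformed syntax is caught before rendering.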
## Install

Requires: Node.js 18+, `ANTHROPIC_API_KEY`, and these system tools:

```bash
# Debian/Ubuntu
apt install abcmidi abcm2ps ghostscript timidity fluidsynth sox ffmpeg

# macOS
brew install abcmidi abcm2ps ghostscript timidity fluidsynth sox ffmpeg

# NixOS
nix-shell -p abcmidi abcm2ps ghostscript timidity fluidsynth sox ffmpeg
```
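After installing, a quick sanity check confirms the render chain is on your PATH. This is a sketch for any POSIX shell; the binary names are assumptions based on the packages above (the `abcmidi` package provides `abc2midi`, and `ghostscript` provides `gs`):

```shell
# Report each required render tool as present or missing.
for tool in abc2midi abcm2ps gs timidity fluidsynth sox ffmpeg; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```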
## Key Flags

| Flag | What it does |
|---|---|
| `-C` | Classical composers/genres to fuse from |
| `-M` | Modern artists/genres to fuse from |
| `-s` | Style description |
| `--sequential` | Enable the multi-agent orchestration loop |
| `--max-iterations N` | Number of refinement cycles (default: 5) |
| `--interactive` | Pause at each iteration for human control |
| `--stream-text` | Watch the composition being written in real time |
| `-c N` | Generate N compositions |
| `--model` | Use a different model (works with `--proxy-url` + `--api-key` for any provider) |
## More

Full CLI reference, architecture docs, advanced usage, and troubleshooting in `DOCS.md`.