// Behind The Music
Who Am I
I’m bitwize—a hacker who loves music and experimenting with new technology. When AI music generation tools started getting good, I couldn’t resist seeing what was possible.
I’m also excited to finally make the music I wish already existed—songs about weird niche tech interests of mine, like how the Debian operating system came to be, the drama of the Linux kernel mailing list, or the underground warez scene. Stories that deserve to be told, but nobody’s made albums about … until now.
The Short Version
bitwize music is an experiment in human + AI music collaboration. Not “AI-generated music” where you push a button and get a song. More like having a really weird, really helpful bandmate who never sleeps and has read everything on the internet.
The human brings the vision, the judgment, and the final call. The AI brings tireless iteration, research at scale, and an obsessive attention to detail. Together, we make albums that neither of us could make alone.
Now Open Source
The entire production system behind bitwize music is now public.
I’ve open-sourced the Claude Code plugin that powers every album on this site:
github.com/bitwize-music-studio/claude-ai-music-skills
This isn’t a watered-down demo. It’s the actual toolkit—50 specialized skills, 13 structured templates, 67 genre guides with 85 artist deep-dives, an MCP server with 30+ tools, Suno V5 reference documentation, Python mastering/promotion/sheet-music scripts, and the research workflows used to build documentary albums.
What’s In The Toolkit
| Category | What’s Included |
|---|---|
| Production Skills | lyric-writer, album-conceptualizer, suno-engineer, mastering-engineer, mix-engineer, album-art-director, sheet-music-publisher |
| Quality Control | lyric-reviewer, pronunciation-specialist, explicit-checker, plagiarism-checker, validate-album, pre-generation-check, verify-sources |
| Research | 11 specialized researcher skills for legal, government, journalism, security, historical, financial, biographical, and more |
| Promotion | promo-director, promo-writer, promo-reviewer — video generation, social media copy, platform-specific formatting |
| Import & Workflow | import-audio, import-art, import-track, clipboard, album-dashboard, next-step, rename |
| Templates | 8 core templates (track, album, artist, genre, research, sources, ideas) + 5 promo templates (Twitter, Instagram, TikTok, YouTube, Facebook) |
| Genre Reference | 67 genre guides with Suno-optimized style prompts, 85 artist deep-dives covering technique, vocal style, and production patterns |
| Suno Reference | V5 best practices, pronunciation guide, artist blocklist, structure/voice/instrument tags |
| Python Tools | Mastering (5 scripts), Promotion (3 scripts), Sheet Music (3 scripts), Cloud Upload |
| MCP Server | 30+ tools for instant state queries — albums, tracks, sessions, config, QC, and mastering |
| Testing | 2,333 tests across 32 test files — plugin validation, unit tests, integration tests |
Why Open Source It?
Because AI music production shouldn’t be a black box. The interesting part isn’t “I used AI”—it’s how. What prompts work? What quality checks matter? How do you maintain documentary rigor when an AI is helping write lyrics?
Those answers are worth sharing. If you’re experimenting with AI music, maybe this saves you the trial-and-error I went through.
The Workflow
This isn’t “open ChatGPT and type some prompts.” The production system behind bitwize music includes over 250,000 lines of structured documentation—custom instructions, templates, workflows, research files, and track specifications that guide every phase of album creation.
Here’s how it actually works:
Phase 1: Concept
Every album starts with a question. What’s the story? What’s the angle? For documentary albums like The Wizard or The Scene, that means identifying a real story worth telling—something with depth, drama, and a hook that’ll make someone want to listen.
From the toolkit: The album-conceptualizer skill guides 7 planning phases before any lyrics get written—foundation, concept deep dive, sonic direction, structure planning, album art, practical details, and confirmation. No track writing until all phases complete.
Phase 2: Research
This is where documentary albums get serious. Court documents, DOJ press releases, contemporary newspaper accounts, academic sources. Not Wikipedia summaries—primary sources wherever possible.
From the toolkit: Eleven specialized researcher skills handle different source types:
| Researcher | Domain | Example Use |
|---|---|---|
| researchers-legal | Court documents, indictments, rulings | Ross Ulbricht case files for The Scene |
| researchers-gov | DOJ/FBI press releases, agency statements | Federal takedown announcements |
| researchers-historical | Archives, contemporary accounts | Edison Papers at Rutgers for The Wizard |
| researchers-security | Malware analysis, CVEs, attribution reports | Technical details for hacker albums |
| researchers-journalism | Investigative articles, interviews | Long-form reporting verification |
| researchers-financial | SEC filings, earnings calls | Corporate fraud research |
| researchers-biographical | Personal backgrounds, interviews | Subject deep-dives |
| researchers-primary-source | Subject’s own words: tweets, blogs, forums | Direct quotes and context |
| researchers-tech | Project histories, changelogs | Open source project research |
| researchers-verifier | Fact-checking, citation validation | Final QC before human review |
| researcher | General research coordination | Cross-domain investigation |
The document-hunter skill automates retrieval from public archives using Playwright.
Here’s what research actually looks like for The Wizard (about Thomas Edison’s darker side):
| Source Type | Examples |
|---|---|
| Official Archives | Thomas A. Edison Papers at Rutgers (150,000+ documents) |
| Academic Books | Mark Essig’s Edison and the Electric Chair |
| Court Records | In re Kemmler, 136 U.S. 436 (1890) |
| Contemporary Newspapers | New York Sun (August 25, 1889 exposé), Brooklyn Daily Eagle |
That album required cross-referencing 11 separate research files covering everything from Edison’s patent litigation to primary sources about Topsy the elephant.
Phase 3: Writing
Lyrics get written, revised, and polished. The AI helps with iteration—trying different rhyme schemes, checking for prosody issues, making sure verse 2 actually develops the story instead of just repeating verse 1 with different words.
From the toolkit: The lyric-writer skill enforces:
- Rhyme quality checks — No self-rhymes, no lazy patterns
- Prosody analysis — Stressed syllables on strong beats
- POV/tense consistency — No accidental shifts
- Source verification — Lyrics match captured research (for documentary tracks)
- Pronunciation scanning — Catch homographs before generation
- Genre-specific conventions — Verse length limits, rhyme schemes, and structures tuned per genre and BPM
The lyric-reviewer runs an 8-point QC checklist before any track goes to Suno.
But the creative direction stays human. What’s the emotional arc? What’s the hook? What makes this track matter?
Phase 4: Generation
This is where AI music generation comes in. I use Suno to turn lyrics and style descriptions into actual audio. It’s not one-and-done—it’s iterative. Generate, listen, adjust, try again. Sometimes it takes 10, 20, or more generations to land the right sound for a single track.
From the toolkit: The suno-engineer skill optimizes prompts for Suno V5. The pre-generation-check skill runs a final validation pass. The reference documentation includes:
| Reference File | Contents |
|---|---|
| V5 Best Practices | Comprehensive prompting guide |
| Pronunciation Guide | Homographs, tech terms, fixes |
| Structure Tags | [Verse], [Chorus], [Bridge], etc. |
| Voice Tags | Vocal manipulation and style |
| Instrumental Tags | 100+ instruments |
| Artist Blocklist | Names that trigger copyright filters |
| Workspace Management | Organizing generations efficiently |
The clipboard skill copies Suno-ready prompts to your clipboard, and a Tampermonkey userscript can auto-fill Suno’s input fields directly.
Phase 5: Verify
For documentary albums, this is non-negotiable. Every factual claim gets traced back to sources. The research sections on the album pages aren’t decoration—they’re the receipts.
From the toolkit: Human verification is baked into the workflow. Track status moves through Sources Pending → Sources Verified → In Progress. The verify-sources skill coordinates the verification process, and the researchers-verifier handles citation validation—but final sign-off is always human.
Phase 6: Master
Raw Suno output isn’t ready for streaming platforms. Every track gets mastered to streaming standards.
From the toolkit: Five Python scripts in tools/mastering/:
| Script | Function |
|---|---|
| analyze_tracks.py | Measure LUFS, true peak, dynamic range |
| master_tracks.py | Apply loudness normalization, EQ, limiting |
| qc_tracks.py | Run 7 automated quality checks on mastered audio |
| fix_dynamic_track.py | Handle high-dynamic-range problem tracks |
| reference_master.py | Match sound of professional reference tracks |
The mastering-engineer skill coordinates the workflow, with genre-specific presets for EQ and compression.
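As a sketch, those genre presets can be modeled as a simple lookup table with a fallback. The preset names and numbers below are illustrative assumptions, not the toolkit’s actual values:

```python
# Hypothetical genre preset table. Every name and number here is
# illustrative, not the real toolkit's configuration.
GENRE_PRESETS = {
    "default":    {"highmid_cut_db": -1.5, "comp_ratio": 1.5, "target_lufs": -14.0},
    "hip-hop":    {"highmid_cut_db": -2.5, "comp_ratio": 2.0, "target_lufs": -14.0},
    "indie-folk": {"highmid_cut_db": -1.0, "comp_ratio": 1.3, "target_lufs": -14.0},
}

def preset_for(genre: str) -> dict:
    """Return the preset for a genre, falling back to the default."""
    return GENRE_PRESETS.get(genre, GENRE_PRESETS["default"])
```

The fallback matters: an album in an unconfigured genre should still master with sane settings rather than fail.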
Phase 7: Promote
Promo videos, social media copy, platform-specific formatting. Getting the music in front of people.
From the toolkit: Three promotion scripts in tools/promotion/:
| Script | Function |
|---|---|
| generate_promo_video.py | Create promo videos with album art, waveforms, and audio |
| generate_album_sampler.py | Build album sampler videos from track highlights |
| generate_all_promos.py | Batch-generate promos for an entire album |
The promo-director coordinates video generation, promo-writer generates social media copy from 5 platform-specific templates (Twitter, Instagram, TikTok, YouTube, Facebook), and promo-reviewer polishes copy for each platform’s conventions.
Phase 8: Release
Distribution, metadata, and deployment. The boring but necessary stuff that turns a collection of tracks into an actual album people can find and listen to.
From the toolkit: The release-director skill runs through the complete release checklist—metadata prep, DistroKid formatting, SoundCloud upload coordination, and website deployment. The cloud-uploader handles pushing audio and assets to cloud storage.
The Audio Engineering Pipeline
Here’s the end-to-end flow from raw Suno output to live on streaming platforms:
Raw Suno Audio
↓
Import & Organize ── import-audio skill
↓
Mix Polish ── mix-engineer skill (per-stem processing)
↓
Analyze ── analyze_tracks.py (LUFS, peaks, spectral)
↓
Master ── master_tracks.py (EQ, compress, normalize, limit)
↓
QC ── qc_tracks.py (7 automated checks)
↓
Sheet Music ── sheet-music-publisher skill (transcribe → publish)
↓
Promo Videos ── promo-director skill (15s vertical videos)
↓
Release ── release-director skill (9-point QA → distribute)
↓
Live on Platforms
Fully automated — Mastering (analyze → EQ → limit → QC), promo video generation, sheet music publishing, website deployment. These run end-to-end with no human intervention once triggered.
AI-assisted — Lyric writing, research gathering, stem processing, social media copy. The AI does the heavy lifting but a human guides the direction.
Human-only — Suno generation (listening and selecting keepers), quality control sign-off, creative direction, DistroKid/SoundCloud uploads. These require ears and judgment that can’t be automated.
Sheet Music Pipeline
The sheet music step in the pipeline above expands to its own multi-stage flow:
Mastered WAV
↓
AnthemScore (auto-transcribe) → PDF + MusicXML + MIDI
↓
MuseScore (manual polish) → fix notes, rhythms, layout
↓
Title Cleanup → strip track numbers, add credits
↓
Songbook (optional) → combine PDFs, add TOC + page numbers
↓
Publish to R2 → available on bitwizemusic.com
The MCP Server
The plugin includes a Model Context Protocol server that gives Claude Code instant structured access to your entire production state.
Instead of scanning files and parsing markdown every time, the MCP server provides 30+ tools for querying albums, tracks, sessions, config, and running QC checks. It’s the reason session startup is fast — 2-3 file reads instead of 50-220.
The server also exposes mastering and QC tools directly, so Claude can analyze audio, run quality checks, and coordinate the mastering pipeline without shelling out to Python scripts manually.
The Genre System
The toolkit includes 67 genre guides — not just a list of genre names, but deep references covering:
- Suno-optimized style prompts — What to type into the style box for each genre
- Verse length limits — Genre and BPM-specific limits to prevent Suno from cutting lyrics
- Lyric conventions — Rhyme schemes, structures, and vocabulary norms per genre
- 85 artist deep-dives — Detailed breakdowns of vocal style, production techniques, and what makes each artist’s sound distinctive
Genres covered include everything from hip-hop and k-pop to opera, vaporwave, doom-metal, and bossa-nova. The K-pop guide alone includes 27 artist deep-dives from BTS to NewJeans.
Tools I Use
The bitwize music stack:
AI & Generation
- Claude Code — AI collaborator for writing, research, iteration, and documentation.
- Suno — AI music generation. Turns lyrics and style descriptions into actual songs.
- ChatGPT — Album artwork generation with DALL-E.
Audio Processing
- Python — Custom mastering, promotion, and sheet music scripts.
- pyloudnorm — ITU-R BS.1770-4 loudness measurement for streaming targets.
- Matchering — Reference-based mastering to match the sound of professional tracks.
- FFmpeg — Promo video generation, audio extraction, format conversion.
- SciPy — Signal processing for EQ and filtering.
- Librosa — Audio analysis for smart segment selection in promo videos.
Sheet Music
- AnthemScore — Audio-to-sheet-music transcription for piano reductions.
- MuseScore — Sheet music editing, cleanup, and export to PDF, MusicXML, and MIDI.
Every released album has free downloadable sheet music in three formats: PDF (printable scores), MusicXML (editable in any notation software), and MIDI (playable/importable). Individual tracks and full album songbooks are available from each album’s sheet music page.
Research & Automation
- Playwright — Automated browser for document hunting from public archives.
Website & Infrastructure
- GitHub — Version control for everything: lyrics, research, website, documentation. Easy reverts, change tracking, and collaboration history.
- Hugo — Static site generator for bitwizemusic.com.
- Cloudflare Pages — Hosting and deployment.
- Cloudflare R2 — Object storage for promo videos and sheet music files (PDF, MusicXML, MIDI).
Distribution
- DistroKid — Distribution to Spotify, Apple Music, and everywhere else.
- SoundCloud — Primary streaming and sharing platform.
How It All Fits Together
Claude Code is the orchestrator. It doesn’t just help with writing—it runs the entire production pipeline through the open-source skill system. Research, lyric iteration, mastering scripts, promo video generation, social media copy, website deployment—all triggered and coordinated through Claude Code.
What’s automated:
- Research gathering and source verification
- Initial lyric drafts and technical quality checks (rhyme, prosody, pronunciation)
- Running Python mastering scripts and audio QC
- Generating promo videos and social media copy
- Sheet music transcription, songbook creation, and publishing to R2
- Website builds and deployment
- Version control and documentation
- MCP server for instant production state queries
What’s still manual:
- Lyric iteration and refinement (story flow, emotional arc, creative direction)
- Suno generation (pasting prompts, listening, downloading keepers)
- Quality control listening and final approval
- SoundCloud and DistroKid uploads
- Album artwork generation via ChatGPT
The goal is human judgment where it matters (creative decisions, quality control) and automation everywhere else.
What It Costs
This isn’t free. Transparency about the process means transparency about the price tag.
The Stack
| Category | Service | Cost | Frequency |
|---|---|---|---|
| AI Core | Claude Code Max | $200 | Monthly |
| AI Core | ChatGPT Plus | $19.99 | Monthly |
| Music Generation | Suno Pro | $182.30* | Annual |
| Distribution | DistroKid Ultimate | $89.99 | Annual |
| Streaming | SoundCloud Artist Pro | $99 | Annual |
| Transcription | AnthemScore | $107 | One-time |
| Image Editing | Photopea | Free | — |
*Black Friday 2025 pricing (40% off first year). Regular price $288/year.
Annual Breakdown
| Type | First Year | Ongoing |
|---|---|---|
| Monthly subscriptions | $2,639.88 | $2,639.88 |
| Annual subscriptions | $371.29 | $476.99 |
| One-time purchases | $107.00 | — |
| Total | $3,118.17 | $3,116.87 |
That’s roughly $260/month for the full production stack.
Note: Claude Code Max ($200/month) isn’t music-only — I use it for my day job, other projects, and general hacking. The music production shares that cost. If you’re already using Claude Code for development work, the marginal cost for music is just the generation and distribution tools.
What Each Tool Does
| Tool | Role in Production |
|---|---|
| Claude Code Max | AI collaborator — research, writing, iteration, automation, code |
| ChatGPT Plus | Album artwork generation (DALL-E) |
| Suno Pro | AI music generation — turns lyrics into audio |
| DistroKid Ultimate | Distribution to Spotify, Apple Music, Amazon, etc. |
| SoundCloud Artist Pro | Primary streaming platform, analytics, Pro features |
| AnthemScore | Audio-to-sheet-music transcription for piano reductions |
| Photopea | Light image editing — cropping, resizing, touchups (free) |
Is It Worth It?
Depends on what you’re building. For serious album production with documentary research, quality control, mastering automation, and multi-platform distribution — this stack handles it.
Could you do it cheaper? Yes:
- Drop Claude Code Max for the free tier (limited usage)
- Skip ChatGPT if you use other image tools
- Use SoundCloud free tier
- Skip AnthemScore if you don’t need sheet music
A minimal stack (Suno + DistroKid) runs about $275/year. The full production toolkit is 10x that — because it does 10x more.
Documentary Rigor
Some of these albums tell real stories about real events. That comes with responsibility.
The Source Hierarchy
Not all sources are equal. I follow a strict hierarchy:
- Court documents — Indictments, rulings, transcripts (highest authority)
- Government releases — DOJ press releases, agency statements
- Investigative journalism — Long-form reporting from reputable outlets
- News coverage — Contemporary newspaper accounts
- Wikipedia — Context only, never for facts
Myth Busting
Sometimes research reveals that popular narratives are wrong.
The Wizard addresses a famous myth: that Thomas Edison personally electrocuted Topsy the elephant as anti-AC propaganda.
The myth: Edison electrocuted Topsy to scare people away from AC current.
What I found:
- Edison was never at Luna Park
- The War of Currents ended in 1892; Topsy died in 1903 (more than a decade later)
- Zero mentions of Topsy in Edison’s correspondence at Rutgers
- Luna Park owners Thompson & Dundy ordered the execution, not Edison
The album addresses this directly—Topsy’s death was the culmination of Edison’s legacy, not his action. That’s a more interesting (and accurate) story than the myth.
Track-by-Track Verification
Every documentary track gets a verification table. Here’s a real example from “December Fifth” on The Wizard:
| Lyric Claim | Verified Fact | Source |
|---|---|---|
| December 5, 1888 | Date of large animal demonstration | Edison and the Electric Chair |
| Edison attended | “Edison personally attended and addressed the committee” | Multiple sources |
| 4 calves, 1 horse killed | Documented count | Edison and the Electric Chair |
| 770 volts | Voltage used on first calf | Executed Today |
If a claim can’t be verified, it gets flagged as “creative license” and documented as such.
What Gets Documented as Creative License
I’m explicit about what’s dramatization:
| Element | Type | Notes |
|---|---|---|
| Internal thoughts of Edison | Dramatization | No documented internal monologue |
| Topsy’s perspective | Artistic license | Anthropomorphization for narrative |
| Emotional framing | Interpretation | “Accusatory narrator” is artistic choice |
What is not creative license: all dates, names, numbers, court rulings, and attributed quotes.
The Pronunciation Challenge
AI music generation has a dirty secret: it can’t read.
When Suno sees “live,” it doesn’t know if you mean “live performance” (LYVE) or “live your life” (LIV). When it sees “read,” it guesses—and guesses wrong half the time.
From the toolkit: The pronunciation guide and pronunciation-specialist skill catch these before generation.
Real Fixes from Real Tracks
From “December Fifth” on The Wizard:
| Original | Problem | Fixed Version |
|---|---|---|
| Medico-Legal Society | Technical term | Med-ih-koh Lee-gul Society |
| Kennelly | Unusual name | Ken-uh-lee |
| electricity | Common mispronunciation | ee-lek-triss-i-tee |
From Deb + Ian:
| Original | Problem | Fixed Version |
|---|---|---|
| Debian | Tech term | Deb-ee-in |
The Homograph Problem
These words have two pronunciations. Every one requires a decision:
| Word | Could Be | Or | The Fix |
|---|---|---|---|
| live | LYVE (perform) | LIV (exist) | Rewrite or add context |
| wind | WIND (breeze) | WINED (coil) | “the breeze” or “wound up” |
| tear | TEER (cry) | TARE (rip) | “crying” or “ripped” |
| bass | BASS (fish) | BASE (guitar) | “low end” or “the fish” |
| lead | LEED (guide) | LED (metal) | “leading” or “leaden” |
| read | REED (present) | RED (past) | Context or rewrite |
I scan every lyric for these before generation. It’s tedious. It matters.
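That scan can be approximated in a few lines of Python. The word list here is a small sample for illustration, not the toolkit’s full pronunciation guide:

```python
import re

# Sample homograph list (illustrative subset, not the full guide).
HOMOGRAPHS = {"live", "wind", "tear", "bass", "lead", "read"}

def flag_homographs(lyrics: str) -> list[tuple[int, str]]:
    """Return (line_number, word) pairs that need a pronunciation decision."""
    hits = []
    for i, line in enumerate(lyrics.splitlines(), start=1):
        for word in re.findall(r"[a-z']+", line.lower()):
            if word in HOMOGRAPHS:
                hits.append((i, word))
    return hits

# flag_homographs("I read the note\nThe wind it howls")
# → [(1, "read"), (2, "wind")]
```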
Why This Matters
A mispronounced word breaks the spell. When the AI says “LEED” instead of “LED” in a song about metal, it sounds wrong to every listener—even if they can’t articulate why. I catch these before generation, not after.
Lyric Craft
Good lyrics aren’t just rhymes. Every track goes through quality checks.
From the toolkit: The lyric-writer skill enforces these automatically, and the lyric-reviewer runs an 8-point checklist before any track goes to Suno.
Prosody
Stressed syllables need to land on strong beats. When they don’t, lines feel awkward even if the words are fine.
Bad prosody (stress on wrong beat):
“The MACH-ine is RUN-ning NOW”
Good prosody (natural stress pattern):
“The ma-CHINE is RUN-ning now”
The AI checks every line for this before I generate.
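One way to sketch the idea: map each word’s stressed syllable to its absolute position in the line, then compare those positions against the strong beats. The tiny stress lexicon below is hand-made for this example; a real checker would pull stress markers from a pronunciation dictionary such as CMUdict:

```python
# Hand-made lexicon for this example only: index of the stressed
# syllable within each word, and each word's syllable count.
STRESS = {"machine": 1, "running": 0, "now": 0}
SYLLABLES = {"machine": 2, "running": 2, "now": 1}

def stressed_positions(words: list[str]) -> list[int]:
    """Absolute syllable index of each word's stressed syllable."""
    pos, out = 0, []
    for w in words:
        out.append(pos + STRESS[w])
        pos += SYLLABLES[w]
    return out

# "ma-CHINE RUN-ning now" puts stresses at syllable positions 1, 2, 4;
# a prosody check compares those against the line's strong beats.
```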
Rhyme Quality
Not all rhymes are equal:
| Type | Example | Quality |
|---|---|---|
| Perfect rhyme | “gate / late” | Strong |
| Slant rhyme | “gate / great” | Acceptable |
| Self-rhyme | “gate / gate” | Never |
| Repeated end word | “running / running” | Never |
Lazy patterns get caught and fixed before generation.
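The self-rhyme and repeated-end-word checks boil down to comparing line-ending words. A minimal sketch:

```python
import re

def end_word(line: str) -> str:
    """Last word of a line, lowercased; empty string for blank lines."""
    words = re.findall(r"[A-Za-z']+", line)
    return words[-1].lower() if words else ""

def find_lazy_rhymes(verse: str) -> list[tuple[int, int]]:
    """Flag line pairs that 'rhyme' a word with itself."""
    ends = [end_word(line) for line in verse.splitlines()]
    return [(i + 1, j + 1)
            for i in range(len(ends))
            for j in range(i + 1, len(ends))
            if ends[i] and ends[i] == ends[j]]
```

Detecting weak slant rhymes takes phonetic comparison on top of this, but repeated end words alone catch the laziest patterns.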
Verse Development
V2 can’t just be V1 with different words. It needs to develop the story:
| V1 | V2 |
|---|---|
| Introduces situation | Raises stakes |
| Sets the scene | Shows consequences |
| Presents character | Reveals depth |
If verse 2 just rewords verse 1, it gets rewritten.
Generation Iteration
Getting a track right isn’t one attempt. It’s a process.
What I’m Listening For
- Vocal delivery — Does the phrasing feel natural?
- Pronunciation — Did the phonetic fixes work?
- Structure — Are all sections (verse, chorus, bridge) present?
- Mood — Does it match the intended emotion?
- Audio quality — No weird artifacts or glitches?
Iteration Reality
Some tracks land on attempt 3. Some take 20+. The generation log tracks every attempt:
| # | Date | Model | Result | Notes | Rating |
|---|---|---|---|---|---|
| 1 | 2025-12-03 | V5 | [Listen] | First attempt, too fast | — |
| 2 | 2025-12-03 | V5 | [Listen] | Better pacing, wrong mood | — |
| 3 | 2025-12-03 | V5 | [Listen] | Keeper | ✓ |
I don’t hide the iteration. It’s part of the process.
When Generation Isn’t Enough
Sometimes the AI nails the vibe but something’s off—the backing vocals overpower the lead, the bass is too prominent, an instrument clashes with the vocal melody. That’s when I open Suno Studio and extract stems.
Stem separation lets me isolate:
- Lead vocals — Adjust levels, add effects, fix mix issues
- Backing vocals — Pull them back or push them forward
- Instruments — Tweak individual elements that don’t sit right
- Bass/drums — Rebalance the low end
Here’s how the mix polish flow works:
Suno Stems (up to 12 tracks)
↓
Per-Stem Processing:
Vocals → noise reduction + presence EQ + compression
Drums → highpass + compression + gate
Bass → highpass + compression + sub-bass lift
Guitar → highpass + presence EQ
Other → genre-specific processing
↓
Remix (sum all stems)
↓
Polished Audio → ready for mastering
It’s not always needed, but when a track is 90% there and regenerating would lose what works, stem editing saves it.
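The per-stem highpass and the final remix step can be sketched with NumPy and SciPy. The cutoff, gains, and filter order here are illustrative, not the mix-engineer skill’s actual settings:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(audio: np.ndarray, cutoff_hz: float, rate: int) -> np.ndarray:
    """4th-order Butterworth highpass, applied to a single stem."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=rate, output="sos")
    return sosfilt(sos, audio)

def remix(stems: dict[str, np.ndarray], gains: dict[str, float]) -> np.ndarray:
    """Sum gain-adjusted stems back into one mix (linear gains, default 1.0)."""
    mix = sum(gains.get(name, 1.0) * audio for name, audio in stems.items())
    return np.clip(mix, -1.0, 1.0)  # guard against clipping after summing
```

Pulling backing vocals back is then just a gain entry, e.g. `remix(stems, {"backing_vocals": 0.7})`.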
Mastering for Streaming
Raw Suno output isn’t ready for streaming platforms. Every track gets mastered.
From the toolkit: The mastering scripts handle this automatically:
```shell
# Analyze all tracks
python3 analyze_tracks.py

# Master with genre-appropriate EQ
python3 master_tracks.py --genre hip-hop

# Run automated QC checks
python3 qc_tracks.py

# Handle problem tracks
python3 fix_dynamic_track.py "problem_track.wav"
```
Per-Track Processing Chain
Every track passes through the same signal chain:
Input WAV
↓
Parametric EQ → high-mid cut @ 3.5 kHz (Q 1.5)
high-shelf cut @ 8 kHz
↓
Gentle Compression → 1.5:1 ratio, -18 dBFS threshold
30 ms attack, 200 ms release
↓
Loudness Normalization → -14 LUFS integrated target
↓
Peak Limiter → -1.0 dBTP ceiling
2-stage: hard limit + tanh soft clip
↓
Output WAV (mastered/)
EQ and compression are genre-dependent — presets control how much high-mid cut and compression each genre gets. The EQ tames the harshness Suno tends to bake into its output, compression glues the mix, and normalization + limiting bring everything to streaming-ready loudness.
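The last two stages reduce to simple gain math. This NumPy sketch assumes the integrated LUFS value has already been measured (the real pipeline uses pyloudnorm for BS.1770 measurement), and models the soft clip as a plain tanh stage:

```python
import numpy as np

def normalize_to_lufs(audio: np.ndarray, measured_lufs: float,
                      target_lufs: float = -14.0) -> np.ndarray:
    """Apply the gain that moves measured loudness to the target."""
    gain_db = target_lufs - measured_lufs
    return audio * 10 ** (gain_db / 20)

def soft_limit(audio: np.ndarray, ceiling: float = 10 ** (-1.0 / 20)) -> np.ndarray:
    """Two-stage peak control: tanh soft clip, then a hard ceiling at -1 dBFS.
    Note: tanh also slightly rounds peaks below the ceiling."""
    soft = ceiling * np.tanh(audio / ceiling)
    return np.clip(soft, -ceiling, ceiling)
```

True-peak (dBTP) limiting additionally requires oversampling to catch inter-sample peaks; this sketch only enforces the sample-peak ceiling.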
Target Standards
| Platform | LUFS Target | True Peak |
|---|---|---|
| Spotify | -14 LUFS | -1.0 dBTP |
| Apple Music | -16 LUFS | -1.0 dBTP |
| YouTube | -14 LUFS | -1.0 dBTP |
Common Fixes
| Issue | Problem | Solution |
|---|---|---|
| Too quiet | Won’t compete on playlists | Loudness normalization |
| Harsh high-mids | Ear fatigue (2–6 kHz) | Surgical EQ cuts |
| Weak low end | Thin on speakers | Bass enhancement |
| Dynamic range | Too compressed or too dynamic | Multiband compression |
Album Consistency
All tracks on an album should be within 1 dB LUFS of each other. A quiet track after a loud one feels wrong, even if each sounds fine in isolation.
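Once per-track LUFS values exist, the consistency rule is a one-line check. The measurements below are hypothetical:

```python
def lufs_spread(track_lufs: dict[str, float]) -> float:
    """Loudest-to-quietest spread across the album, in LU."""
    values = track_lufs.values()
    return max(values) - min(values)

# Hypothetical measurements for a three-track album:
album = {"track-01": -13.8, "track-02": -14.1, "track-03": -14.4}
assert lufs_spread(album) <= 1.0  # within the 1 dB consistency window
```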
Genre Experimentation
bitwize music isn’t one sound. The project spans:
- Nerdcore/Hip-Hop — Tech nostalgia, hacker culture, internet history
- Dark Industrial — Heavier documentary work
- Indie Folk — Quieter, introspective storytelling
- Country/Americana — Road songs and heartbreak
- Ska Punk — Horns, energy, chaos
- K-Pop — Sweet-and-dangerous synth-pop with Korean hooks
- Opera — Classical vocal drama
- Dark Cabaret — Theatrical horror and satire
- Swing — Big band energy
- Synth-Pop/Electronic — 80s-influenced electronic
Different stories need different sounds. A documentary about Thomas Edison’s animal experiments doesn’t sound like a Christmas ska album. A K-pop concept album about candy-coated danger doesn’t sound like a dark cabaret confession. That’s the point.
From the toolkit: The 67 genre guides include Suno-optimized style prompts, verse length limits, and lyric conventions for each genre.
Try It Yourself
The entire system is available for you to use:
Quick Start
```shell
# Install via Claude Code plugin marketplace
/plugin marketplace add bitwize-music-studio/claude-ai-music-skills
/plugin install bitwize-music@claude-ai-music-skills

# Run setup assistant
/bitwize-music:setup

# Configure your workspace
/bitwize-music:configure

# Start your first album
/bitwize-music:new-album
```
What You’ll Need
| Component | Required? | Purpose |
|---|---|---|
| Claude Code | Yes | AI collaborator and skill runner |
| Suno subscription | Yes | Audio generation |
| Python 3.10+ | For MCP server | Fast state queries (auto-enabled) |
| Python 3.8+ | For mastering | Loudness/EQ processing |
| Playwright | For research | Automated document retrieval |
Learning Resources
- Repository README — Setup and configuration
- Quick Start Guides — First album, bulk releases, true-story albums
- Suno Reference Docs — Prompting guides
- Templates — Starting points for tracks/albums
- Genre Guides — 67 genres with artist deep-dives
Transparency
I’m not hiding the process. The method is part of the art.
You can see the research. You can see the sources. You can see what’s documented and what’s interpretation. You can see the actual code that powers the production.
The albums stand on their own as music, but the documentation is there for anyone who wants to dig deeper.
This is what AI collaboration looks like when you do it with intention—not as a gimmick, but as a genuine creative partnership.