// Behind The Music

Who Am I

I’m bitwize—a hacker who loves music and experimenting with new technology. When AI music generation tools started getting good, I couldn’t resist seeing what was possible.

I’m also excited to finally make the music I wish already existed—songs about weird niche tech interests of mine, like how the Debian operating system came to be, the drama of the Linux kernel mailing list, or the underground warez scene. Stories that deserve to be told, but nobody’s made albums about … until now.


The Short Version

bitwize music is an experiment in human + AI music collaboration. Not “AI-generated music” where you push a button and get a song. More like having a really weird, really helpful bandmate who never sleeps and has read everything on the internet.

The human brings the vision, the judgment, and the final call. The AI brings tireless iteration, research at scale, and an obsessive attention to detail. Together, we make albums that neither could make alone.


Now Open Source

The entire production system behind bitwize music is now public.

I’ve open-sourced the Claude Code plugin that powers every album on this site:

github.com/bitwize-music-studio/claude-ai-music-skills

This isn’t a watered-down demo. It’s the actual toolkit—50 specialized skills, 13 structured templates, 67 genre guides with 85 artist deep-dives, an MCP server with 30+ tools, Suno V5 reference documentation, Python mastering/promotion/sheet-music scripts, and the research workflows used to build documentary albums.

What’s In The Toolkit

  • Production Skills — lyric-writer, album-conceptualizer, suno-engineer, mastering-engineer, mix-engineer, album-art-director, sheet-music-publisher
  • Quality Control — lyric-reviewer, pronunciation-specialist, explicit-checker, plagiarism-checker, validate-album, pre-generation-check, verify-sources
  • Research — 11 specialized researcher skills for legal, government, journalism, security, historical, financial, biographical, and more
  • Promotion — promo-director, promo-writer, promo-reviewer: video generation, social media copy, platform-specific formatting
  • Import & Workflow — import-audio, import-art, import-track, clipboard, album-dashboard, next-step, rename
  • Templates — 8 core templates (track, album, artist, genre, research, sources, ideas) + 5 promo templates (Twitter, Instagram, TikTok, YouTube, Facebook)
  • Genre Reference — 67 genre guides with Suno-optimized style prompts, 85 artist deep-dives covering technique, vocal style, and production patterns
  • Suno Reference — V5 best practices, pronunciation guide, artist blocklist, structure/voice/instrument tags
  • Python Tools — mastering (5 scripts), promotion (3 scripts), sheet music (3 scripts), cloud upload
  • MCP Server — 30+ tools for instant state queries: albums, tracks, sessions, config, QC, and mastering
  • Testing — 2,333 tests across 32 test files: plugin validation, unit tests, integration tests

Why Open Source It?

Because AI music production shouldn’t be a black box. The interesting part isn’t “I used AI”—it’s how. What prompts work? What quality checks matter? How do you maintain documentary rigor when an AI is helping write lyrics?

These are questions worth sharing answers to. If you’re experimenting with AI music, maybe this saves you the trial-and-error I went through.


The Workflow

This isn’t “open ChatGPT and type some prompts.” The production system behind bitwize music includes over 250,000 lines of structured documentation—custom instructions, templates, workflows, research files, and track specifications that guide every phase of album creation.

Here’s how it actually works:

Phase 1: Concept

Every album starts with a question. What’s the story? What’s the angle? For documentary albums like The Wizard or The Scene, that means identifying a real story worth telling—something with depth, drama, and a hook that’ll make someone want to listen.

From the toolkit: The album-conceptualizer skill guides 7 planning phases before any lyrics get written—foundation, concept deep dive, sonic direction, structure planning, album art, practical details, and confirmation. No track writing until all phases complete.

Phase 2: Research

This is where documentary albums get serious. Court documents, DOJ press releases, contemporary newspaper accounts, academic sources. Not Wikipedia summaries—primary sources wherever possible.

From the toolkit: Eleven specialized researcher skills handle different source types:

  • researchers-legal — court documents, indictments, rulings (e.g., Ross Ulbricht case files for The Scene)
  • researchers-gov — DOJ/FBI press releases, agency statements (e.g., federal takedown announcements)
  • researchers-historical — archives, contemporary accounts (e.g., Edison Papers at Rutgers for The Wizard)
  • researchers-security — malware analysis, CVEs, attribution reports (e.g., technical details for hacker albums)
  • researchers-journalism — investigative articles, interviews (e.g., long-form reporting verification)
  • researchers-financial — SEC filings, earnings calls (e.g., corporate fraud research)
  • researchers-biographical — personal backgrounds, interviews (e.g., subject deep-dives)
  • researchers-primary-source — subject’s own words: tweets, blogs, forums (e.g., direct quotes and context)
  • researchers-tech — project histories, changelogs (e.g., open source project research)
  • researchers-verifier — fact-checking, citation validation (e.g., final QC before human review)
  • researcher — general research coordination (e.g., cross-domain investigation)

The document-hunter skill automates retrieval from public archives using Playwright.

Here’s what research actually looks like for The Wizard (about Thomas Edison’s darker side):

  • Official Archives — Thomas A. Edison Papers at Rutgers (150,000+ documents)
  • Academic Books — Mark Essig’s Edison and the Electric Chair
  • Court Records — In re Kemmler, 136 U.S. 436 (1890)
  • Contemporary Newspapers — New York Sun (August 25, 1889 exposé), Brooklyn Daily Eagle

That album required cross-referencing 11 separate research files covering everything from Edison’s patent litigation to primary sources about Topsy the elephant.

Phase 3: Writing

Lyrics get written, revised, and polished. The AI helps with iteration—trying different rhyme schemes, checking for prosody issues, making sure verse 2 actually develops the story instead of just repeating verse 1 with different words.

From the toolkit: The lyric-writer skill enforces:

  • Rhyme quality checks — No self-rhymes, no lazy patterns
  • Prosody analysis — Stressed syllables on strong beats
  • POV/tense consistency — No accidental shifts
  • Source verification — Lyrics match captured research (for documentary tracks)
  • Pronunciation scanning — Catch homographs before generation
  • Genre-specific conventions — Verse length limits, rhyme schemes, and structures tuned per genre and BPM

The lyric-reviewer runs an 8-point QC checklist before any track goes to Suno.

But the creative direction stays human. What’s the emotional arc? What’s the hook? What makes this track matter?

Phase 4: Generation

This is where AI music generation comes in. I use Suno to turn lyrics and style descriptions into actual audio. It’s not one-and-done—it’s iterative. Generate, listen, adjust, try again. Sometimes it takes 10, 20, or more generations to land the right sound for a single track.

From the toolkit: The suno-engineer skill optimizes prompts for Suno V5. The pre-generation-check skill runs a final validation pass. The reference documentation includes:

  • V5 Best Practices — comprehensive prompting guide
  • Pronunciation Guide — homographs, tech terms, fixes
  • Structure Tags — [Verse], [Chorus], [Bridge], etc.
  • Voice Tags — vocal manipulation and style
  • Instrumental Tags — 100+ instruments
  • Artist Blocklist — names that trigger copyright filters
  • Workspace Management — organizing generations efficiently

The clipboard skill copies Suno-ready prompts to your clipboard, and a Tampermonkey userscript can auto-fill Suno’s input fields directly.

Phase 5: Verify

For documentary albums, this is non-negotiable. Every factual claim gets traced back to sources. The research sections on the album pages aren’t decoration—they’re the receipts.

From the toolkit: Human verification is baked into the workflow. Track status moves through Sources Pending → Sources Verified → In Progress. The verify-sources skill coordinates the verification process, and the researchers-verifier handles citation validation—but final sign-off is always human.

Phase 6: Master

Raw Suno output isn’t ready for streaming platforms. Every track gets mastered to streaming standards.

From the toolkit: Five Python scripts in tools/mastering/:

  • analyze_tracks.py — measure LUFS, true peak, dynamic range
  • master_tracks.py — apply loudness normalization, EQ, limiting
  • qc_tracks.py — run 7 automated quality checks on mastered audio
  • fix_dynamic_track.py — handle high-dynamic-range problem tracks
  • reference_master.py — match the sound of professional reference tracks

The mastering-engineer skill coordinates the workflow, with genre-specific presets for EQ and compression.

Phase 7: Promote

Promo videos, social media copy, platform-specific formatting. Getting the music in front of people.

From the toolkit: Three promotion scripts in tools/promotion/:

  • generate_promo_video.py — create promo videos with album art, waveforms, and audio
  • generate_album_sampler.py — build album sampler videos from track highlights
  • generate_all_promos.py — batch-generate promos for an entire album

The promo-director coordinates video generation, promo-writer generates social media copy from 5 platform-specific templates (Twitter, Instagram, TikTok, YouTube, Facebook), and promo-reviewer polishes copy for each platform’s conventions.

Phase 8: Release

Distribution, metadata, and deployment. The boring but necessary stuff that turns a collection of tracks into an actual album people can find and listen to.

From the toolkit: The release-director skill runs through the complete release checklist—metadata prep, DistroKid formatting, SoundCloud upload coordination, and website deployment. The cloud-uploader handles pushing audio and assets to cloud storage.


The Audio Engineering Pipeline

Here’s the end-to-end flow from raw Suno output to live on streaming platforms:

Raw Suno Audio
    ↓
Import & Organize  ── import-audio skill
    ↓
Mix Polish          ── mix-engineer skill (per-stem processing)
    ↓
Analyze             ── analyze_tracks.py (LUFS, peaks, spectral)
    ↓
Master              ── master_tracks.py (EQ, compress, normalize, limit)
    ↓
QC                  ── qc_tracks.py (7 automated checks)
    ↓
Sheet Music         ── sheet-music-publisher skill (transcribe → publish)
    ↓
Promo Videos        ── promo-director skill (15s vertical videos)
    ↓
Release             ── release-director skill (9-point QA → distribute)
    ↓
Live on Platforms

Fully automated — Mastering (analyze → EQ → limit → QC), promo video generation, sheet music publishing, website deployment. These run end-to-end with no human intervention once triggered.

AI-assisted — Lyric writing, research gathering, stem processing, social media copy. The AI does the heavy lifting but a human guides the direction.

Human-only — Suno generation (listening and selecting keepers), quality control sign-off, creative direction, DistroKid/SoundCloud uploads. These require ears and judgment that can’t be automated.

Sheet Music Pipeline

The sheet music step in the pipeline above expands to its own multi-stage flow:

Mastered WAV
    ↓
AnthemScore (auto-transcribe) → PDF + MusicXML + MIDI
    ↓
MuseScore (manual polish)     → fix notes, rhythms, layout
    ↓
Title Cleanup                 → strip track numbers, add credits
    ↓
Songbook (optional)           → combine PDFs, add TOC + page numbers
    ↓
Publish to R2                 → available on bitwizemusic.com

The MCP Server

The plugin includes a Model Context Protocol server that gives Claude Code instant structured access to your entire production state.

Instead of scanning files and parsing markdown every time, the MCP server provides 30+ tools for querying albums, tracks, sessions, config, and running QC checks. It’s the reason session startup is fast — 2-3 file reads instead of 50-220.

The server also exposes mastering and QC tools directly, so Claude can analyze audio, run quality checks, and coordinate the mastering pipeline without shelling out to Python scripts manually.


The Genre System

The toolkit includes 67 genre guides — not just a list of genre names, but deep references covering:

  • Suno-optimized style prompts — What to type into the style box for each genre
  • Verse length limits — Genre and BPM-specific limits to prevent Suno from cutting lyrics
  • Lyric conventions — Rhyme schemes, structures, and vocabulary norms per genre
  • 85 artist deep-dives — Detailed breakdowns of vocal style, production techniques, and what makes each artist’s sound distinctive

Genres covered include everything from hip-hop and k-pop to opera, vaporwave, doom-metal, and bossa-nova. The K-pop guide alone includes 27 artist deep-dives from BTS to NewJeans.


Tools I Use

The bitwize music stack:

AI & Generation

  • Claude Code — AI collaborator for writing, research, iteration, and documentation.
  • Suno — AI music generation. Turns lyrics and style descriptions into actual songs.
  • ChatGPT — Album artwork generation with DALL-E.

Audio Processing

  • Python — Custom mastering, promotion, and sheet music scripts.
  • pyloudnorm — ITU-R BS.1770-4 loudness measurement for streaming targets.
  • Matchering — Reference-based mastering to match the sound of professional tracks.
  • FFmpeg — Promo video generation, audio extraction, format conversion.
  • SciPy — Signal processing for EQ and filtering.
  • Librosa — Audio analysis for smart segment selection in promo videos.

Sheet Music

  • AnthemScore — Audio-to-sheet-music transcription for piano reductions.
  • MuseScore — Sheet music editing, cleanup, and export to PDF, MusicXML, and MIDI.

Every released album has free downloadable sheet music in three formats: PDF (printable scores), MusicXML (editable in any notation software), and MIDI (playable/importable). Individual tracks and full album songbooks are available from each album’s sheet music page.

Research & Automation

  • Playwright — Automated browser for document hunting from public archives.

Website & Infrastructure

  • GitHub — Version control for everything: lyrics, research, website, documentation. Easy reverts, change tracking, and collaboration history.
  • Hugo — Static site generator for bitwizemusic.com.
  • Cloudflare Pages — Hosting and deployment.
  • Cloudflare R2 — Object storage for promo videos and sheet music files (PDF, MusicXML, MIDI).

Distribution

  • DistroKid — Distribution to Spotify, Apple Music, and everywhere else.
  • SoundCloud — Primary streaming and sharing platform.

How It All Fits Together

Claude Code is the orchestrator. It doesn’t just help with writing—it runs the entire production pipeline through the open-source skill system. Research, lyric iteration, mastering scripts, promo video generation, social media copy, website deployment—all triggered and coordinated through Claude Code.

What’s automated:

  • Research gathering and source verification
  • Initial lyric drafts and technical quality checks (rhyme, prosody, pronunciation)
  • Running Python mastering scripts and audio QC
  • Generating promo videos and social media copy
  • Sheet music transcription, songbook creation, and publishing to R2
  • Website builds and deployment
  • Version control and documentation
  • MCP server for instant production state queries

What’s still manual:

  • Lyric iteration and refinement (story flow, emotional arc, creative direction)
  • Suno generation (pasting prompts, listening, downloading keepers)
  • Quality control listening and final approval
  • SoundCloud and DistroKid uploads
  • Album artwork generation via ChatGPT

The goal is human judgment where it matters (creative decisions, quality control) and automation everywhere else.


What It Costs

This isn’t free. Transparency about the process means transparency about the price tag.

The Stack

  • AI Core — Claude Code Max: $200/month
  • AI Core — ChatGPT Plus: $19.99/month
  • Music Generation — Suno Pro: $182.30/year*
  • Distribution — DistroKid Ultimate: $89.99/year
  • Streaming — SoundCloud Artist Pro: $99/year
  • Transcription — AnthemScore: $107 one-time
  • Image Editing — Photopea: free

*Black Friday 2025 pricing (40% off first year). Regular price $288/year.

Annual Breakdown

  • Monthly subscriptions — first year $2,639.88; ongoing $2,639.88
  • Annual subscriptions — first year $371.29; ongoing $476.99
  • One-time purchases — first year $107.00; none ongoing
  • Total — first year $3,118.17; ongoing $3,116.87

That’s roughly $260/month for the full production stack.
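The totals reconcile arithmetically. A quick check in plain Python, using the numbers from the tables above:

```python
# Numbers copied from the cost tables above
monthly_subs = (200.00 + 19.99) * 12           # Claude Code Max + ChatGPT Plus
annual_first = 182.30 + 89.99 + 99.00          # Suno (Black Friday), DistroKid, SoundCloud
one_time = 107.00                              # AnthemScore

first_year = monthly_subs + annual_first + one_time
ongoing = monthly_subs + (288.00 + 89.99 + 99.00)  # Suno at regular price, nothing one-time

print(round(first_year, 2))  # 3118.17
print(round(ongoing, 2))     # 3116.87
```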

Note: Claude Code Max ($200/month) isn’t music-only — I use it for my day job, other projects, and general hacking. The music production shares that cost. If you’re already using Claude Code for development work, the marginal cost for music is just the generation and distribution tools.

What Each Tool Does

  • Claude Code Max — AI collaborator: research, writing, iteration, automation, code
  • ChatGPT Plus — album artwork generation (DALL-E)
  • Suno Pro — AI music generation: turns lyrics into audio
  • DistroKid Ultimate — distribution to Spotify, Apple Music, Amazon, etc.
  • SoundCloud Artist Pro — primary streaming platform, analytics, Pro features
  • AnthemScore — audio-to-sheet-music transcription for piano reductions
  • Photopea — light image editing: cropping, resizing, touchups (free)

Is It Worth It?

Depends on what you’re building. For serious album production with documentary research, quality control, mastering automation, and multi-platform distribution — this stack handles it.

Could you do it cheaper? Yes:

  • Drop Claude Code Max for the free tier (limited usage)
  • Skip ChatGPT if you use other image tools
  • Use SoundCloud free tier
  • Skip AnthemScore if you don’t need sheet music

A minimal stack (Suno + DistroKid) runs about $275/year. The full production toolkit is 10x that — because it does 10x more.


Documentary Rigor

Some of these albums tell real stories about real events. That comes with responsibility.

The Source Hierarchy

Not all sources are equal. I follow a strict hierarchy:

  1. Court documents — Indictments, rulings, transcripts (highest authority)
  2. Government releases — DOJ press releases, agency statements
  3. Investigative journalism — Long-form reporting from reputable outlets
  4. News coverage — Contemporary newspaper accounts
  5. Wikipedia — Context only, never for facts

Myth Busting

Sometimes research reveals that popular narratives are wrong.

The Wizard addresses a famous myth: that Thomas Edison personally electrocuted Topsy the elephant as anti-AC propaganda.

The myth: Edison electrocuted Topsy to scare people away from AC current.

What I found:

  • Edison was never at Luna Park
  • The War of Currents ended in 1892; Topsy died in 1903 (10 years later)
  • Zero mentions of Topsy in Edison’s correspondence at Rutgers
  • Luna Park owners Thompson & Dundy ordered the execution, not Edison

The album addresses this directly—Topsy’s death was the culmination of Edison’s legacy, not his action. That’s a more interesting (and accurate) story than the myth.

Track-by-Track Verification

Every documentary track gets a verification table. Here’s a real example from “December Fifth” on The Wizard:

  • “December 5, 1888” — date of large animal demonstration (source: Edison and the Electric Chair)
  • “Edison attended” — “Edison personally attended and addressed the committee” (source: multiple sources)
  • “4 calves, 1 horse killed” — documented count (source: Edison and the Electric Chair)
  • “770 volts” — voltage used on first calf (source: Executed Today)
If a claim can’t be verified, it gets flagged as “creative license” and documented as such.
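One way to make that flagging rule concrete: a claim counts as verified only when it carries both a fact and a source. A minimal sketch, where the class and field names are mine, not the toolkit’s actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LyricClaim:
    """Hypothetical record for one row of a track's verification table."""
    lyric: str
    verified_fact: Optional[str] = None
    source: Optional[str] = None

    @property
    def status(self) -> str:
        # A claim is verified only with both a documented fact and a source;
        # anything else is flagged as creative license.
        if self.verified_fact and self.source:
            return "verified"
        return "creative license"
```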

What Gets Documented as Creative License

I’m explicit about what’s dramatization:

  • Internal thoughts of Edison — dramatization: no documented internal monologue
  • Topsy’s perspective — artistic license: anthropomorphization for narrative
  • Emotional framing — interpretation: the “accusatory narrator” is an artistic choice

What is not creative license: all dates, names, numbers, court rulings, and attributed quotes.


The Pronunciation Challenge

AI music generation has a dirty secret: it can’t read.

When Suno sees “live,” it doesn’t know if you mean “live performance” (LYVE) or “live your life” (LIV). When it sees “read,” it guesses—and guesses wrong half the time.

From the toolkit: The pronunciation guide and pronunciation-specialist skill catch these before generation.

Real Fixes from Real Tracks

From “December Fifth” on The Wizard:

  • Medico-Legal Society — technical term → “Med-ih-koh Lee-gul Society”
  • Kennelly — unusual name → “Ken-uh-lee”
  • electricity — common mispronunciation → “ee-lek-triss-i-tee”

From Deb + Ian:

  • Debian — tech term → “Deb-ee-in”

The Homograph Problem

These words have two pronunciations. Every one requires a decision:

  • live — LYVE (perform) vs. LIV (exist): rewrite or add context
  • wind — WINED (breeze) vs. WIND (coil): “the breeze” or “wound up”
  • tear — TEER (cry) vs. TARE (rip): “crying” or “ripped”
  • bass — BASE (guitar) vs. BASS (fish): “low end” or “the fish”
  • lead — LEED (guide) vs. LED (metal): “leading” or “leaden”
  • read — REED (present tense) vs. RED (past tense): context or rewrite

I scan every lyric for these before generation. It’s tedious. It matters.
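A first-pass scan for these is mechanical: sweep every lyric line against a homograph list and flag each hit for a human decision. A minimal sketch, with an illustrative word list rather than the toolkit’s full pronunciation guide:

```python
import re

# Illustrative subset; the real pronunciation guide covers far more words
HOMOGRAPHS = {"live", "wind", "tear", "bass", "lead", "read"}

def flag_homographs(lyrics: str) -> list:
    """Return (line_number, word) pairs that need a pronunciation decision."""
    hits = []
    for lineno, line in enumerate(lyrics.splitlines(), start=1):
        for word in re.findall(r"[a-z']+", line.lower()):
            if word in HOMOGRAPHS:
                hits.append((lineno, word))
    return hits
```

Every flagged word then gets a rewrite or added context before the lyric goes anywhere near generation.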

Why This Matters

A mispronounced word breaks the spell. When the AI says “LEED” instead of “LED” in a song about metal, it sounds wrong to every listener—even if they can’t articulate why. I catch these before generation, not after.


Lyric Craft

Good lyrics aren’t just rhymes. Every track goes through quality checks.

From the toolkit: The lyric-writer skill enforces these automatically, and the lyric-reviewer runs an 8-point checklist before any track goes to Suno.

Prosody

Stressed syllables need to land on strong beats. When they don’t, lines feel awkward even if the words are fine.

Bad prosody (stress on wrong beat):

“The MACH-ine is RUN-ning NOW”

Good prosody (natural stress pattern):

“The ma-CHINE is run-NING now”

The AI checks every line for this before I generate.
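The stress markup used in those examples can be reduced to a comparable pattern, which makes mismatches easy to spot. A toy sketch, assuming stressed syllables are written in uppercase:

```python
def stress_pattern(marked_line: str) -> str:
    """Map 'The ma-CHINE is run-NING now' style markup to a u/S pattern."""
    syllables = marked_line.replace("-", " ").split()
    # Uppercase syllable = stressed (S), anything else = unstressed (u)
    return "".join("S" if s.isupper() else "u" for s in syllables)
```

The two example lines above come out as uSuuSuS and uuSuuSu, and that difference is exactly what gets compared against the beat grid.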

Rhyme Quality

Not all rhymes are equal:

  • Perfect rhyme — “gate / late”: strong
  • Slant rhyme — “gate / shape”: acceptable
  • Self-rhyme — “gate / gate”: never
  • Repeated end word — “running / running”: never

Lazy patterns get caught and fixed before generation.
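The two “never” rows are easy to catch automatically. A minimal sketch of an end-word check, assuming one lyric line per list entry:

```python
def end_word(line: str) -> str:
    """Last word of a lyric line, stripped of trailing punctuation."""
    words = line.rstrip(" .,!?;:").split()
    return words[-1].lower() if words else ""

def find_lazy_rhymes(lines: list) -> list:
    """Flag adjacent lines that end on the same word (self-rhyme / repetition)."""
    issues = []
    ends = [end_word(line) for line in lines]
    for i in range(1, len(ends)):
        if ends[i] and ends[i] == ends[i - 1]:
            issues.append(f"lines {i}-{i + 1}: repeated end word {ends[i]!r}")
    return issues
```

Real rhyme-quality grading (perfect vs. slant) needs phonetics, but this kind of mechanical pass is enough to stop the laziest patterns.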

Verse Development

V2 can’t just be V1 with different words. It needs to develop the story:

  • V1 introduces the situation → V2 raises the stakes
  • V1 sets the scene → V2 shows consequences
  • V1 presents the character → V2 reveals depth

If verse 2 just rewords verse 1, it gets rewritten.


Generation Iteration

Getting a track right isn’t one attempt. It’s a process.

What I’m Listening For

  • Vocal delivery — Does the phrasing feel natural?
  • Pronunciation — Did the phonetic fixes work?
  • Structure — Are all sections (verse, chorus, bridge) present?
  • Mood — Does it match the intended emotion?
  • Audio quality — No weird artifacts or glitches?

Iteration Reality

Some tracks land on attempt 3. Some take 20+. The generation log tracks every attempt:

  • Attempt 1 — 2025-12-03, V5: first attempt, too fast
  • Attempt 2 — 2025-12-03, V5: better pacing, wrong mood
  • Attempt 3 — 2025-12-03, V5: keeper

I don’t hide the iteration. It’s part of the process.

When Generation Isn’t Enough

Sometimes the AI nails the vibe but something’s off—the backing vocals overpower the lead, the bass is too prominent, an instrument clashes with the vocal melody. That’s when I open Suno Studio and extract stems.

Stem separation lets me isolate:

  • Lead vocals — Adjust levels, add effects, fix mix issues
  • Backing vocals — Pull them back or push them forward
  • Instruments — Tweak individual elements that don’t sit right
  • Bass/drums — Rebalance the low end

Here’s how the mix polish flow works:

Suno Stems (up to 12 tracks)
    ↓
Per-Stem Processing:
  Vocals  → noise reduction + presence EQ + compression
  Drums   → highpass + compression + gate
  Bass    → highpass + compression + sub-bass lift
  Guitar  → highpass + presence EQ
  Other   → genre-specific processing
    ↓
Remix (sum all stems)
    ↓
Polished Audio → ready for mastering

It’s not always needed, but when a track is 90% there and regenerating would lose what works, stem editing saves it.


Mastering for Streaming

Raw Suno output isn’t ready for streaming platforms. Every track gets mastered.

From the toolkit: The mastering scripts handle this automatically:

# Analyze all tracks
python3 analyze_tracks.py

# Master with genre-appropriate EQ
python3 master_tracks.py --genre hip-hop

# Run automated QC checks
python3 qc_tracks.py

# Handle problem tracks
python3 fix_dynamic_track.py "problem_track.wav"

Per-Track Processing Chain

Every track passes through the same signal chain:

Input WAV
    ↓
Parametric EQ          → high-mid cut @ 3.5 kHz (Q 1.5)
                         high-shelf cut @ 8 kHz
    ↓
Gentle Compression     → 1.5:1 ratio, -18 dBFS threshold
                         30 ms attack, 200 ms release
    ↓
Loudness Normalization → -14 LUFS integrated target
    ↓
Peak Limiter           → -1.0 dBTP ceiling
                         2-stage: hard limit + tanh soft clip
    ↓
Output WAV (mastered/)

EQ and compression are genre-dependent — presets control how much high-mid cut and compression each genre gets. The EQ tames the harshness Suno tends to bake into its output, compression glues the mix, and normalization + limiting bring everything to streaming-ready loudness.
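The 2-stage limiter at the end of the chain is the least familiar step, so here is a toy illustration of the idea: hard limit first, then a tanh soft clip that approaches the ceiling asymptotically. This is a sketch of the technique, not the actual master_tracks.py code:

```python
import math

CEILING = 10 ** (-1.0 / 20)  # -1.0 dBTP ceiling as linear amplitude, ~0.891

def limit_sample(x: float) -> float:
    """Stage 1: hard limit at full scale. Stage 2: tanh soft clip under the ceiling."""
    x = max(-1.0, min(1.0, x))               # hard limit catches gross overshoots
    return math.tanh(x / CEILING) * CEILING  # soft clip: bounded by +/- CEILING
```

Because tanh is bounded by ±1, the output can never exceed the ceiling, while quiet samples pass through nearly unchanged.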

Target Standards

  • Spotify — -14 LUFS, -1.0 dBTP true peak
  • Apple Music — -16 LUFS, -1.0 dBTP true peak
  • YouTube — -14 LUFS, -1.0 dBTP true peak

Common Fixes

  • Too quiet — won’t compete on playlists: loudness normalization
  • Harsh high-mids — ear fatigue (2-6 kHz): surgical EQ cuts
  • Weak low end — thin on speakers: bass enhancement
  • Dynamic range — too compressed or too dynamic: multiband compression

Album Consistency

All tracks on an album should be within 1 dB LUFS of each other. A quiet track after a loud one feels wrong, even if each sounds fine in isolation.
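That window is simple to verify per album. A minimal sketch (the function name is mine, not from the toolkit):

```python
def album_is_consistent(lufs_by_track: dict, tolerance_db: float = 1.0) -> bool:
    """True when every track's integrated LUFS sits within tolerance_db of the others."""
    values = list(lufs_by_track.values())
    return (max(values) - min(values)) <= tolerance_db
```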


Genre Experimentation

bitwize music isn’t one sound. The project spans:

  • Nerdcore/Hip-Hop — Tech nostalgia, hacker culture, internet history
  • Dark Industrial — Heavier documentary work
  • Indie Folk — Quieter, introspective storytelling
  • Country/Americana — Road songs and heartbreak
  • Ska Punk — Horns, energy, chaos
  • K-Pop — Sweet-and-dangerous synth-pop with Korean hooks
  • Opera — Classical vocal drama
  • Dark Cabaret — Theatrical horror and satire
  • Swing — Big band energy
  • Synth-Pop/Electronic — 80s-influenced electronic

Different stories need different sounds. A documentary about Thomas Edison’s animal experiments doesn’t sound like a Christmas ska album. A K-pop concept album about candy-coated danger doesn’t sound like a dark cabaret confession. That’s the point.

From the toolkit: The 67 genre guides include Suno-optimized style prompts, verse length limits, and lyric conventions for each genre.


Try It Yourself

The entire system is available for you to use:

Quick Start

# Install via Claude Code plugin marketplace
/plugin marketplace add bitwize-music-studio/claude-ai-music-skills
/plugin install bitwize-music@claude-ai-music-skills

# Run setup assistant
/bitwize-music:setup

# Configure your workspace
/bitwize-music:configure

# Start your first album
/bitwize-music:new-album

What You’ll Need

  • Claude Code (required) — AI collaborator and skill runner
  • Suno subscription (required) — audio generation
  • Python 3.10+ (for the MCP server) — fast state queries (auto-enabled)
  • Python 3.8+ (for mastering) — loudness/EQ processing
  • Playwright (for research) — automated document retrieval


Transparency

I’m not hiding the process. The method is part of the art.

You can see the research. You can see the sources. You can see what’s documented and what’s interpretation. You can see the actual code that powers the production.

The albums stand on their own as music, but the documentation is there for anyone who wants to dig deeper.

This is what AI collaboration looks like when you do it with intention—not as a gimmick, but as a genuine creative partnership.