What Royalty Statement Data Tells You Before You Model: A Pre-Valuation Checklist

Chapter Two

Every music catalog valuation starts with a model. But before you build a model, you need to know whether the data feeding it is complete, consistent, and trustworthy. The difference between a good acquisition and a bad one is rarely the discount rate -- it is almost always the data.

This checklist covers six fundamental data-quality checks that should be performed before any valuation model is built. Each check has a clear pass/caution/flag framework so that analysts, investors, and advisors can quickly assess whether a catalog's royalty statement data is ready for modeling -- or whether gaps need to be closed first.

Why Pre-Valuation Data Quality Matters

Valuation models are only as reliable as the inputs they consume. In music catalog acquisitions, the primary inputs are royalty statements from distributors, publishers, collection societies, and sub-publishers. These statements arrive in dozens of formats, cover different periods, use different track identifiers, and report in different currencies.

If you feed incomplete or inconsistent data into a valuation model, the model will still produce a number. It will look precise. It will be wrong.

The six checks below are designed to catch the most common and most impactful data-quality problems before they propagate into a valuation.


Check 1: Statement Coverage

Statement coverage is the most fundamental check. If a major income source is missing from the data set, the model will systematically understate (or occasionally overstate) earnings.

What to check:

The goal is to confirm that the statements collectively represent the full earnings picture of the catalog -- not just the portion that was easiest to collect.

Statement Coverage Assessment

| Check | Standard | Pass | Caution | Flag |
| --- | --- | --- | --- | --- |
| Distributor coverage | All active distribution agreements represented | All distributors present | 1-2 minor distributors absent | Major distributor absent, or more than 2 missing |
| PRO / territory coverage | Collection society statements for all material territories | All material territories present | Minor territory absent (under 5% of earnings) | Major territory absent (US / UK / EU) |
| History depth | Minimum 3 years; 5-7 years preferred | 5+ years available | 3-4 years available | Under 3 years for material distributors |
| Period continuity | No unexplained gaps over 1 consecutive period | Complete history, no gaps | 1 gap period, explained | 2+ consecutive gaps, unexplained |

A missing distributor does not necessarily mean missing income -- it may mean the income is reported through a different entity. But it does mean you need to trace the money. If a distributor representing 15% of earnings is absent from the data set, your last-twelve-months (LTM) earnings figure is understated by at least that amount.
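Once the statement inventory is in a structured form, the coverage check can be run mechanically. The sketch below is a minimal illustration in plain Python; the field names (`source`, `period`, `earnings`) and the distributor names are hypothetical, and a real implementation would work against the deal's actual statement schema.

```python
from datetime import date

# Hypothetical statement inventory: one record per (source, period) received.
statements = [
    {"source": "DistroA", "period": date(2024, 1, 1), "earnings": 120_000.0},
    {"source": "DistroA", "period": date(2024, 4, 1), "earnings": 115_000.0},
    {"source": "DistroB", "period": date(2024, 1, 1), "earnings": 30_000.0},
    # DistroB has no Q2 2024 statement -- a continuity gap to investigate.
]

# Sources we expect, taken from the deal's list of active agreements.
expected_sources = {"DistroA", "DistroB", "DistroC"}

def coverage_report(statements, expected_sources):
    """Flag missing sources and quantify each present source's earnings share."""
    present = {s["source"] for s in statements}
    total = sum(s["earnings"] for s in statements)
    share = {}
    for s in statements:
        share[s["source"]] = share.get(s["source"], 0.0) + s["earnings"]
    share = {k: v / total for k, v in share.items()}
    return {
        "missing_sources": sorted(expected_sources - present),
        "earnings_share": share,
    }

report = coverage_report(statements, expected_sources)
print(report["missing_sources"])  # DistroC has no statements at all
```

The earnings-share figure matters because a missing source representing 1% of earnings is a caution, while one representing 15% invalidates the LTM figure outright.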


Check 2: Period Alignment

Royalty statements from different sources cover different time periods. Distributors may report monthly or quarterly. Collection societies often report semi-annually with significant lag. If these periods are not properly aligned before aggregation, you will double-count some earnings and miss others.

What to check:

Period Alignment Assessment

| Check | Standard | Pass | Caution | Flag |
| --- | --- | --- | --- | --- |
| Reporting frequency mapping | Frequency documented for every source | All sources mapped and documented | 1-2 sources undocumented | Frequency unknown for a material source |
| Lag identification | Reporting lag quantified per source | Lag documented, consistent | Lag estimated but not confirmed | Lag unknown or highly variable |
| Period normalisation | All data aligned to common periods before aggregation | Full normalisation applied | Partial normalisation, minor sources excluded | Raw statements aggregated without normalisation |
| Earning vs. payment period | Accounting period used, not payment date | Earning periods used consistently | Mixed usage, documented | Payment dates used as earning periods |

Period misalignment is one of the most common causes of LTM distortion. A collection society statement received in Q1 2025 may cover earnings from H1 2024. If you assign those earnings to Q1 2025, you are inflating the most recent period and deflating the period where the income was actually generated. This creates artificial growth trends that do not exist.
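One way to guard against this is to shift each statement from its received date back to its earning period using a documented per-source lag. The sketch below assumes a fixed lag per source, which is a simplification -- real lags vary and should be confirmed against the statements themselves. The source names and lag values are invented for illustration.

```python
from datetime import date

# Hypothetical lag map: how far behind each source's statements run, in months.
# A collection society paying in Q1 2025 may be reporting mid-2024 earnings.
LAG_MONTHS = {"DistributorX": 1, "SocietyY": 9}

def earning_period(source, received):
    """Shift a received date back by the source's documented lag."""
    lag = LAG_MONTHS[source]
    month = received.month - lag
    year = received.year
    while month <= 0:
        month += 12
        year -= 1
    return date(year, month, 1)

# A society payment received March 2025 actually covers June 2024 earnings.
print(earning_period("SocietyY", date(2025, 3, 1)))  # 2024-06-01
```

Aggregating on the shifted dates rather than the received dates is what prevents the artificial growth trends described above.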


Check 3: Track Resolution (ISRC and Identifier Matching)

Track-level analysis requires that every line item in every statement be matched to a specific recording or composition. The primary identifier for recordings is the ISRC (International Standard Recording Code). For compositions, it is the ISWC. In practice, many statements arrive with partial or missing identifiers, requiring resolution through title matching, artist matching, or other heuristics.

What to check:

Track Resolution Assessment

| Check | Standard | Pass | Caution | Flag |
| --- | --- | --- | --- | --- |
| ISRC / ISWC coverage | Over 95% of earnings resolved to a valid identifier | Over 95% resolved | 85-95% resolved | Under 85% resolved by earnings |
| Unresolved earnings share | Under 5% of total earnings unresolved | Under 2% unresolved | 2-5% unresolved | Over 5% unresolved |
| Duplicate detection | Reissues and remasters identified and grouped | Duplicates flagged and grouped | Partial detection, manual review needed | No duplicate detection applied |
| Rights type separation | Sound recording and composition IDs distinct | Cleanly separated | Mostly separated, some ambiguity | Commingled or indistinguishable |

A catalog where 20% of earnings cannot be attributed to a specific track is a catalog you cannot model at the track level. You may still be able to model it at the portfolio level, but you lose the ability to identify which tracks are driving value, which are declining, and which represent concentration risk.
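The resolution rate itself is straightforward to compute once line items carry an identifier field. A minimal sketch, weighting by earnings rather than line count (a line representing $10,000 matters more than one representing $0.01); the line items and the ISRC shown are made-up placeholders:

```python
def resolution_rate(line_items):
    """Share of earnings (not line count) resolved to a valid identifier."""
    total = sum(li["earnings"] for li in line_items)
    resolved = sum(li["earnings"] for li in line_items if li.get("isrc"))
    return resolved / total if total else 0.0

# Illustrative line items with hypothetical values.
items = [
    {"title": "Track One", "isrc": "USXXX2400001", "earnings": 900.0},
    {"title": "Track Two", "isrc": None, "earnings": 100.0},
]

rate = resolution_rate(items)
# Thresholds follow the assessment table above.
status = "pass" if rate > 0.95 else "caution" if rate >= 0.85 else "flag"
print(f"{rate:.0%} resolved -> {status}")  # 90% resolved -> caution
```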


Check 4: Source Classification

Royalty income arrives from many sources, but for modeling purposes, it must be classified into standard categories: streaming, downloads, physical, sync, performance (broadcast), mechanical, and other. Each category has different growth dynamics, decay profiles, and risk characteristics. Misclassification distorts every downstream analysis.

What to check:

Source Classification Assessment

| Check | Standard | Pass | Caution | Flag |
| --- | --- | --- | --- | --- |
| Taxonomy applied | Consistent taxonomy across all sources | Standard taxonomy applied, all sources | Taxonomy applied, minor inconsistencies | No standard taxonomy, or major inconsistencies |
| Streaming sub-types | Ad-supported, premium, and video distinguished | All sub-types separated | Partial separation | All streaming income aggregated |
| Sync separation | Sync income separately identified | Sync cleanly separated | Sync partially identified | Sync commingled with other income |
| Unclassified share | Under 5% of earnings unclassified | Under 2% unclassified | 2-10% unclassified | Over 10% unclassified |

Source classification matters because a catalog earning 80% from streaming has a fundamentally different risk profile than one earning 80% from sync. Streaming income is relatively predictable and platform-dependent. Sync income is lumpy, relationship-dependent, and harder to forecast. If you model them as a single revenue stream, your confidence intervals will be meaningless.
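In practice, classification means mapping each statement's raw source labels onto the standard taxonomy and measuring what falls through. The label strings below are invented examples; real distributor statements use their own vocabularies, so the mapping table must be built per source.

```python
# Hypothetical mapping from raw statement labels to the standard taxonomy.
# Real statements use hundreds of labels; this is a toy subset.
TAXONOMY = {
    "premium stream": "streaming",
    "ad-supported stream": "streaming",
    "permanent download": "downloads",
    "synchronization fee": "sync",
    "broadcast royalty": "performance",
}

def classify(raw_label):
    """Map a raw source label to a standard category, or 'unclassified'."""
    return TAXONOMY.get(raw_label.strip().lower(), "unclassified")

def unclassified_share(line_items):
    """Earnings-weighted share of income that fails to classify."""
    total = sum(li["earnings"] for li in line_items)
    unknown = sum(li["earnings"] for li in line_items
                  if classify(li["label"]) == "unclassified")
    return unknown / total if total else 0.0

items = [
    {"label": "Premium Stream", "earnings": 800.0},
    {"label": "Synchronization Fee", "earnings": 150.0},
    {"label": "Ringtone Bundle", "earnings": 50.0},  # not in the mapping
]
print(f"{unclassified_share(items):.0%} unclassified")  # 5% unclassified
```

Measuring the unclassified share by earnings, as above, is what connects the mapping exercise back to the pass/caution/flag thresholds in the table.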


Check 5: Currency Handling

Music catalogs generate income in multiple currencies. Statements from GEMA arrive in euros, from JASRAC in yen, from MCPS in pounds. To build a valuation model, all income must be converted to a single reporting currency. How that conversion is done -- and when the exchange rate is applied -- materially affects the result.

What to check:

Currency Handling Assessment

| Check | Standard | Pass | Caution | Flag |
| --- | --- | --- | --- | --- |
| Source currency identification | Native currency documented for every source | All source currencies documented | Most documented, 1-2 assumed | Source currencies unknown for material sources |
| Conversion methodology | Consistent, documented conversion approach | Period-average rates, consistently applied | Mixed methodology, documented | No documented methodology, or inconsistent rates |
| FX vs. underlying trend separation | Currency effects quantified separately | FX impact isolated and quantified | FX impact estimated | No FX separation; growth figures include currency effects |
| Pre-converted statement detection | Pre-converted sources identified | All pre-conversions identified and noted | Some pre-conversions suspected | Unknown whether sources are pre-converted |

Currency handling is especially important for catalogs with significant European or Asian income. A catalog that appears to show 5% growth year-over-year may actually show flat or declining earnings in local currency terms, with the apparent growth entirely attributable to a weakening dollar. If your model projects that growth forward, you are implicitly betting on continued dollar weakness -- which may not be your intent.
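The decomposition is simple arithmetic once earnings are kept in their native currency. A sketch with invented numbers: flat EUR earnings converted at period-average rates appear to grow in USD purely because of the exchange-rate move.

```python
# Hypothetical EUR earnings and period-average EUR/USD rates (illustrative).
earnings_eur = {"2023": 100_000.0, "2024": 100_000.0}  # flat in local currency
avg_rate = {"2023": 1.05, "2024": 1.10}                # dollar weakened

usd = {year: eur * avg_rate[year] for year, eur in earnings_eur.items()}

local_growth = earnings_eur["2024"] / earnings_eur["2023"] - 1  # 0%
usd_growth = usd["2024"] / usd["2023"] - 1                      # ~+4.8%
fx_effect = avg_rate["2024"] / avg_rate["2023"] - 1             # ~+4.8%

# All of the apparent USD growth is currency movement, not organic growth.
print(f"local {local_growth:+.1%}, USD {usd_growth:+.1%}, FX {fx_effect:+.1%}")
```

Reporting the three figures separately, as the last line does, is the "FX vs. underlying trend separation" the assessment table asks for.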


Check 6: Rights Structure Verification

The final check -- and often the most complex -- is verifying that the rights structure is correctly represented in the data. Music rights are split across multiple dimensions: sound recording vs. composition, writer share vs. publisher share, territory-specific sub-publishing agreements, and co-ownership splits. If the data does not correctly reflect the seller's actual ownership share, the model will overstate or understate the acquirable income.

What to check:

Rights Structure Assessment

| Check | Standard | Pass | Caution | Flag |
| --- | --- | --- | --- | --- |
| Ownership share documentation | Per-track, per-rights-type ownership documented | Complete ownership schedule provided | Ownership provided at catalog level, not per-track | Ownership unclear or undocumented |
| Co-ownership splits | All co-writer / co-publisher splits reflected | All splits documented and applied | Most splits documented, minor gaps | Significant co-ownership splits missing or disputed |
| Gross vs. net distinction | Clear distinction between gross royalty and seller's net | Net share consistently derived | Gross/net distinction inconsistent across sources | No distinction; unclear what share is acquirable |
| Reversion / term limits | All reversion clauses and term limits identified | All terms documented, no near-term reversions | Some terms documented, minor reversion risk | Reversion clauses not reviewed, or near-term reversion present |
| Chain of title verification | Registered ownership verified at PROs | Verified against PRO registrations | Partially verified | No verification performed |

Rights structure errors are among the most expensive mistakes in catalog acquisitions. If the data shows a track generating $100,000 per year but the seller only owns 50% of the publishing, the acquirable income is $50,000. If the model uses the gross figure, the buyer will overpay by a factor of two on that track. Across a large catalog, even small, systematic rights-share errors compound into material valuation distortions.
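The gross-to-net adjustment is trivial arithmetic, which is exactly why it is easy to skip. A minimal sketch of the worked example above; the guard clause exists because ownership shares sometimes arrive as percentages (50) rather than fractions (0.50), and silently mixing the two is its own expensive error:

```python
def acquirable_income(gross, ownership_share):
    """Seller's net acquirable income from a gross royalty figure."""
    if not 0.0 <= ownership_share <= 1.0:
        raise ValueError("ownership share must be a fraction between 0 and 1")
    return gross * ownership_share

# The worked example from the text: $100,000 gross, 50% publishing ownership.
print(acquirable_income(100_000.0, 0.50))  # 50000.0
```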


The Complete Pre-Valuation Checklist

The table below consolidates all six checks into a single reference. For each item, the "If failed" column describes the specific impact on the valuation model.

| Category | Check | If failed: impact on modeling |
| --- | --- | --- |
| Coverage | All distributors present | LTM understated; missing income not projected |
| Coverage | PRO / territory coverage complete | Territory-level analysis unreliable; geographic risk masked |
| Coverage | 5+ years of history | Decay curve estimation unreliable; trend analysis weakened |
| Alignment | Periods normalised to common timeline | LTM distorted; artificial growth or decline trends |
| Alignment | Earning period used (not payment date) | Seasonality misattributed; lag creates phantom trends |
| Resolution | 95%+ earnings resolved to ISRC / ISWC | Track-level analysis impossible; concentration risk hidden |
| Resolution | Duplicates detected and grouped | Track counts inflated; per-track metrics diluted |
| Classification | Standard taxonomy applied | Source mix analysis unreliable; decay assumptions misapplied |
| Classification | Sync income separated | Forecast volatility understated; lumpy income smoothed |
| Classification | Under 5% unclassified income | Material earnings in unknown category; cannot model by source |
| Currency | Consistent conversion methodology | Growth rates include FX noise; trend direction may be wrong |
| Currency | FX impact isolated | Cannot distinguish organic growth from currency movement |
| Rights | Per-track ownership documented | Acquirable income overstated or understated |
| Rights | Gross vs. net distinguished | Valuation based on non-acquirable income; systematic overpayment risk |
| Rights | Reversion clauses reviewed | Model projects income beyond ownership period; terminal value inflated |
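The quantitative thresholds in this chapter's assessment tables can be encoded directly, so the checklist can be re-run mechanically whenever new statements arrive. The sketch below covers only the checks that reduce to a single number; the check names and the dictionary structure are made up for the example, and the threshold values follow the tables above.

```python
# (pass threshold, caution threshold, direction) per quantitative check.
THRESHOLDS = {
    "history_years":      (5.0, 3.0, "higher_is_better"),
    "resolution_rate":    (0.95, 0.85, "higher_is_better"),
    "unclassified_share": (0.02, 0.10, "lower_is_better"),
}

def grade(check, value):
    """Classify a measured value as pass, caution, or flag."""
    p, c, direction = THRESHOLDS[check]
    if direction == "higher_is_better":
        return "pass" if value >= p else "caution" if value >= c else "flag"
    return "pass" if value <= p else "caution" if value <= c else "flag"

print(grade("resolution_rate", 0.90))     # caution
print(grade("unclassified_share", 0.12))  # flag
```

The qualitative checks (duplicate detection, chain of title, reversion review) still need human judgment; the point of encoding the numeric ones is consistency across re-runs, not full automation.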

What to Do When Checks Fail

Failed checks do not necessarily mean you should walk away from a deal. They mean you need to either fix the data or adjust the model to reflect the uncertainty.

For coverage gaps: request the missing statements from the seller before modeling. If a distributor's statements cannot be obtained, trace whether its income is reported through another entity; if not, exclude the affected periods or apply an explicit haircut rather than projecting from an incomplete earnings base.

For alignment problems: document the reporting frequency and lag of every source, then normalise all statements to common earning periods before aggregation. Never treat payment dates as earning periods.

For resolution failures: attempt resolution through title and artist matching before falling back to portfolio-level modeling. Earnings that remain unresolved should be carried as a separate, conservatively treated bucket, not silently dropped.

For classification issues: apply the standard taxonomy to every source and re-map ambiguous labels by hand. Treat unclassified income as its own category with conservative assumptions rather than folding it into streaming.

For currency distortions: restate earnings in each source's native currency, convert with documented period-average rates, and quantify the FX effect separately so that growth trends reflect underlying earnings.

For rights structure problems: obtain a per-track, per-rights-type ownership schedule, verify it against PRO registrations, and model only the seller's net acquirable share. If reversion clauses have not been reviewed, cap the projection horizon until they are.


Conclusion

Data quality is not a box-checking exercise -- it is the foundation of every valuation decision. A clean data set does not guarantee a good investment, but a dirty data set almost guarantees a bad model.

The six checks in this article are not exhaustive, but they cover the problems that most frequently cause material valuation errors in music catalog acquisitions. Run them before you build your model. Run them again when new data arrives. And if the data fails multiple checks, fix the data before you fix the discount rate.

The model is only as good as the data underneath it. Make sure the data is ready before you ask the model to perform.