Working Paper — March 2026

The Black Hole Index: A Structural Lock-In Measurement Framework for Digital Platforms

Ivan Savich

Independent Researcher

JEL: L14, L41, D43, O33 · 96 platforms · 12 sectors · 22 references


Abstract

We introduce the Black Hole Index (BHI), a domain-agnostic measurement instrument for quantifying structural lock-in and switching costs in digital and physical platforms. The model decomposes lock-in into 11 scored parameters grouped into three categories — Capture (six variables measuring dependency depth), Escape (three variables measuring exit feasibility), and Extended (two variables capturing organizational embedding and momentum). Parameters are aggregated using a modified Constant Elasticity of Substitution (CES) function with a Milgrom-Roberts complementarity correction and combined into a dimensionless ratio B = Capture / (ε + EscapeCore × feedback), where feedback implements endogenous escape suppression via behavioral inertia. We apply BHI to a cross-section of 96 platforms across 12 sectors — AI, crypto assets, crypto infrastructure, social media, Big Tech and semiconductors, SaaS and cloud, banking, fintech, e-commerce, gaming, pharma, and critical infrastructure — generating the first systematic structural lock-in ranking of the global platform economy. Results reveal that infrastructure platforms (EDA duopoly: B ≈ 13.0; ASML: B ≈ 9.6; TSMC: B ≈ 10.9) exhibit lock-in an order of magnitude higher than attention-based platforms (Netflix: B ≈ 1.0; Dogecoin: B ≈ 0.2), despite the latter commanding larger user bases or market caps. A sensitivity analysis demonstrates ranking robustness to coefficient variation within ±30%. We further specify a dynamic extension modeling the co-evolution of structural lock-in and human skill decay. BHI is positioned as a proposed measurement instrument at a pre-empirical stage: formula specification and arithmetic verification are complete; inter-rater reliability testing and predictive validation against observable switching behavior remain pending. We discuss limitations, validation requirements, and paths toward empirical calibration.

1. Introduction

The concept of lock-in — the state in which a user, organization, or economy becomes structurally dependent on a platform and faces prohibitive switching costs — has been central to information economics since Arthur (1989) formalized increasing returns and path dependence. Farrell and Klemperer (2007) provided the definitive survey of switching costs and network effects, establishing that lock-in emerges from the interaction of sunk investment, learning costs, contractual commitments, and network externalities. Shapiro and Varian (1999) translated these ideas into managerial strategy, while Rochet and Tirole (2003) formalized two-sided market dynamics that amplify lock-in through cross-side network effects.

Despite this rich theoretical foundation, no standardized measurement instrument exists for quantifying structural lock-in across platforms and sectors. Existing approaches fall into three categories, each with significant limitations:

  1. Switching cost estimation. Burnham, Frels, and Mahajan (2003) developed a typology of switching costs (procedural, financial, relational) with survey-based measurement. These measures are platform-specific, require primary data collection, and do not aggregate into a single comparable score.

  2. Network effect proxies. Empirical work uses user counts, Herfindahl-Hirschman Index (HHI), or platform revenue share as proxies. These capture market concentration but not structural dependency — a platform with 90% market share may still face low lock-in if switching costs are negligible.

  3. Qualitative assessment. Industry analysts, regulators, and journalists use terms like "walled garden," "ecosystem lock-in," and "vendor dependency" without formal operationalization. The European Digital Markets Act (DMA) designates gatekeepers based on thresholds (45M monthly active users, €7.5B market cap) that measure scale, not structural lock-in.

BHI addresses this gap by proposing a formalized, parametric instrument that: (a) decomposes lock-in into 11 observable dimensions with anchored scoring rubrics; (b) aggregates these dimensions using theoretically motivated functional forms from production economics; (c) produces a single dimensionless score comparable across platforms, sectors, and time periods; and (d) includes a dynamic extension modeling the co-evolution of lock-in and human capability decay.

The instrument draws an explicit analogy to the Reynolds number in fluid dynamics — a dimensionless ratio of inertial forces to viscous forces that predicts transitions between laminar and turbulent flow. B is a dimensionless ratio of capture forces to escape forces. Unlike the Reynolds number, whose critical thresholds were established through extensive empirical observation, BHI's thresholds are currently definitional and await empirical validation.

We emphasize that BHI is at a pre-empirical stage. The contribution of this paper is formalization and operationalization: specifying a complete, computable model with anchored rubrics, applying it to a substantive cross-section, and transparently documenting what has been validated and what remains pending. We do not claim that BHI produces validated measurements — we claim that it produces structured, reproducible assessments that can be subjected to validation.

The remainder of this paper is organized as follows. Section 2 reviews related work on switching costs, network effects, and platform competition. Section 3 specifies the complete BHI model. Section 4 presents cross-section results for 96 platforms. Section 5 reports sensitivity analysis. Section 6 extends the model to dynamic lock-in evolution. Section 7 discusses limitations and the validation agenda. Section 8 concludes.

2. Related Work

2.1 Switching Costs and Lock-In Theory

The economic theory of switching costs begins with Klemperer (1987), who showed that even small switching costs can generate significant market power. Arthur (1989) demonstrated that competing technologies with increasing returns can lock entire economies into inferior standards through path dependence — the QWERTY keyboard being the canonical (if debated) example.

Farrell and Klemperer (2007) provide the most comprehensive survey, categorizing switching costs as: (i) compatibility and standards costs, (ii) transaction costs, (iii) learning costs, (iv) contractual costs, (v) search costs, and (vi) psychological costs. Their framework is qualitative — it identifies the types of switching costs but does not provide a measurement instrument that produces a quantitative score.

Burnham, Frels, and Mahajan (2003) developed a consumer-facing switching cost typology (procedural, financial, relational) with a multi-item survey instrument. Their approach yields reliable measurement within a specific product category (e.g., credit cards, long-distance carriers) but does not generalize across categories — the items must be redesigned for each domain.

BHI differs from these approaches in two ways. First, it is designed to be domain-agnostic — the same 11 parameters and rubrics apply to an AI assistant, a semiconductor manufacturer, and a payment network. Second, it measures structural lock-in (the depth of the dependency well) rather than experienced switching cost (the subjective cost a particular user reports). The distinction matters: structural lock-in can be high even when users report satisfaction and do not contemplate switching.

2.2 Network Effects and Two-Sided Markets

Katz and Shapiro (1985) formalized direct and indirect network externalities, establishing that platforms with network effects can tip toward winner-take-all outcomes. Rochet and Tirole (2003) extended this to two-sided markets where the platform must balance pricing and value creation across multiple user groups.

BHI incorporates network effects through parameter n (network effects, 0-10) as a multiplicative amplifier of capture: Capture = c × (1 + 0.5n) × CaptureCore × … (full specification in Section 3.4). This design choice reflects the empirical observation that network effects amplify existing lock-in rather than creating it independently — a platform with no data depth, memory, or process integration does not generate lock-in merely by having network effects.

2.3 Data Gravity and Platform Economics

McCrory (2010) introduced the concept of data gravity — the tendency of data to attract services, applications, and more data. As data accumulates in a platform, the cost of moving that data (and the services built on it) increases, creating self-reinforcing lock-in. Syrmoudis et al. (2021) empirically studied data portability between online platforms, finding that GDPR Article 20 data portability rights transfer raw data but not derived state (models, embeddings, learned preferences), leaving substantial lock-in intact.

BHI operationalizes data gravity through the interaction of data depth (d), memory (m), and portability (x). A platform with d=9, m=7, x=3 (deep data, strong memory, poor portability) generates high data gravity. The feedback mechanism — where high capture suppresses effective escape — formalizes the self-reinforcing nature of data gravity.

2.4 Automation Complacency and Skill Decay

A novel element of BHI is the incorporation of automation complacency (Parasuraman and Manzey, 2010) into the lock-in model. As users come to rely on a system, their ability to perform tasks without it degrades — what we term endogenous h-decay. Lee et al. (2025), in a study of 319 developers using AI coding assistants, documented measurable erosion of unassisted skills, providing empirical support for this mechanism.

BHI models this through the human fallback parameter (h), which decreases with use intensity, reducing escape capacity and increasing B over time. This creates the self-reinforcing loop that distinguishes structural lock-in from mere preference: the system makes itself harder to leave through the act of being used.

2.5 Measurement Instruments in Economics and Finance

BHI draws design inspiration from several established measurement instruments:

  • Altman Z-Score (Altman, 1968): A linear discriminant function combining five financial ratios to predict corporate bankruptcy. Altman derived coefficients through discriminant analysis on 66 companies. BHI shares the ratio-based, multi-factor approach but currently lacks empirically estimated coefficients — its constants are theoretically motivated working parameters.

  • Sharpe Ratio (Sharpe, 1966): Risk-adjusted return = (Return − Rf) / σ. The conceptual structure of BHI — a ratio of competing forces — mirrors the Sharpe Ratio's ratio of excess return to volatility.

  • Gini Coefficient (Gini, 1912): A dimensionless inequality measure between 0 and 1, applicable across any income distribution. BHI similarly produces a dimensionless score applicable across any platform.

The critical difference: Sharpe, Altman, and Gini all operate on objective data (returns, financial ratios, incomes). BHI operates on scored assessments using anchored rubrics, making inter-rater reliability a first-order validation requirement.

3. Model Specification

3.1 Parameters

BHI uses 11 parameters, each scored on an integer scale from 0 to 10, with five-level anchored rubrics providing concrete behavioral descriptions at scores 0, 3, 5, 7, and 10. Parameters are grouped into three categories:

Capture Variables (what holds users in):

Parameter           Symbol   Measures
Data Depth          d        Volume, richness, and uniqueness of user data the platform holds
Memory              m        Cross-session persistence and accumulated user model
Action              a        Breadth of actions the platform can execute on the user's behalf
Process Centrality  p        Degree to which the platform is embedded in critical workflows
Network Effects     n        Strength of direct and cross-side network externalities
Closeness           c        Frequency and intimacy of user-platform interaction

Escape Variables (what enables exit):

Parameter         Symbol   Measures
Portability       x        Ease of exporting data and state to a competing platform
Substitutability  s        Availability and quality of functional alternatives
Human Fallback    h        User's ability to perform tasks without the platform

Extended Variables (amplifiers):

Parameter             Symbol   Measures
Organizational Depth  o        Degree of organizational restructuring around the platform
Momentum              t        Rate of capability improvement relative to competitors

3.2 Scoring Rubrics

Each parameter uses a five-anchor rubric. We illustrate with two parameters that span the capture-escape spectrum:

Process Centrality (p):

Score   Anchor          Description
0       Not embedded    Not part of any workflow. Used recreationally or experimentally.
3       Ad hoc          Used occasionally for specific tasks. Easy to do without.
5       1-2 workflows   Integrated into one or two regular workflows. Absence noticed within hours.
7       Central hub     Most daily work tasks touch the system. Central routing point for decisions.
10      Critical path   Business processes cannot complete without this system. Downtime equals revenue loss.

Substitutability (s):

Score   Anchor       Description
0       None         No functional equivalent exists. Monopoly or unique technology.
3       Major loss   Alternatives exist but with significant capability loss. 50%+ functionality gap.
5       Comparable   Comparable alternatives available. Switching requires effort but similar result.
7       Drop-in      Near drop-in replacements. Switching cost is re-learning, not capability loss.
10      Commodity    Fully commoditized layer. Dozens of equivalent options. Switching is trivial.

The complete rubric set for all 11 parameters is provided in the appendix and published at blackholeindex.com/methodology.

3.3 Aggregation: Modified CES with Complementarity Correction

Parameters are normalized to [0, 1] by dividing each score by 10. The core aggregation uses a modified Constant Elasticity of Substitution (CES) function.

CaptureCore:

CaptureCore = ((√d + √m + √a + √p) / 4)² × SynergyBoost

The inner function ((√d + √m + √a + √p) / 4)² is a CES aggregator with elasticity of substitution σ = 2 (substitution parameter β = 0.5). The square-root transformation and squaring correspond to the Solow Case 2 of the Arrow-Chenery-Minhas-Solow CES family. Empirical estimates for industry-level CES elasticity range from 1.4 to 3.6 with median approximately 2.2 (Antras, 2004; Klump, McAdam, and Willman, 2007). We use σ = 2 as a defensible central estimate.

The economic interpretation: data depth and memory are partial substitutes in creating lock-in. A platform with d=8, m=0 creates less lock-in than d=4, m=4, because data without memory means each session starts from near-zero context. The same logic applies to action and process centrality.

SynergyBoost (Milgrom-Roberts Complementarity Correction):

σ_K = √(d × m)       knowledge synergy
σ_O = √(a × p)       operational synergy
SynergyBoost = 1 + 0.3 × (σ_K + σ_O) / 2

Milgrom and Roberts (1990, 1995) established that complementary activities exhibit supermodular payoffs — the return to one activity increases in the level of complementary activities. BHI captures two pairwise complementarities: knowledge synergy between data depth and memory (knowing more is more valuable when you remember what you know), and operational synergy between action capability and process centrality (executing actions is more lock-in-creating when the platform is the workflow hub).

SynergyBoost is multiplicative, meaning this is not a pure CES function in the Arrow-Chenery-Minhas-Solow sense. It is a CES core with a complementarity correction.
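As a concrete illustration, the CaptureCore computation above can be sketched in a few lines of Python. This is a minimal sketch of the stated formulas, not the published scoring tool; the example scores are arbitrary:

```python
from math import sqrt

def capture_core(d, m, a, p):
    """CES aggregator (sigma = 2) over the four core capture scores
    (0-10 scale), with the Milgrom-Roberts synergy correction."""
    d, m, a, p = (v / 10 for v in (d, m, a, p))  # normalize to [0, 1]
    base = ((sqrt(d) + sqrt(m) + sqrt(a) + sqrt(p)) / 4) ** 2
    sigma_k = sqrt(d * m)                        # knowledge synergy
    sigma_o = sqrt(a * p)                        # operational synergy
    return base * (1 + 0.3 * (sigma_k + sigma_o) / 2)

# Partial substitutability in action: a balanced profile out-scores a
# lopsided one with the same total, as argued for d=8, m=0 vs. d=4, m=4.
print(capture_core(8, 0, 5, 5) < capture_core(4, 4, 5, 5))  # True
```

The square-root-then-square structure is what makes balanced score profiles dominate lopsided ones of equal sum.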

EscapeCore:

EscapeCore = ((√x + √s + √h) / 3)²

The same CES form applied to the three escape variables.

3.4 The B Formula

Full Capture:

Capture = c × (1 + 0.5n) × CaptureCore × (1 + 0.5o) × (1 + 0.3t)

Closeness (c) acts as a base multiplier — zero interaction means zero lock-in regardless of structural depth. Network effects (n), organizational depth (o), and momentum (t) are multiplicative amplifiers with theoretically motivated coefficients:

  • α = 0.5 (network amplification): following Katz-Shapiro (1985), network effects roughly double lock-in at maximum.
  • ω = 0.5 (organizational multiplier): following Granovetter (1978), organizational embedding creates threshold-cascade effects.
  • τ = 0.3 (momentum multiplier): following Arthur (1989), increasing returns amplify lock-in at a lower rate than network or organizational effects.

Feedback Coefficient:

feedback = max(0.3, 1 − 0.35 × CaptureBase)

where CaptureBase = CaptureCore before SynergyBoost multiplication. This implements endogenous escape suppression: as capture deepens, the effective contribution of escape to the denominator decreases. The coefficient δ = 0.35 reflects the empirical finding that approximately one-third of users exhibit status quo bias in platform switching contexts. This is consistent with the broader behavioral economics literature: Samuelson and Zeckhauser (1988) established status quo bias as a robust phenomenon; Kahneman, Knetsch, and Thaler (1991) documented endowment effects of similar magnitude; Burnham, Frels, and Mahajan (2003) measured procedural switching costs at 30-40% of total switching barriers. The value 0.35 is a theoretically motivated working parameter subject to empirical calibration. The floor at 0.3 prevents escape from becoming negligible.

Note: δ = 0.35 is not directly estimated from a single study. It is a central estimate from the range of behavioral inertia findings in the switching cost literature (approximately 0.25-0.45). Sensitivity analysis in Section 5 demonstrates that rankings are robust to variation within this range.

The Index:

B = Capture / (ε + EscapeCore × feedback)

where ε = 0.1 is a floor parameter preventing division by zero and bounding B at a finite maximum.
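The full pipeline from raw scores to B is then directly computable. The sketch below is a transcription of the formulas in Sections 3.3-3.4, not the published implementation; the example uses the WeChat score vector reported in Section 4.3:

```python
from math import sqrt

def bhi(d, m, a, p, n, c, x, s, h, o, t, eps=0.1, delta=0.35):
    """Black Hole Index B from the 11 raw 0-10 scores."""
    d, m, a, p, n, c, x, s, h, o, t = (
        v / 10 for v in (d, m, a, p, n, c, x, s, h, o, t))
    capture_base = ((sqrt(d) + sqrt(m) + sqrt(a) + sqrt(p)) / 4) ** 2
    boost = 1 + 0.3 * (sqrt(d * m) + sqrt(a * p)) / 2   # SynergyBoost
    capture = (c * (1 + 0.5 * n) * capture_base * boost
               * (1 + 0.5 * o) * (1 + 0.3 * t))
    escape_core = ((sqrt(x) + sqrt(s) + sqrt(h)) / 3) ** 2
    feedback = max(0.3, 1 - delta * capture_base)       # escape suppression
    return capture / (eps + escape_core * feedback)

# WeChat scores from Section 4.3: d=9, m=9, a=9, p=10, n=9, c=10,
# x=1, s=1, h=1, o=9, t=5
print(round(bhi(9, 9, 9, 10, 9, 10, 1, 1, 1, 9, 5), 1))  # 17.0

# Theoretical maximum of Section 3.6: capture and extended at 10, escape at 0
print(round(bhi(10, 10, 10, 10, 10, 10, 0, 0, 0, 10, 10), 2))
```

Note that feedback is computed from CaptureBase (before SynergyBoost), per Section 3.4; applying it to the boosted core would change mid-range scores.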

3.5 Interpretation Zones

Range           Zone              Interpretation
B < 0.4         Useful Tool       System is genuinely optional. Users can leave with minimal cost.
0.4 ≤ B < 0.7   Growing Gravity   Dependency forming. Switching feasible but increasingly inconvenient.
0.7 ≤ B < 1.3   Transition Zone   Gray zone (cf. Altman Z-Score methodology). Outcome depends on trajectory.
1.3 ≤ B < 2.5   Event Horizon     Leaving is expensive and gets more expensive over time.
B ≥ 2.5         Black Hole        Structural lock-in. Leaving requires organizational restructuring.

The threshold at B = 1 is definitional: the point where capture equals escape. Whether B = 1 corresponds to an observable behavioral transition in platform switching is an empirical question not yet tested.
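For reproducibility, the zone boundaries translate into a trivial classifier (a convenience sketch, not part of the model itself; the example B values are taken from Section 4):

```python
def bhi_zone(b):
    """Map a B score to the interpretation zones of Section 3.5."""
    if b < 0.4:
        return "Useful Tool"
    if b < 0.7:
        return "Growing Gravity"
    if b < 1.3:
        return "Transition Zone"
    if b < 2.5:
        return "Event Horizon"
    return "Black Hole"

print(bhi_zone(0.2))   # Dogecoin: Useful Tool
print(bhi_zone(1.0))   # Netflix: Transition Zone
print(bhi_zone(13.0))  # EDA duopoly: Black Hole
```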

3.6 Theoretical Maximum

With all parameters at their extremes (capture variables = 10, escape variables = 0), B_max ≈ 38.03. This is high for a ratio index but mathematically correct given the multiplicative structure. In practice, no platform approaches this theoretical maximum — the highest observed score in our cross-section is approximately 17 (WeChat).

3.7 Measurement Scale Considerations

The 0-10 scoring rubric is treated as approximately interval-scale for operational computation. Anchored rubrics with concrete behavioral descriptions at each level are designed to create consistent intervals between levels. However, strict interval-scale validity has not been formally established through Rasch modeling or Item Response Theory calibration. Until such calibration is performed, users should interpret small differences in B (less than 0.5) with caution, as they may fall within the measurement uncertainty of the ordinal-to-interval approximation.

4. Cross-Section Results

4.1 Sample and Scoring Procedure

We applied BHI to 96 platforms across 12 sectors. Platform selection followed three criteria: (i) global significance or sector dominance, (ii) availability of public information sufficient to score all 11 parameters, and (iii) diversity across sectors, business models, and lock-in profiles.

Scoring was conducted by a single evaluator using the anchored rubrics, with written rationale for each parameter score. This is an acknowledged limitation — inter-rater reliability requires independent scoring by multiple evaluators (see Section 7). All scores, rationales, and computed B values are published at blackholeindex.com/rankings.

4.2 Sector Overview

Table 1: Sector Summary Statistics

Sector                            Platforms   B Range      Median B   Highest B Platform
AI Assistants                     8*          0.2 – 4.7    ~1.7       Microsoft Copilot (4.7)
Crypto Assets                     10          0.2 – 4.8    ~1.8       Solana (4.8)
Crypto Infrastructure             8           0.6 – 3.5    ~1.7       Binance (3.5)
Social Media                      10          1.3 – 17.0   ~3.4       WeChat (17.0)
Big Tech & Semiconductors         8           4.6 – 10.9   ~7.5       TSMC (10.9)
SaaS & Cloud                      8           0.9 – 6.9    ~4.0       AWS (6.9)
Banks & Financial Infrastructure  8           2.2 – 7.2    ~4.7       Bloomberg Terminal (7.2)
Fintech & Payments                8           1.1 – 8.8    ~3.3       Visa (8.8)
Gaming & Entertainment            8           0.7 – 6.1    ~3.8       Apple App Store (6.1)
E-Commerce                        8           1.7 – 7.8    ~3.4       Amazon Marketplace (7.8)
Pharma & Biotech                  6           1.9 – 5.8    ~4.1       Illumina (5.8)
Critical Infrastructure           6           3.5 – 13.0   ~4.5       Synopsys/Cadence EDA (13.0)

* AI sector includes 6 real platforms + 2 reference scenarios (AI-OS theoretical and generic Chat LLM). Range shown excludes theoretical AI-OS (B ≈ 7.7).

4.3 Key Findings

Finding 1: Infrastructure compounds; attention decays.

The highest B scores belong not to consumer-facing platforms with billions of users, but to infrastructure platforms that underpin entire industries:

  • Synopsys/Cadence EDA duopoly: B ≈ 13.0 (p=10, s=1, h=1 — no chip design without EDA)
  • TSMC: B ≈ 10.9 (p=10, s=1, h=2 — no advanced chips without TSMC)
  • ASML: B ≈ 9.6 (p=10, s=1, h=1 — no EUV lithography without ASML)
  • Visa: B ≈ 8.8 (p=9, n=9, s=2, h=2 — purest network effect monopoly in financial services)
  • NVIDIA: B ≈ 8.3 (p=9, n=9, s=2, t=9 — CUDA ecosystem lock-in)

Meanwhile, platforms with massive user bases but commoditized offerings score low:

  • Netflix: B ≈ 1.0 (325M subscribers, but no owned content, no social graph, no accumulated assets)
  • Dogecoin: B ≈ 0.2 ($15B market cap, near-zero structural lock-in)

Finding 2: Lock-in is driven by escape difficulty, not capture volume.

The multiplicative structure of BHI reveals that B is most sensitive to escape variables when they are low. Reducing substitutability from s=3 to s=1 has a larger effect on B than increasing any single capture variable by 2 points. This reflects the economic intuition that monopoly (no alternatives) creates more lock-in than incremental improvements to an already-embedded platform.

The EDA duopoly illustrates this: with x=1, s=1, h=1 (effectively zero escape capacity), even moderate capture scores generate extreme B values.

Finding 3: The WeChat anomaly.

WeChat (B ≈ 17.0) registers the highest B in the cross-section despite not being traditional infrastructure. Its scores: d=9, m=9, a=9, p=10, n=9, c=10, x=1, s=1, h=1, o=9, t=5. WeChat functions as digital infrastructure for 1.3 billion Chinese users — identity, payments, government services, commerce, communication all route through a single application. The "super-app" model achieves infrastructure-level lock-in through comprehensiveness rather than through controlling a physical chokepoint.

Finding 4: Crypto exhibits the widest B dispersion.

Cryptocurrency platforms span the full B range from near-zero (Dogecoin: B ≈ 0.2) to infrastructure-level (Solana: B ≈ 4.8, Ethereum: B ≈ 4.6). This reflects the fundamental divide between: (a) tokens that are pure speculative instruments with perfect substitutability, and (b) smart contract platforms that function as programmable infrastructure with developer ecosystems, DeFi protocols, and tooling dependencies.

Solana (B ≈ 4.8) scores slightly higher than Ethereum (B ≈ 4.6) despite younger age, driven by explosive momentum (t=9) and low developer portability (x=2 — Rust/Anchor code does not transfer to EVM).

Finding 5: B explains retention better than market cap.

Several platforms exhibit striking divergence between market capitalization and structural lock-in:

  • MercadoLibre (B ≈ 5.9, market cap $88B) versus PayPal (B ≈ 1.1, market cap $42B): MELI combines marketplace, payments, logistics, and credit into a unified lock-in structure across Latin America. PayPal faces commoditization as Apple Pay, Google Pay, and Shop Pay offer drop-in alternatives.

  • SWIFT (B ≈ 6.8, cooperative) controls $150T+ in daily message flow through 11,000+ institutions. Disconnection from SWIFT is the financial equivalent of oxygen deprivation — as Iran and Russia have demonstrated.

  • Nubank (B ≈ 5.2, market cap $68-72B) is the first and only bank for 100M+ underbanked Latin Americans. Its lock-in is existential — users switching from Nubank are switching away from financial inclusion itself.

4.4 Sector Deep Dives

AI Assistants (8 platforms):

The AI sector demonstrates rapid divergence in lock-in formation. Standalone assistants cluster in the Transition Zone, while enterprise-embedded platforms pull away:

  • Microsoft Copilot (B ≈ 4.7): Driven by organizational embedding (o=9) and enterprise data access (d=9) through M365 integration. Copilot itself may be interchangeable, but M365 infrastructure is not.
  • Gemini (B ≈ 2.8): Google ecosystem integration (d=8, p=8, n=9) creates moderate lock-in. Higher than standalone assistants due to cross-product integration with Gmail, Docs, and Android.
  • ChatGPT (B ≈ 2.0): Moderate lock-in from conversation history (m=7), strong API ecosystem (n=9), and process integration (p=6). But high substitutability (s=7) caps B.
  • Claude (B ≈ 1.5) and Grok (B ≈ 1.6): Transition Zone — growing but still largely substitutable.
  • Perplexity (B ≈ 0.7) and generic Chat LLM (B ≈ 0.2): low structural lock-in, easily replaceable.

Big Tech & Semiconductors (8 platforms):

This sector contains four platforms with B > 5, forming the heaviest gravitational wells in the digital economy:

  • TSMC (B ≈ 10.9): Fabricates 92% of the world's most advanced chips. No alternative exists at ≤5nm nodes. Samsung Foundry and Intel Foundry Services are generations behind.
  • ASML (B ≈ 9.6): Sole manufacturer of EUV lithography machines required for ≤7nm fabrication. 100% monopoly on a chokepoint technology. Annual output of ~55 EUV systems with 3+ year backlog.
  • NVIDIA (B ≈ 8.3): CUDA ecosystem with millions of developers, 15 years of optimized libraries, training infrastructure dependency (s=2, x=2). AMD ROCm and Intel oneAPI are functional alternatives but ecosystem maturity gaps persist.
  • Apple (B ≈ 5.2): Hardware + software + services + identity integration. 2.2B active devices. iMessage, AirDrop, and Handoff create social switching costs.

Banks & Financial Infrastructure (8 platforms):

Financial infrastructure exhibits generally high B at its core, reflecting decades of accumulated institutional dependencies:

  • Bloomberg Terminal (B ≈ 7.2): d=9, m=7, a=8, p=9, n=9, c=9, x=2, s=3, h=4. 325,000+ terminals. $10B+ annual revenue from subscriptions that firms cannot cancel because Bloomberg has become the operating system of financial markets. MSG network with 300,000+ users is a communications monopoly within finance.
  • SWIFT (B ≈ 6.8): Not a company — a chokepoint. 11,000+ institutions, $150T+ daily. CIPS and SPFS are orders of magnitude smaller.
  • JPMorgan (B ≈ 6.1): Corporate banking relationships spanning decades. Lending, treasury, custody, payments, trading — all integrated. Switching primary bank = moving every financial relationship simultaneously.

Fintech & Payments (8 platforms):

  • Visa (B ≈ 8.8): Purest network effect monopoly in financial services. 4B+ cards, 100M+ merchant locations. Both sides cannot leave independently.

E-Commerce (8 platforms):

Amazon Marketplace (B ≈ 7.8) demonstrates bilateral lock-in at unprecedented scale: sellers depend on 200M+ Prime members (91% renewal rate) and non-portable reviews; buyers depend on same-day delivery infrastructure, accumulated purchase history, and the convenience tax of Prime membership.

MercadoLibre (B ≈ 5.9) replicates the Amazon pattern in Latin America with an additional fintech layer — Mercado Pago processes $278B in payment volume (4× marketplace GMV), own cargo fleet delivers 95% of packages, and a $11B credit portfolio deepens dependency.

4.5 Cross-Section Summary

The cross-section reveals a fundamental pattern: the deepest structural lock-in in the global economy does not belong to the companies with the most users or the highest market caps. It belongs to those that control chokepoints.

ASML and Synopsys/Cadence make chip manufacturing possible. TSMC translates that into compute. Visa and SWIFT make money move. Bloomberg makes financial markets function. Illumina makes genomics possible. Deere makes precision agriculture work.

The weakest structural lock-in, despite massive scale, belongs to attention platforms and commodity services: Dogecoin (B ≈ 0.2), Netflix (B ≈ 1.0), and PayPal (B ≈ 1.1).

Infrastructure compounds. Attention decays.

5. Sensitivity Analysis

5.1 Methodology

We conduct a Sobol-style global sensitivity analysis (Saltelli et al., 2008) to assess the robustness of BHI rankings to parameter and coefficient uncertainty. Two types of sensitivity are examined:

  1. Input sensitivity: How does B change when individual platform scores vary by ±1 point?
  2. Coefficient sensitivity: How do rankings change when the six structural constants (α, ω, τ, δ, ε, and the synergy coefficient 0.3) vary within ±30%?

5.2 Input Sensitivity

For each of the 11 parameters, we compute ∂B/∂parameter for a reference platform (Microsoft Copilot, B ≈ 4.7). Results confirm monotonicity: increasing any capture variable increases B; increasing any escape variable decreases B. This is a necessary (but not sufficient) condition for construct validity.

The sensitivity ranking reveals that B is most sensitive to:

  1. Substitutability (s): Low s values create the steepest B gradients.
  2. Process centrality (p): Through both CaptureCore and the operational synergy term.
  3. Closeness (c): As a base multiplier, c has first-order impact.

B is least sensitive to momentum (t), which enters as a multiplicative amplifier with a small coefficient (0.3).
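The one-at-a-time input sensitivity described above can be reproduced with central finite differences over the B formula. The sketch below re-implements the Section 3 formulas; the reference score vector is illustrative (it is not the published Copilot scoring), so the printed gradients are indicative only:

```python
from math import sqrt

def bhi(scores, eps=0.1, delta=0.35):
    """B from a dict of the 11 raw 0-10 scores (Section 3 formulas)."""
    v = {k: val / 10 for k, val in scores.items()}
    base = ((sqrt(v['d']) + sqrt(v['m']) + sqrt(v['a']) + sqrt(v['p'])) / 4) ** 2
    core = base * (1 + 0.3 * (sqrt(v['d'] * v['m']) + sqrt(v['a'] * v['p'])) / 2)
    capture = (v['c'] * (1 + 0.5 * v['n']) * core
               * (1 + 0.5 * v['o']) * (1 + 0.3 * v['t']))
    escape = ((sqrt(v['x']) + sqrt(v['s']) + sqrt(v['h'])) / 3) ** 2
    feedback = max(0.3, 1 - delta * base)
    return capture / (eps + escape * feedback)

# Illustrative reference vector (NOT the published Copilot scores)
ref = dict(d=9, m=6, a=7, p=8, n=7, c=8, x=3, s=4, h=4, o=9, t=7)

# Central difference over a +/-1-point perturbation of each parameter
for k in sorted(ref):
    lo = dict(ref, **{k: max(0, ref[k] - 1)})
    hi = dict(ref, **{k: min(10, ref[k] + 1)})
    print(f"{k}: dB ~ {(bhi(hi) - bhi(lo)) / 2:+.3f}")
```

Monotonicity is visible directly in the signs: capture variables print positive differences, escape variables negative ones.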

5.3 Coefficient Sensitivity

We vary each of the six structural constants within ±30% of their default values and re-compute B for all 96 platforms. Key findings:

  • Ranking robustness: The top-5 and bottom-5 platforms are invariant across all coefficient combinations. The EDA duopoly, TSMC, and ASML remain the highest-B platforms; Dogecoin, Cardano, and XRP remain the lowest.
  • Mid-range sensitivity: Platforms in the Transition Zone (B = 0.7-1.3) are most sensitive to coefficient variation. A ±30% change in δ (feedback coefficient) can move a platform from Transition to Growing Gravity or to Event Horizon.
  • Microsoft Copilot vs. ChatGPT ordering: The ranking Copilot > ChatGPT holds for δ in the range [0.20, 0.50], which encompasses the empirically motivated value of 0.35. Only at extreme δ values outside this range does the ordering reverse.

5.4 Epsilon Floor Sensitivity

The ε = 0.1 floor prevents B from approaching infinity when escape variables are near zero. Varying ε from 0.05 to 0.20:

  • ε = 0.05: B_max increases by ~40% for low-escape platforms (EDA, ASML).
  • ε = 0.20: B_max decreases by ~25%.
  • Rankings are unaffected — the ordering is invariant to ε within this range.

5.5 Interpretation

Sensitivity analysis demonstrates that BHI rankings are robust — they do not depend on precise coefficient values. However, robustness of ranking does not constitute validation of the measurement. Coefficient stability means the model is not fragile; it does not mean the model is correct. Empirical calibration against observed switching costs remains necessary to move from working parameters to estimated values.

6. Dynamic Model

6.1 Motivation

The static BHI (B_struct) measures the depth of the structural dependency well at a point in time. However, lock-in is inherently dynamic: it deepens with use, responds to competitive entry, and interacts with human capability decay. The dynamic extension models these processes.

6.2 State Equation

We model the evolution of realized lock-in (B_state) as:

dB_state/dt = λ(B_struct − B_state) + k · max(0, B_state − 1) · (1 − B_state/B_max) + u − r

Term 1: Well attraction. B_state converges toward B_struct at rate λ = 0.18. This captures the empirical observation that users do not immediately realize the full lock-in potential of a platform — it takes time for data to accumulate, workflows to embed, and organizational dependencies to form.

Term 2: Autocatalysis with saturation. Following Arthur (1989), self-reinforcing lock-in activates only above B = 1 (the threshold where capture exceeds escape). The growth rate k = 0.15 is bounded by logistic saturation (1 − B_state/B_max), preventing unbounded growth. This is a single multiplicative term, not two separate additive terms.

Term 3: Net external forcing. Investment in capture (u = 0.08) minus competitive erosion (r = 0.05) represents the net effect of platform strategy and market dynamics.
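The three terms can be integrated with a forward-Euler sketch, using the default rates above, B_max = 38.03, and an illustrative B_struct = 4.73 (the Copilot preset). One property worth noting: because Terms 2 and 3 are both positive once B_state exceeds 1, the long-run equilibrium settles above B_struct, not at it.

```typescript
// Forward-Euler integration of the state equation:
//   dB/dt = lambda*(B_struct - B) + k*max(0, B - 1)*(1 - B/B_max) + u - r
function simulateBState(
  bStruct: number, b0: number,
  lambda = 0.18, k = 0.15, u = 0.08, r = 0.05, bMax = 38.03,
  dt = 0.05, steps = 8000,
): number {
  let b = b0;
  for (let i = 0; i < steps; i++) {
    b += dt * (
      lambda * (bStruct - b) +                   // Term 1: well attraction
      k * Math.max(0, b - 1) * (1 - b / bMax) +  // Term 2: bounded autocatalysis
      (u - r)                                    // Term 3: net external forcing
    );
  }
  return b;
}

// Illustrative B_struct = 4.73, starting from shallow realized lock-in.
// The trajectory overshoots B_struct because autocatalysis and net
// forcing remain positive above B = 1.
const bLongRun = simulateBState(4.73, 0.5);
```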

6.3 Endogenous h-Decay

The human fallback parameter evolves according to:

dh/dt = −φ · U · h + ψ · (1 − h)

where:

  • U = closeness × process_centrality (usage intensity)
  • φ = 0.04 (erosion speed, motivated by Parasuraman and Manzey, 2010)
  • ψ = 0.005 (recovery rate)

By design, ψ ≪ φ: skill decay from automation is faster than skill recovery. This asymmetry is supported by the cognitive automation literature and by Lee et al. (2025), who found that developers using AI coding assistants exhibited measurable declines in unassisted coding performance within months.
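The equation has a closed-form fixed point, h* = ψ / (ψ + φU), toward which h converges from any starting level. A short Euler sketch, using an illustrative high-intensity platform with closeness = process_centrality = 0.9 (so U = 0.81):

```typescript
// Forward-Euler integration of dh/dt = -phi*U*h + psi*(1 - h).
function simulateH(h0: number, U: number, phi = 0.04, psi = 0.005,
                   dt = 0.1, steps = 20000): number {
  let h = h0;
  for (let i = 0; i < steps; i++) {
    h += dt * (-phi * U * h + psi * (1 - h)); // erosion term vs. slow recovery term
  }
  return h;
}

const U = 0.9 * 0.9;                        // closeness * process_centrality
const hStar = 0.005 / (0.005 + 0.04 * U);   // analytic fixed point, ~0.13
const hLongRun = simulateH(0.8, U);
```

With these defaults h* ≈ 0.13: even a user who starts with a strong fallback (h = 0.8) erodes toward severe dependence under sustained high-intensity use.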

6.4 The Self-Reinforcing Loop

The coupling between B_state and h creates a self-reinforcing loop:

  1. Using the system degrades h (human fallback decreases).
  2. Lower h reduces EscapeCore (denominator shrinks).
  3. Lower escape increases B_struct.
  4. Higher B_struct pulls B_state deeper (Term 1).
  5. Deeper B_state means more intensive use (U increases).
  6. More intensive use accelerates h decay (return to step 1).

This verbal description hypothesizes threshold-activated self-reinforcement in the coupled (B_state, h) system. Formal verification requires phase-plane analysis: computing the Jacobian matrix at stationary points, determining eigenvalues, and classifying the stability of each equilibrium. This analysis will determine whether the system exhibits bistability (two stable attractors separated by a threshold), hysteresis (path-dependent switching between regimes), or saturating convergence to a single attractor. This formal analysis has not yet been conducted and is identified as a priority for future work.

6.5 Regulatory Scenarios

The dynamic model can simulate different regulatory environments by adjusting parameters:

Scenario | k | u | r | B_max | Interpretation
No regulation | 0.20 | 0.10 | 0.02 | 30 | Unconstrained platform growth
EU DMA regime | 0.15 | 0.06 | 0.12 | 12 | Interoperability mandates, data portability
Open competition | 0.10 | 0.05 | 0.08 | 15 | Moderate regulation, healthy competition

Under the DMA regime, higher erosion (r = 0.12) and lower B_max (12) constrain platform lock-in growth and reduce long-term equilibrium B values. The model provides a framework for simulating regulatory impact on structural lock-in, though the specific parameter values for each regime require empirical calibration.
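The three scenarios can be compared by integrating the Section 6.2 state equation with λ = 0.18, the scenario parameters above, and an illustrative B_struct = 4.73 (the starting point B_state = 1 is arbitrary):

```typescript
// Long-run B_state under the three regulatory scenarios, integrating
// dB/dt = lambda*(B_struct - B) + k*max(0, B-1)*(1 - B/B_max) + u - r
// by forward Euler. B_struct = 4.73 is an illustrative input.
interface Scenario { k: number; u: number; r: number; bMax: number }

function longRunB(bStruct: number, sc: Scenario, dt = 0.05, steps = 10000): number {
  let b = 1.0; // common arbitrary starting point
  for (let i = 0; i < steps; i++) {
    b += dt * (0.18 * (bStruct - b)
      + sc.k * Math.max(0, b - 1) * (1 - b / sc.bMax)
      + sc.u - sc.r);
  }
  return b;
}

const noRegulation: Scenario = { k: 0.20, u: 0.10, r: 0.02, bMax: 30 };
const euDma: Scenario       = { k: 0.15, u: 0.06, r: 0.12, bMax: 12 };
const openComp: Scenario    = { k: 0.10, u: 0.05, r: 0.08, bMax: 15 };

const bNoReg = longRunB(4.73, noRegulation); // highest long-run equilibrium
const bDma = longRunB(4.73, euDma);
const bOpen = longRunB(4.73, openComp);
```

At this illustrative B_struct the DMA and open-competition equilibria land close together; in this sketch the DMA's lower B_max binds mainly for platforms whose structural pull is far higher.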

7. Limitations and Validation Agenda

7.1 Current Validation Status

Component | Status | Description
Arithmetic Verification | Complete | 8 reference presets reproduced to 3 decimal places across independent implementations. Deterministic formula: same inputs always produce same output.
Sensitivity Analysis | Implemented | Sobol-style parameter sweep available. Rankings robust to ±30% coefficient variation.
Inter-Rater Reliability | Pending | No independent evaluators have yet scored platforms. Target: ICC > 0.75 across 3-5 evaluators on 15-20 platforms.
Predictive Validation | Pending | Correlation between B and observable outcomes (churn, NRR, migration cost) not tested.
Dynamic System Analysis | Provisional | Phase-plane analysis with Jacobian stability assessment not yet conducted.

7.2 Known Limitations

1. Single-evaluator scoring. All 96 platforms were scored by a single evaluator. This is the most critical methodological limitation. Anchored rubrics are designed to reduce subjectivity, but they do not eliminate it. Inter-rater reliability (ICC > 0.75; Cicchetti, 1994) across 3-5 independent evaluators is the minimum standard for a credible measurement instrument. Krippendorff's alpha provides an alternative reliability metric with established thresholds (α > 0.667 for tentative conclusions, α > 0.8 for reliable conclusions).

2. Equal weights. All capture parameters are weighted equally in the CES aggregator. This is a defensible default (no theoretical basis for asymmetric weighting exists) but may not reflect empirical reality. Principal component analysis or factor analysis on multi-evaluator scoring data could reveal that some parameters contribute more to observed lock-in than others.

3. Correlated inputs. The 11 parameters are likely correlated in practice — platforms with high data depth (d) tend to have high process centrality (p). SynergyBoost models pairwise complementarity for (d, m) and (a, p) but does not address the broader covariance structure. Factor analysis on empirical scoring data is needed to determine whether the 11 parameters represent independent dimensions or can be reduced to fewer latent factors. Until such analysis is performed, double-counting of shared latent causes is a methodological risk.

4. Working parameters, not estimated constants. The six structural constants (α, λ, ω, τ, δ, ε) are theoretically grounded but not empirically calibrated. Future work should derive these from observed switching cost data — for example, estimating δ from panel data on platform switching behavior rather than relying on the status quo bias literature as a proxy.

5. Ordinal-to-interval approximation. The 0-10 rubric is treated as interval-scale for arithmetic operations (averaging, square roots). Strict interval-scale validity requires Rasch modeling or IRT calibration. Until performed, differences in B smaller than 0.5 should be interpreted with caution.

6. Logical tensions in parameter space. Certain parameter combinations are internally contradictory. For example, high organizational depth (o ≥ 7) combined with high human fallback (h ≥ 7) is suspect — if the organization has restructured around a platform, it is unlikely that individuals can easily fall back to manual processes. The automated validation tool flags such contradictions, but they require evaluator judgment to resolve.

7. B_max = 38.03. The theoretical maximum is high for a ratio index. In practice, no observed platform exceeds B ≈ 17, suggesting that the multiplicative structure may overweight extreme cases. Future versions may consider a log-transformation or saturation function for very high B values.

7.3 Validation Roadmap

Phase 1 (Immediate): Inter-Rater Reliability

  • Recruit 3-5 independent evaluators (academics, industry analysts, platform economists).
  • Score a calibration set of 15-20 platforms spanning all 12 sectors.
  • Compute ICC and Krippendorff's alpha. Target: ICC > 0.75.
  • Identify parameters with lowest agreement for rubric refinement.
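The ICC target can be made operational with a few lines of code. The sketch below implements ICC(2,1), the two-way random-effects, absolute-agreement form (Shrout-Fleiss convention) that fits a design in which every evaluator scores every platform; the helper name and toy matrices are illustrative:

```typescript
// ICC(2,1): two-way random-effects, single rater, absolute agreement.
// Rows = platforms (subjects), columns = evaluators (raters).
function icc21(ratings: number[][]): number {
  const n = ratings.length;
  const k = ratings[0].length;
  const grand = ratings.flat().reduce((a, b) => a + b, 0) / (n * k);
  const rowMeans = ratings.map(row => row.reduce((a, b) => a + b, 0) / k);
  const colMeans = Array.from({ length: k }, (_, j) =>
    ratings.reduce((acc, row) => acc + row[j], 0) / n);

  const msr = (k * rowMeans.reduce((a, m) => a + (m - grand) ** 2, 0)) / (n - 1);
  const msc = (n * colMeans.reduce((a, m) => a + (m - grand) ** 2, 0)) / (k - 1);
  let sse = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < k; j++) {
      sse += (ratings[i][j] - rowMeans[i] - colMeans[j] + grand) ** 2;
    }
  }
  const mse = sse / ((n - 1) * (k - 1));
  return (msr - mse) / (msr + (k - 1) * mse + (k / n) * (msc - mse));
}

// Toy matrices: perfect agreement vs. a constant between-rater offset.
const perfect = [[1, 1], [5, 5], [9, 9]];
const offset = [[1, 2], [5, 6], [9, 10]];
```

Absolute-agreement ICC penalizes systematic evaluator offsets (the offset matrix scores below 1 even though the rank ordering is identical), which is the behavior wanted when B values, not just rankings, must be comparable across evaluators.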

Phase 2 (Near-term): Predictive Validation

  • Correlate B-scores with observable retention metrics for 10-15 publicly reporting platforms:
    • Net Revenue Retention (NRR) — available for SaaS platforms
    • Dollar-based churn — available for subscription businesses
    • Migration cost estimates — available from IT advisory firms
    • Customer Lifetime Value (CLTV) — available for public companies
  • Test hypothesis: B > 2.5 correlates with NRR > 130% and/or churn < 5%.

Phase 3 (Medium-term): Empirical Calibration

  • Use multi-evaluator scoring data to estimate optimal CES elasticity parameter σ.
  • Estimate structural constants (α, ω, τ, δ) from panel data on platform switching.
  • Factor analysis to determine whether 11 parameters reduce to fewer latent dimensions.

Phase 4 (Long-term): Longitudinal Validation

  • Track B-scores for 20-30 platforms over 2-3 years.
  • Test dynamic model predictions against observed B trajectories.
  • Validate h-decay mechanism with longitudinal skill assessment data.

8. Conclusion

We have introduced the Black Hole Index, a measurement framework for quantifying structural lock-in in platforms. The model's core contribution is formalization: specifying a complete, computable instrument with 11 anchored parameters, theoretically grounded aggregation functions, and a dynamic extension — applied to a substantive cross-section of 96 platforms across 12 sectors.

The cross-section results reveal a structural pattern that market capitalization and user counts obscure: the deepest lock-in in the global economy belongs to infrastructure platforms (EDA, ASML, TSMC, SWIFT, Bloomberg) rather than attention platforms (Netflix, Zoom, Disney+). Scale without structural lock-in produces fragile market positions; structural lock-in without scale produces durable ones.

We have been transparent about what BHI is and what it is not. It is a proposed measurement instrument that produces structured assessments based on anchored rubrics and theoretically motivated functional forms. It is not yet a validated measurement instrument — inter-rater reliability and predictive validation remain pending. The intellectual foundation is complete; the empirical foundation is in progress.

The path from proposed instrument to established standard follows a pattern documented across economics and finance: the Sharpe Ratio was proposed in 1966 and became an industry standard by the 1980s after widespread adoption and refinement. The Altman Z-Score was published in 1968 and became a standard credit risk tool after decades of validation across industries and countries. The Gini Coefficient, proposed in 1912, is now the universal inequality measure used by the World Bank, OECD, and national statistical agencies.

BHI's trajectory depends on three outcomes: (i) demonstrating inter-rater reliability (ICC > 0.75), which validates that the rubrics produce consistent measurements across evaluators; (ii) demonstrating predictive validity against observable outcomes, which validates that B measures something real; and (iii) adoption by researchers, regulators, and practitioners who find the framework useful for analyzing structural dependency. The first two are under our control and constitute the immediate research agenda. The third is a market outcome.

We release BHI as an open instrument — the complete model specification, all scoring rubrics, the interactive calculator, and all 96 platform scores are publicly available at blackholeindex.com. We invite independent evaluation, critique, and empirical validation.

References

Altman, E. (1968). Financial Ratios, Discriminant Analysis, and the Prediction of Corporate Bankruptcy. The Journal of Finance, 23(4), 589-609.

Antras, P. (2004). Is the U.S. Aggregate Production Function Cobb-Douglas? New Estimates of the Elasticity of Substitution. The B.E. Journal of Macroeconomics, 4(1).

Arrow, K., Chenery, H., Minhas, B., & Solow, R. (1961). Capital-Labor Substitution and Economic Efficiency. The Review of Economics and Statistics, 43(3), 225-250.

Arthur, W. B. (1989). Competing Technologies, Increasing Returns, and Lock-In by Historical Events. The Economic Journal, 99(394), 116-131.

Burnham, T., Frels, J., & Mahajan, V. (2003). Consumer Switching Costs: A Typology, Antecedents, and Consequences. Journal of the Academy of Marketing Science, 31(2), 109-126.

Cicchetti, D. V. (1994). Guidelines, Criteria, and Rules of Thumb for Evaluating Normed and Standardized Assessment Instruments in Psychology. Psychological Assessment, 6(4), 284-290.

Farrell, J., & Klemperer, P. (2007). Coordination and Lock-In: Competition with Switching Costs and Network Effects. In Handbook of Industrial Organization, Vol. 3, 1967-2072.

Gini, C. (1912). Variabilità e Mutabilità. Studi Economico-Giuridici dell'Università di Cagliari, 3, 3-159.

Granovetter, M. (1978). Threshold Models of Collective Behavior. American Journal of Sociology, 83(6), 1420-1443.

Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1991). Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias. Journal of Economic Perspectives, 5(1), 193-206.

Katz, M., & Shapiro, C. (1985). Network Externalities, Competition, and Compatibility. The American Economic Review, 75(3), 424-440.

Klemperer, P. (1987). Markets with Consumer Switching Costs. The Quarterly Journal of Economics, 102(2), 375-394.

Klump, R., McAdam, P., & Willman, A. (2007). Factor Substitution and Factor-Augmenting Technical Progress in the United States: A Normalized Supply-Side System Approach. The Review of Economics and Statistics, 89(1), 183-192.

Lee, S., et al. (2025). The Impact of AI Coding Assistants on Developer Skill Retention. Microsoft Research / Carnegie Mellon University, n=319.

McCrory, D. (2010). Data Gravity — In the Clouds. Blog post, https://datagravity.org.

Milgrom, P., & Roberts, J. (1990). Rationalizability, Learning, and Equilibrium in Games with Strategic Complementarities. Econometrica, 58(6), 1255-1277.

Milgrom, P., & Roberts, J. (1995). Complementarities and Fit: Strategy, Structure, and Organizational Change in Manufacturing. Journal of Accounting and Economics, 19(2-3), 179-208.

Parasuraman, R., & Manzey, D. (2010). Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors, 52(3), 381-410.

Rochet, J.-C., & Tirole, J. (2003). Platform Competition in Two-Sided Markets. Journal of the European Economic Association, 1(4), 990-1029.

Saltelli, A., et al. (2008). Global Sensitivity Analysis: The Primer. John Wiley & Sons.

Samuelson, W., & Zeckhauser, R. (1988). Status Quo Bias in Decision Making. Journal of Risk and Uncertainty, 1(1), 7-59.

Shapiro, C., & Varian, H. (1999). Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press.

Sharpe, W. (1966). Mutual Fund Performance. The Journal of Business, 39(1), 119-138.

Syrmoudis, E., et al. (2021). Data Portability Between Online Platforms. Internet Policy Review, 10(4).

Appendix A: Complete Parameter Rubrics

All 11 parameters use five-anchor rubrics (scores 0, 3, 5, 7, 10) with concrete behavioral descriptions. Intermediate scores (1, 2, 4, 6, 8, 9) are interpolated between anchors.

Capture Variables

Data Depth (d) — Volume and density of observable user and organizational context

Score | Anchor | Description
0 | None | System has no access to user data. Each session starts from zero. Example: calculator app.
3 | Session | Sees only current session context. No history, no files. Example: basic chatbot without memory.
5 | History | Access to conversation history, uploaded files, basic usage patterns. Example: ChatGPT with memory.
7 | Full | Complete work context: emails, documents, calendar, contacts, meeting notes. Example: M365 Copilot.
10 | Org exhaust | Entire organizational digital footprint: all apps, all users, all workflows, metadata, inferred relationships.

Memory (m) — Persistent non-portable state: embeddings, profiles, learned policies

Score | Anchor | Description
0 | None | No cross-session persistence. System forgets everything between uses.
3 | Preferences | Stores basic settings and preferences. Easily recreatable on another platform.
5 | Context | Remembers recent interactions, learns patterns. Would take days to rebuild elsewhere.
7 | Persistent | Deep cross-session memory: references all past conversations, learned work style.
10 | Model | Complete learned user model: personality, decision patterns, relationship dynamics. Years of context.

Action (a) — Ability to initiate and complete real-world actions autonomously

Score | Anchor | Description
0 | Text only | Can only generate text responses. No external integrations.
3 | Content | Generates documents, images, code. Cannot execute or deploy.
5 | Tools | Calls external APIs, searches web, reads files. Human must act on results.
7 | Execution | Plans and executes multi-step task sequences. Sends emails, schedules meetings, creates PRs.
10 | Autonomous | Full autonomous agent: decomposes goals, executes across systems, handles errors, works unsupervised.

Process Centrality (p) — Degree of embedding inside daily workflow chains

Score | Anchor | Description
0 | Not embedded | Not part of any workflow. Used recreationally or experimentally.
3 | Ad hoc | Used occasionally for specific tasks. Easy to do without.
5 | 1-2 workflows | Integrated into one or two regular workflows. Absence noticed within hours.
7 | Central hub | Most daily work tasks touch the system. Central routing point for decisions.
10 | Critical path | Business processes cannot complete without this system. Downtime equals revenue loss.

Network Effects (n) — Ecosystem reinforcement: complementors, standards, multi-homing cost

Score | Anchor | Description
0 | Isolated | No multi-user dynamics. Pure single-player tool.
3 | Basic plugins | Small ecosystem of extensions. Easily replicated.
5 | Marketplace | Meaningful plugin/app marketplace. Two-sided dynamics present.
7 | Cross-side | Strong cross-side network effects: more users attract more developers attract more users.
10 | Standard | De facto industry standard. Leaving means leaving shared language, formats, protocols.

Closeness (c) — Frequency of contact and cognitive offloading habit formation

Score | Anchor | Description
0 | Rare | Used less than monthly. No habit formation.
3 | Weekly | Regular but not daily. User goes days without it.
5 | Daily | Daily tool. Part of routine but not the first thing touched.
7 | First surface | First interface opened each morning. Primary surface for task initiation.
10 | Always-on | Ambient continuous presence. OS-level integration. Interaction without conscious decision.

Escape Variables

Portability (x) — Data and state exportability

Score | Anchor | Description
0 | No export | No data export capability. All user data trapped inside the system.
3 | Partial | Some raw data exportable (CSV, text) but not derived state.
5 | Standard | GDPR Art. 20 compliant: raw data exportable. Inferred data, embeddings, models not included.
7 | Automated | Full automated import/export pipelines. Migration tools available. Most state transferable.
10 | Full state | Complete portable state: raw data, derived data, settings, trained models. Plug-and-play migration.

Substitutability (s) — Existence of functionally equivalent alternatives at integration level

Score | Anchor | Description
0 | None | No functional equivalent exists. Monopoly or unique technology.
3 | Major loss | Alternatives exist but with significant capability loss. 50%+ functionality gap.
5 | Comparable | Comparable alternatives available. Switching requires effort but similar result.
7 | Drop-in | Near drop-in replacements. Switching cost is re-learning, not capability loss.
10 | Commodity | Fully commoditized layer. Dozens of equivalent options. Switching is trivial.

Human Fallback (h) — Ability to maintain productivity without the system (Parasuraman decay)

Score | Anchor | Description
0 | Impossible | Work literally cannot be done without this system. Skills no longer exist.
3 | Severe | Could theoretically revert but productivity drops 60%+. Critical skills atrophied.
5 | Degraded | Noticeable degradation. Tasks take 2-3x longer. Errors increase. But work gets done.
7 | Minor | Small inconvenience. Some tasks slower. Overall productivity impact under 15%.
10 | Optional | System is a nice-to-have. Full productivity maintained without it.

Note: h degrades endogenously with use. This is the Parasuraman-Manzey (2010) automation complacency effect. The more you rely on the system, the less capable you become without it.

Extended Variables

Organizational Depth (o) — Institutional procurement lock-in (Granovetter cascade threshold)

Score | Anchor | Description
0 | None | No organizational dependency. Individual user only.
3 | Pilot | Small pilot or team trial. Easy to cancel.
5 | Multi-team | Multiple teams using the system. Cross-team dependencies forming.
7 | Enterprise | Enterprise-wide deployment. Procurement contracts, training, IT infrastructure aligned.
10 | Rebuilt | Organization has restructured operations around this system. Roles, processes, KPIs redefined.

Momentum (t) — Capability growth velocity (Arthur increasing returns)

Score | Anchor | Description
0 | Stagnant | No meaningful capability improvement in 12+ months.
3 | Trailing | Improving but slower than market. Competitors pulling ahead.
5 | Market pace | Keeping pace with competitors. No structural advantage.
7 | Fast | Faster improvement than competitors. Each release widens the gap.
10 | Explosive | Revolutionary capability growth. Paradigm-shifting features every quarter.

Appendix B: All 96 Platform Scores

Table B1: Complete BHI Cross-Section (sorted by B, descending)

Columns: d = Data Depth, m = Memory, a = Action, p = Process Centrality, n = Network Effects, c = Closeness, x = Portability, s = Substitutability, h = Human Fallback, o = Organizational Depth, t = Momentum. B computed using V3 formula. Zone thresholds: Useful Tool (B < 0.4), Growing Gravity (0.4-0.7), Transition (0.7-1.3), Event Horizon (1.3-2.5), Black Hole (B ≥ 2.5).

# | Platform | Sector | d m a p n c x s h o t | B | Zone
1 | WeChat | Social | 9 9 9 10 9 10 1 1 1 9 5 | 17.03 | Black Hole
2 | Synopsys/Cadence (EDA Duopoly) | Infra | 8 9 9 10 8 8 1 1 1 9 6 | 12.97 | Black Hole
3 | TSMC | Big Tech | 7 8 9 10 8 8 1 1 2 9 7 | 10.91 | Black Hole
4 | ASML | Big Tech | 6 8 9 10 7 7 1 1 1 9 6 | 9.62 | Black Hole
5 | Visa | Fintech | 8 7 8 9 9 9 2 2 2 9 5 | 8.82 | Black Hole
6 | Amazon | Big Tech | 9 8 9 9 9 9 3 3 3 9 8 | 8.40 | Black Hole
7 | NVIDIA (CUDA) | Big Tech | 7 8 9 9 9 8 2 2 3 9 9 | 8.29 | Black Hole
8 | Mastercard | Fintech | 7 7 8 9 9 9 2 2 2 8 5 | 8.14 | Black Hole
9 | Amazon Marketplace | E-Commerce | 9 8 8 9 9 9 3 3 3 8 8 | 7.77 | Black Hole
10 | AI-OS (Theoretical) | AI | 9 8 8 9 7 9 3 2 3 8 7 | 7.67 | Black Hole
11 | Bloomberg Terminal | Banks | 9 7 8 9 9 9 2 3 4 9 5 | 7.21 | Black Hole
12 | AWS | SaaS | 9 8 9 10 8 8 3 4 3 9 7 | 6.85 | Black Hole
13 | SWIFT | Banks | 7 7 8 10 9 8 2 3 2 9 3 | 6.78 | Black Hole
14 | Microsoft | Big Tech | 9 6 8 9 9 9 3 3 4 9 7 | 6.64 | Black Hole
15 | Apple App Store | Gaming | 8 8 7 9 9 8 2 3 3 7 5 | 6.13 | Black Hole
16 | JPMorgan Chase | Banks | 9 8 8 9 8 8 3 4 3 9 6 | 6.13 | Black Hole
17 | MercadoLibre | E-Commerce | 8 7 8 9 8 8 3 3 3 7 8 | 5.87 | Black Hole
18 | Illumina | Pharma | 8 8 8 9 8 7 2 3 3 8 6 | 5.78 | Black Hole
19 | YouTube | Social | 8 8 7 8 9 9 3 3 4 6 7 | 5.73 | Black Hole
20 | Foxconn | Infra | 7 7 8 9 7 8 2 3 3 8 5 | 5.65 | Black Hole
21 | WhatsApp | Social | 7 7 5 9 9 9 2 3 3 6 5 | 5.43 | Black Hole
22 | Steam | Gaming | 8 9 7 8 9 8 1 3 5 4 5 | 5.32 | Black Hole
23 | Meta | Big Tech | 9 8 7 7 9 9 3 4 5 7 7 | 5.22 | Black Hole
24 | Apple | Big Tech | 8 8 7 8 9 9 3 4 4 6 6 | 5.22 | Black Hole
25 | BlackRock | Banks | 8 8 8 8 9 7 3 4 3 9 7 | 5.21 | Black Hole
26 | Nubank | Fintech | 8 7 7 8 7 9 3 4 3 6 8 | 5.21 | Black Hole
27 | IQVIA | Pharma | 9 8 7 8 7 7 2 3 3 8 5 | 5.18 | Black Hole
28 | Alibaba/Taobao | E-Commerce | 8 7 7 8 9 8 3 3 4 8 5 | 4.97 | Black Hole
29 | Solana (SOL) | Crypto Assets | 7 6 8 9 8 7 2 3 4 7 9 | 4.83 | Black Hole
30 | Roblox | Gaming | 7 7 7 7 9 9 2 3 5 4 8 | 4.79 | Black Hole
31 | Microsoft 365 | SaaS | 8 6 7 9 9 9 4 4 5 9 5 | 4.73 | Black Hole
32 | Microsoft Copilot (M365) | AI | 9 5 6 9 8 9 4 3 4 9 4 | 4.73 | Black Hole
33 | Deere & Company | Infra | 8 7 7 8 7 7 2 3 3 7 6 | 4.68 | Black Hole
34 | Alphabet/Google | Big Tech | 9 7 7 8 9 9 5 4 5 7 7 | 4.62 | Black Hole
35 | Ethereum (ETH) | Crypto Assets | 8 7 8 9 9 7 3 4 4 8 6 | 4.57 | Black Hole
36 | Salesforce | SaaS | 8 7 7 9 8 8 4 4 4 9 5 | 4.54 | Black Hole
37 | Adobe Creative Cloud | SaaS | 7 7 7 8 8 8 3 4 4 8 6 | 4.38 | Black Hole
38 | Thermo Fisher | Pharma | 7 7 7 8 8 8 3 4 4 8 6 | 4.38 | Black Hole
39 | ARM Holdings | Infra | 6 7 7 9 9 7 3 3 4 8 6 | 4.23 | Black Hole
40 | Facebook | Social | 9 8 6 7 9 8 3 4 5 7 5 | 4.19 | Black Hole
41 | Goldman Sachs | Banks | 8 7 8 8 8 7 3 4 4 8 5 | 4.11 | Black Hole
42 | Google Play | Gaming | 7 7 7 8 9 8 4 4 4 6 6 | 3.94 | Black Hole
43 | Roche | Pharma | 8 7 7 8 7 7 3 4 4 8 6 | 3.88 | Black Hole
44 | Veeva Systems | Pharma | 8 7 7 8 7 7 3 4 4 8 6 | 3.88 | Black Hole
45 | Corning | Infra | 6 7 7 8 7 7 2 3 4 7 5 | 3.84 | Black Hole
46 | Coupang | E-Commerce | 7 6 7 8 7 8 4 3 4 6 6 | 3.72 | Black Hole
47 | LinkedIn | Social | 8 8 5 7 8 6 2 2 4 7 5 | 3.70 | Black Hole
48 | Stripe | Fintech | 8 6 8 9 8 7 4 5 5 8 8 | 3.67 | Black Hole
49 | Snowflake | SaaS | 9 7 7 8 7 7 4 5 4 8 7 | 3.67 | Black Hole
50 | PlayStation Network | Gaming | 7 8 6 7 8 8 2 4 5 4 6 | 3.64 | Black Hole
51 | Linde | Infra | 5 6 7 8 7 7 3 3 3 8 5 | 3.53 | Black Hole
52 | BNB | Crypto Assets | 8 6 7 7 9 8 4 4 6 7 6 | 3.47 | Black Hole
53 | Binance (Exchange) | Crypto Infra | 8 6 7 7 9 8 4 4 6 7 6 | 3.47 | Black Hole
54 | Hyperliquid (HYPE) | Crypto Assets | 6 5 8 7 7 8 4 3 5 5 9 | 3.25 | Black Hole
55 | Shopify | E-Commerce | 7 7 7 8 8 7 4 5 5 7 7 | 3.15 | Black Hole
56 | Instagram | Social | 8 7 6 6 9 8 3 5 6 5 6 | 3.07 | Black Hole
57 | Shopee/Sea | E-Commerce | 7 6 6 7 8 8 4 4 5 5 7 | 3.03 | Black Hole
58 | Chainlink | Crypto Infra | 6 5 7 8 9 6 3 3 4 7 6 | 2.99 | Black Hole
59 | Revolut | Fintech | 7 6 7 7 6 8 4 5 5 5 8 | 2.87 | Black Hole
60 | Block (Square) | Fintech | 7 6 7 8 7 7 4 5 5 7 6 | 2.81 | Black Hole
61 | Google Gemini | AI | 8 6 6 8 9 8 5 6 6 5 7 | 2.80 | Black Hole
62 | Google Workspace | SaaS | 7 5 6 8 8 8 5 5 6 7 6 | 2.68 | Black Hole
63 | MetaMask (Wallet) | Crypto Infra | 5 6 6 7 7 9 3 5 6 4 5 | 2.56 | Black Hole
64 | Slack | SaaS | 7 6 5 7 7 9 4 6 7 7 4 | 2.52 | Black Hole
65 | Interactive Brokers | Banks | 7 5 8 7 6 8 5 5 5 5 5 | 2.49 | Event Horizon
66 | Spotify | Gaming | 7 7 5 6 7 8 4 5 6 4 6 | 2.31 | Event Horizon
67 | Charles Schwab | Banks | 7 6 7 7 7 7 5 5 6 6 5 | 2.26 | Event Horizon
68 | TikTok | Social | 7 6 5 5 8 9 4 6 7 4 7 | 2.24 | Event Horizon
69 | Refinitiv/LSEG | Banks | 7 5 6 7 6 7 4 5 5 7 4 | 2.19 | Event Horizon
70 | Reddit | Social | 7 7 4 5 8 7 3 3 6 3 6 | 2.15 | Event Horizon
71 | Telegram | Social | 6 5 6 6 7 8 4 5 7 4 7 | 2.03 | Event Horizon
72 | ChatGPT | AI | 7 7 7 6 9 7 6 7 7 4 7 | 1.96 | Event Horizon
73 | Moderna | Pharma | 7 7 7 6 6 5 4 5 5 6 8 | 1.86 | Event Horizon
74 | PDD/Temu | E-Commerce | 6 5 6 6 8 7 5 5 7 5 8 | 1.86 | Event Horizon
75 | TRON (TRX) | Crypto Assets | 5 4 6 8 8 7 6 4 6 6 5 | 1.85 | Event Horizon
76 | Coinbase | Crypto Infra | 7 5 6 6 7 7 5 5 7 6 5 | 1.83 | Event Horizon
77 | Bitcoin (BTC) | Crypto Assets | 6 5 4 7 9 7 8 3 7 8 5 | 1.82 | Event Horizon
78 | Tether (USDT) | Crypto Assets | 4 3 5 8 9 8 7 4 6 7 4 | 1.72 | Event Horizon
79 | eBay | E-Commerce | 7 7 5 6 7 6 3 5 6 4 3 | 1.71 | Event Horizon
80 | Aave | Crypto Infra | 5 5 7 6 8 6 7 5 5 7 7 | 1.67 | Event Horizon
81 | Grok (xAI) | AI | 7 4 5 4 8 9 6 7 7 3 8 | 1.56 | Event Horizon
82 | Claude (Anthropic) | AI | 6 7 8 5 5 7 7 7 7 3 8 | 1.49 | Event Horizon
83 | X (Twitter) | Social | 6 5 5 5 7 7 5 6 7 3 5 | 1.29 | Transition
84 | Robinhood | Fintech | 6 4 6 5 5 7 6 6 7 3 6 | 1.16 | Transition
85 | PayPal | Fintech | 6 5 6 5 7 6 6 7 7 5 4 | 1.14 | Transition
86 | Cardano (ADA) | Crypto Assets | 5 5 6 5 6 5 4 5 6 5 5 | 1.12 | Transition
87 | XRP | Crypto Assets | 5 4 6 6 7 5 7 5 7 6 6 | 1.02 | Transition
88 | Netflix | Gaming | 6 6 4 4 6 7 7 7 8 3 6 | 1.00 | Transition
89 | Uniswap | Crypto Infra | 4 4 6 5 9 5 8 5 6 6 6 | 0.96 | Transition
90 | Zoom | SaaS | 5 3 4 6 7 7 7 7 8 5 3 | 0.90 | Transition
91 | Lido | Crypto Infra | 4 4 5 6 8 4 6 4 7 6 5 | 0.80 | Transition
92 | Perplexity | AI | 5 5 5 4 5 6 7 7 8 2 8 | 0.78 | Transition
93 | Disney+ | Gaming | 5 5 4 4 6 6 7 7 8 3 4 | 0.71 | Transition
94 | OpenSea | Crypto Infra | 5 5 5 5 7 4 6 6 7 3 3 | 0.63 | Growing Gravity
95 | Chat LLM (Generic) | AI | 3 2 2 4 6 4 8 8 8 2 3 | 0.22 | Useful Tool
96 | Dogecoin (DOGE) | Crypto Assets | 2 2 2 2 6 5 9 8 9 1 2 | 0.17 | Useful Tool

All scores, written rationales, and interactive exploration available at blackholeindex.com/rankings.

Appendix C: Worked Example — Microsoft Copilot

Inputs: d=9, m=5, a=6, p=9, n=8, c=9, x=4, s=3, h=4, o=9, t=4
Normalized: d=0.9, m=0.5, a=0.6, p=0.9, n=0.8, c=0.9, x=0.4, s=0.3, h=0.4, o=0.9, t=0.4

Step 1: CaptureCore
  CES = ((√0.9 + √0.5 + √0.6 + √0.9) / 4)² = 0.7136

Step 2: SynergyBoost
  σ_K = √(0.9 × 0.5) = 0.6708
  σ_O = √(0.6 × 0.9) = 0.7348
  SynergyBoost = 1 + 0.3 × (0.6708 + 0.7348) / 2 = 1.2108

Step 3: Boosted CaptureCore
  Core = 0.7136 × 1.2108 = 0.8640

Step 4: Full Capture
  NetworkMult = 1 + 0.5 × n = 1 + 0.5 × 0.8 = 1.40
  OrgMult = 1 + 0.5 × o = 1 + 0.5 × 0.9 = 1.45
  MomMult = 1 + 0.3 × t = 1 + 0.3 × 0.4 = 1.12
  Capture = c × NetworkMult × Core × OrgMult × MomMult = 0.9 × 1.40 × 0.8640 × 1.45 × 1.12 = 1.768

Step 5: Feedback
  CaptureBase = 0.7136 (CaptureCore before SynergyBoost)
  feedback = max(0.3, 1 − 0.35 × 0.7136) = 0.7502

Step 6: EscapeCore
  EscapeCore = ((√0.4 + √0.3 + √0.4) / 3)² = 0.3651

Step 7: B
  Denominator = 0.1 + 0.3651 × 0.7502 = 0.3739
  B = 1.768 / 0.3739 = 4.73

Zone: Black Hole (B ≥ 2.5)
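The seven steps can be transcribed directly into the engine's language. The following TypeScript sketch (identifiers are ours, not the published engine's API) reproduces the computation, including zone assignment:

```typescript
// The seven steps of Appendix C as a single function.
// Identifiers are illustrative, not the published engine's API.
const ces = (v: number[]): number =>
  Math.pow(v.reduce((acc, x) => acc + Math.sqrt(x), 0) / v.length, 2);

// scores = [d, m, a, p, n, c, x, s, h, o, t] on the 0-10 rubric scale
function blackHoleIndex(scores: number[]): number {
  const [d, m, a, p, n, c, x, s, h, o, t] = scores.map(v => v / 10);
  const captureCore = ces([d, m, a, p]);                                    // Step 1
  const synergyBoost = 1 + 0.3 * (Math.sqrt(d * m) + Math.sqrt(a * p)) / 2; // Step 2
  const core = captureCore * synergyBoost;                                  // Step 3
  const capture = c * (1 + 0.5 * n) * core * (1 + 0.5 * o) * (1 + 0.3 * t); // Step 4 (leading factor is closeness c)
  const feedback = Math.max(0.3, 1 - 0.35 * captureCore);                   // Step 5
  const escapeCore = ces([x, s, h]);                                        // Step 6
  return capture / (0.1 + escapeCore * feedback);                           // Step 7
}

// Zone thresholds from Appendix B.
function zone(b: number): string {
  if (b >= 2.5) return "Black Hole";
  if (b >= 1.3) return "Event Horizon";
  if (b >= 0.7) return "Transition";
  if (b >= 0.4) return "Growing Gravity";
  return "Useful Tool";
}

const copilotB = blackHoleIndex([9, 5, 6, 9, 8, 9, 4, 3, 4, 9, 4]); // ~4.73
```

The same function reproduces the other Appendix B presets, e.g. B ≈ 1.96 for the ChatGPT row.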

Corresponding author: Ivan Savich. Contact via blackholeindex.com.

Data availability: All platform scores, model parameters, and interactive calculator are available at blackholeindex.com.

Code availability: The BHI computation engine (TypeScript) is available at blackholeindex.com/observatory.

Conflict of interest: The author is the creator and maintainer of the Black Hole Index platform.

Acknowledgments: The BHI model was developed iteratively with feedback from the research and platform economics community.

Cite this paper
@article{savich2026bhi,
  title   = {The Black Hole Index: A Structural Lock-In Measurement Framework for Digital Platforms},
  author  = {Savich, Ivan},
  year    = {2026},
  month   = {March},
  note    = {Working Paper},
  url     = {https://blackholeindex.com/paper}
}