
Coinspeaker Methodology: How We Rank Crypto Assets

Created: Julia Sakovich, Editor-in-Chief
7 mins

To help readers separate signal from noise, we at Coinspeaker apply a consistent process to every piece we publish, whether it’s a how-to guide, a review, or a ranked list.

This page outlines our data standards, scoring framework, red-flag rules, refresh frequency, and independence policies, so you can understand exactly how conclusions are reached. Use our work as structured input to your own decisions, not as a substitute for them.

1) Scope & Definitions

Our rankings cover freely tradable, non-custodial crypto assets (L1/L2 base assets, application tokens, and meme tokens). Stablecoins and wrapped assets are assessed with a separate framework due to distinct risk profiles; they are excluded from the composite ranking unless explicitly noted. NFT collections are out of scope.

Unit of analysis: an asset’s primary network (or canonical contract on its origin chain). Forks or bridged versions are treated separately if they have materially different risk/usage characteristics.

2) Purpose of the Rankings

  • Decision support for investors and users. Distill complex, multi-source data into a transparent composite score.
  • Trend tracking. Highlight momentum in development, adoption, and network health over time.
  • Risk signaling. Surface red flags (e.g., security incidents, regulatory actions) that may not be fully priced in.
  • Accountability. Encourage projects to publish roadmaps, audits, and transparent disclosures.
  • Not financial advice: Rankings are informational and should not be the sole basis for investment decisions.

3) Evaluation Framework

Each asset receives a 0–100 composite score built from normalized sub-scores. Criteria weights reflect what we view as durable value drivers and risk mitigants. We maintain separate category adjustments (see 3.3) to account for structural differences across L1s/L2s, application tokens, and meme tokens.

3.1 Criteria & Base Weights (Total = 100%)

Criterion | Weight | What we measure (summary)
Market Capitalization | 12% | Free-float market cap; float-adjusted where feasible.
Liquidity & Volume Quality | 12% | Spot and perp volume breadth across reputable venues, order-book depth, slippage at standard sizes, wash-trade risk filters.
Technology & Development Activity | 14% | Code quality and velocity (commits, contributors, active repos), protocol upgrades, client diversity, test coverage/docs.
Security & Reliability | 14% | Audit history, critical CVEs, time since last incident, client/node reliability, chain liveness, on-chain exploits.
Decentralization & Network Health | 10% | Validator/miner concentration (e.g., Nakamoto coefficient), stake distribution, client diversity, geographic/jurisdictional dispersion.
Adoption & Utility | 12% | Unique active addresses, transactions, TVL (if relevant), fee payers, integrations, real economic usage (payments, DeFi, gaming, infra).
Community & Governance | 8% | Governance participation, proposal quality/pass rates, forum vitality, social traction (de-botted), foundation transparency.
Roadmap Execution | 8% | Milestone completion, delivery cadence vs. promises, upgrade preparedness and post-mortems.
Tokenomics & Distribution | 6% | Emission schedule, unlocks, treasury health, insider concentration, staking incentives, sell-pressure overhang.
Regulatory & Compliance Posture | 4% | Enforcement actions, exchange compliance status, sanctions exposure, KYC/AML risk indicators for core venues.

3.2 How Each Criterion is Scored (Key Inputs)

Market Capitalization

  • Free-float adjustment where lockups/treasury allocations dominate.
  • Outlier handling for illiquid or thinly quoted pairs.

Liquidity & Volume Quality

  • Median bid–ask spread and depth at pre-set order sizes across top-tier venues (a slippage sketch follows this list).
  • Venue-quality weighting; de-emphasize suspect exchanges and concentrated wash patterns.
  • Perp basis and funding stability as a liquidity/health proxy.
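
To make the slippage input concrete, here is a minimal sketch of estimating execution slippage for a fixed-notional market buy against an order book's ask side. The book, order size, and function name are illustrative assumptions, not our production pipeline (which also applies venue-quality weights and wash-trade filters).

```python
# Illustrative sketch (assumed inputs, not the live pipeline): slippage for a
# fixed-notional market buy, measured against the best ask, in basis points.
def slippage_bps(asks: list[tuple[float, float]], notional: float) -> float:
    """`asks` is a list of (price, size in base units), best price first."""
    best_ask = asks[0][0]
    remaining, cost, filled = notional, 0.0, 0.0
    for price, size in asks:
        take = min(remaining, price * size)  # spend up to this level's notional
        cost += take
        filled += take / price
        remaining -= take
        if remaining <= 0:
            break
    avg_price = cost / filled
    return (avg_price / best_ask - 1.0) * 10_000

# Example: a $50,000 buy walking a thin three-level book (~45 bps of slippage).
book = [(100.0, 200), (100.5, 150), (101.0, 300)]
print(round(slippage_bps(book, 50_000), 1))
```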

Technology & Development Activity

  • Active repos, unique monthly contributors, issue velocity and closure rates.
  • Client diversity, formal verification usage, test coverage proxies, version release cadence.

Security & Reliability

  • Independent audit coverage (named firms), critical severity findings and remediation latency.
  • Historical exploit count/losses; chain halts/reorgs; mean time between incidents; bug bounty scope/payouts.

Decentralization & Network Health

  • Nakamoto coefficient (the smallest set of entities whose combined consensus power can control or halt the chain; sketched below) and validator/geographic dispersion.
  • Client implementation diversity; stake/mining pool concentration; validator churn.
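
For the Nakamoto coefficient input, a minimal sketch is shown below: the smallest number of validators whose combined stake crosses a control threshold. The 1/3 threshold (a common choice for the power needed to halt BFT-style consensus) and the stake figures are assumptions for illustration; the appropriate threshold depends on the chain's consensus design.

```python
# Illustrative sketch: Nakamoto coefficient as the minimal number of validators
# whose combined stake share exceeds a threshold (1/3 assumed here).
def nakamoto_coefficient(stakes: list[float], threshold: float = 1.0 / 3.0) -> int:
    """Smallest number of validators whose cumulative stake share exceeds `threshold`."""
    total = sum(stakes)
    cumulative = 0.0
    for count, stake in enumerate(sorted(stakes, reverse=True), start=1):
        cumulative += stake
        if cumulative / total > threshold:
            return count
    return len(stakes)

# Example: the two largest validators already exceed one third of total stake -> 2.
print(nakamoto_coefficient([25, 20, 15, 10, 10, 10, 10]))
```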

Adoption & Utility

  • Daily/weekly/monthly active addresses, non-spam transactions, fee-paying users, TVL (where applicable), unique integrators, merchant/payment support.

Community & Governance

  • Governance voter turnout, unique voters, quorum reliability, treasury transparency, forum engagement quality; social metrics after bot/airdrop farm filtering.

Roadmap Execution

  • Public milestone tracking; % completed vs. scheduled; delivery punctuality; post-upgrade stability.

Tokenomics & Distribution

  • Emissions and unlock schedule over next 12–24 months; treasury runway; insider/VC concentration; staking/LP incentives and net sell pressure modeling.
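
As one simplified view of the sell-pressure input, the sketch below expresses scheduled unlocks over the next 12 months as a share of circulating supply. The schedule format and figures are hypothetical; the actual assessment also weighs treasury runway, incentive emissions, and how past unlocks were absorbed.

```python
# Illustrative sketch (hypothetical schedule): 12-month unlock overhang as a
# fraction of current circulating supply, a rough proxy for sell pressure.
def unlock_overhang(unlocks_next_12m: list[float], circulating_supply: float) -> float:
    """Total tokens unlocking over the next 12 months relative to circulating supply."""
    return sum(unlocks_next_12m) / circulating_supply

# Example: 120M tokens unlocking against 800M circulating -> 15% overhang.
print(f"{unlock_overhang([30e6, 30e6, 60e6], 800e6):.0%}")
```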

Regulatory & Compliance Posture

  • Notable enforcement actions; delistings; compliance tiering of major exchanges; sanctions/OFAC exposure; clarity of corporate domicile and disclosures.

3.3 Category Adjustments

We tailor weights based on asset type (a code sketch of the reweighting follows this list):

  • Base/L1 & Scaling (L2/Sidechains): Greater weight on decentralization, security, and client diversity.
    • Adjusted weights: Security & Reliability +3%, Decentralization & Network Health +2%. Offsets: Market Capitalization –3%, Community & Governance –2%.
  • Application/DeFi/Game tokens: Greater weight on Adoption/Utility and Tokenomics.
    • Adjusted weights: Adoption & Utility +3%, Tokenomics & Distribution +3%. Offsets: Decentralization & Network Health –3%, Regulatory & Compliance –3%.
  • Meme tokens: Emphasize liquidity, community, and tokenomics; de-emphasize roadmap execution and development activity.
    • Adjusted weights: Liquidity & Volume Quality +4%, Community & Governance +6%, Tokenomics & Distribution +4%. Offsets: Technology & Development Activity –6%, Roadmap Execution –4%, Regulatory & Compliance –4%.
  • Early-stage assets (<12 months mainnet): Cap Market Cap contribution at the 75th percentile; emphasize execution and security readiness.
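
The sketch below illustrates how these adjustments are applied: percentage-point deltas are added to the base weights from 3.1 and must net to zero so the total stays at 100%. Only the meme-token profile is shown, and the code is a simplified stand-in for the scoring pipeline rather than the pipeline itself.

```python
# Illustrative sketch: applying zero-sum category adjustments (meme-token
# profile) to the base weights from section 3.1.
BASE_WEIGHTS = {
    "Market Capitalization": 12, "Liquidity & Volume Quality": 12,
    "Technology & Development Activity": 14, "Security & Reliability": 14,
    "Decentralization & Network Health": 10, "Adoption & Utility": 12,
    "Community & Governance": 8, "Roadmap Execution": 8,
    "Tokenomics & Distribution": 6, "Regulatory & Compliance Posture": 4,
}

MEME_ADJUSTMENTS = {
    "Liquidity & Volume Quality": +4, "Community & Governance": +6,
    "Tokenomics & Distribution": +4, "Technology & Development Activity": -6,
    "Roadmap Execution": -4, "Regulatory & Compliance Posture": -4,
}

def adjusted_weights(base: dict[str, int], deltas: dict[str, int]) -> dict[str, int]:
    """Apply per-category deltas; the adjusted weights must still sum to 100."""
    assert sum(deltas.values()) == 0, "category adjustments must be zero-sum"
    return {name: base[name] + deltas.get(name, 0) for name in base}

meme_weights = adjusted_weights(BASE_WEIGHTS, MEME_ADJUSTMENTS)
assert sum(meme_weights.values()) == 100
```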

3.4 Normalization & Composite Score

  • For each metric, we winsorize at the 2nd/98th percentiles, then min–max normalize to 0–100.
  • Criterion sub-scores = weighted aggregates of their underlying normalized metrics.
  • Composite score = Σ(weight_i × subscore_i); a minimal sketch of the full pipeline follows this list.
  • Where data is missing, we (a) impute with conservative priors and apply a penalty, or (b) exclude the metric and proportionally reweight the criterion—whichever is stricter for the asset.
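
Here is a minimal sketch of that pipeline, assuming NumPy and a hypothetical two-criterion example; the metric values, criterion names, and weights are placeholders rather than the live model.

```python
# Illustrative sketch: winsorize at the 2nd/98th percentiles, min-max scale to
# 0-100, then combine criterion sub-scores with a weighted sum.
import numpy as np

def normalize(values) -> np.ndarray:
    """Winsorize raw metric values, then min-max scale them to 0-100."""
    values = np.asarray(values, dtype=float)
    lo, hi = np.percentile(values, [2, 98])
    clipped = np.clip(values, lo, hi)
    if hi == lo:
        return np.full_like(clipped, 50.0)  # degenerate case: all values equal
    return (clipped - lo) / (hi - lo) * 100.0

def composite(subscores: dict[str, float], weights: dict[str, float]) -> float:
    """Composite = sum of weight_i * subscore_i, with weights as fractions of 1.0."""
    return sum(weights[c] * subscores[c] for c in weights)

# Hypothetical peer-set metric and a two-criterion composite (weights sum to 1.0).
print(normalize([5, 7, 9, 50]))
print(round(composite({"liquidity": 72.0, "security": 85.0},
                      {"liquidity": 0.6, "security": 0.4}), 1))  # -> 77.2
```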

3.5 Red-Flag Overrides (Risk Modifiers)

Certain events temporarily cap an asset’s composite score at predefined ceilings until resolved (see the sketch after this list):

  • Critical exploit or chain halt within the last 90 days (cap at 60).
  • Admitted/confirmed critical audit findings unpatched for >30 days (cap at 65).
  • Major enforcement action or exchange delisting wave (cap at 70).
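
Mechanically, the caps work as shown in the sketch below: the composite score is limited to the lowest ceiling among any active, unresolved flags. The flag labels are hypothetical names used for this example.

```python
# Illustrative sketch: cap the composite score at the lowest ceiling among
# active red flags (labels are hypothetical).
RED_FLAG_CAPS = {
    "critical_exploit_or_chain_halt_90d": 60,
    "unpatched_critical_audit_finding_30d": 65,
    "major_enforcement_or_delisting_wave": 70,
}

def apply_red_flags(composite: float, active_flags: set[str]) -> float:
    """Return the composite, capped by the strictest applicable ceiling."""
    ceilings = [RED_FLAG_CAPS[flag] for flag in active_flags if flag in RED_FLAG_CAPS]
    return min([composite] + ceilings)

print(apply_red_flags(82.5, {"major_enforcement_or_delisting_wave"}))  # -> 70
```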

4) Data Sources

We triangulate across independent providers and primary sources:

  • Market & Liquidity: CoinGecko, CoinMarketCap, Kaiko, Coin Metrics, exchange APIs/order books; derivatives data from major venues.
  • On‑Chain & Protocol Analytics: Glassnode, Token Terminal, DeFiLlama, Dune dashboards, native block explorers (e.g., Etherscan, Solscan), our archival nodes.
  • Development: GitHub/GitLab public repos, release notes, client docs.
  • Security: Audit reports (e.g., Trail of Bits, OpenZeppelin, Quantstamp), bug bounty programs, CVE/NIST feeds, incident post-mortems.
  • Governance & Community: Snapshot/Tally, project forums, Discourse, foundation transparency reports, verified social channels.
  • Disclosures: Whitepapers, litepapers, official docs, quarterly updates; direct interviews for clarification (non-compensated).

We do not rely on any single provider’s proprietary “scores.” We ingest raw data where possible and document all transformations.

5) Testing & Verification Process

  • Cross‑checks: Every critical metric is validated across at least two independent sources when available.
  • Venue filtering: We apply an exchange‑quality list and wash‑trading heuristics before computing liquidity/volume.
  • Reproducibility: All transformations are scripted; point‑in‑time snapshots are hashed and time‑stamped (see the sketch after this list).
  • Manual reviews: Sampled audits of outliers (top/bottom deciles) and any asset with sudden score deltas >10 points.
  • Peer review: Material methodology changes receive internal review and (when feasible) third‑party consultation.
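
As an illustration of the snapshot hashing mentioned in the Reproducibility item, the sketch below fingerprints a data file with a SHA-256 digest and a UTC timestamp. The file path and manifest fields are hypothetical.

```python
# Illustrative sketch: fingerprint a point-in-time data snapshot with a
# SHA-256 digest and a UTC capture timestamp.
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def snapshot_fingerprint(path: Path) -> dict:
    """Return a digest and timestamp to record alongside a scoring run."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "file": path.name,
        "sha256": digest,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

# Usage (hypothetical path):
# print(snapshot_fingerprint(Path("snapshots/2025-09-01_market_data.csv")))
```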

6) Review Frequency

  • Scheduled: Monthly full refresh of all inputs and scores; weekly checks for high‑volatility metrics (liquidity, incidents).
  • Event‑driven: Immediate interim reviews for security incidents, major upgrades/forks, enforcement actions, or delistings.

7) Independence & Disclosure

  • Advertising, sponsorships, listings, or affiliate relationships do not influence rankings or weights.
  • Research staff must disclose asset holdings and are subject to trading blackout windows around publication.
  • Projects may submit factual corrections with evidence; no preferential treatment is offered or accepted.

8) Update Protocol (Methodology Changes)

  • Versioning: Every methodology update receives a new version number and public changelog.
  • Backtesting: We re‑score historical periods to assess stability and unintended effects.
  • Communication: Material changes (≥5% cumulative weight shift across criteria or new red‑flag rules) are announced at least one cycle before taking effect, unless risk mitigation requires immediate action.
  • Re‑ranking triggers: A change in weights/metrics, or a qualifying major event, triggers an out‑of‑cycle re‑rank for impacted assets.

9) Appeals & Corrections

Projects may request corrections by submitting: (1) the disputed field(s), (2) primary‑source evidence, and (3) the relevant date range. We review within the next cycle and publish accepted changes in the changelog.

10) Limitations

  • Data quality varies across chains and venues; some metrics are proxies.
  • Social metrics are prone to manipulation; we deploy anti-bot filters but cannot eliminate noise entirely.
  • Scores are comparative, not absolute; they reflect conditions at the time of measurement.
  • Meme token metrics are especially volatile and sentiment-driven; scores may shift disproportionately with community trends.

For information on how we research, review, and update articles, see our Editorial Guidelines & Policy.

Contact:
[email protected]
Changelog: v1.1 (2025‑09‑03) – Added tailored weights for L1s, application tokens, and meme tokens.

Julia Sakovich, Editor-in-Chief

Julia is an experienced content writer. She works with various topics and business domains, including but not limited to blockchain, cryptocurrencies, AI, and software development. Her articles are regularly featured on reputable news websites and IT business portals. Currently, Julia is the Editor-in-Chief at Coinspeaker.
