This study addresses the disagreement problem in post hoc feature attribution techniques. Explainers such as SHAP, LIME, and gradient-based methods frequently produce contradictory feature importance rankings for the same model. To counteract this, the authors introduce Post hoc Explainer Agreement Regularization (PEAR), a loss term added during model training that promotes greater explainer consensus without significantly compromising accuracy. Experiments on three datasets show that PEAR offers a tunable balance between explanation consensus and predictive performance, and that it improves agreement across explainers, including those not directly used in training. By turning disagreement into a controllable parameter, PEAR improves the reliability and credibility of explanations in critical machine learning applications.

New AI Study Tackles the Transparency Problem in Black-Box Models

2025/09/21 13:46

:::info Authors:

(1) Avi Schwarzschild, University of Maryland, College Park, Maryland, USA; work completed while at Arthur (avi1@umd.edu);

(2) Max Cembalest, Arthur, New York City, New York, USA;

(3) Karthik Rao, Arthur, New York City, New York, USA;

(4) Keegan Hines, Arthur, New York City, New York, USA;

(5) John Dickerson†, Arthur, New York City, New York, USA (john@arthur.ai).

:::

Abstract and 1. Introduction

1.1 Post Hoc Explanation

1.2 The Disagreement Problem

1.3 Encouraging Explanation Consensus

2. Related Work

3. PEAR: Post Hoc Explainer Agreement Regularizer

4. The Efficacy of Consensus Training

4.1 Agreement Metrics

4.2 Improving Consensus Metrics

4.3 Consistency At What Cost?

4.4 Are the Explanations Still Valuable?

4.5 Consensus and Linearity

4.6 Two Loss Terms

5. Discussion

5.1 Future Work

5.2 Conclusion, Acknowledgements, and References

Appendix

ABSTRACT

As neural networks increasingly make critical decisions in high-stakes settings, monitoring and explaining their behavior in an understandable and trustworthy manner is a necessity. One commonly used type of explainer is post hoc feature attribution, a family of methods for giving each feature in an input a score corresponding to its influence on a model's output. A major limitation of this family of explainers in practice is that they can disagree on which features are more important than others. Our contribution in this paper is a method of training models with this disagreement problem in mind. We do this by introducing a Post hoc Explainer Agreement Regularization (PEAR) loss term alongside the standard term corresponding to accuracy: an additional term that measures the difference in feature attribution between a pair of explainers. We observe on three datasets that we can train a model with this loss term to improve explanation consensus on unseen data, and we see improved consensus between explainers other than those used in the loss term. We examine the trade-off between improved consensus and model performance. Finally, we study the influence our method has on feature attribution explanations.

1 INTRODUCTION

As machine learning becomes inseparable from important societal sectors like healthcare and finance, increased transparency of how complex models arrive at their decisions is becoming critical. In this work, we examine a common task in support of model transparency that arises with the deployment of complex black-box models in production settings: explaining which features in the input are most influential in the model’s output. This practice allows data scientists and machine learning practitioners to rank features by importance – the features with high impact on model output are considered more important, and those with little impact on model output are considered less important. These measurements inform how model users debug and quality check their models, as well as how they explain model behavior to stakeholders.

1.1 Post Hoc Explanation

The methods of model explanation considered in this paper are post hoc local feature attribution scores. The field of explainable artificial intelligence (XAI) is rapidly producing different methods of this type to make sense of model behavior [e.g., 21, 24, 30, 32, 37]. Each of these methods has a slightly different formula and interpretation of its raw output, but in general they all perform the same task of attributing a model's behavior to its input features. When tasked to explain a model's output with a corresponding input (and possible access to the model weights), these methods answer the question, "How influential is each individual feature of the input in the model's computation of the output?"

Figure 1: Our loss that encourages explainer consensus boosts the correlation between LIME and other common post hoc explainers. This comes at a cost of less than two percentage points of accuracy compared with our baseline model on the Electricity dataset. Our method improves consensus on six agreement metrics and all pairs of explainers we evaluated. Note that this plot measures the rank correlation agreement metric and the specific bar heights depend on this choice of metric.
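To make the question above concrete, here is a minimal sketch of one such feature attribution method, Gradient * Input; the toy model, input, and choice of explainer are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

# Illustrative two-layer network over 8 tabular features (not the paper's model).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(1, 8, requires_grad=True)  # one input to explain
logits = model(x)
target = logits.argmax(dim=1).item()       # explain the predicted class

logits[0, target].backward()               # gradient of that logit w.r.t. the input
attribution = (x.grad * x).detach()        # Gradient * Input: one score per feature
print(attribution)
```

Each entry of `attribution` is that feature's influence score for this prediction; other explainers answer the same question with different formulas.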

Data scientists are using post hoc explainers at increasing rates – popular methods like LIME and SHAP have had over 350 thousand and 6 million downloads of their Python packages in the last 30 days, respectively [23].

1.2 The Disagreement Problem

The explosion of different explanation methods leads Krishna et al. [15] to observe that when neural networks are trained naturally, i.e., for accuracy alone, post hoc explainers often disagree on how much different features influenced a model's outputs. They coin the term the disagreement problem and argue that when explainers disagree about which features of the input are important, practitioners have little concrete evidence as to which of the explanations, if any, to trust.

There is an important discussion around local explainers and their true value in reaching the communal goal of model transparency and interpretability [see, e.g., 7, 18, 29]; indeed, there are ongoing discussions about the efficacy of present-day explanation methods in specific domains [for healthcare see, e.g., 8]. Feature importance estimates may fail to make a model more transparent when the model being explained is too complex to allow for easily attributing the output to the contribution of each individual feature.

In this paper, we make no normative judgments with respect to this debate, but rather view "explanations" as signals to be used alongside other debugging, validation, and verification approaches in the machine learning operations (MLOps) pipeline. Specifically, we take the following practical approach: make the amount of explanation disagreement a controllable model parameter instead of a point of frustration that catches stakeholders off-guard.

1.3 Encouraging Explanation Consensus

Consensus between two explainers does not require that the explainers output the exact same scores for each feature. Rather, consensus between explainers means that whatever disagreement they exhibit can be reconciled. In a survey, data scientists and machine learning practitioners indicated that explanations are in basic agreement when they satisfy agreement metrics that align with human intuition, which provides a quantitative way to evaluate the extent to which consensus is achieved [15]. When faced with disagreement between explainers, a choice has to be made about what to do next – if such an arbitrary crossroads moment is avoidable via specialized model training, we believe it would be a valuable addition to a data scientist's toolkit.
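As a hedged illustration of what agreement metrics in this spirit can look like, the sketch below computes rank correlation and top-k feature agreement between two attribution vectors; the scores and the choice of k are invented placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder attribution vectors from two explainers (illustrative values only).
lime_scores = np.array([0.42, -0.10, 0.05, 0.31, -0.22])
shap_scores = np.array([0.38, -0.02, 0.11, 0.29, -0.30])

# Rank correlation: do the two explainers order the features the same way?
rho, _ = spearmanr(lime_scores, shap_scores)

# Feature agreement: overlap between each explainer's top-k features by magnitude.
k = 3
top_lime = set(np.argsort(-np.abs(lime_scores))[:k])
top_shap = set(np.argsort(-np.abs(shap_scores))[:k])
feature_agreement = len(top_lime & top_shap) / k

print(f"rank correlation: {rho:.3f}, top-{k} feature agreement: {feature_agreement:.2f}")
```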

We propose, as our main contribution, a training routine to help alleviate the challenge posed by post hoc explanation disagreement. Achieving better consensus between explanations does not inherently make a model more interpretable. But it may lend more trust to the explanations if different approaches to attribution agree more often on which features are important. This gives consensus the practical benefit of acting as a sanity check: if consensus is observed, the choice of which explainer a practitioner uses is less consequential with respect to downstream stakeholder impact, making their interpretation less subjective. A minimal sketch of what such consensus-regularized training can look like follows.
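In the sketch below, the convex-combination weight `lam`, the pairing of Grad with Grad*Input (chosen because both are differentiable end to end), and the cosine-distance disagreement term are illustrative assumptions; the paper defines PEAR's actual loss in Section 3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-ins for a real tabular dataset and model (illustrative only).
X, Y = torch.randn(64, 8), torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(X, Y), batch_size=16)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.5  # 0 = accuracy only, 1 = consensus only

def grad_attribution(x, y):
    """Gradient of the true-class logit w.r.t. the input, kept differentiable."""
    x = x.requires_grad_(True)
    logit = model(x).gather(1, y.unsqueeze(1)).sum()
    (g,) = torch.autograd.grad(logit, x, create_graph=True)
    return g, x

for x, y in loader:
    ce = F.cross_entropy(model(x), y)           # standard accuracy term
    g, x = grad_attribution(x, y)
    # Disagreement between Grad and Grad*Input attributions (cosine distance).
    disagreement = 1 - F.cosine_similarity(g, g * x, dim=1).mean()
    loss = (1 - lam) * ce + lam * disagreement  # consensus-regularized objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sweeping `lam` between 0 and 1 is what turns the amount of disagreement into a controllable parameter rather than a fixed property of the trained model.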

2 RELATED WORK

Our work focuses on post hoc explanation tools. Some post hoc explainers, like LIME [24] and SHAP [21], are proxy models trained atop a base machine learning model with the sole intention of “explaining” that base model. These explainers rely only on the model’s inputs and outputs to identify salient features. Other explainers, such as Vanilla Gradients (Grad) [32], Gradient Times Input (Grad*Input) [30], Integrated Gradients (IntGrad) [37] and SmoothGrad [34], do not use a proxy model but instead compute the gradients of a model with respect to input features to identify important features.[1] Each of these explainers has its quirks and there are reasons to use, or not use, them all—based on input type, model type, downstream task, and so on. But there is an underlying pattern unifying all these explanation tools. Han et al. [12] provide a framework that characterizes all the post hoc explainers used in this paper as different types of local-function approximation. For more details about the individual post hoc explainers used in this paper, we refer the reader to the individual papers and to other works about when and why to use each one [see, e.g., 5, 13].
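For a flavor of how the gradient-based family above works, here is a minimal sketch of Integrated Gradients under illustrative assumptions: a zero baseline, 32 straight-line interpolation steps, and a toy model. See the original papers for the authoritative formulations.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 8)                    # input to explain
baseline = torch.zeros_like(x)           # all-zeros baseline (a common choice)
target = model(x).argmax(dim=1).item()   # explain the predicted class

grads = []
for alpha in torch.linspace(0.0, 1.0, 32):
    # Interpolate along the straight-line path from the baseline to the input.
    point = (baseline + alpha * (x - baseline)).requires_grad_(True)
    model(point)[0, target].backward()
    grads.append(point.grad)

# IG_i = (x_i - baseline_i) * average gradient along the path for feature i.
int_grad = (x - baseline) * torch.stack(grads).mean(dim=0)
print(int_grad)
```

Grad, Grad*Input, and SmoothGrad follow the same pattern with different aggregation of input gradients, which is what makes them cheap to compute when model weights are accessible.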

We build directly on prior work that defines and explores the disagreement problem [15]. Disagreement here refers to the difference in feature importance scores between two feature attribution methods, but it can be quantified in several different ways, as described by the metrics that Krishna et al. [15] define and use. We describe these metrics in Section 4.

The method we propose in this paper relates to previous work that trains models with constraints on explanations via penalties on the disagreement between feature attribution scores and handcrafted ground-truth scores [26, 27, 41]. Additionally, work has been done to leverage the disagreement between different post hoc explanations to construct new feature attribution scores that improve metrics like stability and pairwise rank agreement [2, 16, 25].
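As a rough illustration of that constraint-based line of work (not the method of this paper), the sketch below penalizes gradient attributions that fall on features a hypothetical ground-truth mask marks as irrelevant; the mask, the penalty weight, and the model are invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Hypothetical annotation: 1 = feature may matter, 0 = feature should not matter.
mask = torch.tensor([[1., 1., 0., 0., 1., 0., 0., 0.]])

x = torch.randn(4, 8, requires_grad=True)
y = torch.randint(0, 2, (4,))

log_probs = F.log_softmax(model(x), dim=1)
(g,) = torch.autograd.grad(log_probs.sum(), x, create_graph=True)

# Penalize attribution mass on features the annotation marks as irrelevant.
penalty = ((g * (1 - mask)) ** 2).sum()
loss = F.nll_loss(log_probs, y) + 0.1 * penalty  # 0.1 is an illustrative weight
```

PEAR differs from this line of work in that it needs no handcrafted ground-truth scores: the regularization target is agreement between explainers, not agreement with an annotation.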


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

