This study explores how semantic information within log messages enhances anomaly detection, often outperforming models that rely solely on sequential or temporal data. Through Transformer-based experiments on public datasets, the authors find that event occurrence and semantic cues are more predictive of anomalies than sequence order. The research underscores the limits of current datasets and calls for new, well-annotated benchmarks to evaluate log-based anomaly detection more effectively, enabling models that fully leverage log semantics and event context.

Why Log Semantics Matter More Than Sequence Data in Detecting Anomalies


Abstract

1 Introduction

2 Background and Related Work

2.1 Different Formulations of the Log-based Anomaly Detection Task

2.2 Supervised vs. Unsupervised

2.3 Information within Log Data

2.4 Fixed-Window Grouping

2.5 Related Works

3 A Configurable Transformer-based Anomaly Detection Approach

3.1 Problem Formulation

3.2 Log Parsing and Log Embedding

3.3 Positional & Temporal Encoding

3.4 Model Structure

3.5 Supervised Binary Classification

4 Experimental Setup

4.1 Datasets

4.2 Evaluation Metrics

4.3 Generating Log Sequences of Varying Lengths

4.4 Implementation Details and Experimental Environment

5 Experimental Results

5.1 RQ1: How does our proposed anomaly detection model perform compared to the baselines?

5.2 RQ2: How much does the sequential and temporal information within log sequences affect anomaly detection?

5.3 RQ3: How much do the different types of information individually contribute to anomaly detection?

6 Discussion

7 Threats to validity

8 Conclusions and References


6 Discussion

We discuss the lessons learned from our experimental results.

Semantic information contributes to anomaly detection

The findings of this study confirm the efficacy of utilizing semantic information within log messages for log-based anomaly detection. Recent studies show that classical machine learning models with simple log representation (vectorization) techniques can outperform their complex DL counterparts [7, 23]. In these simple approaches, log events within log data are substituted with event IDs or tokens, and the semantic information is lost. However, according to our experimental results, semantic information is valuable for downstream models to distinguish anomalies, while event occurrence information is also prominent.
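To illustrate this distinction, the sketch below contrasts an event-ID representation, in which semantics are lost, with a semantic embedding of log templates computed with Sentence-BERT [25] and the all-MiniLM-L6-v2 model [26] cited by the paper. This is a minimal illustrative sketch, not the paper's implementation; the templates and variable names are examples.

```python
# Minimal sketch: event-ID representation vs. semantic embedding of log templates.
from sentence_transformers import SentenceTransformer

templates = [
    "Receiving block <*> src: <*> dest: <*>",
    "PacketResponder <*> for block <*> terminating",
]

# Event-ID representation: each template collapses to an opaque integer,
# so any two distinct templates are equally "far apart" for a downstream model.
event_ids = {tpl: i for i, tpl in enumerate(templates)}

# Semantic representation: templates with related wording map to nearby
# vectors, which downstream models can exploit to separate anomalies.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(templates)  # shape: (num_templates, 384)
print(event_ids, embeddings.shape)
```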

We call for future contributions of new, high-quality datasets

The results of our study confirm the findings of recent works [16, 23]: most anomalies may not be associated with the sequential information within log sequences; rather, the occurrence of certain log templates and the semantics within those templates account for the anomalies. This finding highlights the importance of employing new datasets to validate recent designs of DL models (e.g., LSTM [10], Transformer [11]). Moreover, our flexible approach can be combined off-the-shelf with such new datasets to evaluate the influence of different components in logs and contribute to high-quality anomaly detection that leverages the full capacity of logs.

The publicly available log datasets that are well-annotated for anomaly detection are limited, which greatly hinders the evaluation and development of anomaly detection approaches that have practical impacts. Except for the HDFS dataset, whose anomaly annotations are session-based, the existing public datasets contain annotations for each log entry within log data, which implies the anomalies are only associated with certain specific log events or associated parameters within the events. Under this setting, the causality or sequential information that may imply anomalous behaviors is ignored.

7 Threats to validity

We have identified the following threats to the validity of our findings:

Construct Validity

In our proposed anomaly detection method, we adopt the Drain parser [24] to parse the log data. Although the Drain parser performs well and can generate relatively accurate parsing results, parsing errors still exist. Parsing errors may affect the generation of log event embeddings (i.e., logs from the same log event may receive different embeddings) and thus influence the performance of the anomaly detection model. To mitigate this threat, we pass extra regular expressions for each dataset to the parser. These regular expressions help the parser mask known dynamic fields in log messages and thus achieve more accurate results.
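A minimal sketch of this mitigation is shown below: known dynamic fields are masked with dataset-specific regular expressions before parsing, so that messages from the same event collapse to the same template. The patterns are illustrative examples for HDFS-like logs, not the paper's exact expressions.

```python
# Minimal sketch: masking known dynamic fields before log parsing.
import re

MASKING_RULES = [
    (re.compile(r"blk_-?\d+"), "<BLK>"),                 # HDFS block IDs
    (re.compile(r"\d+\.\d+\.\d+\.\d+(:\d+)?"), "<IP>"),  # IPs / ports
    (re.compile(r"\b\d+\b"), "<NUM>"),                   # remaining integers
]

def mask_dynamic_fields(message: str) -> str:
    """Replace dynamic tokens with placeholders so the parser sees stable text."""
    for pattern, placeholder in MASKING_RULES:
        message = pattern.sub(placeholder, message)
    return message

print(mask_dynamic_fields(
    "Receiving block blk_-1608999687919862906 src: /10.250.19.102:54106"
))
# -> "Receiving block <BLK> src: /<IP>"
```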

Internal Validity

There are various hyperparameters involved in our proposed anomaly detection model and experiment settings: 1) In the process of generating samples for both the training and test sets, we define minimum and maximum lengths, along with step sizes, to generate log sequences of varying lengths (see the sketch below). We do not have prior knowledge about the range of sequence lengths in which anomalies may reside; however, we set these parameters according to the common practices of previous studies, which adopt fixed-length grouping. 2) The Transformer-based anomaly detection model entails numerous hyperparameters, such as the number of Transformer layers, attention heads, and the size of the fully-connected layer. As the number of combinations is huge, we were not able to do a grid search. However, we referred to the settings of similar models, experimented with different combinations of hyperparameters, and selected the best-performing combination.
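The following minimal sketch illustrates one way such varying-length sequences can be generated. The window lengths, step size, and non-overlapping stride are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch: sliding windows of several lengths over a parsed event stream.
from typing import List

def generate_sequences(events: List[int],
                       min_len: int = 10,
                       max_len: int = 100,
                       step: int = 10) -> List[List[int]]:
    sequences = []
    for length in range(min_len, max_len + 1, step):
        # Non-overlapping windows of this length; a smaller stride could also be used.
        for start in range(0, len(events) - length + 1, length):
            sequences.append(events[start:start + length])
    return sequences

stream = list(range(250))  # stand-in for a parsed event-ID stream
samples = generate_sequences(stream)
print(len(samples), sorted({len(s) for s in samples}))  # counts, distinct lengths
```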

External Validity

In this study, we conducted experiments on four public log datasets for anomaly detection. Some findings and conclusions obtained from our experimental results are constrained to the studied datasets. However, the studied datasets are the most widely used for evaluating log-based anomaly detection models and have become the de facto standard for evaluation. As annotating log datasets demands substantial human effort, only a few publicly available datasets exist for log-based anomaly detection tasks. The studied datasets are representative, thus enabling our findings to illuminate prevalent challenges in anomaly detection.

Reliability

The reliability of our findings may be influenced by the reproducibility of results, as variations in dataset preprocessing, hyperparameter tuning, and log parsing configurations across different implementations could lead to discrepancies. To mitigate this threat, we adhered to widely used preprocessing procedures and hyperparameter settings, which are detailed in the paper. However, even minor differences in experimental setups or parser configurations may yield divergent outcomes, potentially affecting the consistency of the model's performance across independent studies.

8 Conclusions and References

The existing log-based anomaly detection approaches have used different types of information within log data. However, it remains unclear how these different types of information contribute to the identification of anomalies. In this study, we first propose a Transformer-based anomaly detection model, with which we conduct experiments with different input feature combinations to understand the role of different information in detecting anomalies within log sequences. The experimental results demonstrate that our proposed approach achieves competitive and more stable performance than simple machine learning models when handling log sequences of varying lengths. With the proposed model and the studied datasets, we find that sequential and temporal information do not contribute to the overall performance of anomaly detection when event occurrence information is present. Event occurrence information is the most prominent feature for identifying anomalies, while the inclusion of semantic information from log templates is helpful for anomaly detection models. Our results and findings generally confirm those of recent empirical studies and indicate the deficiencies of the existing public datasets for evaluating anomaly detection methods, especially deep learning models. Our work highlights the need for new datasets that contain different types of anomalies and align more closely with real-world systems to evaluate anomaly detection models. Our flexible approach can be readily applied to such new datasets to evaluate the influences of different components and enhance anomaly detection by leveraging the full capacity of log information.
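As an illustration of the configurable-input idea behind RQ2 and RQ3, the sketch below toggles positional and temporal encodings independently on top of semantic embeddings, so each information type's contribution can be ablated. The architecture, dimensions, and module names are illustrative assumptions rather than the paper's exact model; the temporal projection is a simple stand-in for Time2Vec [30].

```python
# Minimal sketch: a Transformer detector with toggleable input encodings.
import torch
import torch.nn as nn

class ConfigurableDetector(nn.Module):
    def __init__(self, d_model=384, use_positional=True, use_temporal=True):
        super().__init__()
        self.use_positional = use_positional
        self.use_temporal = use_temporal
        self.pos_embedding = nn.Embedding(512, d_model)  # learned positions
        self.time_proj = nn.Linear(1, d_model)           # stand-in for Time2Vec [30]
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d_model, 2)          # normal vs. anomalous

    def forward(self, sem_emb, timestamps):
        # sem_emb: (batch, seq_len, d_model) template embeddings
        # timestamps: (batch, seq_len) elapsed times
        x = sem_emb
        if self.use_positional:
            positions = torch.arange(x.size(1), device=x.device)
            x = x + self.pos_embedding(positions)
        if self.use_temporal:
            x = x + self.time_proj(timestamps.unsqueeze(-1))
        h = self.encoder(x)
        return self.classifier(h.mean(dim=1))  # sequence-level prediction

# Ablation: dropping positional/temporal encodings isolates the contribution
# of event occurrence and semantics alone.
model = ConfigurableDetector(use_positional=False, use_temporal=False)
logits = model(torch.randn(2, 20, 384), torch.rand(2, 20))
print(logits.shape)  # torch.Size([2, 2])
```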

:::info Supplementary information: The source code of the proposed method is publicly available in our supplementary material package: https://github.com/mooselab/suppmaterial-CfgTransAnomalyDetector

:::

Acknowledgements

We would like to gratefully acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC, RGPIN-2021-03900) and the Fonds de recherche du Québec – Nature et technologies (FRQNT, 326866) for their funding support for this work.

References

[1] He, S., Zhu, J., He, P., Lyu, M.R.: Experience report: System log analysis for anomaly detection. In: 2016 IEEE 27th International Symposium on Software Reliability Engineering (ISSRE), pp. 207–218 (2016). IEEE

[2] Oliner, A., Ganapathi, A., Xu, W.: Advances and challenges in log analysis. Communications of the ACM 55(2), 55–61 (2012)

[3] He, S., He, P., Chen, Z., Yang, T., Su, Y., Lyu, M.R.: A survey on automated log analysis for reliability engineering. ACM Computing Surveys (CSUR) 54(6), 1–37 (2021)

[4] Zhu, J., He, S., Liu, J., He, P., Xie, Q., Zheng, Z., Lyu, M.R.: Tools and benchmarks for automated log parsing. In: 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), pp. 121–130 (2019). IEEE

[5] Chen, Z., Liu, J., Gu, W., Su, Y., Lyu, M.R.: Experience report: Deep learning-based system log analysis for anomaly detection. arXiv preprint arXiv:2107.05908 (2021)

[6] Nedelkoski, S., Bogatinovski, J., Acker, A., Cardoso, J., Kao, O.: Self-attentive classification-based anomaly detection in unstructured logs. In: 2020 IEEE International Conference on Data Mining (ICDM), pp. 1196–1201 (2020). IEEE

[7] Wu, X., Li, H., Khomh, F.: On the effectiveness of log representation for log-based anomaly detection. Empirical Software Engineering 28(6), 137 (2023)

[8] Xu, W., Huang, L., Fox, A., Patterson, D., Jordan, M.I.: Detecting large-scale system problems by mining console logs. In: Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles, pp. 117–132 (2009)

[9] Lou, J.-G., Fu, Q., Yang, S., Xu, Y., Li, J.: Mining invariants from console logs for system problem detection. In: 2010 USENIX Annual Technical Conference (USENIX ATC 10) (2010)

[10] Du, M., Li, F., Zheng, G., Srikumar, V.: Deeplog: Anomaly detection and diagnosis from system logs through deep learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1285–1298 (2017)

[11] Le, V.-H., Zhang, H.: Log-based anomaly detection without log parsing. In: 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 492–504 (2021). IEEE

[12] Guo, H., Yang, J., Liu, J., Bai, J., Wang, B., Li, Z., Zheng, T., Zhang, B., Peng, J., Tian, Q.: Logformer: A pre-train and tuning pipeline for log anomaly detection. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pp. 135–143 (2024)

[13] He, S., Lin, Q., Lou, J.-G., Zhang, H., Lyu, M.R., Zhang, D.: Identifying impactful service system problems via log analysis. In: Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 60–70 (2018)

[14] Farzad, A., Gulliver, T.A.: Unsupervised log message anomaly detection. ICT Express 6(3), 229–237 (2020)

[15] Le, V.-H., Zhang, H.: Log-based anomaly detection with deep learning: how far are we? In: 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE), pp. 1356–1367 (2022). IEEE

[16] Landauer, M., Skopik, F., Wurzenberger, M.: A critical review of common log data sets used for evaluation of sequence-based anomaly detection techniques. Proceedings of the ACM on Software Engineering 1(FSE), 1354–1375 (2024)

[17] Zhu, J., He, S., He, P., Liu, J., Lyu, M.R.: Loghub: A large collection of system log datasets for ai-driven log analytics. In: 2023 IEEE 34th International Symposium on Software Reliability Engineering (ISSRE), pp. 355–366 (2023). IEEE

[18] Bodik, P., Goldszmidt, M., Fox, A., Woodard, D.B., Andersen, H.: Fingerprinting the datacenter: automated classification of performance crises. In: Proceedings of the 5th European Conference on Computer Systems, pp. 111–124 (2010)

[19] Chen, M., Zheng, A.X., Lloyd, J., Jordan, M.I., Brewer, E.: Failure diagnosis using decision trees. In: International Conference on Autonomic Computing, 2004. Proceedings., pp. 36–43 (2004). IEEE

[20] Liang, Y., Zhang, Y., Xiong, H., Sahoo, R.: Failure prediction in IBM BlueGene/L event logs. In: Seventh IEEE International Conference on Data Mining (ICDM 2007), pp. 583–588 (2007). IEEE

[21] Guo, H., Yuan, S., Wu, X.: Logbert: Log anomaly detection via bert. In: 2021 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2021). IEEE

[22] Lin, Q., Zhang, H., Lou, J.-G., Zhang, Y., Chen, X.: Log clustering based problem identification for online service systems. In: Proceedings of the 38th International Conference on Software Engineering Companion, pp. 102–111 (2016)

[23] Yu, B., Yao, J., Fu, Q., Zhong, Z., Xie, H., Wu, Y., Ma, Y., He, P.: Deep learning or classical machine learning? an empirical study on log-based anomaly detection. In: Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, pp. 1–13 (2024)

[24] He, P., Zhu, J., Zheng, Z., Lyu, M.R.: Drain: An online log parsing approach with fixed depth tree. In: 2017 IEEE International Conference on Web Services (ICWS), pp. 33–40 (2017). IEEE

[25] Reimers, N., Gurevych, I.: Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084 (2019)

[26] Hugging Face: all-MiniLM-L6-v2 Model. Accessed: April 8, 2024. https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2

[27] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017)

[28] Irie, K., Zeyer, A., Schlüter, R., Ney, H.: Language modeling with deep transformers. arXiv preprint arXiv:1905.04226 (2019)

[29] Haviv, A., Ram, O., Press, O., Izsak, P., Levy, O.: Transformer language models without positional encodings still learn positional information. In: Goldberg, Y., Kozareva, Z., Zhang, Y. (eds.) Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 1382–1390. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates (2022). https://doi.org/10.18653/v1/2022.findings-emnlp.99. https://aclanthology.org/2022.findings-emnlp.99

[30] Kazemi, S.M., Goel, R., Eghbali, S., Ramanan, J., Sahota, J., Thakur, S., Wu, S., Smyth, C., Poupart, P., Brubaker, M.: Time2vec: Learning a vector representation of time. arXiv preprint arXiv:1907.05321 (2019)

[31] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)

[32] Oliner, A., Stearley, J.: What supercomputers say: A study of five system logs. In: 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN’07), pp. 575–584 (2007). IEEE

:::info Authors:

  1. Xingfang Wu
  2. Heng Li
  3. Foutse Khomh

:::

:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::

