This article compares the proposed IIL method against SOTA incremental learning techniques on Cifar-100 and ImageNet-100.


2025/11/06 02:00

Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References


Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

5.2. Comparison with SOTA methods

Tab. 1 shows the test performance of different methods on Cifar-100 and ImageNet-100. The proposed method achieves the best performance promotion after ten consecutive IIL tasks by a large margin, with a low forgetting rate. Although ISL [13], which is proposed for a similar setting of learning from new sub-categories, has a low forgetting rate, it fails on the new requirement of model enhancement. Attaining better performance on the test data is more important than avoiding forgetting on certain data.

In the new IIL setting, all rehearsal-based methods, including iCaRL [22], PODNet [4], DER [31], and OnPro [29], perform poorly. Old exemplars can cause memory overfitting and model bias [35]. Thus, a limited set of old exemplars does not always have a positive influence on stability and plasticity [26], especially in the IIL task. The forgetting rate of rehearsal-based methods is high compared to other methods, which also explains their performance degradation on the test data. Detailed performance at each learning phase is shown in Fig. 4. Compared to other methods, which struggle to resist forgetting, our method is the only one that stably promotes the existing model on both datasets.
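The forgetting rate compared above can be computed in several ways; a common convention in incremental learning (an assumption here, since the paper's exact definition is not shown in this excerpt) is the average drop from the best accuracy ever reached on each earlier task to its accuracy after the final learning phase:

```python
def forgetting_rate(acc_history):
    """Average forgetting over old tasks.

    acc_history[p][t] = accuracy on task t measured after learning phase p.
    The task learned in the final phase is excluded, since it has had no
    opportunity to be forgotten yet.
    """
    final = acc_history[-1]
    num_old = len(final) - 1  # exclude the most recently learned task
    if num_old <= 0:
        return 0.0
    drops = []
    for t in range(num_old):
        # Best accuracy ever achieved on task t across all phases that saw it.
        best = max(phase[t] for phase in acc_history if t < len(phase))
        drops.append(best - final[t])
    return sum(drops) / num_old


# Toy example: accuracy on task 0 drops from 0.80 to 0.70 after phase 2,
# so the forgetting rate over the single old task is 0.10.
history = [[0.80], [0.70, 0.85]]
print(round(forgetting_rate(history), 4))
```

Under this definition, a method can have a low forgetting rate yet still fail to improve test performance, which is exactly the ISL behavior criticized above.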

Following ISL [13], we further apply our method to incremental sub-population learning, as shown in Tab. 2. Sub-population incremental learning is a special case of IIL in which new knowledge comes from new subclasses. Compared to the SOTA ISL [13], our method is notably superior in learning new subclasses over long incremental steps, with a comparably small forgetting rate. Notably, ISL [13] uses the Continual Hyperparameter Framework (CHF) [3] to search for the best learning rate (as low as 0.005 in the 15-step task) for each setting, whereas our method learns from the ISL-pretrained base model with a fixed learning rate (0.05). The low learning rate in ISL reduces forgetting but hinders the learning of new knowledge. The proposed method balances learning new knowledge from unseen subclasses against forgetting on seen classes.
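The KCEMA mechanism named in the supplementary material suggests that the balance above rests on an exponential-moving-average consolidation of weights. The sketch below is a generic EMA teacher update, not the authors' exact formulation; the weight lists and momentum value are illustrative assumptions:

```python
def ema_consolidate(teacher_w, student_w, momentum=0.999):
    """Consolidate student weights into a teacher copy via EMA:
    w_T <- m * w_T + (1 - m) * w_S, elementwise over flat weight lists.

    A high momentum keeps the teacher close to the old model (stability),
    while the student trains freely on new data (plasticity).
    """
    return [w_t * momentum + w_s * (1.0 - momentum)
            for w_t, w_s in zip(teacher_w, student_w)]


# Toy example: with momentum 0.9, the teacher moves 10% toward the student.
teacher = [1.0, -2.0]
student = [0.0, 0.0]
print(ema_consolidate(teacher, student, momentum=0.9))
```

In such a scheme a fixed learning rate for the student is less risky than in plain fine-tuning, because the consolidated model changes only slowly, which is consistent with the fixed 0.05 learning rate reported above.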


:::info Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.

:::


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

:::

