Tag: 算法 (Algorithms)
All the articles with the tag "算法".
Kimi Linear: An Expressive, Efficient Attention Architecture
Updated: at 19:10 · Published: at 13:55
Kimi Linear, with fairly detailed experiments and scale-up results. The conclusion that linear attention can drop RoPE is a pleasant surprise.
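A minimal sketch of why linear attention may get away without RoPE: in its recurrent form the state is built strictly token by token, so order is already implicit in the computation. Names and shapes below are illustrative, not Kimi Linear's actual layer.

```python
import torch

def causal_linear_attention(q, k, v):
    """Recurrent form of causal linear attention.

    q, k, v: (seq_len, dim). The running state accumulates k_t v_t^T,
    so token order is baked in even without an explicit positional
    encoding such as RoPE.
    """
    seq_len, dim = q.shape
    state = torch.zeros(dim, dim)          # running sum of k_t v_t^T
    outputs = []
    for t in range(seq_len):
        state = state + torch.outer(k[t], v[t])
        outputs.append(q[t] @ state)       # o_t = q_t^T * sum_{s<=t} k_s v_s^T
    return torch.stack(outputs)

q = k = v = torch.randn(8, 16)
print(causal_linear_attention(q, k, v).shape)  # torch.Size([8, 16])
```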
Parallelizing Linear Transformers with the Delta Rule over Sequence Length
Updated: at 16:46 · Published: at 14:43
DeltaNet.
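For context, a naive sequential sketch of the delta-rule update that the paper parallelizes over sequence length; the contribution is a chunkwise-parallel form, which this toy loop does not show. Shapes and the beta parameterization are assumptions.

```python
import torch

def delta_rule_step(state, k, v, beta):
    """One delta-rule update: write v at key k after erasing the old value.

    state: (d_k, d_v) fast-weight matrix S_{t-1}; k: (d_k,), v: (d_v,),
    beta: scalar write strength.
    S_t = (I - beta k k^T) S_{t-1} + beta k v^T
    """
    prediction = k @ state                 # value currently stored at key k
    return state - beta * torch.outer(k, prediction - v)

def deltanet_sequential(q, k, v, beta):
    """Sequential reference: o_t = q_t^T S_t. DeltaNet computes the same
    outputs with a chunk-parallel algorithm over the sequence dimension."""
    d_k, d_v = k.shape[1], v.shape[1]
    state = torch.zeros(d_k, d_v)
    outs = []
    for t in range(q.shape[0]):
        state = delta_rule_step(state, k[t], v[t], beta[t])
        outs.append(q[t] @ state)
    return torch.stack(outs)

T, d = 8, 16
out = deltanet_sequential(torch.randn(T, d), torch.randn(T, d),
                          torch.randn(T, d), torch.rand(T))
print(out.shape)  # torch.Size([8, 16])
```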
MLP Memory: Language Modeling with Retriever-pretrained External Memory
Published: at 14:22
Uses an MLP to learn, and stand in for, the probability distribution that the kNN retriever outputs in RAG.
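A hedged sketch of the idea as I read it, in the style of kNN-LM: instead of querying a datastore at inference time and building p_knn from neighbor distances, a small MLP is pretrained to map the LM's hidden state directly to that memory distribution, which is then interpolated with the parametric LM. The module name, the interpolation weight lambda, and the head design below are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPMemoryHead(nn.Module):
    """Toy external-memory head: an MLP trained to imitate the distribution
    a kNN retriever would produce, so no datastore lookup is needed at
    inference time (illustrative sketch only)."""

    def __init__(self, hidden_dim: int, vocab_size: int, mem_dim: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, mem_dim),
            nn.GELU(),
            nn.Linear(mem_dim, vocab_size),
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return F.log_softmax(self.mlp(hidden), dim=-1)  # log p_mem(y | h)

def interpolate(log_p_lm, log_p_mem, lam: float = 0.25):
    """kNN-LM-style mixture: p = (1 - lam) * p_lm + lam * p_mem."""
    stacked = torch.stack([log_p_lm + torch.log1p(torch.tensor(-lam)),
                           log_p_mem + torch.log(torch.tensor(lam))])
    return torch.logsumexp(stacked, dim=0)

head = MLPMemoryHead(hidden_dim=768, vocab_size=32000)
h = torch.randn(4, 768)                      # last-layer hidden states
log_p_lm = F.log_softmax(torch.randn(4, 32000), dim=-1)
print(interpolate(log_p_lm, head(h)).shape)  # torch.Size([4, 32000])
```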
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Published: at 17:47
Take a look at shifted-window attention.
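A minimal sketch of the window partition plus cyclic shift that shifted-window attention builds on; the attention mask for wrapped-around regions and the relative position bias are omitted.

```python
import torch

def window_partition(x, window_size):
    """x: (H, W, C) -> (num_windows, window_size * window_size, C)."""
    H, W, C = x.shape
    x = x.view(H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 2, 1, 3, 4).reshape(-1, window_size * window_size, C)

def shifted_windows(x, window_size, shift):
    """Cyclically shift the feature map before partitioning, so the next
    attention layer mixes tokens across the previous layer's window borders.
    (Swin additionally masks attention between wrapped-around regions.)"""
    x = torch.roll(x, shifts=(-shift, -shift), dims=(0, 1))
    return window_partition(x, window_size)

feat = torch.randn(8, 8, 96)                       # H, W, C
print(window_partition(feat, 4).shape)             # torch.Size([4, 16, 96])
print(shifted_windows(feat, 4, shift=2).shape)     # torch.Size([4, 16, 96])
```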
SpikeVideoFormer: An Efficient Spike-Driven Video Transformer with Hamming Attention and O(T) Complexity
Published: at 16:56
Replaces the dot product in attention with Hamming distance to avoid problems from temporally misaligned spikes. The core approach is interesting, but the experiments feel mediocre: despite claiming a hardware implementation, the energy figures are still computed purely at the algorithm level, and the concrete FPGA implementation is neither disclosed nor clearly described. Accuracy does not surpass the ANN2SNN SOTA. The key point remains replacing operators that are ill-suited to SNNs with alternative operations.
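A toy sketch of the replacement described above: for binary spike tensors, dot-product similarity is swapped for a Hamming-based similarity, which stays meaningful even when spikes are shifted. The normalization and how SpikeVideoFormer folds this into its O(T) attention are assumptions here.

```python
import torch

def hamming_similarity(q_spikes, k_spikes):
    """q_spikes: (T_q, D), k_spikes: (T_k, D), entries in {0, 1}.

    Hamming distance counts positions where the two spike vectors differ;
    similarity = D - distance, so identical spike trains score highest.
    """
    # XOR via |q - k| for {0, 1} tensors, summed over the feature dimension.
    dist = (q_spikes.unsqueeze(1) - k_spikes.unsqueeze(0)).abs().sum(-1)
    return q_spikes.shape[-1] - dist       # (T_q, T_k) similarity matrix

q = (torch.rand(6, 32) > 0.7).float()      # binary spike features
k = (torch.rand(6, 32) > 0.7).float()
v = torch.randn(6, 32)
attn = hamming_similarity(q, k)
out = (attn / attn.sum(-1, keepdim=True)) @ v   # normalized, softmax-free mixing
print(out.shape)                                # torch.Size([6, 32])
```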
SlowFast Networks for Video Recognition
Updated: at 06:15 · Published: at 16:57
A multi-branch CNN; could some of the branches learn more similar inter-frame changes?
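A sketch of the two-branch sampling that gives SlowFast its name: the slow pathway sees a sparsely sampled clip, the fast pathway the densely sampled one, and lateral connections (not shown) fuse them. Channel widths, the stride ratio alpha, and the kernel sizes below are illustrative choices.

```python
import torch
import torch.nn as nn

class TwoPathwayStem(nn.Module):
    """Toy SlowFast-style stem: the slow branch sees every alpha-th frame,
    the fast branch sees all frames with fewer channels (illustrative only)."""

    def __init__(self, alpha: int = 4, slow_ch: int = 64, fast_ch: int = 8):
        super().__init__()
        self.alpha = alpha
        self.slow = nn.Conv3d(3, slow_ch, kernel_size=(1, 7, 7), padding=(0, 3, 3))
        self.fast = nn.Conv3d(3, fast_ch, kernel_size=(5, 7, 7), padding=(2, 3, 3))

    def forward(self, video):                      # video: (B, 3, T, H, W)
        slow_in = video[:, :, :: self.alpha]       # temporal stride alpha
        return self.slow(slow_in), self.fast(video)

stem = TwoPathwayStem()
slow_feat, fast_feat = stem(torch.randn(1, 3, 32, 56, 56))
print(slow_feat.shape, fast_feat.shape)  # (1, 64, 8, 56, 56) (1, 8, 32, 56, 56)
```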
DeltaCNN: End-to-End CNN Inference of Sparse Frame Differences in Videos
Updated: at 15:07 · Published: at 12:11
Exploits the "linearity" of CNN layers to compute feature differences between frames, with CUDA acceleration. Almost the same idea as ViStream; could it solve our current problem?
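A minimal sketch of the linearity being exploited: for a convolution, conv(x_t) - conv(x_{t-1}) equals conv(x_t - x_{t-1}), so a layer can process only the (typically sparse) frame delta and add it to the cached previous output. Nonlinearities and the sparse CUDA kernels that make this pay off in practice are not shown.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
weight = torch.randn(16, 3, 3, 3)           # a fixed conv layer (no bias)

x_prev = torch.randn(1, 3, 32, 32)          # frame t-1
x_curr = x_prev.clone()
x_curr[..., 10:14, 10:14] += 1.0            # only a small patch changes

# Dense recompute vs. delta update through the linear layer.
y_prev = F.conv2d(x_prev, weight, padding=1)
y_dense = F.conv2d(x_curr, weight, padding=1)
y_delta = y_prev + F.conv2d(x_curr - x_prev, weight, padding=1)

print(torch.allclose(y_dense, y_delta, atol=1e-5))  # True: conv is linear
```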
Temporal Flexibility in Spiking Neural Networks: Towards Generalization Across Time Steps and Deployment Friendliness
Published: at 15:38
ICLR 2025 poster; it seems to also be doing elastic inference?
A Simple Framework for Contrastive Learning of Visual Representations
Published: at 13:42
The SimCLR contrastive learning paper. Can contrastive learning align the features at every layer?
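A compact sketch of SimCLR's NT-Xent loss on one batch of paired augmented views; the encoder, projection head, and augmentation pipeline are omitted. To probe the question above, the same loss could in principle be applied to intermediate-layer features as well.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature: float = 0.5):
    """NT-Xent (normalized temperature-scaled cross entropy) as in SimCLR.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    The positive for each row is its counterpart view; the other 2N - 2
    samples in the batch act as negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    n = z1.shape[0]
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```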
QKFormer: Hierarchical Spiking Transformer using Q-K Attention
Published: at 18:09
QKFormer, a NeurIPS 2024 Spotlight, pushes directly trained SNN accuracy on ImageNet and CIFAR to a very high level; future work in this area will probably be unable to avoid comparing against it.