Releases: Dao-AILab/flash-attention
v2.7.1.post4
07 Dec 06:15
[CI] Don't include <ATen/cuda/CUDAGraphsUtils.cuh>
v2.7.1.post3
07 Dec 05:41
[CI] Change torch #include to make it work with torch 2.1 Philox
v2.7.1.post2
07 Dec 01:13
[CI] Use torch 2.6.0.dev20241001, reduce torch #include
v2.7.1.post1
06 Dec 01:53
[CI] Fix CUDA version for torch 2.6
v2.7.1
06 Dec 01:43
v2.7.0.post2
13 Nov 04:02
[CI] PyTorch 2.5.1 does not support Python 3.8
v2.7.0.post1
12 Nov 22:29
[CI] Switch back to CUDA 12.4
v2.7.0
12 Nov 22:12
v2.6.3
25 Jul 08:33
v2.6.2
23 Jul 09:30