Tags
Tags give the ability to mark specific points in history as being important.
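A tag is a named pointer to a specific commit. As a minimal sketch of how release tags like the ones listed below are created, the following creates an annotated tag in a throwaway repository (the tag name `b4485` is borrowed from the list purely for illustration):

```shell
# Sketch: creating and listing an annotated tag, in a throwaway temp repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit"
git tag -a b4485 -m "release build 4485"   # annotated tag marking this commit
git tag --list                             # prints: b4485
```

An annotated tag (`-a`) stores its own message, tagger, and date, which is why each entry below can show both a tag name and the commit it points at.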
b4485 · f446c2cf · SYCL: Add gated linear attention kernel (#11175) · Jan 15, 2025
b4481 · 091592d7 · Refactor test-chat-template.cpp (#11224) · Jan 14, 2025
b4475 · 84a44815 · cli : auto activate conversation mode if chat template is available (#11214) · Jan 13, 2025
b4474 · 39509fb0 · cuda : CUDA Graph Compute Function Refactor (precursor for performance improvements) (#11042) · Jan 13, 2025
b4468 · 8f70fc3d · llama : remove 'd' from bad special token log (#11212) · Jan 13, 2025
b4467 · 1244cdcf · ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (#11211) · Jan 13, 2025
b4466 · 924518e2 · Reset color before we exit (#11205) · Jan 12, 2025
b4465 · 9a483999 · llama : fix chat template gguf key (#11201) · Jan 12, 2025
b4464 · 08f10f69 · llama : remove notion of CLS token (#11064) · Jan 12, 2025
b4458 · c3f9d257 · Vulkan: Fix float16 use on devices without float16 support + fix... · Jan 10, 2025
b4457 · ee7136c6 · llama: add support for QRWKV6 model architecture (#11001) · Jan 10, 2025
b4456 · c6860cc7 · SYCL: Refactor ggml_sycl_compute_forward (#11121) · Jan 10, 2025
b4453 · f8feb4b0 · model: Add support for PhiMoE arch (#11003) · Jan 09, 2025
b4451 · d9feae1c · llama-chat : add phi 4 template (#11148) · Jan 09, 2025
b4450 · 8d59d911 · fix: add missing msg in static_assert (#11143) · Jan 08, 2025
gguf-v0.14.0 · 8a1d9c25 · gguf-py : move scripts directory (#11116) · Jan 08, 2025
b4447 · f7cd1330 · ci : use actions from ggml-org (#11140) · Jan 08, 2025
b4446 · 4d2b3d88 · lora : improve compat with `mergekit-extract-lora` (#11131) · Jan 08, 2025
b4445 · c07d437b · llama : avoid hardcoded QK_K (#11061) · Jan 08, 2025
b4443 · c792dcf4 · ggml : allow loading backend with env variable (ggml/1059) · Jan 08, 2025