Tags
Tags give the ability to mark specific points in a repository's history as being important.
b4365 · 36319dec · tts : small QoL for easy model fetch (#10903) · Dec 19, 2024
b4363 · a3c33b1d · ggml: fix arm build with gcc (#10895) · Dec 19, 2024
b4362 · 2fffc52b · llama : fix Roberta embeddings (#10856) · Dec 19, 2024
b4361 · 7585edbd · convert : Add support for Microsoft Phi-4 model (#10817) · Dec 19, 2024
b4360 · cd920d0a · tests: disable GGUF test for bad value size (#10886) · Dec 19, 2024
b4359 · 7909e858 · llama-run : improve progress bar (#10821) · Dec 19, 2024
b4358 · 9177484f · ggml : fix arm build (#10890) · Dec 18, 2024
b4357 · 0bf2d10c · tts : add OuteTTS support (#10784) · Dec 18, 2024
b4354 · 0e70ba68 · server : add "tokens" output (#10853) · Dec 18, 2024
b4353 · 46828872 · server : (embeddings) using same format for "input" and "content" (#10872) · Dec 18, 2024
b4351 · 4da69d1a · Revert "llama : add Falcon3 support (#10864)" (#10876) · Dec 18, 2024
b4350 · d62b532c · Use model->gguf_kv for loading the template instead of using the C API. (#10868) · Dec 17, 2024
b4349 · 081b29bd · tests: add tests for GGUF (#10830) · Dec 17, 2024
b4348 · 5437d4aa · sync : ggml · Dec 17, 2024
b4343 · 0006f5a7 · ggml : update ggml_backend_cpu_device_supports_op (#10867) · Dec 17, 2024
b4342 · 05c3a444 · server : fill usage info in embeddings and rerank responses (#10852) · Dec 17, 2024
b4341 · 382bc7f2 · llama : add Falcon3 support (#10864) · Dec 17, 2024
b4338 · 7b1ec53f · vulkan: bugfixes for small subgroup size systems + llvmpipe test (#10809) · Dec 17, 2024
b4337 · 160bc039 · rwkv6: add wkv6 support for Vulkan backend (#10829) · Dec 16, 2024
b4333 · a0974156 · llama : add Deepseek MoE v1 & GigaChat models (#10827) · Dec 15, 2024