Tags
Tags mark specific points in a repository's history as important, typically releases.
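The release tags listed below (b4389, b4388, …) are ordinary git tags pointing at specific commits. A minimal sketch of creating and listing such a tag with plain git — the repository and tag name here are hypothetical, created in a throwaway directory:

```shell
# Minimal sketch (assumes git is installed): build a scratch repo,
# tag its commit, and list tags — the same mechanism behind tags like b4389.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"
git tag b0001            # lightweight tag on the current commit
git tag -l 'b*'          # prints: b0001
```

A lightweight tag is just a named pointer to a commit; annotated tags (`git tag -a`) additionally store a tagger, date, and message.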
b4389 · 09fe2e76 · server: allow filtering llama server response fields (#10940) · Dec 24, 2024
b4388 · 30caac3a · llama : the WPM vocabs use the CLS token as BOS (#10930) · Dec 24, 2024
b4387 · 60cfa728 · ggml : use wstring for backend search paths (#10960) · Dec 24, 2024
b4386 · 3327bb0f · ggml : fix arm enabled features check (#10961) · Dec 24, 2024
b4385 · 32d6ee63 · ggml : fix const usage in SSE path (#10962) · Dec 23, 2024
b4384 · 14b699ec · server : fix missing model id in /model endpoint (#10957) · Dec 23, 2024
b4383 · 485dc012 · server : add system_fingerprint to chat/completion (#10917) · Dec 23, 2024
b4382 · 86bf31cf · rpc-server : add support for the SYCL backend (#10934) · Dec 23, 2024
b4381 · b92a14a8 · llama : support InfiniAI Megrez 3b (#10893) · Dec 23, 2024
b4380 · 6f0c9e03 · llama : support for Llama-3_1-Nemotron-51B (#10669) · Dec 23, 2024
b4379 · dab76c92 · llama-run : include temperature option (#10899) · Dec 23, 2024
b4378 · 7024d59e · ggml : fix run-time on FreeBSD in get_executable_path() (#10948) · Dec 23, 2024
b4376 · 7ae33a61 · llama : add Falcon3 support (#10883) · Dec 23, 2024
b4375 · ebdee947 · vulkan: build fixes for 32b (#10927) · Dec 22, 2024
b4372 · e34c5af4 · ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0() (#10874) · Dec 21, 2024
b4371 · eb5c3dc6 · SYCL: Migrate away from deprecated ggml_tensor->backend (#10840) · Dec 20, 2024
b4369 · 21ae3b9b · ggml : add test for SVE and disable when it fails (#10906) · Dec 20, 2024
b4368 · 0a11f8b7 · convert : fix RWKV v6 model conversion (#10913) · Dec 20, 2024
b4367 · d408bb92 · clip : disable GPU support (#10896) · Dec 19, 2024
b4366 · 5cab3e4a · llama : minor grammar refactor (#10897) · Dec 19, 2024