Tags
Tags mark specific points in a repository's history as important.
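Each entry below pins a build number (e.g. `b2377`) to a specific commit. As a minimal sketch of how such tags work in git (the `demo` repository here is illustrative, not part of this listing), a tag can be created on a commit and then listed or resolved:

```shell
# Sketch: create a repo, tag a commit, and inspect the tag.
# The repo name and commit message are illustrative; the tag name
# mirrors the build-tag convention in the list below.
git init -q demo
git -C demo -c user.name=ci -c user.email=ci@example.com \
    commit --allow-empty -m "server : fix metrics init (#5964)"
git -C demo tag b2377          # lightweight tag marking this commit
git -C demo tag -l 'b23*'      # prints: b2377
git -C demo describe --tags    # prints: b2377
```

`git describe --tags` resolves the current commit to its nearest tag, which is how a build can report which tagged release it was built from.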
b2377 · 58308a0e · server : fix metrics init (#5964) · Mar 09, 2024
b2376 · 5b097973 · ggml : remove old quantization functions (#5942) · Mar 09, 2024
b2374 · fb215c38 · server : normalize embeddings (#5956) · Mar 09, 2024
b2372 · 0db32bea · server : fix passing prompt as tokens (#5955) · Mar 09, 2024
b2371 · 8a3012a4 · ggml : add ggml-common.h to deduplicate shared code (#5940) · Mar 09, 2024
b2370 · 9674aaf3 · server : simplify logic for empty prompts (#5953) · Mar 09, 2024
b2369 · 950ba1ab · Server: reorganize some http logic (#5939) · Mar 09, 2024
b2368 · e1fa9569 · server : add SSL support (#5926) · Mar 09, 2024
b2367 · fd72d2d2 · server: tests: add truncated prompt tests, better kv cache size (#5933) · Mar 09, 2024
b2366 · c2101a2e · llama : support Mamba Selective State Space Models (#5328) · Mar 08, 2024
b2365 · 515f7d0d · llama : fix quantization of shared token_embd (#5944) · Mar 08, 2024
b2364 · 76e86882 · server: metrics: add llamacpp:prompt_seconds_total and... · Mar 08, 2024
b2363 · e457fb35 · llama : assume tied weights if lm_head/output weights is missing (#5824) · Mar 08, 2024
b2362 · af37fd8b · server : fix EOS token detection with disabled cache (#5938) · Mar 08, 2024
b2361 · 581ed5c4 · log : fix MSVC compile errors (#5643) · Mar 08, 2024
b2360 · 6cdabe65 · llama-bench : add embeddings option (#5924) · Mar 07, 2024
b2359 · 89fb735f · Revert "[SYCL] fix error when set main gpu to non-zero (#5901)" (#5918) · Mar 07, 2024
b2358 · 55a2a900 · server : add `/v1/completions` endpoint (#5914) · Mar 07, 2024
b2357 · 2002bc96 · server : refactor (#5882) · Mar 07, 2024
b2356 · ceca1aef · [SYCL] fix error when set main gpu to non-zero (#5901) · Mar 07, 2024