Tags
Tags give the ability to mark specific points in history as being important.
b1515 · 36eed0c4 · stablelm : StableLM support (#3586) · Nov 14, 2023
b1513 · bd90eca2 · llava : fix regression for square images in #3613 (#4056) · Nov 13, 2023
b1512 · 3d68f364 · ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060) · Nov 13, 2023
b1510 · 4760e7cc · sync : ggml (backend v2) (#3912) · Nov 13, 2023
b1509 · bb50a792 · Add ReLU and SQR CUDA ops to (partially) fix Persimmon offloading (#4041) · Nov 13, 2023
b1505 · d96ca7de · server : fix crash when prompt exceeds context size (#3996) · Nov 10, 2023
b1503 · 4a4fd3ee · server : allow continue edit on completion mode (#3950) · Nov 10, 2023
b1502 · df9d1293 · Unbreak persimmon after #3837 (#4010) · Nov 10, 2023
b1500 · 57ad015d · server : add min_p param (#3877) · Nov 08, 2023
b1499 · 875fb428 · ggml-alloc : fix backend assignments of views (#3982) · Nov 08, 2023
b1497 · 413503d4 · make : do not add linker flags when compiling static llava lib (#3977) · Nov 07, 2023
b1496 · e9c1cecb · ggml : fix backward rope after YaRN (#3974) · Nov 07, 2023
b1495 · 54b4df88 · Use params when loading models in llava-cli (#3976) · Nov 07, 2023
b1494 · 46876d2a · cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946) · Nov 07, 2023
b1493 · 381efbf4 · llava : expose as a shared library for downstream projects (#3613) · Nov 07, 2023
b1492 · 2833a6f6 · ggml-cuda : fix f16 mul mat (#3961) · Nov 05, 2023
b1491 · d9ccce2e · Allow common process_escapes to handle \x sequences (#3928) · Nov 05, 2023
b1489 · 132d25b8 · cuda : fix disabling device with --tensor-split 1,0 (#3951) · Nov 05, 2023
b1488 · 3d48f42e · llama : mark LLM_ARCH_STARCODER as full offload supported (#3945) · Nov 05, 2023
b1487 · c41ea36e · cmake : MSVC instruction detection (fixed up #809) (#3923) · Nov 05, 2023