Tags
Tags give the ability to mark specific points in history as being important.
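The tag names below follow a `master-<short-hash>` pattern. As a minimal sketch, this is how such a tag could be created and listed with plain git (the repository and commit here are throwaway examples, not taken from llama.cpp):

```shell
#!/bin/sh
set -e

# Work in a throwaway repository (example only, not the real repo)
tmp=$(mktemp -d)
cd "$tmp"
git init -q

# Create a commit to tag (inline identity flags keep the script self-contained)
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "example commit"

# Tag the current commit as master-<short-hash>, mirroring the naming above
short=$(git rev-parse --short HEAD)
git tag "master-$short"

# Listing tags shows the point in history that was marked
git tag
```

A lightweight tag like this simply pins a name to a commit; `git tag -a` would additionally store a message, author, and date with the tag object.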
master-cbebf61 · cbebf61c · Fix assert when free invalid cuda pointer (#2005) · Jun 26, 2023
master-bd34cdd · bd34cdde · ggml : sync latest ggml (custom operators) · Jun 25, 2023
master-c2a08f8 · c2a08f87 · fix server sampling: top k sampler first (#1977) · Jun 25, 2023
master-5ec8dd5 · 5ec8dd5a · #1869 Fix null reference errors when training from scratch with CUDA (#1907) · Jun 24, 2023
master-65bdd52 · 65bdd52a · tests : sync test-grad0 from ggml · Jun 24, 2023
master-f2c754e · f2c754e1 · ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978) · Jun 24, 2023
master-b061ba9 · b061ba9e · llama : fix top-p sampling to match the canonical definition (#1953) · Jun 24, 2023
master-527b6fb · 527b6fba · llama : make model stateless and context stateful (llama_state) (#1797) · Jun 24, 2023
master-7487137 · 74871372 · rework convert.py to read hyper-parameters from config.json (#1958) · Jun 22, 2023
master-bbca06e · bbca06e2 · cmake: revert CUDA arch default to 52, 61 if f16 (#1959) · Jun 21, 2023
master-aacdbd4 · aacdbd40 · llama : fix params struct slignment (#1936) · Jun 20, 2023
master-20568fe · 20568fe6 · [Fix] Reenable server embedding endpoint (#1937) · Jun 20, 2023
master-18b3562 · 18b35625 · ggml : fix bug in LBFGS optimizer (found by ggml tests) · Jun 19, 2023
master-ba4e85a · ba4e85a8 · llama : use aligned memory during ggml_init call from loading saved sessions (#1934) · Jun 19, 2023
master-23fc5c2 · 23fc5c21 · cmake : fix trailing whitespaces · Jun 19, 2023
master-cb40dfc · cb40dfca · llama : only use Q6_K for output weights if tensor size is multiple of 256 (#1932) · Jun 19, 2023
master-ca7c3f4 · ca7c3f4d · cuda : faster k-quants on older GPUs (#1930) · Jun 19, 2023
master-b97ca43 · b97ca431 · ggml : sync latest ggml repo (#1924) · Jun 19, 2023
master-1e3abfc · 1e3abfce · cmake : fix build shared ggml when CUDA is enabled (#1929) · Jun 19, 2023
master-16b9cd1 · 16b9cd19 · Convert vector to f16 for dequantize mul mat vec (#1913) · Jun 19, 2023