Tags
Tags mark specific points in history as important.
b1671 · 8fe03ffd · common : remove incorrect --model-draft default (#4568) · Dec 21, 2023
b1670 · 91544948 · CUDA: mul_mat_id always on GPU for batches >= 32 (#4553) · Dec 21, 2023
b1667 · 66f35a2f · cuda : better error message for ggml_get_rows (#4561) · Dec 21, 2023
b1666 · 13988239 · cuda : replace asserts in wrong architecture checks with __trap (#4556) · Dec 21, 2023
b1665 · d3223afd · llama : disable per-tensor info prints on model load (#4562) · Dec 21, 2023
b1664 · 1d7a1912 · Fix access violation in ggml_cuda_free_data if tensor->extra is NULL (#4554) · Dec 21, 2023
b1663 · 799fc226 · CUDA: Faster Mixtral prompt processing (#4538) · Dec 20, 2023
b1662 · 328b83de · ggml : fixed check for _MSC_VER (#4535) · Dec 19, 2023
b1661 · a7aee47b · ggml-cuda: Fix HIP build (#4528) · Dec 18, 2023
b1660 · 0e18b2e7 · llama.swiftui : add tinyllama 1.1B F16 · Dec 18, 2023
b1659 · 6ff39b12 · llama.swiftui : add more models · Dec 18, 2023
b1658 · b9e74f9b · llama : add phi-2 + fix NeoX rope + ggml_mul_mat_set_prec (#4490) · Dec 18, 2023
b1657 · 3c04bf6d · llama : fix try_override for bool_value which always returns true (#4519) · Dec 18, 2023
b1656 · 2994f0c5 · decode : fix logits_valid for legacy API (#4516) · Dec 17, 2023
b1654 · 800a489e · llama.swiftui : add bench functionality (#4483) · Dec 17, 2023
b1652 · 919c4066 · build : Check the ROCm installation location (#4485) · Dec 17, 2023
b1651 · 45668633 · finetune : keep allocs alive until all allocations are done (#4486) · Dec 17, 2023
b1650 · 0ffc92d2 · server : disable llm logs if SERVER_VERBOSE is off (#3792) · Dec 17, 2023
b1649 · 8edd2b40 · server : fix grammar being ignored (#4494) · Dec 17, 2023
b1648 · eb16dae7 · server : fix possible ambiguity in content type charset (#4501) · Dec 17, 2023