Tags
Tags mark specific points in a repository's history as important.
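Each entry below is a git tag pinned to a specific commit. As a minimal sketch, tags like these can be created, listed, and checked out as follows, in a throwaway local repository (the tag name `b1600` is reused from the listing purely for illustration):

```shell
# Throwaway repo: create one commit and tag it, the way the b-series
# tags below pin specific build points (names here are illustrative).
git init -q tag-demo
cd tag-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "build"
git tag b1600                 # lightweight tag pointing at HEAD
git tag --list 'b16*'         # prints: b1600
git checkout -q b1600         # detached HEAD at the tagged commit
```

A lightweight tag, as used here, is just a named pointer to a commit; projects that want a tagger, date, and message attached use annotated tags (`git tag -a`) instead.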
b1600 · d5a1cbde · llama : support optional tensors (#4283) · Dec 01, 2023
b1599 · b220222a · swift : fix token_to_piece implementation (#4278) · Dec 01, 2023
b1598 · 511f52c3 · build : enable libstdc++ assertions for debug builds (#4275) · Dec 01, 2023
b1597 · 03562f3a · llama : support attention bias on LLaMA architecture (#4283) · Dec 01, 2023
b1596 · 37c746d6 · llama : add Qwen support (#4281) · Dec 01, 2023
b1595 · 880f5797 · llama : fix integer overflow during quantization (#4284) · Dec 01, 2023
b1593 · ef47ec18 · ggml : add ggml_soft_max_ext (#4256) · Dec 01, 2023
b1592 · 1d144112 · server : add --log-disable to disable logging to file (#4260) · Dec 01, 2023
b1591 · f43f0936 · server : add single-client multi-prompt support (#4232) · Dec 01, 2023
b1590 · d2809a3b · make : fix Apple clang determination bug (#4272) · Dec 01, 2023
b1589 · 15f5d960 · build : fix build info generation and cleanup Makefile (#3920) · Dec 01, 2023
b1587 · 8efa0f6e · main : pass LOG_TEE callback to llama.cpp log (#4033) · Nov 30, 2023
b1583 · f7f9e062 · cmake : fix the metal file foder path (#4217) · Nov 30, 2023
b1581 · b18c66ca · llama : fix alignment of general.name in print meta (#4254) · Nov 30, 2023
b1579 · 954e2285 · llama : fix typical sampling (#4261) · Nov 30, 2023
b1575 · 64e64aa2 · ggml : restore abort() in GGML_ASSERT (#4242) · Nov 28, 2023
b1574 · 8406b092 · ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload... · Nov 28, 2023
b1573 · b38a16df · cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) · Nov 27, 2023
b1571 · bb03290c · examples : iOS example with swift ui (#4159) · Nov 27, 2023
b1570 · f3b26981 · ggml : fix -Warray-bounds warning with gcc (#4231) · Nov 26, 2023