Tags
Tags mark specific points in a repository's history as important, such as releases.
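As a minimal sketch of how such tags are created and listed with git (using a throwaway repository and hypothetical tag names, not the actual llama.cpp tags below):

```shell
# Sketch: create a throwaway repo, mark commits with tags, and list them.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "initial commit"
git tag b0001                  # lightweight tag: just a named pointer to this commit
git tag -a b0002 -m "release"  # annotated tag: carries a message and tagger info
git tag                        # lists tags alphabetically: b0001, b0002
```

Listings like the one below correspond to `git tag` output enriched with each tagged commit's hash, subject line, and date.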
b3240 · f2d48fff · sync : ggml · Jun 26, 2024
b3233 · 494165f3 · llama : extend llm_build_ffn() to support _scale tensors (#8103) · Jun 26, 2024
b3232 · 9b2f16f8 · `json`: better support for "type" unions (e.g. nullable arrays w/ typed items) (#7863) · Jun 26, 2024
b3231 · 6777c544 · `json`: fix additionalProperties, allow space after enum/const (#7840) · Jun 26, 2024
b3230 · 163d50ad · fixes #7999 (adds control vectors to all `build_XXX()` functions in... · Jun 25, 2024
b3229 · 6fcbf682 · llama : implement Unigram tokenizer needed by T5 and FLAN-T5 model families (#5763) · Jun 25, 2024
b3228 · e6bf0077 · llama : return nullptr from llama_grammar_init (#8093) · Jun 25, 2024
b3227 · 84631fe1 · `json`: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) · Jun 25, 2024
b3226 · dd047b47 · disable docker CI on pull requests (#8110) · Jun 25, 2024
b3223 · 49c03c79 · cvector: better prompt handling, add "mean vector" method (#8069) · Jun 25, 2024
b3222 · 48e6b92c · Add chat template support for llama-cli (#8068) · Jun 25, 2024
b3220 · f702a90e · Update control vector help (#8104) · Jun 25, 2024
b3219 · 083bacce · [SYCL] Re-enabled mul_mat_batched_sycl (#8095) · Jun 25, 2024
b3218 · 2df373ac · CUDA: fix matrix multiplication algorithm choice (#8102) · Jun 25, 2024
b3216 · a818f302 · CUDA: use MMQ instead of cuBLAS by default (#8075) · Jun 24, 2024
b3212 · 8cb508d0 · disable publishing the full-rocm docker image (#8083) · Jun 24, 2024
b3211 · 646ef4a9 · embedding : more cli arguments (#7458) · Jun 24, 2024
b3209 · 95f57bb5 · ggml : remove ggml_task_type and GGML_PERF (#8017) · Jun 24, 2024
b3208 · e112b610 · llama : add support for BitnetForCausalLM (#7931) · Jun 23, 2024
b3206 · 11318d9a · Fix typo in llama_set_embeddings comment (#8077) · Jun 23, 2024