Tags
Tags mark specific points in a repository's history as important, typically releases.
Tag    Commit    Message · Date

b1304  a03ce384 · finetune : fix #3404 (#3437) · Oct 02, 2023
b1303  a8476769 · metal : set log callback before initializing (#3427) · Oct 02, 2023
b1302  095231df · cmake : fix transient definitions in find pkg (#3411) · Oct 02, 2023
b1300  c97f01c3 · infill : add new example + extend server API (#3296) · Oct 02, 2023
b1299  f5ef5cfb · ggml-cuda : perform cublas mat mul of quantized types as f16 (#3412) · Sep 30, 2023
b1298  40e07a60 · llama.cpp : add documentation about rope_freq_base and scale values (#3401) · Sep 29, 2023
b1297  bc34dd4f · train : fix KQ_pos allocation (#3392) · Sep 29, 2023
b1296  2777a84b · llama : quantize up to 31% faster on Linux and Windows with mmap (#3206) · Sep 29, 2023
b1292  bc39553c · build : enable more non-default compiler warnings (#3200) · Sep 28, 2023
b1291  0ccfc62a · ggml_tensor: update the structure comments. (#3283) · Sep 28, 2023
b1290  7f1a0fe7 · ggml : release the requested thread pool resource (#3292) · Sep 28, 2023
b1289  16bc66d9 · llama.cpp : split llama_context_params into model and context params (#3301) · Sep 28, 2023
b1288  0512d666 · ci : multithreaded builds (#3311) · Sep 28, 2023
b1287  0e76a899 · train : finetune LORA (#2632) · Sep 28, 2023
b1286  2db94d98 · gguf : basic type checking in gguf_get_* (#3346) · Sep 28, 2023
b1285  ecf90b1a · gguf : make token scores and types optional (#3347) · Sep 28, 2023
b1284  2619109a · ci : disable freeBSD builds due to lack of VMs (#3381) · Sep 28, 2023
b1283  ec893798 · llama : custom attention mask + parallel decoding + no context swaps (#3228) · Sep 28, 2023
b1280  da040034 · ggml-cuda : perform cublas fp16 matrix multiplication as fp16 (#3370) · Sep 28, 2023
b1277  20c7e1e8 · gguf : fix a few general keys (#3341) · Sep 27, 2023
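Tags like the ones above are created with the standard git tagging workflow. A minimal sketch, using a throwaway repository: the tag name `b1304` and the commit message are taken from the list above, while the repository setup and tag annotation text are illustrative assumptions.

```shell
# Create a release-style tag and list it, mirroring the entries above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# Identity is set inline only so the sketch runs in a clean environment.
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "finetune : fix #3404 (#3437)"
git -c user.email=ci@example.com -c user.name=ci \
    tag -a b1304 -m "release b1304"
git tag -l                      # lists the new tag
git log -1 --format=%s b1304    # prints the tagged commit's subject line
```

Annotated tags (`-a`) store a tagger, date, and message of their own, which is why release pages can show both the tag name and the underlying commit's message and date.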