Tags
Tags give the ability to mark specific points in history as being important
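The idea above (a tag marks a specific point in history) can be sketched with plain git in a throwaway repository; the repository path and the tag name `b0001` here are hypothetical, chosen only to mirror the `bNNNN` release tags listed below:

```shell
#!/bin/sh
set -e

# Create a throwaway repository (hypothetical, for illustration only)
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Mark the current point in history, in the style of the bNNNN tags below
git -C "$repo" tag b0001

# The tag now appears in the tag list and resolves to the exact commit
# it was created on
git -C "$repo" tag --list
git -C "$repo" rev-parse --verify b0001 >/dev/null
```

In a real clone, `git checkout <tag>` then restores the tree exactly as it was at that point, which is why each release build below pins one commit.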
b1539 · e9370664 · gguf-py : export chat templates (#4125) · Nov 19, 2023
b1538 · 28a2e6e7 · tokenize example: Respect normal add BOS token behavior (#4126) · Nov 18, 2023
b1536 · 2923f17f · Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (#4124) · Nov 18, 2023
b1535 · bbecf3f4 · llama : increase max nodes (#4115) · Nov 17, 2023
b1534 · 8e936108 · build : support ppc64le build for make and CMake (#3963) · Nov 17, 2023
b1533 · 5ad387e9 · tokenize : fix trailing whitespace · Nov 17, 2023
b1532 · 2fa02b4b · examples : add tokenize (#4039) · Nov 17, 2023
b1529 · 9e87ef60 · common : improve yaml log escaping (#4080) · Nov 17, 2023
b1528 · c7cce124 · llava : fix compilation warning that fread return value is not used (#4069) · Nov 17, 2023
b1526 · ba4cf5c0 · train : move number of gpu layers argument parsing to common/train.cpp (#4074) · Nov 17, 2023
b1525 · e85bb1a8 · llama : add functions to get the model's metadata (#4013) · Nov 17, 2023
b1524 · 3e916a07 · finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (#4079) · Nov 17, 2023
b1523 · 947f64f1 · finetune : zero the loraB initial vectors (#4082) · Nov 17, 2023
b1522 · b83e149e · cuda : get_row_rounding F32 (#4095) · Nov 17, 2023
b1521 · 4f447a48 · llama : fix data units (#4101) · Nov 17, 2023
b1520 · 91f64993 · Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) · Nov 16, 2023
b1519 · 8da46278 · gguf : fix potential infinite loops while parsing (#4100) · Nov 16, 2023
b1518 · a6fc554e · llama : restore prefix space in llama tokenizer (#4081) · Nov 15, 2023
b1517 · 1cf2850d · ggml-cuda : increase max graph size (#4084) · Nov 15, 2023
b1516 · 6bb4908a · Fix MacOS Sonoma model quantization (#4052) · Nov 14, 2023