Tags
Tags mark specific points in a repository's history as important, typically releases or build milestones.
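The build tags listed below are created with standard git tagging. A minimal sketch of creating and listing a tag locally (the tag name `b0001` and the throwaway repository are hypothetical, for illustration only):

```shell
# Create a throwaway repository, tag a commit, and list tags.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "initial commit"
# Annotated tag marking this point in history:
git tag -a b0001 -m "build tag b0001"
tags=$(git tag -l)   # lists all tags; here just b0001
echo "$tags"
```

A lightweight tag (`git tag b0001`, without `-a`) would also work, but annotated tags carry a message, author, and date, which is why release tooling generally prefers them.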
b3873 · a7ad5535 · ggml-backend : add device description to CPU backend (#9720) · Oct 03, 2024
b3872 · d6fe7abf · ggml: unify backend logging mechanism (#9709) · Oct 03, 2024
b3870 · 841713e1 · rpc : enable vulkan (#9714) · Oct 03, 2024
b3869 · 56399714 · Fixed dequant precision issues in Q4_1 and Q5_1 (#9711) · Oct 03, 2024
b3868 · c83ad6d0 · ggml-backend : add device and backend reg interfaces (#9707) · Oct 03, 2024
b3867 · a39ab216 · llama : reduce compile time and binary size (#9712) · Oct 02, 2024
b3866 · f536f4c4 · [SYCL] Initial cmake support of SYCL for AMD GPUs (#9658) · Oct 02, 2024
b3865 · 00b7317e · vulkan : do not use tensor->extra (#9407) · Oct 02, 2024
b3864 · 76b37d15 · gguf-split : improve --split and --merge logic (#9619) · Oct 02, 2024
b3863 · 148844fe · examples : remove benchmark (#9704) · Oct 02, 2024
b3861 · f1b8c427 · sync : ggml · Oct 01, 2024
b3856 · cad341d8 · metal : reduce command encoding overhead (#9698) · Oct 01, 2024
b3855 · a90484c6 · llama : print correct model type for Llama 3.2 1B and 3B · Oct 01, 2024
b3853 · 6f1d9d71 · Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) · Sep 30, 2024
b3849 · 8277a817 · console : utf-8 fix for windows stdin (#9690) · Sep 30, 2024
b3848 · c919d5db · ggml : define missing HWCAP flags (#9684) · Sep 29, 2024
b3847 · d0b1d663 · sync : ggml · Sep 29, 2024
b3841 · faac0bae · common : ensure llama_batch size does not exceed max size (#9668) · Sep 29, 2024
b3837 · 1b2f992c · test-backend-ops : use flops for some performance tests (#9657) · Sep 28, 2024
b3835 · 6102037b · vocab : refactor tokenizer to reduce init overhead (#9449) · Sep 28, 2024