Tags
Tags mark specific points in the repository's history as important.
b4785 · 581650b7 · vulkan: improve im2col (#11826) · Feb 28, 2025
b4784 · b95c8af3 · cmake: Fix ggml backend dependencies and installation (#11818) · Feb 27, 2025
b4783 · a800ae46 · llava : add struct for FFI bindgen (#12079) · Feb 26, 2025
b4778 · a82c9e7c · vulkan: fix assertion when qy_needs_dequant (#12068) · Feb 25, 2025
b4777 · 401af80b · server: handle echo=false on /v1/completions (#12060) · Feb 25, 2025
b4776 · c132239b · add OP sigmoid (#12056) · Feb 25, 2025
b4775 · 393fca62 · ggml-cpu: Fix build with sve (#12059) · Feb 25, 2025
b4774 · 61d4f39d · vulkan: implement more backpropagation operators (#11914) · Feb 25, 2025
b4773 · 0b527456 · server: support add_generation_prompt query param (#12062) · Feb 25, 2025
b4771 · 3e9a2860 · llama : expose llama_model_n_head_kv in the API (#11997) · Feb 25, 2025
b4770 · 58d07a80 · metal : copy kernels for quant to F32/F16 conversions (#12017) · Feb 25, 2025
b4769 · 34a846b5 · opencl: fix for small models (#11950) · Feb 24, 2025
b4768 · 7a2c913e · llava : Add Granite Vision Support (#11794) · Feb 24, 2025
b4767 · 08d59862 · [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035) · Feb 24, 2025
b4765 · 8303e8b0 · SYCL: Fix GGML_SYCL_DEBUG macro (#11995) · Feb 24, 2025
b4764 · 7ad0779f · run: allow to customize prompt by env var LLAMA_PROMPT_PREFIX (#12041) · Feb 23, 2025
b4763 · f777a73e · Some llama-run cleanups (#11973) · Feb 23, 2025
b4762 · af7747c9 · ggml-cpu: Support s390x SIMD Instruction Set (#12019) · Feb 22, 2025
b4761 · a28e0d5e · CUDA: app option to compile without FlashAttention (#12025) · Feb 22, 2025
b4760 · 36c258ee · llava: build clip image from pixels (#11999) · Feb 22, 2025