Tags
Tags mark specific points in a repository's history as important.
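As a minimal sketch of how such release tags work, the commands below create a throwaway repository, attach a lightweight tag to a commit, and list it. The tag name `b0001` is hypothetical, chosen to mirror the `b44xx` build tags listed below; it is not part of this repository.

```shell
# Minimal sketch: mark a point in history with a lightweight tag,
# the same mechanism behind release tags like b4440 below.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "initial commit"
git tag b0001        # hypothetical tag name marking this commit
git tag --list       # prints: b0001
```

`git show --no-patch <tag>` would then display the tagged commit's hash and message, matching the hash/message pairs shown for each tag in the list.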
b4440 · 8cef75c7 · llamafile : ppc64le MMA INT8 implementation (#10912) · Jan 08, 2025
b4439 · 0d52a69e · ci : fix cmake option (#11125) · Jan 08, 2025
b4438 · 02f04301 · Disable GL_KHR_cooperative_matrix Vulkan extension if not available. (#11117) · Jan 08, 2025
b4437 · bec2183f · fix: Vulkan shader gen binary path when Cross-compiling (#11096) · Jan 08, 2025
b4435 · 017cc5f4 · ggml-backend : only offload from host buffers (fix) (#11124) · Jan 07, 2025
b4434 · a3d50bc0 · ggml-backend : only offload from host buffers (#11120) · Jan 07, 2025
b4433 · a4dd4900 · rpc : code cleanup (#11107) · Jan 07, 2025
b4432 · c0d6f790 · SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6 (#11087) · Jan 07, 2025
b4431 · dc7cef9f · llama-run : fix context size (#11094) · Jan 06, 2025
b4430 · ecebbd29 · llama : remove unused headers (#11109) · Jan 06, 2025
b4428 · e6e7c75d · server : fix extra BOS in infill endpoint (#11106) · Jan 06, 2025
b4426 · 96a1dc27 · llama : prevent system info string accumulation across calls (#11101) · Jan 06, 2025
b4425 · 6369f867 · llama : rename missed batch params/vars to ubatch (#10059) · Jan 06, 2025
b4424 · 47182dd0 · llama : update llama_model API names (#11063) · Jan 06, 2025
b4423 · 3e6e7a6b · tokenize : escape the prompt (#11058) · Jan 06, 2025
b4422 · ae2f606b · mmap : fix fileno macro clash (#11076) · Jan 06, 2025
b4421 · 727368c6 · llama : use LLAMA_TOKEN_NULL (#11062) · Jan 06, 2025
b4420 · 5047dd35 · llama : use _impl suffix instead of _internal (#11060) · Jan 06, 2025
b4419 · 46e3556e · CUDA: add BF16 support (#11093) · Jan 06, 2025
b4418 · b56f079e · Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (#11074) · Jan 04, 2025