Releases: ggml-org/llama.cpp

b4747

20 Feb 13:46
c5d91a7
ggml-cpu: Add CPU backend support for KleidiAI library (#11390)

* ggml-cpu: Add CPU backend support for KleidiAI library

* Add environment variable GGML_KLEIDIAI_SME (see the usage sketch after this list)

* Add support for multithread LHS conversion

* Switch kernel selection order to dotprod and i8mm

* Updates for review comments

* More updates for review comments

* Reorganize and rename KleidiAI files

* Move ggml-cpu-traits.h to source file

* Update cmake for SME build and add alignment for SME

* Stop appending GGML_USE_CPU_KLEIDIAI to the GGML_CDEF_PUBLIC list
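
A minimal build-and-run sketch to go with these changes. Only the GGML_KLEIDIAI_SME environment variable is named in the commit itself; the CMake option name is inferred from the GGML_USE_CPU_KLEIDIAI define mentioned above, the model path is a placeholder, and whether the variable enables or disables the SME path should be checked in the source.

```console
# Build the CPU backend with the KleidiAI micro-kernels
# (option name assumed from the GGML_USE_CPU_KLEIDIAI define; verify against docs/build.md)
$ cmake -B build -DGGML_CPU_KLEIDIAI=ON
$ cmake --build build --config Release

# Toggle the SME kernel path at runtime via the new environment variable
# (model path and value are illustrative)
$ GGML_KLEIDIAI_SME=1 ./build/bin/llama-cli -m model.gguf -p "Hello"
```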

b4746

20 Feb 10:54
4806498
ggml: aarch64: implement SVE kernels for q3_K_q8_K vector dot (#11917)

* Added SVE Implementation for Q3_K Kernel in ggml-cpu-quants.c file

* Improved formatting of code in ggml-cpu-quants.c file

* style : minor fixes

* style : less whitespace

* style : ptr spacing

---------

Co-authored-by: vithulep <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>

b4745

20 Feb 09:09
0d55958
run : add --chat-template-file (#11961)

Relates to: https://github.com/ggml-org/llama.cpp/issues/11178

Added a --chat-template-file CLI option to llama-run. If specified, the file
is read and its contents are passed to common_chat_templates_from_model to
override the model's chat template.

Signed-off-by: Michael Engel <[email protected]>
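
A hedged usage example: only the --chat-template-file flag itself comes from this change; the template file name, model reference, and prompt are placeholders.

```console
# Override the model's built-in chat template with a template file from disk
# (file and model names are illustrative)
$ llama-run --chat-template-file ./my-template.jinja my-model.gguf "Write a haiku"
```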

b4743

19 Feb 12:23
d07c621
common : add llama.vim preset for Qwen2.5 Coder (#11945)

This commit adds a preset for llama.vim to use the default Qwen 2.5
Coder models.

The motivation for this change is to make it easier to start a server
suitable for use with the llama.vim plugin. For example, the server
can be started with a command like the following:
```console
$ llama-server --fim-qwen-1.5b-default
```

Refs: https://github.com/ggml-org/llama.cpp/issues/10932

b4742

19 Feb 12:14
abd4d0b
speculative : update default params (#11954)

* speculative : update default params (see the sketch after this list)

* speculative : do not discard the last drafted token
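
Since this release only changes the defaults, the draft settings can still be set explicitly. A hedged sketch, assuming the usual llama-server speculative-decoding flags (-md/--model-draft, --draft-max, --draft-min); check --help for the updated default values.

```console
# Run speculative decoding with explicit draft parameters instead of the updated defaults
# (model file names are placeholders; flag names assumed from llama-server --help)
$ llama-server -m target-model-f16.gguf -md draft-model-q4_0.gguf \
    --draft-max 16 --draft-min 1
```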

b4739

18 Feb 18:46
63e489c
tool-call: refactor common chat / tool-call api (+ tests / fixes) (#1…

b4738

18 Feb 14:00
63ac128
server : add TEI API format for /rerank endpoint (#11942)

* server : add TEI API format for /rerank endpoint (see the request sketch after this list)

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <[email protected]>

* fix

* also gitignore examples/server/*.gz.hpp

---------

Co-authored-by: Georgi Gerganov <[email protected]>
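
A hedged request sketch for the new format: the TEI rerank API sends candidate passages under a "texts" field (the pre-existing Jina-style format uses "documents"); the port, query, and passages below are illustrative, and the accepted fields should be verified against the server README.

```console
# TEI-style rerank request against a llama-server instance started with a reranking model
$ curl http://localhost:8080/rerank \
    -H "Content-Type: application/json" \
    -d '{"query": "What is a panda?", "texts": ["The giant panda is a bear native to China.", "Paris is the capital of France."]}'
```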

b4735

17 Feb 13:49
73e2ed3
CUDA: use async data loading for FlashAttention (#11894)

* CUDA: use async data loading for FlashAttention

---------

Co-authored-by: Diego Devesa <[email protected]>

b4734

17 Feb 11:54
f7b1116
update release requirements (#11897)

b4733

17 Feb 10:53
c4d29ba
server : fix divide-by-zero in metrics reporting (#11915)