| Author | Commit | Message | Date |
|--------|--------|---------|------|
| Samuele Lorefice | b309ef2c0e | More modifications | 2025-01-23 16:11:18 +01:00 |
| Samuele Lorefice | ac63019fe6 | Adds history command | 2024-12-27 18:00:58 +01:00 |
| Samuele Lorefice | 773203127f | They can now answer | 2024-12-26 20:19:59 +01:00 |
| Samuele Lorefice | 124a4c66fe | Adds correct command check | 2024-12-26 19:51:17 +01:00 |
| Samuele Lorefice | e90e0200e1 | Removes ratelimit, refactors everything in more sections, adds tokenization calculation | 2024-12-26 19:47:07 +01:00 |
| Samuele Lorefice | 000b32c41d | Last work before refactor | 2024-12-26 17:26:29 +01:00 |
| Samuele Lorefice | 0fe19ce04f | Added speaker hinting | 2024-12-26 16:51:18 +01:00 |
| Samuele Lorefice | 50e5ea6533 | Implemented ratelimit | 2024-12-26 16:51:03 +01:00 |
| Samuele Lorefice | 454dbb7e2a | Added GetEnv shorthand, moved Prompt loading to external file | 2024-12-26 16:22:56 +01:00 |
| Samuele Lorefice | 65950e3642 | Solves context out of bound due to history | 2024-12-26 04:32:11 +01:00 |
| Samuele Lorefice | 4167c75279 | Added drop of pending updates on bot start, reset command, AnswerChat method, GPU offload, limit to response length, context reduced to 2048, flash attention, 4 parallel decode queues, --keep of the original 810 tokens (which is the starting prompt) | 2024-12-26 03:24:56 +01:00 |
| Samuele Lorefice | b74e5d75e1 | Fixed code, enabled to also always answer in a private chat | 2024-12-26 01:34:10 +01:00 |
| Samuele Lorefice | 2357c7570c | Added llama.cpp and reworked the code | 2024-12-26 00:35:45 +01:00 |
| Samuele Lorefice | c6302112b2 | Implemented also OpenAI | 2024-12-25 21:38:26 +01:00 |
| Samuele Lorefice | 4b308b762a | Added LMStudio client, upgraded to .net 9.0 | 2024-12-25 19:37:14 +01:00 |
| Samuele Lorefice | 0ba298b955 | Base commit | 2024-12-24 23:08:08 +01:00 |