379 Commits

Author SHA1 Message Date
Louis
24c56027d7 fix: bypass auto-unload for local api server 2026-02-09 10:28:50 +07:00
Louis
05f611d21a feat: patch env config 2026-02-07 17:42:31 +07:00
Louis
474e1e6b12 feat: add new backend - MLX (#7459)
* feat: support mlx plugin

# Conflicts:
#	Makefile
#	web-app/src/routes/settings/providers/$providerName.tsx

* feat: add prompt cache and fix binary bundle

* feat: vision support

* feat: detect vision capability while importing model

* fix: prompt cache

* feat: support mlx model download from hub

* refactor: clean up settings

* fix: add build step for darwin

* feat: add local api server support for mlx model

* fix: notarize mlx bin

* fix: simplify token speed counter

* chore: clean up

* fix: linter

* fix: test

* fix: ci fail

* fix: xcode version

* fix: run target

* fix: xcode version

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Nguyen Ngoc Minh <91668012+Minh141120@users.noreply.github.com>
2026-02-05 14:13:25 +07:00
Louis
0d95371810 feat: support mlx model download from hub 2026-02-04 12:30:54 +07:00
Louis
883c3b7324 feat: support files upload in projects 2026-02-03 14:45:51 +07:00
Louis
73a22637a6 fix: system message from assistant 2026-01-20 19:26:39 +07:00
Dinh Long Nguyen
c74bbbd9e9 Feat/improve file attachments (#7080)
* embedding works for large files

* attachment as inline

* update tan stack router

* attachment works with proper selection

* fix test

* wait for model to start before doing things

* Token Count now counts inline

* Revert "embedding works for large files"

This reverts commit 85184860cde0729a7a795ea6b9caf2bf66754930.

* refactor: add batch processing to embedTexts

Implemented batch‑based embedding for both rag-extension and vector-db-extension.
- Introduced a `batchSize` parameter with a sensible default.
- Processed texts in chunks to avoid large single calls to the LlamaCPP embed API.
- Mapped batch results to global indices and added per‑batch error handling.
- Logged failures and re‑thrown errors with contextual information.

This change improves memory usage, resilience to API timeouts, and overall scalability of the embedding pipeline.

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update web-app/src/locales/ru/common.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update web-app/src/locales/pt-BR/common.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix lint

* attachment works properly now

* update padding

---------

Co-authored-by: Akarshan <akarshan@menlo.ai>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-03 15:49:54 +07:00
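
The batch-processing change described in this entry reduces to a short loop. A minimal sketch in TypeScript, assuming a hypothetical `embedApi.embed` call and a default batch size of 32 (the actual extension API and default may differ):

```typescript
// Sketch of batch-based embedding: process texts in chunks, map each
// batch's vectors back to global indices, and fail with context.
async function embedTexts(
  texts: string[],
  embedApi: { embed: (batch: string[]) => Promise<number[][]> }, // assumed API
  batchSize = 32
): Promise<number[][]> {
  const results: number[][] = new Array(texts.length)
  for (let start = 0; start < texts.length; start += batchSize) {
    const batch = texts.slice(start, start + batchSize)
    try {
      const vectors = await embedApi.embed(batch)
      // Map batch results back to their global indices.
      vectors.forEach((v, i) => {
        results[start + i] = v
      })
    } catch (err) {
      console.error(`embedTexts: batch starting at index ${start} failed`, err)
      throw new Error(`Embedding failed for batch at index ${start}: ${err}`)
    }
  }
  return results
}
```

Keeping results keyed by global index preserves input order, which is what lets the per-batch error handling rethrow with a precise starting offset.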
Akarshan Biswas
ddbf00b6c9 feat: add embedding flag to model metadata and automatic detection (#7031)
* feat: add embedding flag to model metadata and automatic detection

Add an `embedding` boolean field to both `modelInfo` and `ModelConfig`.
Implement `resolveEmbeddingConfig` to read GGUF metadata for BERT‑based architectures, determine if a model is an embedding model, and persist the result back to `model.yml`.
Update model loading, listing, and validation logic to expose and use the new flag.

This change reduces repeated GGUF reads, speeds up model discovery, and allows the system to distinguish embedding‑capable models for downstream use.

* fix: filter out embeddings model from model list

---------

Co-authored-by: Louis <louis@jan.ai>
2025-11-25 08:04:50 +05:30
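
A minimal sketch of the detection step, assuming the GGUF metadata has already been parsed into a record; the architecture list and the `general.architecture` key are assumptions based on common GGUF conventions, not the exact implementation:

```typescript
// BERT-family architectures are treated as embedding models; the
// resolved flag is persisted to model.yml so later loads skip the
// repeated GGUF read.
const EMBEDDING_ARCHITECTURES = new Set(['bert', 'nomic-bert', 'jina-bert-v2'])

function resolveEmbeddingFlag(ggufMetadata: Record<string, unknown>): boolean {
  const arch = String(ggufMetadata['general.architecture'] ?? '')
  return EMBEDDING_ARCHITECTURES.has(arch)
}
```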
Vanalite
f3d645fb61 feat: distinct mcp server 2025-11-13 11:28:54 +07:00
Nghia Doan
045baed3a4 Merge branch 'dev' into feat/retain-interruption-message 2025-11-05 15:16:12 +07:00
Louis
dac5f3faa2 fix: migrate flash_attn settings (#6864)
* fix: migrate flash_attn settings

* Update web-app/src/hooks/useModelProvider.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update core/src/browser/extension.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-03 19:01:13 +07:00
Vanalite
4752c39d7e feat: Modify on-going response instead of creating new message to avoid message ID duplication 2025-11-03 13:00:35 +07:00
Minh141120
15c426aefc chore: update org name 2025-10-28 17:26:27 +07:00
Nguyen Ngoc Minh
418a48ab39 Merge pull request #6790 from menloresearch/chore/happy-dom-update
chore: update happy dom deps version
2025-10-15 02:53:24 -07:00
Minh141120
f0ca9cce35 chore: update happy-dom version 2025-10-15 14:43:58 +07:00
Dinh Long Nguyen
fc784620e0 fix tests 2025-10-09 04:28:08 +07:00
Dinh Long Nguyen
340042682a ui ux enhancement 2025-10-09 03:48:51 +07:00
Akarshan
7762cea10a feat: Distinguish and preserve embedding model sessions
This commit introduces a new field, `is_embedding`, to the `SessionInfo` structure to clearly mark sessions running dedicated embedding models.

Key changes:
- Adds `is_embedding` to the `SessionInfo` interface in `AIEngine.ts` and the Rust backend.
- Updates the `loadLlamaModel` command signatures to pass this new flag.
- Modifies the llama.cpp extension's **auto-unload logic** to explicitly **filter out** and **not unload** any currently loaded embedding models when a new text generation model is loaded. This is a critical performance fix to prevent the embedding model (e.g., used for RAG) from being repeatedly reloaded.

Also includes minor code style cleanup/reformatting in `jan-provider-web/provider.ts` for improved readability.
2025-10-08 20:03:35 +05:30
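
The auto-unload fix amounts to excluding embedding sessions from the unload candidates. An illustrative sketch with a simplified `SessionInfo`:

```typescript
// Simplified session shape; the real interface carries more fields.
interface SessionInfo {
  pid: number
  modelId: string
  is_embedding: boolean
}

function sessionsToUnload(active: SessionInfo[]): SessionInfo[] {
  // Keep embedding sessions (e.g. the RAG embedder) alive; only text
  // generation sessions are candidates for auto-unload.
  return active.filter((s) => !s.is_embedding)
}
```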
Dinh Long Nguyen
510c4a5188 working attachments 2025-10-08 16:08:40 +07:00
Dinh Long Nguyen
4cb3c46f89 feat: disable all web mcp by default (new users) (#6677) 2025-10-01 09:35:09 +07:00
Dinh Long Nguyen
82d29e7a7d add eof new line missing (#6673) 2025-09-30 21:48:38 +07:00
Dinh Long Nguyen
f33c2c205a feat: web add search button for extension (#6671)
* add search button for web extension

* change button color and behavior

* Update extensions-web/src/mcp-web/components/WebSearchButton.tsx

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-30 21:39:08 +07:00
Faisal Amir
b7dae19756 feat: custom downloaded model name (#6588)
* feat: add field edit model name

* fix: update model

* chore: update UI form with a save button; editing capabilities and renaming the model folder now require saving

* fix: relocate model

* chore: update and refresh list model provider also update test case

* chore: state loader

* fix: model path

* fix: model config update

* chore: fix removal of provider dependencies in the edit model dialog

* chore: avoid shifting the model name or id

---------

Co-authored-by: Louis <louis@jan.ai>
2025-09-26 15:25:44 +07:00
Akarshan Biswas
11b3a60675 fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model load planning logic from the frontend to the Tauri backend. This refactors the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin, making them directly callable from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks.
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src/tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

-   **VRAM calculation improvements:**  Corrects the calculation of total VRAM by iterating over GPUs and multiplying by 1024*1024, improving accuracy.
-   **Hybrid plan optimization:**  Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
-   **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
-   **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
-   **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
-   **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
-   **Error handling and edge cases:**  Improved error handling and edge case management to prevent unexpected failures.
-   **Constants:** Added constants for easier maintenance and understanding.
-   **Power-of-2 adjustment:** Added power of 2 adjustment for max context length to ensure correct sizing for the LLM.

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
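
The planning ladder described in this entry can be rendered compactly. A simplified TypeScript sketch (the real implementation lives in Rust inside `tauri-plugin-llamacpp`; the batch sizes, the linear layer walk, and the memory arithmetic are assumptions):

```typescript
type ModelMode = 'GPU' | 'Hybrid' | 'CPU' | 'Unsupported'

interface ModelPlan {
  mode: ModelMode
  gpuLayers: number
  maxContextLength: number
  batchSize: number
}

// Round the affordable context length down to a power of two, per the
// power-of-2 adjustment noted in the commit.
const pow2Floor = (n: number) => (n < 1 ? 0 : 2 ** Math.floor(Math.log2(n)))

function planModelLoad(
  modelBytes: number,
  kvBytesPerCtxToken: number,
  usableVram: number, // falls back to system RAM on unified memory
  usableRam: number,
  totalLayers: number,
  minCtx = 2048 // assumed minimum context length
): ModelPlan {
  // Full offload: all weights in VRAM, context sized from what remains.
  const gpuCtx = pow2Floor((usableVram - modelBytes) / kvBytesPerCtxToken)
  if (gpuCtx >= minCtx) {
    return { mode: 'GPU', gpuLayers: totalLayers, maxContextLength: gpuCtx, batchSize: 512 }
  }
  // Hybrid: walk layers down until the GPU share plus minimum KV fits.
  for (let layers = totalLayers - 1; layers > 0; layers--) {
    const gpuShare = (modelBytes * layers) / totalLayers
    if (gpuShare + kvBytesPerCtxToken * minCtx <= usableVram) {
      return { mode: 'Hybrid', gpuLayers: layers, maxContextLength: minCtx, batchSize: 256 }
    }
  }
  // CPU fallback when everything fits in system RAM, else unsupported.
  const cpuCtx = pow2Floor((usableRam - modelBytes) / kvBytesPerCtxToken)
  if (cpuCtx >= minCtx) {
    return { mode: 'CPU', gpuLayers: 0, maxContextLength: cpuCtx, batchSize: 128 }
  }
  return { mode: 'Unsupported', gpuLayers: 0, maxContextLength: 0, batchSize: 0 }
}
```

The descending batch sizes mirror the commit's note that batch size is adjusted down for Hybrid and CPU modes; the exact values here are placeholders.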
Louis
57110d2bd7 fix: allow users to download the same model from different authors (#6577)
* fix: allow users to download the same model from different authors

* fix: importing models should have author name in the ID

* fix: incorrect model id show

* fix: tests

* fix: default to mmproj f16 instead of bf16

* fix: type

* fix: build error
2025-09-24 17:57:10 +07:00
Dinh Long Nguyen
df61546942 feat: web remote conversation (#6554)
* feat: implement conversation endpoint

* use conversation aware endpoint

* fetch message correctly

* preserve first message

* fix logout

* fix broadcast issue locally + auth not refreshing profile on other tabs + clean up and sync messages

* add is dev tag
2025-09-23 15:09:45 +07:00
Akarshan Biswas
885da29f28 feat: add getTokensCount method to compute token usage (#6467)
* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-23 07:52:19 +05:30
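
A hedged sketch of the image-token accounting this entry describes. The GGUF key is the one named in the commit; treating `clip.vision.projection_dim` directly as the per-image token count and the fallback constant are assumptions:

```typescript
// Per-image token count from mmproj metadata, with a rough fallback
// when the metadata cannot be read.
function countImageTokens(
  imageCount: number,
  mmprojMetadata: Record<string, unknown> | null
): number {
  const dim = Number(mmprojMetadata?.['clip.vision.projection_dim'])
  const perImage = Number.isFinite(dim) && dim > 0 ? dim : 576 // assumed fallback
  return imageCount * perImage
}

// Total usage reported by getTokensCount = tokenized text + image tokens.
function totalTokens(
  textTokens: number,
  imageCount: number,
  mmprojMetadata: Record<string, unknown> | null
): number {
  return textTokens + countImageTokens(imageCount, mmprojMetadata)
}
```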
Akarshan Biswas
bf7f176741 feat: Prompt progress when streaming (#6503)
* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-22 20:37:27 +05:30
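
The progress payload and a minimal consumer might look like the sketch below; the field names follow the commit description, and the handler is purely illustrative:

```typescript
// Per-chunk prompt progress reported by llama.cpp when streaming with
// return_progress enabled.
interface ChatCompletionPromptProgress {
  cache: number     // prompt tokens reused from the cache
  processed: number // prompt tokens evaluated so far
  time: number      // elapsed processing time
  total: number     // total prompt tokens
}

function onChunk(chunk: { prompt_progress?: ChatCompletionPromptProgress }) {
  const p = chunk.prompt_progress
  if (!p) return
  const pct = p.total > 0 ? ((p.cache + p.processed) / p.total) * 100 : 100
  // Hide the indicator once the prompt is fully processed, matching the
  // "remove progress when percentage >= 100" follow-up above.
  if (pct < 100) console.log(`Processing prompt: ${pct.toFixed(0)}%`)
}
```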
Dinh Long Nguyen
645548e931 Merge pull request #6516 from menloresearch/release/v0.6.10 2025-09-18 19:15:54 +07:00
Louis
5fa0826ee8 fix: new extension settings aren't populated properly (#6476) 2025-09-16 18:10:00 +07:00
Dinh Long Nguyen
b5b6e1dc19 add mcp for web (#6411)
* add mcp for web

* update /jan/v1 endpoint to /v1

* update mise and makefile

* update yarn lock

* use mcp oauth properly
2025-09-12 12:14:10 +07:00
Dinh Long Nguyen
32a2ca95b6 feat: gguf file size + hash validation (#5266) (#6259)
* feat: gguf file size + hash validation

* fix tests fe

* update cargo tests

* handle async download for both models and mmproj

* move progress tracker to models

* handle file download cancelled

* add cancellation mid hash run
2025-08-21 16:17:58 +07:00
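
An illustrative sketch of streaming hash validation with mid-run cancellation, assuming Node-style APIs; the shipped check lives in the Rust download core, so this is a sketch of the technique only:

```typescript
import { createHash } from 'node:crypto'
import { createReadStream } from 'node:fs'

// Verify a downloaded GGUF by size and SHA-256, hashing in a stream so
// an AbortSignal can cancel mid-hash rather than after the full pass.
async function verifyGguf(
  path: string,
  expectedSha256: string,
  expectedSize: number,
  signal: AbortSignal
): Promise<boolean> {
  const hash = createHash('sha256')
  let seen = 0
  for await (const chunk of createReadStream(path, { signal })) {
    // An aborted signal throws out of this loop, cancelling the run.
    hash.update(chunk as Buffer)
    seen += (chunk as Buffer).length
  }
  return seen === expectedSize && hash.digest('hex') === expectedSha256
}
```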
Akarshan Biswas
906b87022d chore: re enable reasoning_content in backend (#6228)
* chore: re enable reasoning_content in backend

* chore: handle reasoning_content

* chore: refactor get reasoning content

* chore: update PR review

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-20 13:06:21 +05:30
Dinh Long Nguyen
b0eec07a01 Add contributing section for jan (#6231) (#6232)
* Add contributing section for jan

* Update CONTRIBUTING.md

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-20 10:18:35 +07:00
Louis
55390de070 Merge pull request #6222 from menloresearch/feat/model-tool-use-detection
feat: #5917 - model tool use capability should be auto detected
2025-08-19 13:55:08 +07:00
Louis
bfe671d7b4 feat: #5917 - model tool use capability should be auto detected 2025-08-19 09:51:36 +07:00
Dinh Long Nguyen
2d486d7b3a feat: add support for reasoning fields (OpenRouter) (#6206)
* add support for reasoning fields (OpenRouter)

* reformat

* fix linter

* Update web-app/src/utils/reasoning.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-18 21:59:14 +07:00
Faisal Amir
99567a1102 feat: recommended label llamacpp setting (#6052)
* feat: recommended label llamacpp

* chore: remove log
2025-08-05 13:55:33 +07:00
Louis
d6ad797769 fix: llama.cpp backend shows blank list sometime (#5876) 2025-07-23 20:04:38 +07:00
Faisal Amir
1d443e1f7d fix: support load model configurations (#5843)
* fix: support load model configurations

* chore: remove log

* chore: sampling params add from send completion

* chore: remove comment

* chore: remove comment on predefined file

* chore: update test model service
2025-07-22 19:52:12 +07:00
Louis
bc4fe52f8d fix: llama.cpp integration model load and chat experience (#5823)
* fix: stop generating should not stop running models

* fix: ensure backend ready before loading model

* fix: backend setting should not block onLoad
2025-07-21 09:29:26 +07:00
Akarshan Biswas
92703bceb2 refactor: move thinking toggle to runtime settings for dynamic control (#5800)
* refactor: move thinking toggle to runtime settings for per-message control

Replaces the static `reasoning_budget` config with a dynamic `enable_thinking` flag under `chat_template_kwargs`, allowing models like Jan-nano and Qwen3 to enable/disable thinking behavior at runtime, even mid-conversation.
Requires UI update

* remove engine argument
2025-07-17 20:18:24 +05:30
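
A request-level sketch of the new toggle; the request shape is simplified, and only `chat_template_kwargs.enable_thinking` follows the commit description:

```typescript
interface CompletionRequest {
  model: string
  messages: { role: string; content: string }[]
  chat_template_kwargs?: { enable_thinking?: boolean }
}

// Thinking can now be flipped per request, even mid-conversation,
// instead of being fixed at load time via reasoning_budget.
const req: CompletionRequest = {
  model: 'jan-nano', // hypothetical model id
  messages: [{ role: 'user', content: 'Hello' }],
  chat_template_kwargs: { enable_thinking: false },
}
```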
Louis
c5fd964bf2 test: add missing tests 2025-07-12 20:15:45 +07:00
Louis
963ad448f5 fix: build 2025-07-10 21:23:04 +07:00
Louis
a770e08013 test: migrate jest to vitest 2025-07-10 21:14:21 +07:00
Louis
af8404d627 fix: tests 2025-07-10 20:16:09 +07:00
Louis
ca6f4f8977 test: fix failed tests 2025-07-10 16:25:47 +07:00
Louis
6e0218c084 Merge branch 'release/v0.7.0' into feat/inference-llamacpp-extension
# Conflicts:
#	.devcontainer/buildAppImage.sh
#	.github/workflows/template-tauri-build-linux-x64.yml
#	Makefile
#	core/src/node/extension/index.test.ts
#	package.json
#	src-tauri/tauri.conf.json
#	web-app/package.json
2025-07-10 15:36:41 +07:00
hiento09
3287e8b300 chore: enable test coverage (#5710)
* chore: enable test coverage
2025-07-07 11:24:13 +07:00
Akarshan
d4a3d6a0d6 Refactor session PID types from string to number across backend and extension
- Changed `pid` field in `SessionInfo` from `string` to `number`/`i32` in TypeScript and Rust.
- Updated `activeSessions` map key from `string` to `number` to align with new PID type.
- Adjusted process monitoring logic to correctly handle numeric PIDs.
- Removed fallback UUID-based PID generation in favor of numeric fallback (-1).
- Added PID cleanup logic in `is_process_running` when the process is no longer alive.
- Bumped application version from 0.5.16 to 0.6.900 in `tauri.conf.json`.
2025-07-04 21:40:54 +05:30
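
An illustrative sketch of the updated session shape; fields beyond `pid` are trimmed and the helper is a stand-in for the real process check:

```typescript
interface SessionInfo {
  pid: number // was string; -1 is the numeric fallback when no PID exists
  modelId: string
}

// The sessions map is now keyed by the numeric PID.
const activeSessions = new Map<number, SessionInfo>()

function isProcessRunning(pid: number, alive: boolean): boolean {
  // Clean up the stale entry when the process is no longer alive.
  if (!alive) activeSessions.delete(pid)
  return alive
}
```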