455 Commits

Author SHA1 Message Date
Minh141120 7c3802c3fa ci: temporarily disable portable artifact upload to GitHub release and artifact 2026-03-24 08:35:53 +07:00
Louis facd5e9227 fix: update package bundle 2026-03-03 21:05:39 +07:00
dev-miro26 28e593d29f fix: update Tauri dependencies to use libappindicator3 instead of libayatana-appindicator3-dev (#7570) 2026-02-28 20:11:00 +07:00
Minh141120 c534bdb792 add condition for nightly external 2026-02-13 09:28:13 +07:00
Minh141120 c47c4fec20 fix: jan docs yarn v4 2026-02-11 13:50:04 +07:00
Louis 172fc0708c fix: xcode version 2026-02-05 13:04:23 +07:00
Louis 53fc57665c fix: run target 2026-02-05 10:22:12 +07:00
Louis f22f4731cc fix: xcode version 2026-02-05 10:15:25 +07:00
Louis 1a43a87af2 fix: add build step for darwin 2026-02-04 16:44:48 +07:00
Minh141120 5ac5802271 ci: use yarn v4 for base branch coverage job 2026-01-30 13:08:39 +07:00
Minh141120 d04ea491a3 ci: move config yarn makefile to ci 2026-01-30 12:08:12 +07:00
Louis 9e2caa7c32 Merge pull request #7376 from janhq/fix/update-renderer-using-plugins 2026-01-22 16:36:34 +07:00
    fix: update renderer using plugins
hiento09 40b847c51e chore: refactor updater (#7377) 2026-01-22 16:14:11 +07:00
Minh141120 73cc3c6d27 fix: increase heap size for macos ci 2026-01-22 14:50:51 +07:00
Minh141120 220a26f4b7 ci: update matrix test on windows 2026-01-21 14:05:54 +07:00
Minh141120 08b8d8896d fix: build ref portable 2026-01-10 14:39:53 +07:00
Minh141120 f1e178833a ci: add error handling template windows build 2026-01-10 14:37:00 +07:00
Minh141120 5a26db0c2b ci: add portable build 0.7.5 2026-01-10 14:32:32 +07:00
Minh141120 4b13100f67 ci: add upload portable to release github 2026-01-10 14:22:52 +07:00
Minh141120 0bf91a77d4 feat: upload portable jan 2026-01-10 13:27:59 +07:00
Minh141120 5c6323e5b1 fix: update version ci 2026-01-10 12:54:10 +07:00
Minh141120 2e0b1c61a6 feat: add portable windows 2026-01-10 12:41:56 +07:00
Minh141120 3335b48e2b refactor: jan web ci to use main branch and deprecate old workflows 2025-12-30 12:50:23 +07:00
Minh141120 be64755cf3 ci: add free disk space job for ubuntu 2025-12-05 09:53:44 +07:00
Minh141120 e349ae3552 ci: reproduce linter test out of space on ubuntu 2025-12-05 09:22:50 +07:00
Minh141120 9d7360c33b fix: flatpak upload s3 2025-11-28 09:44:59 +07:00
Minh141120 791bdd528f feat: update flatpak build 2025-11-28 08:35:11 +07:00
Minh141120 3cd847726e fix: glib issue on linux 2025-11-15 21:35:55 +07:00
Minh141120 fd3d8230d0 ci: clean up duplicate issue workflow 2025-11-11 19:29:03 +07:00
Minh141120 bf3d774865 fix: glibc linux 2025-11-07 10:43:56 +07:00
Dinh Long Nguyen f06e536f65 Web: Rename Jan Namespace (#6860) 2025-11-03 16:29:06 +07:00
    * Web: Rename Jan Namespace
    * Update extensions-web/src/mcp-web/index.ts
    Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Minh141120 23b03da714 chore: deprecate webhook discord 2025-10-29 11:48:32 +07:00
Minh141120 15c426aefc chore: update org name 2025-10-28 17:26:27 +07:00
hiento09 c854c54c0c chore: update api domain to jan.ai (#6832) 2025-10-28 15:45:42 +07:00
Dinh Long Nguyen f07e43cfe0 fix: conversation items (#6815) 2025-10-24 09:01:31 +07:00
hiento09 999b7b3cd8 chore: api change domain to menlo.ai (#6764) 2025-10-08 13:22:26 +07:00
Louis 28afafaad7 Update .github/workflows/template-tauri-build-windows-x64.yml 2025-10-07 18:36:56 +07:00
    Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Nguyen Ngoc Minh 816d60b22a Merge pull request #6721 from menloresearch/chore/use-custom-nsis-template 2025-10-07 18:05:14 +07:00
    chore: use custom nsis template
    # Conflicts:
    #	Makefile
    #	package.json
    #	src-tauri/tauri.windows.conf.json
Louis fe2c2a8687 Merge branch 'dev' into release/v0.7.0 2025-10-06 20:42:05 +07:00
    # Conflicts:
    #	web-app/src/containers/DropdownModelProvider.tsx
    #	web-app/src/containers/ThreadList.tsx
    #	web-app/src/containers/__tests__/DropdownModelProvider.displayName.test.tsx
    #	web-app/src/hooks/__tests__/useModelProvider.test.ts
    #	web-app/src/hooks/useChat.ts
    #	web-app/src/lib/utils.ts
Minh141120 c4af638a17 ci: remove upload msi 2025-10-03 11:56:31 +07:00
Minh141120 a5574eaacb ci: revert upload msi to github release 2025-10-01 17:00:03 +07:00
Dinh Long Nguyen e6bc1182a6 Merge branch 'dev' into feat/sync-release=to-dev 2025-09-30 22:04:27 +07:00
Minh141120 631a95e018 ci: add upload msi installer for windows 2025-09-30 20:12:53 +07:00
Minh141120 dcb511023d ci: add upload .msi artifact 2025-09-30 15:41:04 +07:00
Louis 5fd249c72d refactor: deprecate Vulkan external binaries (#6638)
* refactor: deprecate vulkan binary

refactor: clean up vulkan lib

chore: cleanup

chore: clean up

chore: clean up

fix: build

* fix: skip binaries download env

* Update src-tauri/utils/src/system.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update src-tauri/utils/src/system.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-29 17:47:59 +07:00
Minh141120 fb9bbb66b0 refactor: remove mise 2025-09-26 12:42:01 +07:00
Akarshan Biswas 11b3a60675 fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model load planning logic from the frontend to the Tauri backend. This refactors the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin, making them directly callable from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks.
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src/tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

-   **VRAM calculation improvements:**  Corrects the calculation of total VRAM by iterating over GPUs and multiplying by 1024*1024, improving accuracy.
-   **Hybrid plan optimization:**  Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
-   **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
-   **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
-   **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
-   **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
-   **Error handling and edge cases:**  Improved error handling and edge case management to prevent unexpected failures.
-   **Constants:** Added constants for easier maintenance and understanding.
-   **Power-of-2 adjustment:** Added power of 2 adjustment for max context length to ensure correct sizing for the LLM.

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
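The memory-planning rules described in the commit message above (per-GPU VRAM reported in MiB and multiplied by 1024 * 1024, fallback to total system RAM on unified-memory systems, underflow-safe subtraction of a reserve, and the power-of-2 adjustment for max context length) can be sketched as follows. This is a hypothetical illustration, not the actual tauri-plugin-llamacpp code; the function names and signatures are assumptions.

```rust
/// Usable VRAM in bytes. Each GPU reports its VRAM in MiB, so entries are
/// multiplied by 1024 * 1024. If no discrete GPU is reported (unified-memory
/// systems such as Apple Silicon), fall back to total system RAM, since CPU
/// and GPU share one memory pool. `saturating_sub` prevents underflow when
/// the reserved amount exceeds the total.
fn usable_vram(gpu_vram_mib: &[u64], total_ram_bytes: u64, reserved_bytes: u64) -> u64 {
    let total: u64 = gpu_vram_mib.iter().map(|mib| mib * 1024 * 1024).sum();
    let total = if total == 0 { total_ram_bytes } else { total };
    total.saturating_sub(reserved_bytes)
}

/// Round a context length down to the nearest power of two, in the spirit of
/// the commit's "power-of-2 adjustment" for max context length.
fn floor_pow2(n: u64) -> u64 {
    if n == 0 {
        0
    } else {
        1u64 << (63 - n.leading_zeros())
    }
}
```

For example, a single 8192 MiB GPU with no reserve yields 8 GiB of usable VRAM, an empty GPU list falls back to system RAM minus the reserve, and a requested context of 5000 tokens is clamped down to 4096.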
Minh141120 45590e3188 ci: fix path for tauri plugins 2025-09-25 12:10:51 +07:00
Minh141120 8205c33176 ci: update package version for tauri plugin 2025-09-25 10:55:10 +07:00
Minh141120 91e30d3c19 docs: add clean output dir step 2025-09-24 12:12:41 +07:00