FastGPT V4.14.10 Release Notes
Upgrade Guide
1. Add agent-sandbox related configurations
The following configuration adjustments apply to Docker Compose deployments. Sealos commercial users can contact support for an online sandbox service solution.
Open the latest YAML deployment file and make the following additions:
- Add the `x-volume-manager-auth-token: &x-volume-manager-auth-token 'vmtoken'` variable configuration at the top of the file.
- Add 3 new services: `opensandbox-server`, `volume-manager`, and `agent-sandbox-image`.
- Add `configs` (you can find this content at the bottom of the file; copy and append it directly).
- Modify the `fastgpt` environment variables to include the following:
# ==================== Agent sandbox config ====================
AGENT_SANDBOX_PROVIDER: opensandbox
# OpenSandbox config (effective when PROVIDER: opensandbox)
AGENT_SANDBOX_OPENSANDBOX_BASEURL: http://opensandbox-server:8090
AGENT_SANDBOX_OPENSANDBOX_API_KEY:
AGENT_SANDBOX_OPENSANDBOX_RUNTIME: docker
AGENT_SANDBOX_OPENSANDBOX_IMAGE_REPO: ghcr.io/labring/fastgpt/fastgpt-agent-sandbox
AGENT_SANDBOX_OPENSANDBOX_IMAGE_TAG: v0.0.2
# Volume persistence config (optional under opensandbox provider)
AGENT_SANDBOX_ENABLE_VOLUME: true
AGENT_SANDBOX_VOLUME_MANAGER_URL: http://volume-manager:3000
AGENT_SANDBOX_VOLUME_MANAGER_TOKEN: *x-volume-manager-auth-token
2. Modify the sandbox image name
In the original `sandbox` service, the image name needs to be changed from `fastgpt-sandbox` to `fastgpt-code-sandbox`.
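As a sketch, the rename in the compose file looks roughly like this (the exact registry path and tag in your file may differ; the values below are assumptions, only the `fastgpt-sandbox` → `fastgpt-code-sandbox` change matters):

```yaml
# Before (assumed original service definition)
sandbox:
  image: ghcr.io/labring/fastgpt-sandbox:v4.14.10

# After: only the image name changes
sandbox:
  image: ghcr.io/labring/fastgpt-code-sandbox:v4.14.10
```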
3. Update image tags
- Update FastGPT image tag to: `v4.14.10`
- Update FastGPT commercial image tag to: `v4.14.10`
- Update fastgpt-plugin image tag to: `v0.5.6`
- Update code-sandbox image tag to: `v4.14.10`
Restart the service after updating.
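A typical pull-and-restart sequence for Docker Compose deployments (the compose file name is an assumption; adjust to your deployment):

```sh
# Fetch the updated image tags referenced in the compose file
docker compose -f docker-compose.yml pull
# Recreate containers so they run the new images
docker compose -f docker-compose.yml up -d
```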
4. Update system tools and refresh icons
Some system tool icons have been removed and replaced with image links, so those tool icons will appear missing. Update the system tools again (uninstall and reinstall, or import the pkg directly to overwrite).
🚀 Features
- Added OpenSandbox docker deployment and adaptation, with support for data persistence via mounted volumes.
- Added sandbox file link reading tool, allowing AI to directly return file access links.
- Added WeChat Personal Account publishing channel.
- Added streaming output support for Lark publishing channel.
- The maximum directory limit can now be configured via environment variables.
- Added max limit configuration for rerank models to prevent rerank failures caused by exceeding the single document limit.
- Added tiered billing mode for LLMs and unified the billing push method.
⚙️ Optimizations
- Optimized workflow runtime to reduce computational complexity.
- Added calculation limits for large variables to prevent thread blocking caused by high computational complexity.
- Removed configurations like "Used for knowledge base file processing" and "Used for question classification" from model settings, and unified them with a "Test Model" flag. Test models will have a special identifier and can only be used in AI chat; they will be filtered out in other scenarios.
🐛 Bug Fixes
- Fixed an issue where the default values of global variables in sub-workflows were not taking effect.
- Fixed an issue where the configured rerank model was not displaying in agent mode.
- Fixed an issue where the output of the bge-m3 embedding vector model was always 0.
- Fixed a call failure caused by connection exceptions during concurrent MCP calls.
- Fixed security vulnerabilities in the login API.
- Fixed MCP SSRF security vulnerabilities.
- Fixed an issue where workflow tool errors were not properly caught.