Docs: 1.8 release (#11295)

* langflow-webhook-auth-enable

* add-not-contains-filter-operator

* does-not-contains-operator

* less-redundant-explanation

* docs: add jq and path selection to data operations (#10083)

add-jq-and-path-to-data-operations

* smart transform historical names

* change back to smart transform

* jq expression capitalization/package name

* small edit for clarity of not contains operator

* read/write file component name changes

* docs: add smart router component (#10097)

* init

* add-to-release-notes

* remove-dynamic-output-as-parameter

* Apply suggestion from @aimurphy

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* Apply suggestion from @aimurphy

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* Apply suggestion from @aimurphy

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* Apply suggestion from @aimurphy

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: screenshot audit (#10166)

* remove-unused

* agent-examples

* main-ui-screenshots

* components-screenshots

* combine-web-search-components

* simple-agent-flow-in-playground

* round-screenshots

* my-projects

* combine-data-components

* docs: component paths updates for lfx (#10130)

* contributing-bundles-path

* api-monitor-example

* concepts-components-page

* contribute-components-path

* Apply suggestion from @aimurphy

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: auto-add projects as MCP servers (#10096)

* add-mcp-auto-auth-as-default-behavior

* Apply suggestion from @aimurphy

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* Apply suggestion from @aimurphy

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
Co-authored-by: Edwin Jose <edwin.jose@datastax.com>

* docs: amazon bedrock converse (#10289)

* use-bedrock-converse

* Apply suggestion from @aimurphy

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* Update docs/docs/Components/bundles-amazon.mdx

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs 1.7 release: add mock data component (#10288)

* add-component-and-release-note

* Apply suggestion from @aimurphy

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: update custom component docs (#10323)

* add-partial

* update-lfx-component-paths

* move-partial

* completed-quickstart

* clean up intro

* try-docker-with-custom-mount

* up-to-typed-annotations

* typed-annotations

* dynamic-fields

* end-of-file

* bundles-naming

* chore: update component index

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* docs: add cometapi back for 1.7 release (#10445)

* add-comet-bundle-back-for-1.7

* add-comet-to-release-notes

* docs: add back docling remote vlm for release 1.7 (#10489)

* add-back-docling-vlm-content

* add-release-note

* docs: ALTK component (#10511)

* broken-anchor

* sidebar-and-page

* add-release-note

* add-context-on-output

* docs: SSRF warning (#10573)

* add-ssrf-protection-env-var

* api-request-component

* Update docs/docs/Components/components-data.mdx

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* move-note-to-table

* release-note

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: dynamic create data component (#10517)

* add-dynamic-create-data-component-and-release-note

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* clarify-message-types

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: cuga component bundle (#10589)

* initlal-content

* cuga-specific-component-connections

* cleanup

* use-the-same-name

* add-lite-mode-remove-api-flag-and-mode

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* public-or-private-internet

* agent-doesnt-check-urls

* peer-review

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: remove docling vlm component from 1.7 release branch (#10630)

remove-vlm-component

* docs: rename component categories and make all components single pages (#10648)

* docs: OpenAPI spec version upgraded from 1.6.5 to 1.6.8 (#10627)

Co-authored-by: github-merge-queue <118344674+github-merge-queue@users.noreply.github.com>
Co-authored-by: Mendon Kissling <59585235+mendonk@users.noreply.github.com>

* up to models and agents

* sidebars

* fix-broken-links

* chore: Fix indentation on bundles-docling.mdx (#10640)

* sidebars

* redo-intros

* link-to-models

* data-components

* files-components-no-kb

* io-components

* helper-utility-components

* llm-ops-components

* logic-components

* processing-pages

* sidebars

* combine-legacy-components-into-one-page

* update-links

* remove-overview-pages-and-redirect

* make-mcp-tools-page

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* no-cap

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-merge-queue <118344674+github-merge-queue@users.noreply.github.com>
Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: combined web search component (#10664)

* combine-pages

* remove-rss-and-news-search-and-update-links

* remove-vlm-link

* leave-old-release-note-but-remove-link

* docs: add altk reflection component (#10660)

* add-new-component

* differentiate-components

* docs: mcp streamable http client (#10621)

* release note

* mcp-client-changes

* update-astra-example

* icons-and-copy

* order-of-names

* docs: add cuga decomposition strategy as advanced parameter (#10672)

* update-component-link

* init

* add-decomp-as-advanced-param

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* [autofix.ci] apply automated fixes (attempt 3/3)

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* update-component-index

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* [autofix.ci] apply automated fixes (attempt 3/3)

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: datastax bundles page (#10686)

* init

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: llm router changed to llm selector (#10663)

* update-component-name

* previous-name-and-release-note

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* [autofix.ci] apply automated fixes (attempt 3/3)

* docs: log alembic to stdout (#10711)

* docs-alembic-log-env-var

* cleanup

* remove-legacy-component-link

* docs: configure s3 for file storage backend (#10678)

* configure-file-storage-s3

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* clarify-s3-credentials

* add-storage-tags-and-cleanup-creds-seciton

* role-link-name

* fix-parse-error

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: allow rest tweaks to mcp tools component (#10833)

* typo

* tweak-mcp-tools-component

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* add-release-note

* docs: use mustache templates in prompts (#11262)

* mustache-templating

* syntax

* release-note

* peer-review

* docs: smart transform supports Message type  (#11306)

* component-supports-message-type

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* peer-review

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: modular dependency imports for langflow-base (#11250)

* modular-base-dependencies

* syntax-and-clarification

* release-note

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* [autofix.ci] apply automated fixes

* clarify-base-and-langflow

* component-index

* delete-component-index

* [autofix.ci] apply automated fixes

* set-agentic-experience

* potential-breaking-changes

* not-audio-package

* cleanup-and-syntax

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* docs: symmetric and asymmetric JWT (#11159)

* initial-content

* cleanup

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* [autofix.ci] apply automated fixes

* docs-peer-review

* [autofix.ci] apply automated fixes

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* docs: add markdown output to url component (#11336)

* add-markdown-output-format

* raw-content

* Apply suggestions from code review

* docs: Add global variable support for MCP server headers (#11397)

* add-global-var-in-mcp-headers

* revert-curl-syntax-change

* remove-duplicate-tab

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* remove-code-block

* add-release-note

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* Update docs/docs/Develop/install-custom-dependencies.mdx

* Update docs/docs/Develop/jwt-authentication.mdx

* docs: global model provider feature (#11231)

* initial-changes-to-model-providers

* add-icon-for-model-partial

* syntax

* adding-custom-language-model

* release-note

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* peer-review

* use-anthropic-model-with-agent

* [autofix.ci] apply automated fixes

* design-changes

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* mustache-limitations

* release-note-for-jwt

* docs: playground refactor and screenshots (#11639)

* screenshots

* new-playground-and-icon

* release-note

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: component inspection panel (#11675)

* docs-component-inspection-panel

* cleanup

* docs: add tool shortlisting and remove web_apps from CUGA component (#11669)

docs-add-shortlist-tools-and-remove-webapps-parameters

* fix-details-tab-error

* docs: workflow API draft build (#11323)

* delete-unused-yaml-file

* initial-content

* add-python-and-ts-to-example-requests

* separate-pages

* test-spec-presentation

* hide-async-and-make-workflows-plural

* fix-broken-link

* add-changes-to-async

* use-workflow-spec-from-sdk-build

* make-setup-partial

* add-fetch-script-for-openapi-spec

* update-workflows-spec

* remove-stream-for-now

* remove-reconnect-to-stream

* consolidate-pages

* remove-force-boolean

* [autofix.ci] apply automated fixes

* docs: add guardrails component (#11674)

* docs-add-guardrails-component

* cleanup

* example-and-heuristic-check

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* [autofix.ci] apply automated fixes

* add-note-about-llm

* add-release-note

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* docs: pass env var to run command and endpoint as header (#11447)

* pass-env-var-to-lfx

* add-env-var-passing-to-run-endpoint

* add-python-and-js-commands

* docs: responses api token usage tracking (#11564)

* initlal-content

* add-release-note

* changes-for-accessing-advanced-parameters

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* small-playground-changes

* [autofix.ci] apply automated fixes

* Revert "docs: OpenAPI spec content updated without version change (#11787)"

This reverts commit a0d5618ac9.

* [autofix.ci] apply automated fixes

* docs: add LiteLLM proxy bundle (#11867)

* docs-add-litellm-proxy-component

* Update docs/docs/Components/bundles-lite-llm.mdx

* docs: 1.8 changes from QA (#11998)

* remove-rightside-playground

* tutorials

* image-size-update

* component-release-notes

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: pass API keys to args and not env (#11997)

* remove-rightside-playground

* tutorials

* image-size-update

* docs-troubleshoot-mcp-proxy-header-keys

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: knowledge bases (#11924)

* docs-add-back-kb-content

* update-with-release-candidate-branch

* fix-linking-error

* remove-advanced-flag

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* add-release-note

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: traces v0 (#12014)

* env-var-release-note-and-sidebars

* traces-and-database

* traces-ui-and-api-retrieval

* cleanup

* space

* move-section

* move-what-traces-capture-section

* docs: remove kb ingestion and rename kb retrieval (#12065)

remove-knowledge-ingestion-and-rename-knowledge-retrieval

* docs: add link to secret key rotation script (#12072)

* add-link-to-secret-key-rotation

* Apply suggestions from code review

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

---------

Co-authored-by: April I. Murphy <36110273+aimurphy@users.noreply.github.com>

* docs: openlayer follow-on (#12073)

* add-openlayer-to-sidebars-and-release-notes

* Update docs/docs/Support/release-notes.mdx

---------

Co-authored-by: April M <36110273+aimurphy@users.noreply.github.com>
Co-authored-by: Edwin Jose <edwin.jose@datastax.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-merge-queue <118344674+github-merge-queue@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
This commit is contained in:
Mendon Kissling
2026-03-06 09:41:10 -05:00
committed by GitHub
parent 4fa9cd3b28
commit d43bf3f588
77 changed files with 4647 additions and 13679 deletions


@@ -3,6 +3,9 @@ title: Flow trigger endpoints
slug: /api-flows-run
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Use the `/run` and `/webhook` endpoints to run flows.
To create, read, update, and delete flows, see [Flow management endpoints](/api-flows).
@@ -20,6 +23,67 @@ Flow IDs can be found on the code snippets on the [**API access** pane](/concept
The following example runs the **Basic Prompting** template flow with flow parameters passed in the request body.
This flow requires a chat input string (`input_value`), and uses default values for all other parameters.
<Tabs>
<TabItem value="Python" label="Python" default>
```python
import requests
url = "http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID"
# Request payload
payload = {
"input_value": "Tell me about something interesting!",
"session_id": "chat-123",
"input_type": "chat",
"output_type": "chat",
"output_component": ""
}
# Request headers
headers = {
"Content-Type": "application/json",
"x-api-key": "LANGFLOW_API_KEY"
}
try:
response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()
print(response.json())
except requests.exceptions.RequestException as e:
print(f"Error making API request: {e}")
```
</TabItem>
<TabItem value="JavaScript" label="JavaScript">
```js
const payload = {
input_value: "Tell me about something interesting!",
session_id: "chat-123",
input_type: "chat",
output_type: "chat",
output_component: ""
};
const options = {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': 'LANGFLOW_API_KEY'
},
body: JSON.stringify(payload)
};
fetch('http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID', options)
.then(response => response.json())
.then(data => console.log(data))
.catch(err => console.error(err));
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v1/run/$FLOW_ID" \
@@ -30,11 +94,13 @@ curl -X POST \
"session_id": "chat-123",
"input_type": "chat",
"output_type": "chat",
  "output_component": ""
}'
```
</TabItem>
</Tabs>
The response from `/v1/run/$FLOW_ID` includes metadata, inputs, and outputs for the run.
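For programmatic use you typically only need the chat text from that response. A minimal sketch of pulling it out of the parsed JSON (the nested `outputs[0].outputs[0].results.message.text` path is an assumption based on typical `/run` responses; verify it against your own flow's output):

```python
def extract_chat_text(result: dict) -> str:
    """Walk the nested /run response down to the first chat message text.

    The outputs[0].outputs[0].results.message.text path is an assumption
    drawn from typical /run responses; adjust it if your flow differs.
    """
    first_component = result["outputs"][0]["outputs"][0]
    return first_component["results"]["message"]["text"]


# Hypothetical response trimmed to only the fields the helper reads.
sample = {
    "outputs": [
        {"outputs": [{"results": {"message": {"text": "Hello!"}}}]}
    ]
}
print(extract_chat_text(sample))  # prints Hello!
```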
<details>
@@ -84,6 +150,77 @@ With `/v1/run/$FLOW_ID`, the flow is executed as a batch with optional LLM token
To stream LLM token responses, append the `?stream=true` query parameter to the request:
<Tabs>
<TabItem value="Python" label="Python" default>
```python
import requests
url = "http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID?stream=true"
# Request payload
payload = {
"message": "Tell me something interesting!",
"session_id": "chat-123"
}
# Request headers
headers = {
"accept": "application/json",
"Content-Type": "application/json",
"x-api-key": "LANGFLOW_API_KEY"
}
try:
response = requests.post(url, json=payload, headers=headers, stream=True)
response.raise_for_status()
# Process streaming response
for line in response.iter_lines():
if line:
print(line.decode('utf-8'))
except requests.exceptions.RequestException as e:
print(f"Error making API request: {e}")
```
</TabItem>
<TabItem value="JavaScript" label="JavaScript">
```js
const payload = {
message: "Tell me something interesting!",
session_id: "chat-123"
};
const options = {
method: 'POST',
headers: {
'accept': 'application/json',
'Content-Type': 'application/json',
'x-api-key': 'LANGFLOW_API_KEY'
},
body: JSON.stringify(payload)
};
fetch('http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID?stream=true', options)
.then(async response => {
const reader = response.body?.getReader();
const decoder = new TextDecoder();
if (reader) {
while (true) {
const { done, value } = await reader.read();
if (done) break;
console.log(decoder.decode(value));
}
}
})
.catch(err => console.error(err));
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v1/run/$FLOW_ID?stream=true" \
@@ -96,6 +233,9 @@ curl -X POST \
}'
```
</TabItem>
</Tabs>
LLM chat responses are streamed back as `token` events, culminating in a final `end` event that closes the connection.
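A small client-side sketch of consuming those events (the `{"event": ..., "data": {"chunk": ...}}` line shape is an assumption for illustration; match it to the truncated event example on this page):

```python
import json


def collect_tokens(lines):
    """Accumulate token chunks from a newline-delimited JSON event stream.

    Assumes each non-empty line is a JSON object with "event" and "data"
    keys, and that token events carry their text in data["chunk"].
    """
    text = []
    for raw in lines:
        if not raw.strip():
            continue
        event = json.loads(raw)
        if event.get("event") == "token":
            text.append(event["data"].get("chunk", ""))
        elif event.get("event") == "end":
            break  # the end event closes the stream
    return "".join(text)


# Simulated stream, standing in for response.iter_lines() output.
stream = [
    '{"event": "token", "data": {"chunk": "Hel"}}',
    '{"event": "token", "data": {"chunk": "lo"}}',
    '{"event": "end", "data": {}}',
]
print(collect_tokens(stream))  # prints Hello
```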
<details>
@@ -132,8 +272,9 @@ The following example is truncated to illustrate a series of `token` events as w
| Header | Info | Example |
|--------|------|---------|
| Content-Type | Required. Specifies the JSON format. | "application/json" |
| accept | Optional. Specifies the response format. Defaults to JSON if not specified. | "application/json" |
| x-api-key | Required. Your Langflow API key for authentication. Can be passed as a header or query parameter. | "sk-..." |
| `X-LANGFLOW-GLOBAL-VAR-*` | Optional. Pass global variables to the flow. Variable names are automatically converted to uppercase. These variables take precedence over OS environment variables and are only available during this specific request execution. | `"X-LANGFLOW-GLOBAL-VAR-API_KEY: sk-..."` |
### Run endpoint parameters
@@ -150,6 +291,93 @@ The following example is truncated to illustrate a series of `token` events as w
### Request example with all headers and parameters
<Tabs>
<TabItem value="Python" label="Python" default>
```python
import requests
url = "http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID?stream=true"
# Request payload with tweaks
payload = {
"input_value": "Tell me a story",
"input_type": "chat",
"output_type": "chat",
"output_component": "chat_output",
"session_id": "chat-123",
"tweaks": {
"component_id": {
"parameter_name": "value"
}
}
}
# Request headers
headers = {
"Content-Type": "application/json",
"accept": "application/json",
"x-api-key": "LANGFLOW_API_KEY"
}
try:
response = requests.post(url, json=payload, headers=headers, stream=True)
response.raise_for_status()
# Process streaming response
for line in response.iter_lines():
if line:
print(line.decode('utf-8'))
except requests.exceptions.RequestException as e:
print(f"Error making API request: {e}")
```
</TabItem>
<TabItem value="JavaScript" label="JavaScript">
```js
const payload = {
input_value: "Tell me a story",
input_type: "chat",
output_type: "chat",
output_component: "chat_output",
session_id: "chat-123",
tweaks: {
component_id: {
parameter_name: "value"
}
}
};
const options = {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'accept': 'application/json',
'x-api-key': 'LANGFLOW_API_KEY'
},
body: JSON.stringify(payload)
};
fetch('http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID?stream=true', options)
.then(async response => {
const reader = response.body?.getReader();
const decoder = new TextDecoder();
if (reader) {
while (true) {
const { done, value } = await reader.read();
if (done) break;
console.log(decoder.decode(value));
}
}
})
.catch(err => console.error(err));
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v1/run/$FLOW_ID?stream=true" \
@@ -170,6 +398,103 @@ curl -X POST \
}'
```
</TabItem>
</Tabs>
### Pass global variables in request headers {#pass-global-variables-in-headers}
You can pass global variables to your flow using HTTP headers with the format `X-LANGFLOW-GLOBAL-VAR-{VARIABLE_NAME}`.
Variables passed in headers take precedence over OS environment variables. If a variable is provided in both a header and an environment variable, the header value is used. Variables are only available during this specific request execution and aren't persisted.
Variable names are automatically converted to uppercase. For example, `X-LANGFLOW-GLOBAL-VAR-api-key` becomes `API_KEY` in your flow.
You don't need to create these variables in Langflow's Global Variables section first. Pass any variable name using this header format.
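The name normalization described above can be sketched as follows (the strip-prefix, uppercase, hyphen-to-underscore steps are inferred from the `api-key` to `API_KEY` example; the server's exact rules may differ):

```python
PREFIX = "X-LANGFLOW-GLOBAL-VAR-"


def header_to_variable(header_name: str) -> str:
    """Mirror the documented conversion: strip the header prefix,
    uppercase the remainder, and turn hyphens into underscores.
    Inferred from the api-key -> API_KEY example; illustrative only."""
    return header_name[len(PREFIX):].upper().replace("-", "_")


print(header_to_variable("X-LANGFLOW-GLOBAL-VAR-api-key"))  # prints API_KEY
```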
<Tabs>
<TabItem value="Python" label="Python" default>
```python
import requests
url = "http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID"
# Request payload
payload = {
"input_value": "Tell me about something interesting!",
"input_type": "chat",
"output_type": "chat"
}
# Request headers with global variables
headers = {
"Content-Type": "application/json",
"x-api-key": "LANGFLOW_API_KEY",
"X-LANGFLOW-GLOBAL-VAR-OPENAI_API_KEY": "sk-...",
"X-LANGFLOW-GLOBAL-VAR-USER_ID": "user123",
"X-LANGFLOW-GLOBAL-VAR-ENVIRONMENT": "production"
}
try:
response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()
print(response.json())
except requests.exceptions.RequestException as e:
print(f"Error making API request: {e}")
```
</TabItem>
<TabItem value="JavaScript" label="JavaScript">
```js
const payload = {
input_value: "Tell me about something interesting!",
input_type: "chat",
output_type: "chat"
};
const options = {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': 'LANGFLOW_API_KEY',
'X-LANGFLOW-GLOBAL-VAR-OPENAI_API_KEY': 'sk-...',
'X-LANGFLOW-GLOBAL-VAR-USER_ID': 'user123',
'X-LANGFLOW-GLOBAL-VAR-ENVIRONMENT': 'production'
},
body: JSON.stringify(payload)
};
fetch('http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID', options)
.then(response => response.json())
.then(data => console.log(data))
.catch(err => console.error(err));
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v1/run/$FLOW_ID" \
-H "Content-Type: application/json" \
-H "x-api-key: $LANGFLOW_API_KEY" \
-H "X-LANGFLOW-GLOBAL-VAR-OPENAI_API_KEY: sk-..." \
-H "X-LANGFLOW-GLOBAL-VAR-USER_ID: user123" \
-H "X-LANGFLOW-GLOBAL-VAR-ENVIRONMENT: production" \
-d '{
"input_value": "Tell me about something interesting!",
"input_type": "chat",
"output_type": "chat"
}'
```
</TabItem>
</Tabs>
If your flow components reference variables that aren't provided in headers or your Langflow database, the flow fails by default. To avoid this, you can set `LANGFLOW_FALLBACK_TO_ENV_VAR=True` in your `.env` file, which allows the flow to use values from OS environment variables if they aren't otherwise specified.
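For example, a minimal `.env` entry enabling the fallback (assuming your deployment loads this file when Langflow starts):

```text
# .env
LANGFLOW_FALLBACK_TO_ENV_VAR=True
```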
## Webhook run flow
Use the `/webhook` endpoint to start a flow by sending an HTTP `POST` request.


@@ -3,6 +3,9 @@ title: Monitor endpoints
slug: /api-monitor
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
The `/monitor` endpoints are for internal Langflow functionality, primarily related to running flows in the **Playground**, storing chat history, and generating flow logs.
This information is primarily for those who are building custom components or contributing to the Langflow codebase in a way that requires calling or understanding these endpoints.
@@ -630,6 +633,122 @@ HTTP/1.1 204 No Content
</details>
## Get traces
Retrieve trace metadata and span trees for a specific flow.
### Example request
Use `GET /monitor/traces` and filter by `flow_id`:
<Tabs>
<TabItem value="python" label="Python">
```python
import os
import requests
base_url = os.getenv("LANGFLOW_SERVER_URL", "http://localhost:7860")
api_key = os.getenv("LANGFLOW_API_KEY")
flow_id = "YOUR_FLOW_ID"
response = requests.get(
f"{base_url}/api/v1/monitor/traces",
params={"flow_id": flow_id, "page": 1, "size": 50},
headers={"x-api-key": api_key, "accept": "application/json"},
timeout=10,
)
response.raise_for_status()
traces = response.json()
print(traces)
```
</TabItem>
<TabItem value="typescript" label="TypeScript">
```ts
const baseUrl = process.env.LANGFLOW_SERVER_URL ?? "http://localhost:7860";
const apiKey = process.env.LANGFLOW_API_KEY!;
const flowId = "YOUR_FLOW_ID";
async function listTraces() {
const url = new URL("/api/v1/monitor/traces", baseUrl);
url.searchParams.set("flow_id", flowId);
url.searchParams.set("page", "1");
url.searchParams.set("size", "50");
const res = await fetch(url.toString(), {
headers: {
accept: "application/json",
"x-api-key": apiKey,
},
});
if (!res.ok) {
throw new Error(`Request failed with status ${res.status}`);
}
const data = await res.json();
console.log(data);
}
listTraces().catch(console.error);
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
export LANGFLOW_SERVER_URL="http://localhost:7860"
export LANGFLOW_API_KEY="YOUR_LANGFLOW_API_KEY"
export FLOW_ID="YOUR_FLOW_ID"
curl -s "$LANGFLOW_SERVER_URL/api/v1/monitor/traces?flow_id=$FLOW_ID&page=1&size=50" \
-H "accept: application/json" \
-H "x-api-key: $LANGFLOW_API_KEY" \
| jq .
```
</TabItem>
</Tabs>
### Example response
```json
{
"traces": [
{
"id": "426656db-fc3c-4a3a-acf8-c60acf099543",
"name": "Simple Agent - 9e774f60-857b-44b4-bbcd-87bd23848ee8",
"status": "ok",
"startTime": "2026-03-03T19:13:30.692628Z",
"totalLatencyMs": 18693,
"totalTokens": 2050,
"flowId": "9e774f60-857b-44b4-bbcd-87bd23848ee8",
"sessionId": "9e774f60-857b-44b4-bbcd-87bd23848ee8",
"input": {
"input_value": "Use tools to teach me about vertex graphs"
},
"output": {
"message": {
"text_key": "text",
"data": {
"timestamp": "2026-03-03 19:13:30 UTC",
"sender": "Machine",
"sender_name": "AI",
"session_id": "9e774f60-857b-44b4-bbcd-87bd23848ee8",
"text": "I can teach you the concept, but I couldn't pull the Wikipedia pages with the tool ... (truncated)"
}
}
}
}
],
"total": 1,
"pages": 1
}
```
## Get transactions
Retrieve all transactions, which are interactions between components, for a specific flow.
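Following the pattern of the traces example above, a minimal sketch of composing the request (assuming `/monitor/transactions` accepts the same `flow_id`, `page`, and `size` query parameters as `/monitor/traces`; confirm against your server's OpenAPI spec):

```python
def transactions_request(base_url: str, flow_id: str, page: int = 1, size: int = 50):
    """Compose the GET request for component-to-component transactions.

    The path and query parameters mirror the /monitor/traces example;
    they are assumptions to verify against your Langflow version.
    """
    url = f"{base_url}/api/v1/monitor/transactions"
    params = {"flow_id": flow_id, "page": page, "size": size}
    return url, params


url, params = transactions_request("http://localhost:7860", "YOUR_FLOW_ID")
print(url)
```

Send it with `requests.get(url, params=params, headers={"x-api-key": api_key})`, exactly as in the traces example.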


@@ -191,6 +191,7 @@ Fields set dynamically by Langflow:
| `model` | `string` | The flow ID that was executed. |
| `output` | `list[dict]` | Array of output items (messages, tool calls, etc.). |
| `previous_response_id` | `string` | ID of previous response if continuing conversation. |
| `usage` | `dict` | Token usage statistics if the `usage` field is available. Contains `prompt_tokens`, `completion_tokens`, and `total_tokens`. |
<details>
<summary>Fields with OpenAI-compatible default values</summary>
@@ -212,7 +213,7 @@ Fields set dynamically by Langflow:
| `tools` | `list[dict]` | `[]` | Available tools. |
| `top_p` | `float` | `1.0` | Top-p setting. |
| `truncation` | `string` | `"disabled"` | Truncation setting. |
| `usage` | `dict` | `null` | Token usage statistics. Set dynamically when available from flow components, otherwise `null`. See [Token usage tracking](#token-usage-tracking). |
| `user` | `string` | `null` | User identifier (if any). |
| `metadata` | `dict` | `{}` | Additional metadata. |
@@ -596,4 +597,123 @@ To avoid this, you can set the `FALLBACK_TO_ENV_VARS` environment variable is `t
In the above example, `OPENAI_API_KEY` will fall back to the database variable if not provided in the header.
`USER_ID` and `ENVIRONMENT` will fall back to environment variables if `FALLBACK_TO_ENV_VARS` is enabled.
Otherwise, the flow fails.
## Token usage tracking {#token-usage-tracking}
The OpenAI Responses API endpoint tracks token usage when your flow uses language model components that provide token usage information. The `usage` field in the response contains statistics about the number of tokens used for the request and response.
Token usage is automatically extracted from the flow execution results when the `usage` field is available.
The `usage` field follows OpenAI's format with `prompt_tokens`, `completion_tokens`, and `total_tokens` fields.
If token usage information is not available from the flow components, the `usage` field is `null`.
The `usage` field is always present in the response, either with token counts or as `null`. The conditional checks shown in the examples below are optional defensive programming to handle cases where usage might not be available.
<Tabs groupId="token-usage">
<TabItem value="Python" label="Python" default>
```python
from openai import OpenAI
client = OpenAI(
base_url="LANGFLOW_SERVER_URL/api/v1/",
default_headers={"x-api-key": "LANGFLOW_API_KEY"},
api_key="dummy-api-key"
)
response = client.responses.create(
model="FLOW_ID",
input="Explain quantum computing in simple terms"
)
# Access token usage if available
if response.usage:
print(f"Prompt tokens: {response.usage.get('prompt_tokens', 0)}")
print(f"Completion tokens: {response.usage.get('completion_tokens', 0)}")
print(f"Total tokens: {response.usage.get('total_tokens', 0)}")
else:
print("Token usage not available for this flow")
```
</TabItem>
<TabItem value="TypeScript" label="TypeScript">
```typescript
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "LANGFLOW_SERVER_URL/api/v1/",
defaultHeaders: {
"x-api-key": "LANGFLOW_API_KEY"
},
apiKey: "dummy-api-key"
});
const response = await client.responses.create({
model: "FLOW_ID",
input: "Explain quantum computing in simple terms"
});
// Access token usage if available
if (response.usage) {
console.log(`Prompt tokens: ${response.usage.prompt_tokens || 0}`);
console.log(`Completion tokens: ${response.usage.completion_tokens || 0}`);
console.log(`Total tokens: ${response.usage.total_tokens || 0}`);
} else {
console.log("Token usage not available for this flow");
}
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v1/responses" \
-H "x-api-key: $LANGFLOW_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "FLOW_ID",
"input": "Explain quantum computing in simple terms",
"stream": false
}'
```
<details>
<summary>Response with token usage</summary>
```json
{
"id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"object": "response",
"created_at": 1756837941,
"status": "completed",
"model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
"output": [
{
"type": "message",
"id": "msg_a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "Quantum computing is a type of computing that uses quantum mechanical phenomena...",
"annotations": []
}
]
}
],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 145,
"total_tokens": 157
},
"previous_response_id": null
}
```
</details>
</TabItem>
</Tabs>
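If you call the endpoint repeatedly, you can accumulate the `usage` values across responses, for example to estimate spend. The following sketch assumes parsed JSON response bodies; the per-token prices are placeholder values, not real provider rates:

```python
# Sum token usage across multiple Responses API calls and estimate cost.
# The per-1K-token prices below are illustrative placeholders, not real rates.
PROMPT_PRICE_PER_1K = 0.0005
COMPLETION_PRICE_PER_1K = 0.0015

def accumulate_usage(responses):
    """Sum `usage` dicts from response payloads, skipping null usage."""
    totals = {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
    for resp in responses:
        usage = resp.get("usage")
        if not usage:  # `usage` is null when no component reported token counts
            continue
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

def estimate_cost(totals):
    """Convert accumulated token counts into an approximate dollar cost."""
    return (totals["prompt_tokens"] / 1000 * PROMPT_PRICE_PER_1K
            + totals["completion_tokens"] / 1000 * COMPLETION_PRICE_PER_1K)

responses = [
    {"usage": {"prompt_tokens": 12, "completion_tokens": 145, "total_tokens": 157}},
    {"usage": None},  # a flow whose components reported no usage
]
totals = accumulate_usage(responses)
print(totals["total_tokens"])  # 157
```

Because `usage` can be `null`, the helper skips missing entries rather than assuming every response reports token counts.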


@@ -0,0 +1,528 @@
---
title: Workflow API (Beta)
slug: /workflow-api
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialAPISetup from '@site/docs/_partial-api-setup.mdx';
:::warning Beta Feature
The Workflow API is currently in **Beta**.
The API endpoints and response formats may change in future releases.
:::
The Workflow API provides programmatic access to execute Langflow workflows synchronously or asynchronously.
Synchronous requests receive complete results immediately upon completion.
Asynchronous requests are queued and run in the background until they complete or until you stop them with the [Stop workflow endpoint](#stop-workflow-endpoint).
The Workflow API is part of the Langflow Developer v2 API and offers enhanced workflow execution capabilities compared to the v1 `/run` endpoint.
<PartialAPISetup />
## Execute workflows endpoint (synchronous or asynchronous)
**Endpoint:**
```
POST /api/v2/workflows
```
**Description:** Execute a workflow and receive results synchronously or asynchronously.
Set `background` to `false` (the default) for synchronous execution, or `true` to queue the workflow as a background job.
### Example synchronous request
Execute a workflow synchronously and receive complete results immediately:
<Tabs>
<TabItem value="Python" label="Python" default>
```python
import requests
url = f"{LANGFLOW_SERVER_URL}/api/v2/workflows"
headers = {
"Content-Type": "application/json",
"x-api-key": LANGFLOW_API_KEY
}
payload = {
"flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"background": False,
"inputs": {
"ChatInput-abc.input_type": "chat",
"ChatInput-abc.input_value": "what is 2+2",
"ChatInput-abc.session_id": "session-123"
}
}
response = requests.post(url, json=payload, headers=headers)
print(response.json())
```
</TabItem>
<TabItem value="TypeScript" label="TypeScript">
```typescript
import axios from 'axios';
const url = `${LANGFLOW_SERVER_URL}/api/v2/workflows`;
const payload = {
flow_id: "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
background: false,
inputs: {
"ChatInput-abc.input_type": "chat",
"ChatInput-abc.input_value": "what is 2+2",
"ChatInput-abc.session_id": "session-123"
}
};
const runWorkflow = async () => {
try {
const response = await axios.post(url, payload, {
headers: {
'Content-Type': 'application/json',
'x-api-key': LANGFLOW_API_KEY
}
});
console.log(response.data);
} catch (error) {
console.error('Error triggering workflow:', error);
}
};
runWorkflow();
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v2/workflows" \
-H "Content-Type: application/json" \
-H "x-api-key: $LANGFLOW_API_KEY" \
-d '{
"flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"background": false,
"inputs": {
"ChatInput-abc.input_type": "chat",
"ChatInput-abc.input_value": "what is 2+2",
"ChatInput-abc.session_id": "session-123"
}
}'
```
</TabItem>
</Tabs>
### Example asynchronous request
For long-running workflows, set `background=true` to get a `job_id` immediately, and then poll the status [using the GET endpoint](#get-workflow-status-endpoint) until the job is complete.
To stop a job, send a POST request to the [Stop workflow endpoint](#stop-workflow-endpoint).
:::tip
The asynchronous request includes a `stream` parameter, but streaming is not yet supported. The parameter is included for future compatibility.
:::
**Example request:**
<Tabs>
<TabItem value="Python" label="Python" default>
```python
import requests
url = f"{LANGFLOW_SERVER_URL}/api/v2/workflows"
headers = {
"Content-Type": "application/json",
"x-api-key": LANGFLOW_API_KEY
}
payload = {
"flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"background": True,
"stream": False,
"inputs": {
"ChatInput-abc.input_type": "chat",
"ChatInput-abc.input_value": "Process this in the background",
"ChatInput-abc.session_id": "session-456"
}
}
response = requests.post(url, json=payload, headers=headers)
print(response.json()) # Returns job_id immediately
```
</TabItem>
<TabItem value="TypeScript" label="TypeScript">
```typescript
import axios from 'axios';
const url = `${LANGFLOW_SERVER_URL}/api/v2/workflows`;
const payload = {
flow_id: "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
background: true,
stream: false,
inputs: {
"ChatInput-abc.input_type": "chat",
"ChatInput-abc.input_value": "Process this in the background",
"ChatInput-abc.session_id": "session-456"
}
};
const runWorkflow = async () => {
try {
const response = await axios.post(url, payload, {
headers: {
'Content-Type': 'application/json',
'x-api-key': LANGFLOW_API_KEY
}
});
console.log(response.data); // Returns job_id immediately
} catch (error) {
console.error('Error triggering workflow:', error);
}
};
runWorkflow();
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v2/workflows" \
-H "Content-Type: application/json" \
-H "x-api-key: $LANGFLOW_API_KEY" \
-d '{
"flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"background": true,
"stream": false,
"inputs": {
"ChatInput-abc.input_type": "chat",
"ChatInput-abc.input_value": "Process this in the background",
"ChatInput-abc.session_id": "session-456"
}
}'
```
</TabItem>
</Tabs>
**Response:**
```json
{
"job_id": "job_id_1234567890",
"created_timestamp": "2025-01-15T10:30:00Z",
"status": "queued",
"errors": []
}
```
### Request body
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `flow_id` | `string` | Yes | - | The ID or endpoint name of the flow to execute. |
| `flow_version` | `string` | No | - | Optional version hash to pin to a specific flow version. |
| `background` | `boolean` | No | `false` | Set to `false` for synchronous execution or `true` to queue the workflow as a background job. |
| `stream` | `boolean` | No | `false` | Reserved for future streaming support; streaming is not yet supported. |
| `inputs` | `object` | No | `{}` | Inputs for the workflow execution. Uses component identifiers with dot notation (e.g., `ChatInput-abc.input_value`). See [Component identifiers and input structure](#component-identifiers-and-input-structure) for detailed information. |
### Example response
```json
{
"flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"job_id": "job_id_1234567890",
"object": "response",
"created_at": 1741476542,
"status": "completed",
"errors": [],
"inputs": {
"ChatInput-abc.input_type": "chat",
"ChatInput-abc.input_value": "what is 2+2",
"ChatInput-abc.session_id": "session-123"
},
"outputs": {
"ChatOutput-xyz": {
"type": "message",
"component_id": "ChatOutput-xyz",
"status": "completed",
"content": "2 + 2 equals 4."
}
},
"metadata": {}
}
```
### Response body
The response includes an `outputs` field containing component-level results. Each output has a `type` field indicating the type of content:
| Type | Description | Example |
|------|-------------|---------|
| `message` | Text message content. | Chat responses, summaries |
| `image` | Image URL or data. | Generated images, processed images |
| `sql` | SQL query results. | Database query outputs |
| `data` | Structured data. | JSON objects, arrays |
| `file` | File reference. | Generated documents, reports |
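Because `outputs` is keyed by component ID and each entry declares its `type`, a client can dispatch on that field. A minimal sketch using the example response above (the `group_outputs_by_type` helper is illustrative, not part of the API):

```python
def group_outputs_by_type(response_json):
    """Group workflow outputs by their declared `type` field."""
    grouped = {}
    for component_id, output in response_json.get("outputs", {}).items():
        grouped.setdefault(output["type"], []).append(
            {"component_id": component_id, "content": output.get("content")}
        )
    return grouped

# The example response from above, trimmed to the `outputs` field:
response_json = {
    "outputs": {
        "ChatOutput-xyz": {
            "type": "message",
            "component_id": "ChatOutput-xyz",
            "status": "completed",
            "content": "2 + 2 equals 4.",
        }
    }
}
messages = group_outputs_by_type(response_json)["message"]
print(messages[0]["content"])  # 2 + 2 equals 4.
```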
## Get workflow status endpoint
**Endpoint:** `GET /api/v2/workflows`
**Description:** Retrieve the status and results of a workflow execution by job ID.
### Example request
<Tabs>
<TabItem value="Python" label="Python" default>
```python
import requests
url = f"{LANGFLOW_SERVER_URL}/api/v2/workflows"
params = {
"job_id": "job_id_1234567890"
}
headers = {
"accept": "application/json",
"x-api-key": LANGFLOW_API_KEY
}
response = requests.get(url, params=params, headers=headers)
print(response.json())
```
</TabItem>
<TabItem value="TypeScript" label="TypeScript">
```typescript
import axios from 'axios';
const jobId = 'job_id_1234567890';
const url = `${LANGFLOW_SERVER_URL}/api/v2/workflows`;
const getWorkflowStatus = async () => {
try {
const response = await axios.get(url, {
params: {
job_id: jobId
},
headers: {
'accept': 'application/json',
'x-api-key': LANGFLOW_API_KEY
}
});
console.log(response.data);
} catch (error) {
console.error('Error getting workflow status:', error);
}
};
getWorkflowStatus();
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
curl -X GET \
"$LANGFLOW_SERVER_URL/api/v2/workflows?job_id=job_id_1234567890" \
-H "accept: application/json" \
-H "x-api-key: $LANGFLOW_API_KEY"
```
</TabItem>
</Tabs>
### Query parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | `string` | Yes | The job ID returned from a workflow execution. |
| `stream` | `boolean` | No | If `true`, returns server-sent events stream. Default: `false`. |
| `sequence_id` | `integer` | No | Optional sequence ID to resume streaming from a specific point. |
### Example response
```json
{
"flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"job_id": "job_id_1234567890",
"object": "response",
"created_at": 1741476542,
"status": "completed",
"errors": [],
"outputs": {
"ChatOutput-xyz": {
"type": "message",
"component_id": "ChatOutput-xyz",
"status": "completed",
"content": "Processing complete..."
}
},
"input": [
{
"type": "text",
"data": "Input text prompt for the workflow execution",
"role": "User"
}
],
"metadata": {}
}
```
### Response body
The response includes a `status` field that indicates the current state of the workflow execution:
| Status | Description |
|--------|-------------|
| `queued` | Job is queued and waiting to start. |
| `in_progress` | Job is currently executing. |
| `completed` | Job completed successfully. |
| `failed` | Job failed during execution. |
| `error` | Job encountered an error. |
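A typical client polls the status endpoint until the job reaches one of the terminal statuses (`completed`, `failed`, or `error`). A minimal polling sketch, with the HTTP call abstracted behind a callable so the loop logic stands alone:

```python
import time

TERMINAL_STATUSES = {"completed", "failed", "error"}

def poll_until_done(fetch_status, interval=2.0, timeout=300.0):
    """Call `fetch_status` until the job reaches a terminal status.

    `fetch_status` is any callable returning the parsed JSON body of
    GET /api/v2/workflows?job_id=..., such as a requests-based closure.
    """
    deadline = time.monotonic() + timeout
    while True:
        body = fetch_status()
        if body["status"] in TERMINAL_STATUSES:
            return body
        if time.monotonic() > deadline:
            raise TimeoutError(f"Job still {body['status']} after {timeout}s")
        time.sleep(interval)

# Demonstration with a stubbed fetcher that completes on the third call:
states = iter([{"status": "queued"}, {"status": "in_progress"}, {"status": "completed"}])
result = poll_until_done(lambda: next(states), interval=0.0)
print(result["status"])  # completed
```

In real use, pass a closure that issues the `GET` request shown above and returns `response.json()`.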
## Stop workflow endpoint
**Endpoint:** `POST /api/v2/workflows/stop`
**Description:** Stop a running workflow execution by job ID.
### Example request
<Tabs>
<TabItem value="Python" label="Python" default>
```python
import requests
url = f"{LANGFLOW_SERVER_URL}/api/v2/workflows/stop"
headers = {
"Content-Type": "application/json",
"x-api-key": LANGFLOW_API_KEY
}
payload = {
"job_id": "job_id_1234567890"
}
response = requests.post(url, json=payload, headers=headers)
print(response.json())
```
</TabItem>
<TabItem value="TypeScript" label="TypeScript">
```typescript
import axios from 'axios';
const url = `${LANGFLOW_SERVER_URL}/api/v2/workflows/stop`;
const payload = {
job_id: "job_id_1234567890"
};
const stopWorkflow = async () => {
try {
const response = await axios.post(url, payload, {
headers: {
'Content-Type': 'application/json',
'x-api-key': LANGFLOW_API_KEY
}
});
console.log(response.data);
} catch (error) {
console.error('Error stopping workflow:', error);
}
};
stopWorkflow();
```
</TabItem>
<TabItem value="curl" label="curl">
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v2/workflows/stop" \
-H "Content-Type: application/json" \
-H "x-api-key: $LANGFLOW_API_KEY" \
-d '{
"job_id": "job_id_1234567890"
}'
```
</TabItem>
</Tabs>
### Request body
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `job_id` | `string` | Yes | - | The job ID of the workflow to stop. |
### Example response
```json
{
"job_id": "job_id_1234567890",
"message": "Job job_id_1234567890 cancelled successfully."
}
```
## Component identifiers and input structure
The Workflow API uses component identifiers with dot notation to specify inputs for individual components in your workflow. This allows you to pass values to specific components and override component parameters.
Component identifiers use the format `{component_id}.{parameter_name}`.
When making requests to the Workflow API, include component identifiers in the `inputs` object.
For example, the following request targets multiple components and their parameters in a single call:
```json
{
"flow_id": "your-flow-id",
"inputs": {
"ChatInput-abc.input_type": "chat",
"ChatInput-abc.input_value": "what is 2+2",
"ChatInput-abc.session_id": "session-123",
"OpenSearchComponent-xyz.opensearch_url": "https://opensearch:9200",
"LLMComponent-123.temperature": 0.7,
"LLMComponent-123.max_tokens": 100
}
}
```
To find the component ID in the Langflow UI, open your flow in Langflow, click the component, and then click **Controls**. The component ID is at the top of the **Controls** pane.
You can override any component's parameters.
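If you build the `inputs` object programmatically, a small helper can flatten per-component parameter maps into the dot-notation keys the API expects. This helper is a convenience sketch, not part of the API, and the component IDs are illustrative:

```python
def build_inputs(component_params):
    """Flatten {component_id: {param: value}} into dot-notation input keys."""
    inputs = {}
    for component_id, params in component_params.items():
        for name, value in params.items():
            inputs[f"{component_id}.{name}"] = value
    return inputs

# The component IDs here are illustrative; read yours from the Controls pane.
inputs = build_inputs({
    "ChatInput-abc": {"input_value": "what is 2+2", "session_id": "session-123"},
    "LLMComponent-123": {"temperature": 0.7, "max_tokens": 100},
})
print(inputs["LLMComponent-123.temperature"])  # 0.7
```

The returned dictionary can be passed directly as the `inputs` field of the request payload.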
## Error handling
The API uses standard HTTP status codes to indicate success or failure:
| Status Code | Description |
|-------------|-------------|
| `200 OK` | Request successful. |
| `400 Bad Request` | Invalid request parameters. |
| `401 Unauthorized` | Invalid or missing API key. |
| `404 Not Found` | Flow not found or developer API disabled. |
| `500 Internal Server Error` | Server error during execution. |
| `501 Not Implemented` | Endpoint not yet implemented. |
### Error response format
```json
{
"detail": "Error message describing what went wrong"
}
```
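When a request fails, the `detail` field is the most useful part of the body. The following sketch extracts it defensively, falling back to the raw body when the response isn't JSON; pass it the status code and body text from your `requests` or `fetch` call:

```python
import json

def parse_error(status_code, body_text):
    """Build a readable message from a non-200 response body."""
    try:
        parsed = json.loads(body_text)
        # Error bodies carry a `detail` field describing what went wrong.
        detail = parsed.get("detail", body_text) if isinstance(parsed, dict) else body_text
    except ValueError:
        detail = body_text  # the body wasn't JSON; surface it verbatim
    return f"Workflow request failed ({status_code}): {detail}"

print(parse_error(404, '{"detail": "Flow not found"}'))
# Workflow request failed (404): Flow not found
```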


@@ -151,7 +151,7 @@ An agent can use [custom components](/components-custom-components) as tools.
3. Enable **Tool Mode** in the custom component.
4. Connect the custom component's tool output to the **Agent** component's **Tools** input.
5. Open the <Icon name="Play" aria-hidden="true"/> **Playground** and instruct the agent, `Use the text analyzer on this text: "Agents really are thinking machines!"`
Based on your instruction, the agent should call the `analyze_text` action and return the result.
For example:


@@ -6,6 +6,7 @@ slug: /agents
import Icon from "@site/src/components/icon";
import PartialParams from '@site/docs/_partial-hidden-params.mdx';
import PartialAgentsWork from '@site/docs/_partial-agents-work.mdx';
import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
Langflow's [**Agent** component](/components-agents) is critical for building agent flows.
This component provides everything you need to create an agent, including multiple Large Language Model (LLM) providers, tool calling, and custom instructions.
@@ -19,22 +20,14 @@ The following steps explain how to create an agent flow in Langflow from a blank
For a prebuilt example, use the **Simple Agent** template or the [Langflow quickstart](/get-started-quickstart).
1. Click **New Flow**, and then click **Blank Flow**.
2. Add an **Agent** component to your flow.
3. <PartialGlobalModelProviders />
4. Select the model that you want to use from the **Language Model** dropdown.
If your preferred model isn't listed, make sure it's enabled in the **Models** configuration.
For more information, see [Agent component parameters](#agent-component-parameters).
5. Add [**Chat Input** and **Chat Output** components](/chat-input-and-output) to your flow, and then connect them to the **Agent** component.
At this point, you have created a basic LLM-based chat flow that you can test in the <Icon name="Play" aria-hidden="true" /> **Playground**.
At this point, you have created a basic LLM-based chat flow that you can test in the <Icon name="Play" aria-hidden="true"/> **Playground**.
However, this flow only chats with the LLM.
To enhance this flow and make it truly agentic, add some tools, as explained in the next steps.
@@ -56,7 +49,7 @@ Make sure that the credential has permission to call the selected model.
![A more complex agent chat flow where three components are connected to the Agent component as tools](/img/agent-example-add-tools.png)
8. Open the <Icon name="Play" aria-hidden="true" /> **Playground**, and then ask the agent, `What tools are you using to answer my questions?`
8. Open the <Icon name="Play" aria-hidden="true"/> **Playground**, and then ask the agent, `What tools are you using to answer my questions?`
The agent should respond with a list of the connected tools.
It may also include built-in tools.
@@ -89,28 +82,22 @@ You can configure the **Agent** component to use your preferred provider and mod
### Provider and model
Use the **Language Model** (`agent_llm`) setting to select the LLM that you want the agent to use.
<PartialGlobalModelProviders />
To use a model with the **Agent** component, select the model in the **Agent** component's **Language Model** field.
The **Language Model** field lists all language models that you've configured globally. Providers without any available language models aren't listed.
For example, if a provider offers only embeddings models, those models aren't listed on the **Agent** component.
To access other providers or models, you can do either of the following:
* Connect any [language model component](/components-models) to the **Agent** component's **Language Model** port. This option allows you to connect a custom language model component to use models that aren't available in the global model providers list.
* Configure additional providers in the **Models** pane, and then select the model from the **Language Model** dropdown.
If you need to generate embeddings in your flow, use an [embedding model component](/components-embedding-models).
### Model provider API key
If you are using a built-in provider, enter a valid authentication key for your selected model provider in the **API Key** field.
For example, to use the default OpenAI model, you must provide a valid OpenAI API key for an OpenAI account that has credits and permission to call OpenAI LLMs.
You can enter the key directly, but it is recommended that you follow industry best practices for storing and referencing API keys.
For example, you can use a <Icon name="Globe" aria-hidden="true"/> [global variable](/configuration-global-variables) or [environment variables](/environment-variables).
For more information, see [Add component API keys to Langflow](/api-keys-and-authentication#component-api-keys).
If you connect a separate language model component instead, authentication is handled in the incoming language model component.
### Agent instructions and input
In the **Agent Instructions** (`system_prompt`) field, you can provide custom instructions that you want the **Agent** component to use for every conversation.


@@ -6,6 +6,8 @@ slug: /mcp-client
import Icon from "@site/src/components/icon";
import McpIcon from '@site/static/logos/mcp-icon.svg';
import PartialMcpNodeTip from '@site/docs/_partial-mcp-node-tip.mdx';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Langflow integrates with the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) as both an MCP server and an MCP client.
@@ -24,6 +26,8 @@ This component has two modes, depending on the type of server you want to access
### Connect to a non-Langflow MCP server {#mcp-stdio-mode}
<PartialMcpNodeTip />
1. Add an **MCP Tools** component to your flow.
2. In the **MCP Server** field, select a previously connected server or click <Icon name="Plus" aria-hidden="true"/> **Add MCP Server**.
@@ -36,35 +40,31 @@ This component has two modes, depending on the type of server you want to access
* **HTTP/SSE**: Enter your MCP server's **Name**, **URL**, and any **Headers** and **Environment Variables** the server uses, and then click **Add Server**.
The default **URL** for Langflow MCP servers is `http://localhost:7860/api/v1/mcp/project/PROJECT_ID/streamable` or `http://localhost:7860/api/v1/mcp/streamable`. For more information, see [Connect to a Langflow MCP server](#mcp-http-mode).
3. To configure headers for your MCP server, enter each header in the **Headers** fields as key-value pairs.
You can use [global variables](/configuration-global-variables) in header values by entering the global variable name as the header value.
For more information, see [Use global variables in MCP server headers](#use-global-variables-in-mcp-server-headers).
4. To use environment variables in your server command, enter each variable in the **Env** fields as key-value pairs.
:::tip
Langflow passes environment variables from the `.env` file to MCP, but it doesn't pass global variables declared in your Langflow **Settings**.
To define an MCP server environment variable as a global variable, add it to Langflow's `.env` file at startup.
For more information, see [global variables](/configuration-global-variables).
:::
5. In the **Tool** field, select a tool that you want this component to use, or leave the field blank to allow access to all tools provided by the MCP server.
If you select a specific tool, you might need to configure additional tool-specific fields. For information about tool-specific fields, see your MCP server's documentation.
At this point, the **MCP Tools** component is serving a tool from the connected server, but nothing is using the tool. The next steps explain how to make the tool available to an [**Agent** component](/components-agents) so that the agent can use the tool in its responses.
6. In the [component's header menu](/concepts-components#component-menus), enable **Tool mode** so you can use the component with an agent.
7. Connect the **MCP Tools** component's **Toolset** port to an **Agent** component's **Tools** port.
If not already present in your flow, make sure you also attach **Chat Input** and **Chat Output** components to the **Agent** component.
![MCP Tools component in STDIO mode](/img/component-mcp-stdio.png)
8. Test your flow to make sure the MCP server is connected and the selected tool is used by the agent. Open the **Playground**, and then enter a prompt that uses the tool you connected through the **MCP Tools** component.
For example, if you use `mcp-server-fetch` with the `fetch` tool, you could ask the agent to summarize recent tech news. The agent calls the MCP server function `fetch`, and then returns the response.
9. If you want the agent to be able to use more tools, repeat these steps to add more tools components with different servers or tools.
### Connect a Langflow MCP server {#mcp-http-mode}
@@ -110,6 +110,167 @@ To add a new MCP server, click **Add MCP Server**, and then follow the steps in
Click <Icon name="Ellipsis" aria-hidden="true"/> **More** to edit or delete an MCP server connection.
## Modify MCP server environment variables with the API {#mcp-api-tweaks}
You can modify MCP server environment variables at runtime when running flows through the [Langflow API](/api-reference-api-examples) by tweaking the **MCP Tools** component.
You can include tweaks with any Langflow API request that supports the `tweaks` parameter, such as POST requests to the `/run` or `/webhook` endpoints.
For more information, see [Input schema (tweaks)](/concepts-publish#input-schema).
To modify the **MCP Tools** component's environment variables with tweaks, do the following:
1. Open the flow that contains your **MCP Tools** component.
2. To find the **MCP Tools** component's unique ID, in your **MCP Tools** component, click <Icon name="SlidersHorizontal" aria-hidden="true"/> **Controls**.
The component's ID is displayed in the **Controls** pane, such as `MCPTools-Bzahc`.
3. Send a POST request to the Langflow server's `/run` endpoint, and include tweaks to the **MCP Tools** component.
The following examples demonstrate a request structure with the `env` object nested under `mcp_server` in the `tweaks` payload:
<Tabs groupId="language">
<TabItem value="python" label="Python" default>
```python
import requests
import os
LANGFLOW_SERVER_ADDRESS = "http://localhost:7860"
FLOW_ID = "your-flow-id"
LANGFLOW_API_KEY = os.getenv("LANGFLOW_API_KEY")
MCP_TOOLS_COMPONENT_ID = "MCPTools-Bzahc"
url = f"{LANGFLOW_SERVER_ADDRESS}/api/v1/run/{FLOW_ID}?stream=false"
headers = {
"Content-Type": "application/json",
"x-api-key": LANGFLOW_API_KEY
}
payload = {
"output_type": "chat",
"input_type": "chat",
"input_value": "What sales data is available to me?",
"tweaks": {
MCP_TOOLS_COMPONENT_ID: {
"mcp_server": {
"env": {
"API_URL": "https://api.example.com",
"API_KEY": "your-mcp-server-api-key",
"ENVIRONMENT": "production"
}
}
}
}
}
response = requests.post(url, json=payload, headers=headers)
print(response.json())
```
</TabItem>
<TabItem value="typescript" label="TypeScript">
```typescript
const LANGFLOW_SERVER_ADDRESS = "http://localhost:7860";
const FLOW_ID = "your-flow-id";
const LANGFLOW_API_KEY = process.env.LANGFLOW_API_KEY || "";
const MCP_TOOLS_COMPONENT_ID = "MCPTools-Bzahc";
const url = `${LANGFLOW_SERVER_ADDRESS}/api/v1/run/${FLOW_ID}?stream=false`;
const response = await fetch(url, {
method: "POST",
headers: {
"Content-Type": "application/json",
"x-api-key": LANGFLOW_API_KEY,
},
body: JSON.stringify({
output_type: "chat",
input_type: "chat",
input_value: "What sales data is available to me?",
tweaks: {
[MCP_TOOLS_COMPONENT_ID]: {
mcp_server: {
env: {
API_URL: "https://api.example.com",
API_KEY: "your-mcp-server-api-key",
ENVIRONMENT: "production",
},
},
},
},
}),
});
const data = await response.json();
console.log(data);
```
</TabItem>
<TabItem value="curl" label="cURL">
```bash
curl --request POST \
--url "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID?stream=false" \
--header "Content-Type: application/json" \
--header "x-api-key: LANGFLOW_API_KEY" \
--data '{
"output_type": "chat",
"input_type": "chat",
"input_value": "What sales data is available to me?",
"tweaks": {
"MCP_TOOLS_COMPONENT_ID": {
"mcp_server": {
"env": {
"API_URL": "https://api.example.com",
"API_KEY": "your-mcp-server-api-key",
"ENVIRONMENT": "production"
}
}
}
}
}'
```
</TabItem>
</Tabs>
Replace `MCP_TOOLS_COMPONENT_ID`, `LANGFLOW_API_KEY`, `LANGFLOW_SERVER_ADDRESS`, and `FLOW_ID` with the actual values from your Langflow deployment.
Langflow doesn't automatically discover or expose which environment variables your MCP server accepts from the **MCP Tools** component.
To determine which environment variables your MCP server accepts, see the MCP server's documentation. For example, the [Astra DB MCP server](https://github.com/datastax/astra-db-mcp) requires `ASTRA_DB_APPLICATION_TOKEN` and `ASTRA_DB_API_ENDPOINT`, with an optional variable for `ASTRA_DB_KEYSPACE`, as documented in its repository.
## Use global variables in MCP server headers {#use-global-variables-in-mcp-server-headers}
You can use [global variables](/configuration-global-variables) in MCP server header values to securely store and reference API keys, authentication tokens, and other sensitive values. This is particularly useful for deployment scenarios where you need to pass user-specific credentials at runtime.
Enter a global variable name as the header value, and Langflow resolves the global variable name to its actual value before making the MCP server request. Langflow only passes the token value to your server; it doesn't validate tokens on behalf of your MCP server.
For example, to create a global variable named `TEST_BEARER_TOKEN` for MCP server bearer authentication, do the following:
1. To open the **Global Variables** pane, click your profile icon, select **Settings**, and then click <Icon name="Globe" aria-hidden="true"/> **Global Variables**.
2. Create a **Credential** global variable named `TEST_BEARER_TOKEN`.
3. In the **Value** field, enter your MCP server's bearer token value. The value must include the `Bearer` prefix with a space, for example: `Bearer eyJhbG...`.
4. Click **Save Variable**.
5. To manage MCP server connections for your Langflow client, click <McpIcon /> **MCP servers** in the visual editor, or click your profile icon, select **Settings**, and then click **MCP Servers**.
6. Click <Icon name="Plus" aria-hidden="true"/> **Add MCP Server**.
7. Complete the following fields:
* **Name**: test-mcp-server
* **Streamable HTTP/SSE URL**: Your MCP server's URL, such as `http://127.0.0.1:8000/mcp`.
* **Headers**: In the key field, enter the literal string `Authorization`. For the key's value, enter `TEST_BEARER_TOKEN` (or the exact name of your global variable).
8. Click **Create Server**.
If the connection succeeds, Langflow shows the number of tools exposed by the server.
After creating the server and global variable, you can connect to the server with the **MCP Tools** component, as explained in the next steps.
9. Add the **MCP Tools** component to a flow.
10. In the **MCP Tools** component, select the **MCP Server** you created.
The MCP server configuration already includes the headers you configured earlier, so no further configuration is needed in the component. The global variable `TEST_BEARER_TOKEN` is automatically resolved when the component makes requests to the MCP server.
11. Optional: To override headers or add additional headers to the **MCP Tools** component, click the component to view the **Headers** parameter in the [component inspection panel](/concepts-components#component-menus), and then add header key values. Headers configured in the component take precedence over the headers configured in the MCP server settings.
12. Test your flow to make sure the agent uses your server to respond to queries. Open the **Playground**, and then enter a prompt that uses a tool that you connected through the **MCP Tools** component.
Langflow automatically resolves `TEST_BEARER_TOKEN` to its actual value before sending the request to the MCP server. When your MCP server receives the request, the `Authorization` header contains the resolved token value.
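The resolution behavior described above can be pictured as a simple lookup: if a header value matches the name of a global variable, the stored value is substituted before the request is sent. The following is a simplified sketch of that behavior, not Langflow's actual implementation:

```python
def resolve_headers(headers, global_variables):
    """Replace header values that name a global variable with the stored value.

    Values that don't match a global variable name pass through unchanged.
    """
    return {
        key: global_variables.get(value, value)
        for key, value in headers.items()
    }


# A Credential global variable whose value includes the "Bearer " prefix.
global_vars = {"TEST_BEARER_TOKEN": "Bearer eyJhbGciOiJIUzI1NiJ9.example"}
configured = {"Authorization": "TEST_BEARER_TOKEN"}

resolved = resolve_headers(configured, global_vars)
print(resolved["Authorization"])  # The MCP server receives the resolved token.
```

The variable name and token value here are placeholders; the MCP server itself is still responsible for validating the token it receives.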
## See also
- [Use Langflow as an MCP server](/mcp-server)


@@ -28,7 +28,7 @@ This guide demonstrates how to [use Langflow as an MCP client](/mcp-client) by u
1. In the **MCP Server** field, click <Icon name="Plus" aria-hidden="true"/> **Add MCP Server**.
2. Select **Stdio** mode.
3. In the **Name** field, enter a name for the MCP server.
4. In the **Command** field, add the following code to connect to an Astra DB MCP server:
```bash


@@ -12,6 +12,7 @@ Langflow integrates with the [Model Context Protocol (MCP)](https://modelcontext
This page describes how to use Langflow as an MCP server that exposes your flows as [tools](https://modelcontextprotocol.io/docs/concepts/tools) that [MCP clients](https://modelcontextprotocol.io/clients) can use when generating responses.
Langflow MCP servers support both the **streamable HTTP** transport and **Server-Sent Events (SSE)** as a fallback.
The default project MCP server configuration uses streamable HTTP transport at the URL path `/streamable`.
For information about using Langflow as an MCP client and managing MCP server connections within flows, see [Use Langflow as an MCP client](/mcp-client).
@@ -146,6 +147,8 @@ For example:
"command": "uvx",
"args": [
"mcp-proxy",
"--transport",
"streamablehttp",
"http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
]
}
@@ -161,7 +164,7 @@ For example:
If your Langflow server requires authentication, you must include your Langflow API key or OAuth settings in the configuration.
For more information, see [MCP server authentication](#authentication).
6. To include other environment variables with your MCP server command, add an `env` object with key-value pairs of environment variables. For example:
```json
{
@@ -170,6 +173,8 @@ For example:
"command": "uvx",
"args": [
"mcp-proxy",
"--transport",
"streamablehttp",
"http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
],
"env": {
@@ -180,6 +185,10 @@ For example:
}
```
Don't add API keys in the `env` object, as these variables are specifically for the `mcp-proxy` process.
Instead, add API keys under `args`.
For an example, see [MCP server authentication](#authentication).
7. Save and close your client's MCP configuration file.
8. Confirm that your Langflow MCP server is on the client's list of MCP servers.
@@ -226,11 +235,32 @@ To configure authentication for a Langflow MCP server, go to the **Projects** pa
<Tabs groupId="auth-type">
<TabItem value="API key" label="API key">
When authenticating your MCP server with a Langflow API key, your project's MCP server **JSON** code snippets and **Auto install** configuration automatically include the `--headers` and `x-api-key` arguments in the **args** array (for streamable transport).
Click <Icon name="key" aria-hidden="true"/> **Generate API key** to automatically insert a new Langflow API key into the code template.
Alternatively, you can replace `YOUR_API_KEY` with an existing Langflow API key.
To add your API key to the configuration, use three separate entries in `args`: `"--headers"`, `"x-api-key"`, and your key value. For example:
```json
{
"mcpServers": {
"PROJECT_NAME": {
"command": "uvx",
"args": [
"mcp-proxy",
"--transport",
"streamablehttp",
"--headers",
"x-api-key",
"YOUR_API_KEY",
"http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
]
}
}
}
```
</TabItem>
<TabItem value="OAuth" label="OAuth">


@@ -34,7 +34,7 @@ For example, if you want to extract text from a `name` column in a CSV file, ent
4. Connect the **Batch Run** component's **Batch Results** output to a **Parser** component's **DataFrame** input.
5. Optional: In the **Batch Run** [component menu](/concepts-components#component-menus), enable the **System Message** parameter, click **Close**, and then enter an instruction for how you want the LLM to process each cell extracted from the file.
For example, `Create a business card for each name.`
6. In the **Parser** component's **Template** field, enter a template for processing the **Batch Run** component's new `DataFrame` columns (`text_input`, `model_response`, and `batch_index`):


@@ -62,8 +62,8 @@ For example, this schema definition creates the following DataFrame output:
|------|------|-------------|
| Language Model | Dropdown | Select the LLM provider and model. Use guided experience. |
| Input DataFrame | DataFrame | Optional. Example DataFrame to learn from; only first 50 rows used. If not provided, Schema is used. |
| Schema | Table | Define columns to generate when no Input DataFrame is provided. See the component's schema definition. |
| Instructions | String | Optional instructions for generation. |
| Number of Rows to Generate | Integer | How many synthetic rows to create. Default: 10. |
## aMap component
@@ -98,7 +98,7 @@ For example, **aMap** keeps each input row and fills in `sentiment`, `confidence
|------|------|-------------|
| Language Model | Dropdown | Select the LLM provider and model. Use guided experience. |
| Input DataFrame | DataFrame | Input DataFrame (list of dicts or DataFrame). Each row is processed independently. |
| Schema | Table | Define the structure and types for generated columns. See the component's schema definition. |
| Instructions | String | Natural language instructions for transforming each row into the output schema. |
| As List | Boolean | If true, generate multiple instances of the schema per row and concatenate. |
| Keep Source Columns | Boolean | If `true`, append new columns to original data; if false, return only generated columns. Ignored if As List is true. Default: `true`. |
@@ -139,7 +139,7 @@ It sums revenue into `total_revenue`, identifies the best-selling product in `be
|------|------|-------------|
| Language Model | Dropdown | Select the LLM provider and model. |
| Input DataFrame | DataFrame | Input DataFrame (list of dicts or DataFrame). Required. |
| Schema | Table | Define the structure and types for the aggregated output. See the component's schema definition. |
| As List | Boolean | If true, output is a list of instances of the schema. |
| Instructions | String | Optional instructions for aggregation. If omitted, the LLM infers from field descriptions. |


@@ -206,7 +206,7 @@ All single-service Composio components have the same parameters, and the **Compo
| Name | Type | Description |
|------|------|-------------|
| entity_id | String | Input parameter. The entity ID for the Composio account. Default: `default`. This parameter is hidden by default in the visual editor. If you need to set this parameter, you can access it through the [component inspection panel](/concepts-components#component-menus). |
| api_key | SecretString | Input parameter. The Composio API key for authentication with the Composio platform. Make sure the key authorizes the specific service that you want to use. For more information, see [Composio authentication](#composio-authentication). |
| tool_name | Connection | Input parameter for the **Composio Tools** component only. Select the Composio service (tool) to connect to. |
| action | List | Input parameter. Select actions to use. Available actions vary by service. Some actions might require premium access to a particular service. |


@@ -19,7 +19,7 @@ Like the core **Agent** component, the **CUGA** component can use tools connecte
It also includes some additional features:
* Browser automation for web scraping with [Playwright](https://playwright.dev/docs/intro).
To enable web scraping, set the component's `browser_enabled` parameter to `true`.
* Load custom instructions for the agent to execute.
To use this feature, use the component's **Instructions** input to attach markdown files containing agent instructions.
@@ -90,6 +90,6 @@ This example asked about the sales data provided by the MCP Server, such as `Whi
| add_current_date_tool | Boolean | If true, adds a tool that returns the current date. Default: `true`. |
| lite_mode | Boolean | Set to `true` to enable CugaLite mode for faster execution when using a smaller number of tools. Default: `true`. |
| lite_mode_tool_threshold | Integer | The threshold to automatically enable CugaLite. If the CUGA component has fewer tools connected than this threshold, CugaLite is activated. Default: `25`. |
| shortlisting_tool_threshold | Integer | The threshold for tool shortlisting. When the total number of tools exceeds this threshold, the CUGA component enables its `find_tools` feature to filter tools down to a smaller subset before making tool selection decisions. This helps reduce token usage and improve performance when working with large numbers of tools. Default: `35`. |
| decomposition_strategy | Dropdown | Strategy for task decomposition. `flexible` allows multiple subtasks per app. `exact` enforces one subtask per app. Default: `flexible`. |
| browser_enabled | Boolean | Enable a built-in browser for web scraping and search. Allows the agent to use general web search in its responses. Disable (`false`) to restrict the agent to the context provided in the flow. Default: `false`. |
| web_apps | Multiline String | When `browser_enabled` is `true`, specify a single URL such as `https://example.com` that the agent can open with the built-in browser. The CUGA component can access both public and private internet resources. There is no built-in mechanism in the CUGA component to restrict access to only public internet resources. |


@@ -112,7 +112,7 @@ This input only appears after connecting a collection that supports hybrid search
7. Update the **Structured Output** template:
1. Click the **Structured Output** component to expose the [component inspection panel](/concepts-components#component-menus).
2. Find the **Format Instructions** row, click <Icon name="Expand" aria-hidden="true"/> **Expand**, and then replace the prompt with the following text:
```text


@@ -0,0 +1,39 @@
---
title: LiteLLM
slug: /bundles-lite-llm
---
import Icon from "@site/src/components/icon";
import PartialParams from '@site/docs/_partial-hidden-params.mdx';
<Icon name="Blocks" aria-hidden="true" /> [**Bundles**](/components-bundle-components) contain custom components that support specific third-party integrations with Langflow.
The **LiteLLM** bundle component connects to models through a LiteLLM proxy, which routes requests to multiple LLM providers.
Using a proxy lets you change model providers without changing credentials in your flows.
You authenticate to the proxy using a single key, and the proxy then uses its own configured credentials to call providers.
Virtual keys are created by the proxy administrator. For more information on managing virtual keys, see [Virtual Keys](https://docs.litellm.ai/docs/proxy/virtual_keys) in the LiteLLM documentation.
## LiteLLM Proxy text generation
The **LiteLLM Proxy** component generates text using an LLM provider.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
Use the **Language Model** output when you want to use a LiteLLM proxy-backed model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
### LiteLLM Proxy parameters
<PartialParams />
| Name | Type | Description |
|------|------|-------------|
| api_base | String | Input parameter. Base URL of the LiteLLM proxy. Default: `"http://localhost:4000/v1"`. |
| api_key | String | Input parameter. Virtual key for authentication with the LiteLLM proxy. |
| model_name | String | Input parameter. Model name to use, such as `gpt-4o` or `claude-3-opus`. |
| temperature | Float | Input parameter. Controls randomness. Lower values are more deterministic. Range: `[0.0, 2.0]`. Default: `0.7`. |
| max_tokens | Integer | Input parameter. Maximum number of tokens to generate. Set to `0` for no limit. Range: `[0, 128000]`. Advanced. |
| timeout | Integer | Input parameter. Request timeout in seconds. Default: `60`. |
| max_retries | Integer | Input parameter. Maximum number of retries on failure. Default: `2`. |
| stream | Boolean | Input parameter. Whether to stream the response. |
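LiteLLM proxies expose an OpenAI-compatible API, so the component's parameters map onto an ordinary chat-completions request against `api_base`. The following sketch shows that mapping; the endpoint path and field names follow the OpenAI-style convention, and all values are placeholders, not a definitive implementation of the component:

```python
import json

# Parameter values as they might be configured on the component (placeholders).
config = {
    "api_base": "http://localhost:4000/v1",
    "api_key": "sk-virtual-key-placeholder",  # Virtual key issued by the proxy admin.
    "model_name": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 0,  # 0 means "no limit" per the parameter table above.
}

# OpenAI-style chat-completions payload sent to the proxy.
payload = {
    "model": config["model_name"],
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": config["temperature"],
}
if config["max_tokens"] > 0:  # Omit the token cap entirely when set to 0.
    payload["max_tokens"] = config["max_tokens"]

url = config["api_base"].rstrip("/") + "/chat/completions"
headers = {"Authorization": f"Bearer {config['api_key']}"}
print(url)
print(json.dumps(payload))
```

Because the proxy holds the real provider credentials, swapping providers only requires changing the proxy's routing, not this request.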


@@ -28,7 +28,7 @@ To use the **Ollama** component in a flow, connect Langflow to your locally runn
To refresh the server's list of models, click <Icon name="RefreshCw" aria-hidden="true"/> **Refresh**.
4. Optional: To configure additional parameters, such as temperature or max tokens, click the component to open the [component inspection panel](/concepts-components#component-menus).
5. Connect the **Ollama** component to other components in the flow, depending on how you want to use the model.
@@ -55,7 +55,7 @@ To use this component in a flow, connect Langflow to your locally running Ollama
To refresh the server's list of models, click <Icon name="RefreshCw" aria-hidden="true"/> **Refresh**.
4. Optional: To configure additional parameters, such as temperature or max tokens, click the component to open the [component inspection panel](/concepts-components#component-menus).
Available parameters depend on the selected model.
5. Connect the **Ollama Embeddings** component to other components in the flow.


@@ -22,7 +22,7 @@ Some bundles have no documentation.
To find documentation for a specific bundled component, browse the Langflow docs and your provider's documentation.
If available, you can also find links to relevant documentation, such as API endpoints, through the component itself:
1. Click the component to expose the [component inspection panel](/concepts-components#component-menus).
2. Click <Icon name="Ellipsis" aria-hidden="true" /> **More**.
3. Select **Docs**.


@@ -4,6 +4,7 @@ slug: /components-embedding-models
---
import Icon from "@site/src/components/icon";
import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
Embedding model components in Langflow generate text embeddings using a specified Large Language Model (LLM).
@@ -21,33 +22,34 @@ This flow loads a text file, splits the text into chunks, generates embeddings f
1. Create a flow, add a **Read File** component, and then select a file containing text data, such as a PDF, that you can use to test the flow.
2. <PartialGlobalModelProviders />
:::tip My preferred provider or model isn't listed
If your preferred embedding model provider or model isn't available in Langflow's global <Icon name="BrainCog" aria-hidden="true" /> **Models**, you can use any [additional embedding models](#additional-embedding-models) in place of the core component.
Browse <Icon name="Blocks" aria-hidden="true" /> [**Bundles**](/components-bundle-components) or <Icon name="Search" aria-hidden="true" /> **Search** for your preferred provider to find additional embedding models, such as the [**Hugging Face Embeddings Inference** component](/bundles-huggingface#hugging-face-embeddings-inference).
:::
3. Add the **Embedding Model** core component to your flow, and then select your configured embedding model from the **Embedding Model** dropdown.
4. Add a [**Split Text** component](/split-text) to your flow.
This component splits text input into smaller chunks to be processed into embeddings.
5. Add a vector store component, such as the **Chroma DB** component, to your flow, and then configure the component to connect to your vector database.
This component stores the generated embeddings so they can be used for similarity search.
6. Connect the components:
* Connect the **Read File** component's **Loaded Files** output to the **Split Text** component's **Data or DataFrame** input.
* Connect the **Split Text** component's **Chunks** output to the vector store component's **Ingest Data** input.
* Connect the **Embedding Model** component's **Embeddings** output to the vector store component's **Embedding** input.
7. To query the vector store, add [**Chat Input and Output** components](/chat-input-and-output):
* Connect the **Chat Input** component to the vector store component's **Search Query** input.
* Connect the vector store component's **Search Results** output to the **Chat Output** component.
8. Click **Playground**, and then enter a search query to retrieve text chunks that are most semantically similar to your query.
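Behind that search query, the vector store ranks stored chunks by how similar their embeddings are to the query's embedding, typically with cosine similarity. The following is a minimal, self-contained sketch of that ranking using toy three-dimensional vectors (real embedding models emit hundreds to thousands of dimensions), not the vector store's actual implementation:

```python
import math


def cosine_similarity(a, b):
    """Similarity between two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Toy embeddings for two stored chunks.
chunks = {
    "chunk about dogs": [0.9, 0.1, 0.0],
    "chunk about cars": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # Embedding of the search query.

# The vector store returns the chunks closest to the query.
best = max(chunks, key=lambda name: cosine_similarity(chunks[name], query))
print(best)  # → chunk about dogs
```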
## Embedding Model parameters
@@ -60,9 +62,8 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
| Name | Display Name | Type | Description |
|------|--------------|------|-------------|
| provider | Model Provider | List | Input parameter. Select the embedding model provider. Models are configured globally in the **Models** pane. |
| model | Model Name | List | Input parameter. Select the embedding model to use. Options depend on the selected provider and are configured globally in the **Models** pane. |
| api_base | API Base URL | String | Input parameter. Base URL for the API. Leave empty for default. |
| dimensions | Dimensions | Integer | Input parameter. The number of dimensions for the output embeddings. |
| chunk_size | Chunk Size | Integer | Input parameter. The size of text chunks to process. Default: `1000`. |
@@ -74,7 +75,7 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
## Additional embedding models
If your provider or model isn't available in Langflow's global <Icon name="BrainCog" aria-hidden="true" /> **Models**, you can replace the **Embedding Model** core component with any other component that generates embeddings.
To find additional embedding model components, browse <Icon name="Blocks" aria-hidden="true" /> [**Bundles**](/components-bundle-components) or <Icon name="Search" aria-hidden="true" /> **Search** for your preferred provider.


@@ -6,6 +6,7 @@ slug: /components-models
import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
Language model components in Langflow generate text using a specified Large Language Model (LLM).
These components accept inputs like chat messages, files, and instructions in order to generate a text response.
@@ -24,18 +25,23 @@ One of the most common use cases of language model components is to chat with LL
The following example uses a language model component in a chatbot flow similar to the **Basic Prompting** template.
1. <PartialGlobalModelProviders />
:::tip My preferred provider or model isn't listed
If you want to use a provider or model that isn't built into Langflow's global <Icon name="BrainCog" aria-hidden="true" /> **Models**, you can replace the **Language Model** component with any [additional language model component](#additional-language-models).
Browse <Icon name="Blocks" aria-hidden="true" /> [**Bundles**](/components-bundle-components) or <Icon name="Search" aria-hidden="true" /> **Search** for your preferred provider to find additional language models.
Alternatively, you can use Ollama to host your preferred model, and then configure your Ollama service in Langflow's global <Icon name="BrainCog" aria-hidden="true" /> **Models**.
Or, create your own custom component to support any provider and model of your choice, and then use your custom component in place of the **Language Model** core component. As a shortcut, use an existing language model component as the basis for your custom component.
:::
2. Add the **Language Model** core component to your flow, and then select your model from the **Language Model** field.
Optionally, to configure API keys and enable or disable models, click **Manage Model Providers** to open the **Model Providers** pane.
3. In the [component inspection panel](/concepts-components#component-menus), enable the **System Message** parameter.
4. Add a [**Prompt Template** component](/components-prompts) to your flow.
@@ -65,8 +71,8 @@ These components are required for direct chat interaction with an LLM.
</details>
10. Optional: Try a different model or provider to see how the response changes.
If you enabled multiple models in Langflow's global **Model Providers** pane, select a different model in the **Language Model** field. To open the **Model Providers** pane, click your profile icon, select **Settings**, and then click <Icon name="Brain" aria-hidden="true"/> **Model Providers**.
Then, open the **Playground**, ask the same question as you did before, and then compare the content and format of the responses.
This helps you understand how different models handle the same request so you can choose the best model for your use case.
@@ -103,7 +109,7 @@ For more information, see [Language Model output types](#language-model-output-t
</TabItem>
<TabItem value="agents" label="Agents">
If you don't want to use the **Agent** component's built-in LLM, you can use a language model component to connect your preferred model:
1. Add a language model component to your flow.
@@ -111,17 +117,14 @@ If you don't want to use the **Agent** component's built-in LLMs, you can use a
Components in bundles may not have `language model` in the name.
For example, Azure OpenAI LLMs are provided through the [**Azure OpenAI** component](/bundles-azure#azure-openai).
2. Select your preferred model from the **Language Model** dropdown. The model must be configured globally in the **Models** pane.
3. Change the language model component's output type from **Model Response** to **Language Model**.
The output port changes to a `LanguageModel` port.
This is required to connect the language model component to the **Agent** component.
For more information, see [Language Model output types](#language-model-output-types).
4. Add an **Agent** component to the flow.
5. Connect the language model component's output to the **Agent** component's **Language Model** input.
The **Agent** component now inherits the language model settings from the connected language model component instead of using any of the built-in models.
@@ -139,9 +142,8 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
| Name | Type | Description |
|------|------|-------------|
| provider | String | Input parameter. The model provider to use. Options depend on your global <Icon name="BrainCog" aria-hidden="true" /> **Models** configuration. |
| model_name | String | Input parameter. The name of the model to use. Options depend on the selected provider and your global <Icon name="BrainCog" aria-hidden="true" /> **Models** configuration. |
| input_value | String | Input parameter. The input text to send to the model. |
| system_message | String | Input parameter. A system message that helps set the behavior of the assistant. |
| stream | Boolean | Input parameter. Whether to stream the response. Default: `false`. |


@@ -19,10 +19,10 @@ The **Prompt Template** component can also output variable instructions to other
## Prompt Template parameters
| Name                | Display Name        | Description |
|---------------------|---------------------|-------------------------------------------------------------------|
| template            | Template            | Input parameter. Create a prompt template with dynamic variables in curly braces, such as `{VARIABLE_NAME}`. <PartialCurlyBraces /> |
| use_double_brackets | Use Double Brackets | When enabled, use Mustache syntax `{{variable}}` instead of f-string syntax `{variable}`. For more information, see [Use Mustache templating in prompt templates](#use-mustache-templating-in-prompt-templates). |
## Define variables in prompts
@@ -70,6 +70,42 @@ The following steps demonstrate how to add variables to a **Prompt Template** co
You can add as many variables as you like in your template.
For example, you could add variables for `{references}` and `{instructions}`, and then feed that information in from other components, such as **Text Input**, **URL**, or **Read File** components.
### Use Mustache templating in prompt templates
F-string escaping can become confusing when you mix escaped braces with variables in the same template.
For example:
```text
Generate a response in this JSON format:
{{"name": "{name}", "age": {age}, "city": "{city}"}}
The user's name is {name}, age is {age}, and they live in {city}.
```
The characters `{{` and `}}` are escaped literal braces for the JSON structure, but `{name}` is a variable.
This can make prompts error-prone and difficult to parse.
Use [Mustache](https://mustache.github.io) in your prompt templates to make the differences clearer.
To enable Mustache templating, do the following:
1. In the **Prompt Template** component, enable **Use Double Brackets**.
2. In your prompt template, change the variables from `{variable}` to `{{variable}}`.
With double brackets enabled, single braces `{` and `}` are treated as literal characters, and `{{variable}}` marks a variable.
```text
Generate a response in this JSON format:
{"name": "{{name}}", "age": {{age}}, "city": "{{city}}"}
The user's name is {{name}}, age is {{age}}, and they live in {{city}}.
```
3. Click **Check & Save**.
The component lints the template code and returns **Prompt is ready** if there are no errors.
Your prompt is now ready to use in a flow.
Langflow supports variable replacement with double brackets, but does not support the full Mustache engine.
The prompt component validation rejects syntax for other Mustache features such as loops and conditionals.
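As a sketch of this substitution rule, double-bracket variables are replaced while single braces pass through as literal text. This is illustrative only, not Langflow's actual template engine:

```python
import re

def render_double_brackets(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders; single braces are left as literal text."""
    def substitute(match: re.Match) -> str:
        return str(variables[match.group(1).strip()])
    return re.sub(r"\{\{\s*([^{}]+?)\s*\}\}", substitute, template)

template = 'JSON format: {"name": "{{name}}", "age": {{age}}}'
print(render_double_brackets(template, {"name": "Ada", "age": 36}))
# → JSON format: {"name": "Ada", "age": 36}
```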
## See also
* [**LangChain Prompt Hub** component](/bundles-langchain#prompt-hub)

View File

@@ -36,7 +36,13 @@ After adding a component to a flow, configure the component's parameters and con
Each component has inputs, outputs, parameters, and controls related to the component's purpose.
By default, components show only required and common options.
To access additional settings and controls, including meta settings, use the [component inspection panel](#component-inspection-panel).
### Component inspection panel {#component-inspection-panel}
When you select a component in the workspace, a component inspection panel appears on the right side of the screen.
The inspection panel displays all of a component's parameters, including hidden or advanced parameters.
### Component header menus
@@ -44,11 +50,10 @@ To access a component's header menu, click the component in your workspace.
![Agent component](/img/agent-component.png)
The following options are available directly on the header menu:
- **Code**: Modify component settings by directly editing the component's Python code.
- **Controls**: Adjust all component parameters, including optional settings that are hidden by default.
- **Freeze**: Freeze a component and all upstream components to prevent re-running. For more information, see [Freeze a component](#freeze-a-component).
- **Tool Mode**: Enable this option when combining a component with an **Agent** component.
For all other options, including **Delete** and **Duplicate** controls, click <Icon name="Ellipsis" aria-hidden="true" /> **Show More**.
@@ -80,7 +85,7 @@ Use the freeze option if you expect consistent output from a component _and all
Freezing a component prevents that component and all upstream components from re-running, and it preserves the last output state for those components.
Any future flow runs use the preserved output.
To freeze a component, click the component in the workspace to expose the component's header menu, and then click **Freeze**.
## Component ports

View File

@@ -0,0 +1,64 @@
---
title: Guardrails
slug: /guardrails
---
import Icon from "@site/src/components/icon";
import PartialParams from '@site/docs/_partial-hidden-params.mdx';
The **Guardrails** component validates input text against security and safety guardrails by issuing prompts to a language model (LLM) to check for violations.
The component can validate input against the following guardrails:
- **PII**: Detects personally identifiable information such as names, addresses, phone numbers, email addresses, social security numbers, credit card numbers, or other personal data.
- **Tokens/Passwords**: Detects API tokens, passwords, API keys, access keys, secret keys, authentication credentials, or other sensitive credentials.
- **Jailbreak**: Detects attempts to bypass AI safety guidelines, manipulate the model's behavior, or make it ignore its instructions.
- **Offensive Content**: Detects offensive, hateful, discriminatory, violent, or inappropriate content.
- **Malicious Code**: Detects potentially malicious code, scripts, exploits, or harmful commands.
- **Prompt Injection**: Detects attempts to inject malicious prompts, override system instructions, or manipulate the AI's behavior through embedded instructions.
When validation passes, the input continues through the **Pass** output.
When validation fails, the input is blocked and sent through the **Fail** output with a justification explaining why it failed.
The **Jailbreak** and **Prompt Injection** guardrails include additional heuristic detection first, and then fall back to LLM validation if needed. This additional stage identifies obvious patterns quickly and reduces API costs by avoiding unnecessary LLM calls for clear violations.
The **Guardrails** component uses a language model to analyze input and can produce false positives or miss some violations.
Use this component **in addition to** other data-sanitization best practices, such as personnel training and scripts that check for literal values or regex patterns, rather than as a sole safeguard.
## Use the Guardrails component in a flow
1. Connect a **Chat Input** or other text source to the **Guardrails** component's **Input Text** port.
2. Select a **Language Model** to use for validation. The component uses the connected LLM to analyze the input text against the enabled guardrails.
3. From the **Guardrails** dropdown, select one or more guardrails to enable.
For example, select **Tokens/Passwords** to block API keys and credentials.
4. Connect the **Pass** output to components to receive validated input.
5. Optionally, connect the **Fail** output to handle blocked inputs, such as a [**Chat Output** component](/chat-input-and-output) or [**Write File** component](/write-file).
## Create custom guardrails
Use the **Enable Custom Guardrail** parameter to create your own guardrail validations.
In the **Custom Guardrail Description** field, enter a natural-language description of the disallowed data that you want to detect.
Custom guardrails can work simultaneously with the built-in guardrails, and follow the same validation process.
For example, to block inputs that mention competitor names or products, enter the following in the **Custom Guardrail Description** field:
```
competitor company names, competitor product names, or references to competing services
```
When this custom guardrail is enabled, the LLM analyzes the input text against your criteria. If it detects content matching your description, such as mentions of competitors, validation fails and the input is blocked. Otherwise, validation passes and the input continues through the **Pass** output.
## Guardrails parameters
<PartialParams />
| Name | Type | Description |
|------|------|-------------|
| Language Model (`model`) | `LanguageModel` | Input parameter. Connect a **Language Model** component to use as the driver for this component. The model reviews the data, compares it against the guardrails, and determines if any data is in violation of the guardrails. |
| API Key (`api_key`) | Secret String | Input parameter. Model provider API key. Required if the model provider needs authentication. |
| Guardrails (`enabled_guardrails`) | Multiselect | Input parameter. Select one or more security guardrails to validate the input against. Options: `PII`, `Tokens/Passwords`, `Jailbreak`, `Offensive Content`, `Malicious Code`, `Prompt Injection`. Default: `["PII", "Tokens/Passwords", "Jailbreak"]`. |
| Input Text (`input_text`) | Multiline String | Input parameter. The text to validate against guardrails. Accepts `Message` input types. |
| Enable Custom Guardrail (`enable_custom_guardrail`) | Boolean | Input parameter. Enable a custom guardrail with your own validation criteria. Default: `false`. |
| Custom Guardrail Description (`custom_guardrail_explanation`) | Multiline String | Input parameter. Describe what the custom guardrail should check for. This description is used by the LLM to validate the input. Be specific and clear about what you want to detect. Only used when `enable_custom_guardrail` is `true`. |
| Heuristic Detection Threshold (`heuristic_threshold`) | Slider | Input parameter. Score threshold (0.0-1.0) for heuristic jailbreak/prompt injection detection. Strong patterns such as "ignore instructions" and "jailbreak" have high weights, while weak patterns such as "bypass" and "act as" have low weights. If the cumulative score meets or exceeds this threshold, the input fails immediately. Lower values are more strict. Higher values defer more cases to LLM validation. Default: `0.7`. |
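The heuristic stage can be sketched as cumulative weighted pattern matching. The patterns and weights below are illustrative examples, not Langflow's actual lists:

```python
# Example pattern weights: strong indicators score high, weak ones score low.
PATTERN_WEIGHTS = {
    "ignore instructions": 0.6,
    "jailbreak": 0.6,
    "bypass": 0.2,
    "act as": 0.2,
}

def heuristic_score(text: str) -> float:
    lowered = text.lower()
    return sum(w for p, w in PATTERN_WEIGHTS.items() if p in lowered)

def fails_heuristic(text: str, threshold: float = 0.7) -> bool:
    # Meeting or exceeding the threshold fails immediately, skipping the LLM call.
    return heuristic_score(text) >= threshold

print(fails_heuristic("Please ignore instructions and jailbreak"))  # → True
print(fails_heuristic("act as a helpful assistant"))                # → False
```

Inputs that score below the threshold fall through to LLM validation rather than passing outright.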

View File

@@ -39,7 +39,7 @@ The following example uses the **If-Else** component to check incoming chat mess
* **Operator**: Select **regex**.
* **Case True**: In the [component inspection panel](/concepts-components#component-menus), enable the **Case True** parameter, click **Close**, and then enter `New Message Detected` in the field.
The **Case True** message is sent from the **True** output port when the match condition evaluates to true.

View File

@@ -0,0 +1,46 @@
---
title: Knowledge Base
slug: /knowledge-base
---
import Icon from "@site/src/components/icon";
import PartialParams from '@site/docs/_partial-hidden-params.mdx';
import PartialKbSummary from '@site/docs/_partial-kb-summary.mdx';
<PartialKbSummary />
The **Knowledge Base** component reads from an existing knowledge base using semantic search.
The output is a [`DataFrame`](/data-types#dataframe) containing the top matching results from the queried knowledge base.
## Knowledge Base parameters
<PartialParams />
| Name | Display Name | Info |
|------|--------------|------|
| knowledge_base | Knowledge | Input parameter. Select the knowledge base to retrieve data from. |
| api_key | Embedding Provider API Key | Input parameter. Optional API key for the embedding provider to override a previously-provided key. The embedding provider and model are chosen when you create a knowledge base. |
| search_query | Search Query | Input parameter. Optional search query to filter knowledge base data using semantic similarity. If omitted, the top results are returned from an arbitrary sort. |
| top_k | Top K Results | Input parameter. Number of search results to return. Default: `5`. |
| include_metadata | Include Metadata | Input parameter. Whether to include all metadata and embeddings in the output. If enabled, each output row includes all metadata, embeddings, and content. If disabled, only the content is returned. Default: Enabled (true). |
## Use the Knowledge Base component in a flow
After you create and load data to a [knowledge base](/knowledge), you can use the **Knowledge Base** component in any flow to retrieve data from your knowledge base using semantic search:
1. Add a **Knowledge Base** component to your flow.
2. In the **Knowledge** field, select the knowledge base you want to search, such as the customer sales data knowledge base created in the previous steps.
3. To view the search results as chat messages, connect the **Results** output to a **Chat Output** component.
4. In **Search query**, enter a query that relates to your embedded data.
For the customer sales data example, enter a product name like `laptop` or `wireless devices`.
5. Click <Icon name="Play" aria-hidden="true"/> **Run component** on the **Knowledge Base** component, and then open the **Playground** to view the output.
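Conceptually, the semantic search behind **Top K Results** ranks stored rows by embedding similarity to the query. The following is a minimal sketch with toy two-dimensional embeddings, not Langflow's actual retrieval code:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec, rows, k=5):
    # rows: (content, embedding) pairs, as if read from the knowledge base
    ranked = sorted(rows, key=lambda r: cosine(query_vec, r[1]), reverse=True)
    return [content for content, _ in ranked[:k]]

rows = [
    ("laptop sale", [1.0, 0.0]),
    ("garden tools", [0.0, 1.0]),
    ("wireless mouse", [0.9, 0.1]),
]
print(top_k([1.0, 0.0], rows, k=2))  # → ['laptop sale', 'wireless mouse']
```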
## See also
* [Manage vector data](/knowledge)

View File

@@ -42,9 +42,9 @@ The following steps explain how to create a chat-based flow that uses **Message
2. At the beginning of the flow, add a **Message History** component, and then set it to **Retrieve** mode.
3. Optional: To enable parameters for memory sorting, filtering, and limits, click the **Message History** component to expose the [component inspection panel](/concepts-components#component-menus).
4. Add a **Prompt Template** component, add a `{memory}` variable to the **Template** field, and then connect the **Message History** output to the **memory** input.
The **Prompt Template** component supplies instructions and context to LLMs, separate from chat messages passed through a **Chat Input** component.
The template can include any text and variables that you want to supply to the LLM, for example:
@@ -64,19 +64,19 @@ The following steps explain how to create a chat-based flow that uses **Message
In this example, the `{memory}` variable is populated by the retrieved chat memories, which are then passed to a **Language Model** or **Agent** component to provide additional context to the LLM.
5. Connect the **Prompt Template** component's output to a **Language Model** component's **System Message** input.
This example uses the **Language Model** core component as the central chat driver, but you can also use another language model component or the **Agent** component.
6. Add a **Chat Input** component, and then connect it to the **Language Model** component's **Input** field.
7. Connect the **Language Model** component's output to a **Chat Output** component.
8. At the end of the flow, add another **Message History** component, and then set it to **Store** mode.
Configure any additional parameters in the second **Message History** component as needed, taking into consideration that this particular component will store chat messages rather than retrieve them.
9. Connect the **Chat Output** component's output to the **Message History** component's **Message** input.
Each response from the LLM is output from the **Language Model** component to the **Chat Output** component, and then stored in chat memory by the final **Message History** component.
@@ -94,7 +94,7 @@ Other options include the [**Mem0 Chat Memory** component](/bundles-mem0) and [*
1. Configure the **Redis Chat Memory** component to connect to your Redis database. For more information, see the [Redis documentation](https://redis.io/docs/latest/).
2. Set the **Message History** component to **Retrieve** mode.
3. In the **Message History** [component inspection panel](/concepts-components#component-menus), enable **External Memory**.
In **Controls**, you can also enable parameters for memory sorting, filtering, and limits.
@@ -132,7 +132,7 @@ Other options include the [**Mem0 Chat Memory** component](/bundles-mem0) and [*
1. Configure the **Redis Chat Memory** component to connect to your Redis database.
2. Set the **Message History** component to **Store** mode.
3. In the **Message History** [component inspection panel](/concepts-components#component-menus), enable **External Memory**.
Configure any additional parameters in this component as needed, taking into consideration that this particular component will store chat messages rather than retrieve them.

View File

@@ -114,7 +114,7 @@ To use advanced parsing, do the following:
3. Enable **Advanced Parsing**.
4. To configure advanced parsing parameters, click the component to open the [component inspection panel](/concepts-components#component-menus).
| Name | Display Name | Info |
|------|--------------|------|

View File

@@ -12,15 +12,17 @@ import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
This component has been renamed multiple times.
Its previous names include **Lambda Filter** and **Smart Function**.
The **Smart Transform** component uses an LLM and natural language instructions to generate a Lambda function that can filter or transform [`Data`](/data-types#data), [`DataFrame`](/data-types#dataframe), or [`Message`](/data-types#message) input.
You must connect this component to a [language model component](/components-models), which is used to generate a function based on the natural language instructions you provide in the **Instructions** parameter.
The LLM runs the function against the input, and then outputs the results as [`Data`](/data-types#data), [`DataFrame`](/data-types#dataframe), or [`Message`](/data-types#message).
:::tip
Provide brief, clear instructions, focusing on the desired outcome or specific actions, such as `Filter the data to only include items where the 'status' is 'active'`.
One sentence or less is preferred because end punctuation, like periods, can cause errors or unexpected behavior.
If you need to provide more detailed instructions that aren't directly relevant to the Lambda function, you can enter them in the **Language Model** component's **Input** field or through a **Prompt Template** component.
For the most reliable results, the **Smart Transform** component's output type must match the input type. For example, select **Message** output for [`Message`](/data-types#message) input.
:::
The following example uses the **API Request** endpoint to pass JSON data from the `https://jsonplaceholder.typicode.com/users` endpoint to the **Smart Transform** component.
@@ -35,9 +37,8 @@ From there, the LLM generates a filter function that extracts email addresses fr
| Name | Display Name | Info |
|------|--------------|------|
| data | Data | Input parameter. The [`Data`](/data-types#data), [`DataFrame`](/data-types#dataframe), or [`Message`](/data-types#message) input to filter or transform using the generated Lambda function. |
| model | Language Model | Input parameter. Connect [`LanguageModel`](/data-types#languagemodel) output from a **Language Model** component. |
| filter_instruction | Instructions | Input parameter. The natural language instructions for how to filter or transform the data. The LLM uses these instructions to create a Lambda function. |
| sample_size | Sample Size | Input parameter. For large datasets, the number of characters to sample from the dataset head and tail. Only applied if the dataset meets or exceeds `max_size`. Default: `1000`. |
| max_size | Max Size | Input parameter. The number of characters for the dataset to be considered large, which triggers sampling by the `sample_size` value. Default: `30000`. |
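For example, given the instruction `Extract the email address from each user record`, the LLM might generate a Lambda function like the following. This is hypothetical output; the exact function varies by model and instructions:

```python
# Hypothetical LLM-generated function for the jsonplaceholder users example
extract_emails = lambda data: [item["email"] for item in data]

users = [
    {"name": "Leanne", "email": "Sincere@april.biz"},
    {"name": "Ervin", "email": "Shanna@melissa.tv"},
]
print(extract_emails(users))  # → ['Sincere@april.biz', 'Shanna@melissa.tv']
```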

View File

@@ -10,7 +10,7 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
The **URL** component fetches content from one or more URLs, processes the content, and returns it in various formats.
It follows links recursively to a given depth, and it supports output in plain text, Markdown, or raw HTML.
## URL parameters
@@ -24,7 +24,7 @@ Some of the available parameters include the following:
| max_depth | Depth | Input parameter. Controls link traversal: how many "clicks" away from the initial page the crawler will go. A depth of 1 limits the crawl to the first page at the given URL only. A depth of 2 means the crawler crawls the first page plus each page directly linked from the first page, then stops. This setting exclusively controls link traversal; it doesn't limit the number of URL path segments or the domain. |
| prevent_outside | Prevent Outside | Input parameter. If enabled, only crawls URLs within the same domain as the root URL. This prevents the crawler from accessing sites outside the given URL's domain, even if they are linked from one of the crawled pages. |
| use_async | Use Async | Input parameter. If enabled, uses asynchronous loading which can be significantly faster but might use more system resources. |
| format | Output Format | Input parameter. Sets the desired output format as **Text**, **Markdown**, or **HTML**. The default is **Text**. For more information, see [URL output](#url-output).|
| timeout | Timeout | Input parameter. Timeout for the request in seconds. |
| headers | Headers | Input parameter. The headers to send with the request if needed for authentication or otherwise. |
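The depth-limited, domain-restricted traversal described by `max_depth` and `prevent_outside` can be sketched as a breadth-first crawl. This is illustrative only; the function names and the `fetch_links` helper are hypothetical, not Langflow's implementation:

```python
from collections import deque

def crawl(root_url, fetch_links, max_depth=2, prevent_outside=True):
    """Breadth-first crawl: depth 1 is the root page only; depth 2 adds
    pages linked directly from the root. fetch_links(url) -> list of URLs."""
    root_domain = root_url.split("/")[2]
    seen, queue, pages = {root_url}, deque([(root_url, 1)]), []
    while queue:
        url, depth = queue.popleft()
        pages.append(url)
        if depth >= max_depth:
            continue
        for link in fetch_links(url):
            if prevent_outside and link.split("/")[2] != root_domain:
                continue  # skip links outside the root URL's domain
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return pages

links = {"https://a.com/": ["https://a.com/x", "https://b.com/"],
         "https://a.com/x": ["https://a.com/y"]}
print(crawl("https://a.com/", lambda u: links.get(u, []), max_depth=2))
# → ['https://a.com/', 'https://a.com/x']
```

With `prevent_outside` enabled, `https://b.com/` is never queued even though the root page links to it.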
@@ -37,12 +37,13 @@ There are two settings that control the output of the **URL** component at diffe
* **Output Format**: This optional parameter controls the content extracted from the crawled pages:
* **Text (default)**: The component extracts only the text from the HTML of the crawled pages.
* **Markdown**: The component converts the HTML content to Markdown using [MarkItDown](https://github.com/microsoft/markitdown).
* **HTML**: The component extracts the entire raw HTML content of the crawled pages.
* **Output data type**: In the component's output field (near the output port) you can select the structure of the outgoing data when it is passed to other components:
* **Extracted Pages**: Outputs a [`DataFrame`](/data-types#dataframe) that breaks the crawled pages into columns for the entire page content (`text`) and metadata like `url` and `title`.
* **Raw Content**: Outputs a [`Message`](/data-types#message) containing the entire text, Markdown, or HTML from the crawled pages, including metadata, in a single block of text.
When used as a standard component in a flow, the **URL** component must be connected to a component that accepts the selected output data type (`DataFrame` or `Message`).
You can connect the **URL** component directly to a compatible component, or you can use a [**Type Convert** component](/type-convert) to convert the output to another type before passing the data to other components if the data types aren't directly compatible.

View File

@@ -150,6 +150,8 @@ This section describes the available authentication configuration variables.
You can use the [`.env.example`](https://github.com/langflow-ai/langflow/blob/main/.env.example) file in the Langflow repository as a template for your own `.env` file.
For JWT authentication configuration, including algorithm selection and key management, see [JWT authentication](/jwt-authentication).
### LANGFLOW_AUTO_LOGIN {#langflow-auto-login}
This variable controls whether authentication is required to access your Langflow server, including the visual editor, API, and Langflow CLI:
@@ -207,8 +209,9 @@ These defaults don't apply when using the Langflow CLI command [`langflow superu
### LANGFLOW_SECRET_KEY {#langflow-secret-key}
This environment variable stores a secret key used for encrypting sensitive data like API keys and for JWT signing when using the HS256 algorithm.
Langflow uses the [Fernet](https://pypi.org/project/cryptography/) library for secret key encryption.
For JWT-specific configuration, see [JWT authentication](/jwt-authentication).
If no secret key is provided, Langflow automatically generates one.
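A Fernet key is 32 random bytes encoded as URL-safe base64. The following stdlib-only sketch produces a value in that format; in practice you can use the `cryptography` package's `Fernet.generate_key()`, which produces the same shape:

```python
import base64
import os

# 32 random bytes, URL-safe base64-encoded (the Fernet key format)
key = base64.urlsafe_b64encode(os.urandom(32)).decode()
print(len(key))  # → 44
```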
@@ -273,6 +276,13 @@ To generate a secret encryption key for `LANGFLOW_SECRET_KEY`, do the following:
- LANGFLOW_SECRET_KEY=${LANGFLOW_SECRET_KEY}
```
#### Rotate the secret key {#rotating-the-secret-key}
Rotate `LANGFLOW_SECRET_KEY` if the key might have been compromised and as part of your routine credential management practices.
Langflow provides a migration script that re-encrypts stored credentials and other sensitive data with a new key so you can rotate without losing access.
For more information, see [Secret Key Rotation](https://github.com/langflow-ai/langflow/blob/main/SECURITY.md#secret-key-rotation) in the Langflow Security Policy.
### LANGFLOW_NEW_USER_IS_ACTIVE {#langflow-new-user-is-active}
When `LANGFLOW_NEW_USER_IS_ACTIVE=False` (default), accounts created by superusers are inactive by default and must be explicitly activated before users can sign in to the visual editor.
@@ -553,3 +563,4 @@ Next, you can add users to your Langflow server to collaborate with others on fl
## See also
* [Langflow environment variables](/environment-variables)
* [Langflow Security Policy](https://github.com/langflow-ai/langflow/blob/main/SECURITY.md) — reporting vulnerabilities, security configuration, and [secret key rotation](https://github.com/langflow-ai/langflow/blob/main/SECURITY.md#secret-key-rotation)

View File

@@ -5,10 +5,11 @@ slug: /install-custom-dependencies
Langflow provides optional dependency groups and support for custom dependencies to extend Langflow functionality. This guide covers how to add dependencies for different Langflow installations, including Langflow Desktop and Langflow OSS.
The Langflow codebase uses three packages, each with its own `pyproject.toml` file:
* The `main` package (`langflow`) is managed by the root level `pyproject.toml`, and it includes end-user features and main application code, such as Langchain and OpenAI. The `main` package depends on the `base` package.
* The `base` package (`langflow-base`) is managed at `src/backend/base/pyproject.toml`, and it includes core infrastructure, such as the FastAPI web framework. The `base` package depends on the `lfx` package.
* The `lfx` package is managed at `src/lfx/pyproject.toml`. LFX is a lightweight CLI tool for executing and serving Langflow flows. The `lfx` package does not provide optional dependency groups for end users.
## Install custom dependencies in Langflow Desktop {#langflow-desktop}
@@ -33,13 +34,15 @@ If you're working within a cloned Langflow repository, add dependencies with `uv
uv add DEPENDENCY
```
### Install optional dependency groups for `langflow`
The `langflow` package (main) provides optional dependency groups that extend its functionality.
By default, installing `langflow` without any extras includes all dependencies listed in the `[project.dependencies]` section. Optional dependency groups are not installed by default and must be explicitly requested.
These optional dependencies are listed in the [langflow `pyproject.toml`](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml) file under `[project.optional-dependencies]`.
Install dependency groups using pip's `[extras]` syntax. For example, to install `langflow` with the `postgresql` dependency group, enter the following command:
```bash
uv pip install "langflow[postgresql]"
@@ -48,14 +51,42 @@ uv pip install "langflow[postgresql]"
To install multiple extras, use commas to separate each dependency group:
```bash
uv pip install "langflow[postgresql,openai]"
```
### Install optional dependency groups for `langflow-base`
`langflow-base` is recommended when you want to deploy Langflow with specific dependencies only.
It contains the same codebase as `langflow`, but `langflow` includes `langflow-base` as a dependency and adds many additional dependencies on top of it.
The `langflow-base` package provides its own optional dependency groups that are separate from those in the `langflow` package. The `langflow-base` package can be installed as a standalone package with these optional dependency groups.
By default, installing `langflow-base` without any extras includes all dependencies listed in the `[project.dependencies]` section. Optional dependency groups are not installed by default and must be explicitly requested.
These optional dependency groups are listed in the [langflow-base `pyproject.toml`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/pyproject.toml) file under `[project.optional-dependencies]`.
Install `langflow-base` with optional dependency groups using pip's `[extras]` syntax. For example, to install `langflow-base` with the `postgresql` dependency group:
```bash
uv pip install "langflow-base[postgresql]"
```
To install multiple extras, use commas to separate each dependency group:
```bash
uv pip install "langflow-base[postgresql,openai]"
```
To install all optional dependencies for `langflow-base`, use the `complete` extra:
```bash
uv pip install "langflow-base[complete]"
```
### Use a virtual environment to test custom dependencies
When testing locally, use a virtual environment to isolate your dependencies and prevent conflicts with other Python projects.
For example, if you want to experiment with a custom dependency like `matplotlib` with Langflow:
```bash
# Create and activate a virtual environment
@@ -66,20 +97,25 @@ source YOUR_LANGFLOW_VENV/bin/activate
uv pip install langflow matplotlib
```
You can also install `langflow-base` with specific optional dependency groups in your virtual environment:
```bash
# Install langflow-base with only the dependencies you need
uv pip install "langflow-base[postgresql,openai]" matplotlib
```
If you're working within a cloned Langflow repository, add dependencies with `uv add` to reference the existing `pyproject.toml` files:
```bash
uv add matplotlib
```
The `uv add` command automatically updates the `uv.lock` file in the appropriate location.
## Add dependencies to the Langflow codebase
When contributing to the Langflow codebase, you might need to add dependencies to Langflow.
Langflow uses a workspace with two packages, each with different types of dependencies.
To add a dependency to the `main` package, run `uv add DEPENDENCY` from the project root.
For example:
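A minimal sketch, using `matplotlib` as an illustrative dependency:

```shell
# Run from the project root to add a regular dependency to the main package
uv add matplotlib
```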
Dependencies can be added to the `main` package as regular dependencies at `[project.dependencies]`.
To add a dependency to the `base` package, navigate to `src/backend/base` and run:
```bash
cd src/backend/base
uv add DEPENDENCY
```
To add a development dependency for testing, linting, or debugging, navigate to `src/backend/base` and run:
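A sketch of the likely command, assuming uv's `--dev` flag for development dependencies:

```shell
cd src/backend/base
# --dev places the package in the development dependency group
uv add --dev DEPENDENCY
```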

---
title: JWT authentication
slug: /jwt-authentication
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
Langflow supports symmetric or asymmetric JSON Web Tokens (JWT) for user authentication and authorization.
JWT is an [open standard](https://tools.ietf.org/html/rfc7519) for securely transmitting information between parties as a JSON object.
Use JWT to create credentials that automatically expire, enable stateless authentication without database storage, and work across distributed systems.
JWT authentication with the HS256 algorithm is enabled by default, but can be configured further with the `LANGFLOW_ALGORITHM` environment variable.
<details closed>
<summary>About the JWT structure and contents</summary>
When a user logs in with their username and password at the `/api/v1/login` endpoint, Langflow validates the credentials and creates a JWT containing the user's identity and expiration time. This token is then used for subsequent API requests instead of sending credentials with each request.
A JWT consists of three parts separated by dots (`.`):
```
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
```
* The header contains the token type and signing algorithm.
* The payload contains _claims_, which are token data for user information and expiration time.
* The signature is computed from the header, the payload, and a secret key, and ensures the token hasn't been tampered with.
Each part of the JWT is Base64URL-encoded.
You can paste this example JWT at [jwt.io](https://jwt.io/) to decode the actual JSON data.
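Because each part is plain Base64URL, you can also inspect the example token with nothing but the Python standard library. This is a decoding sketch only; it does not verify the signature:

```python
import base64
import json

# The example JWT from above, split across lines for readability
token = (
    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
    ".eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ"
    ".SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c"
)

def b64url_decode(part: str) -> bytes:
    # Base64URL encoding omits padding; restore it before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

header_b64, payload_b64, _signature_b64 = token.split(".")
header = json.loads(b64url_decode(header_b64))
payload = json.loads(b64url_decode(payload_b64))

print(header)   # the signing algorithm and token type
print(payload)  # the claims: subject, name, and issued-at time
```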
</details>
## Configure JWT environment variables
Configure JWT authentication in Langflow using the following environment variables:
| Variable | Description | Default |
|----------|-------------|---------|
| `LANGFLOW_ALGORITHM` | JWT signing algorithm (`HS256`, `RS256`, or `RS512`) | `HS256` |
| `LANGFLOW_SECRET_KEY` | Secret key for HS256 signing | Auto-generated |
| `LANGFLOW_PRIVATE_KEY` | RSA private key for RS256/RS512 signing | Auto-generated |
| `LANGFLOW_PUBLIC_KEY` | RSA public key for RS256/RS512 verification | Derived from private key |
| `LANGFLOW_ACCESS_TOKEN_EXPIRE_SECONDS` | Access token expiration time | `3600` (1 hour) |
| `LANGFLOW_REFRESH_TOKEN_EXPIRE_SECONDS` | Refresh token expiration time | `604800` (7 days) |
## Configure signing algorithms
Langflow supports multiple signing algorithms and both symmetric (HS256) and asymmetric (RS256, RS512) JWTs.
Which algorithm you choose depends on your deployment's requirements.
### HS256 (Default)
HS256 is the default JWT algorithm and provides a good security level for single-server deployments.
Langflow automatically generates and persists a secret key.
No configuration is necessary, but if you want to explicitly set it in the Langflow `.env`, the default value is `LANGFLOW_ALGORITHM=HS256`.
To generate a custom secure key instead of using the Langflow-generated secret key, do the following:
1. Generate a secure secret key with the Python secrets module or OpenSSL.
The key must be at least 32 characters long.
**Using Python:**
```bash
python -c "import secrets; print(secrets.token_urlsafe(32))"
```
**Using OpenSSL:**
```bash
openssl rand -base64 32
```
2. Set the value for `LANGFLOW_SECRET_KEY` in your `.env` file.
```bash
LANGFLOW_ALGORITHM="HS256"
LANGFLOW_SECRET_KEY="your-custom-secret-key"
```
### RS256
The RS256 signing algorithm provides better security for production deployments by using a pair of private and public keys.
The private key signs tokens, and the public key verifies them.
The private key must be kept secret, while the public key can be safely shared.
To automatically generate a private and public key pair and store it in the Langflow [`LANGFLOW_CONFIG_DIR`](/logging), set `LANGFLOW_ALGORITHM="RS256"` in your Langflow `.env`.
When Langflow starts, it will:
1. Check if RSA keys exist in the configuration directory.
2. If not, generate a new 2048-bit RSA key pair.
3. Save the keys to `private_key.pem` and `public_key.pem`.
4. Reuse the same keys on subsequent startups.
To use a custom private key instead of the auto-generated keys, set the following in your `.env` file.
The `LANGFLOW_PUBLIC_KEY` will be automatically derived from the private key.
```bash
LANGFLOW_ALGORITHM=RS256
LANGFLOW_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEF...
-----END PRIVATE KEY-----"
```
To use a custom key pair, set both keys in your Langflow `.env` file.
```bash
LANGFLOW_ALGORITHM=RS256
LANGFLOW_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEF...
-----END PRIVATE KEY-----"
LANGFLOW_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOC...
-----END PUBLIC KEY-----"
```
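Alternatively, rather than pasting multi-line PEM values into `.env`, you can export them from the key files at startup. This is an illustrative sketch; adjust the file paths for your setup:

```shell
# Load the PEM files into the environment instead of pasting
# multi-line values into .env
export LANGFLOW_ALGORITHM=RS256
export LANGFLOW_PRIVATE_KEY="$(cat private_key.pem)"
export LANGFLOW_PUBLIC_KEY="$(cat public_key.pem)"
```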
To generate an RSA key pair manually, do the following:
1. Generate a 2048-bit private key:
```bash
openssl genrsa -out private_key.pem 2048
```
2. Extract the public key from the private key:
```bash
openssl rsa -in private_key.pem -pubout -out public_key.pem
```
3. Verify the keys were created:
```bash
cat private_key.pem
cat public_key.pem
```
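Optionally, you can confirm that the two files form a matching pair by comparing their RSA moduli, a common OpenSSL technique:

```shell
# Both commands should print the same digest if the keys match
openssl rsa -in private_key.pem -noout -modulus | openssl md5
openssl rsa -in public_key.pem -pubin -noout -modulus | openssl md5
```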
### RS512
RS512 uses the same RSA private and public key format as RS256, but uses the SHA-512 hashing algorithm for greater security.
The private key signs tokens, and the public key verifies them.
The private key must be kept secret, while the public key can be safely shared.
To automatically generate a private and public key pair and store it in the Langflow [`LANGFLOW_CONFIG_DIR`](/logging), set `LANGFLOW_ALGORITHM="RS512"` in your Langflow `.env`.
When Langflow starts, it does the following:
1. Check if RSA keys exist in the configuration directory.
2. If not, generate a new 2048-bit RSA key pair.
3. Save the keys to `private_key.pem` and `public_key.pem`.
4. Reuse the same keys on subsequent startups.
To use a custom private key instead of the auto-generated keys, set the following in your `.env` file.
The `LANGFLOW_PUBLIC_KEY` will be automatically derived from the private key.
```bash
LANGFLOW_ALGORITHM=RS512
LANGFLOW_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEF...
-----END PRIVATE KEY-----"
```
To use a custom key pair, set both keys in your Langflow `.env` file.
```bash
LANGFLOW_ALGORITHM=RS512
LANGFLOW_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEF...
-----END PRIVATE KEY-----"
LANGFLOW_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOC...
-----END PUBLIC KEY-----"
```
To generate an RSA key pair manually, do the following:
1. Generate a 2048-bit private key:
```bash
openssl genrsa -out private_key.pem 2048
```
2. Extract the public key from the private key:
```bash
openssl rsa -in private_key.pem -pubout -out public_key.pem
```
3. Verify the keys were created:
```bash
cat private_key.pem
cat public_key.pem
```
## Configure Docker and Kubernetes deployments
Use Docker with HS256 (symmetric) for single-server deployments or development environments where simplicity is preferred.
Use Docker or Kubernetes with RS256 (asymmetric) for production deployments requiring enhanced security with private/public key pairs.
### Docker with HS256
1. Add the value for your JWT secret key to the Langflow `.env` file.
```bash
JWT_SECRET_KEY=your-secret-key
```
2. Set the signing algorithm and include a variable for the secret key in the Docker Compose file.
```yaml
version: "3.8"
services:
  langflow:
    image: langflowai/langflow:latest
    environment:
      - LANGFLOW_ALGORITHM=HS256
      - LANGFLOW_SECRET_KEY=${JWT_SECRET_KEY} # Set in .env file
    volumes:
      - langflow_data:/app/langflow

volumes:
  langflow_data:
```
### Docker with RS256
To use Langflow's automatically generated key pair, set the `RS256` signing algorithm in the Docker Compose file.
```yaml
# docker-compose.yml
version: "3.8"
services:
  langflow:
    image: langflowai/langflow:latest
    environment:
      - LANGFLOW_ALGORITHM=RS256
    volumes:
      - langflow_data:/app/langflow # Keys stored here

volumes:
  langflow_data:
```
To mount an existing key pair, set the `RS256` signing algorithm and mount the private and public keys as volumes.
```yaml
# docker-compose.yml
version: "3.8"
services:
  langflow:
    image: langflowai/langflow:latest
    environment:
      - LANGFLOW_ALGORITHM=RS256
    volumes:
      - ./keys/private_key.pem:/app/langflow/private_key.pem:ro
      - ./keys/public_key.pem:/app/langflow/public_key.pem:ro
      - langflow_data:/app/langflow

volumes:
  langflow_data:
```
### Kubernetes with RS256
Store JWT keys as Kubernetes Secrets and reference them in your Langflow deployment configuration.
```yaml
# jwt-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: langflow-jwt-keys
type: Opaque
stringData:
  algorithm: "RS256"
  private-key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvgIBADANBgkqhkiG9w0BAQEF...
    -----END PRIVATE KEY-----
  public-key: |
    -----BEGIN PUBLIC KEY-----
    MIIBIjANBgkqhkiG9w0BAQEFAAOC...
    -----END PUBLIC KEY-----
---
# langflow-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: langflow
spec:
  template:
    spec:
      containers:
        - name: langflow
          image: langflowai/langflow:latest
          env:
            - name: LANGFLOW_ALGORITHM
              valueFrom:
                secretKeyRef:
                  name: langflow-jwt-keys
                  key: algorithm
            - name: LANGFLOW_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: langflow-jwt-keys
                  key: private-key
            - name: LANGFLOW_PUBLIC_KEY
              valueFrom:
                secretKeyRef:
                  name: langflow-jwt-keys
                  key: public-key
```
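With both manifests saved, apply them with a standard kubectl sequence; the resource names match the manifests above:

```shell
kubectl apply -f jwt-secret.yaml
kubectl apply -f langflow-deployment.yaml
# Verify the secret was created
kubectl get secret langflow-jwt-keys
```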
## Configure token expiration
To configure access and refresh token expiration times, set the values in the Langflow `.env`.
```bash
LANGFLOW_ACCESS_TOKEN_EXPIRE_SECONDS=3600 # 1 hour
LANGFLOW_REFRESH_TOKEN_EXPIRE_SECONDS=604800 # 7 days
```
Access tokens authenticate API requests and typically expire within 15 minutes to 1 hour to limit security risks.
Refresh tokens obtain new access tokens without requiring the user to log in again.
Refresh tokens typically expire within 7 to 30 days.
When an access token expires, the client can use the refresh token to get a new access token from the `/api/v1/refresh` endpoint.
This maintains the user's session without prompting for credentials again.
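For example, a token refresh cycle might look like the following curl sketch. The credential field names and how the refresh token is presented (header, cookie, or body) are assumptions; consult the Langflow API reference for your version:

```shell
# 1. Log in to get an access token and refresh token
curl -X POST "$LANGFLOW_URL/api/v1/login" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "username=$LANGFLOW_USER&password=$LANGFLOW_PASSWORD"

# 2. When the access token expires, exchange the refresh token
#    for a new access token
curl -X POST "$LANGFLOW_URL/api/v1/refresh" \
  --cookie "refresh_token_lf=$REFRESH_TOKEN"
```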
## See also
- [Langflow API keys and authentication](/api-keys-and-authentication)
- [JWT.io](https://jwt.io/)
- [RFC 7519 specification](https://tools.ietf.org/html/rfc7519)
- [OWASP JWT Security Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/JSON_Web_Token_for_Java_Cheat_Sheet.html)
- [Langflow Security Best Practices](/security)

---
title: Manage vector data
slug: /knowledge
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Icon from "@site/src/components/icon";
import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
Vector data is critical to AI applications.
Langflow provides several components to help you store and retrieve vector data in your flows, including embedding models, vector stores, and knowledge bases.
## Embedding models
Embedding model components generate text embeddings using a specified Large Language Model (LLM).
There are two common use cases for these components:
* **Store vectors**: Generate embeddings for content written to a vector database.
* **Search vectors**: Generate an embedding from a query to run a similarity search.
In both cases the embedding model component is attached to a vector store component.
For more information, examples, and available options, see [Embedding model components](/components-embedding-models).
Alternatively, you can use [knowledge bases](#knowledge-bases), which include built-in support for several embedding models.
## Vector stores
Vector store components read and write to vector databases.
Typically, these components connect to remote databases, but some vector store components support local databases.
import PartialVectorRagBlurb from '@site/docs/_partial-vector-rag-blurb.mdx';
<PartialVectorRagBlurb />
<details>
<summary>Example: Vector search flow</summary>
import PartialVectorRagFlow from '@site/docs/_partial-vector-rag-flow.mdx';
<PartialVectorRagFlow />
</details>
## Knowledge bases
import PartialKbSummary from '@site/docs/_partial-kb-summary.mdx';
<PartialKbSummary />
### Knowledge base storage locations
Each knowledge base is a [ChromaDB](https://docs.trychroma.com/docs/overview/introduction) vector database.
Each database is stored in a separate directory that contains the following:
- **Vector embeddings**: Embeddings are stored using the Chroma vector database.
- **Metadata files**: Configuration and embedding model information.
- **Source data**: The original data used to create the knowledge base.
Knowledge bases are stored locally on your Langflow instance.
The default storage location depends on your operating system and installation method:
- **Langflow Desktop**:
- **macOS**: `/Users/<username>/.langflow/knowledge_bases`
- **Windows**: `C:\Users\<name>\AppData\Roaming\com.LangflowDesktop\knowledge_bases`
- **Langflow OSS**:
- **macOS/Windows/Linux/WSL with `uv pip install`**: `<path_to_venv>/lib/python3.12/site-packages/langflow/knowledge_bases` (Python version can vary. Knowledge bases aren't shared between virtual environments.)
- **macOS/Windows/Linux/WSL with `git clone`**: `<path_to_clone>/src/backend/base/langflow/knowledge_bases`
If you set the `LANGFLOW_CONFIG_DIR` environment variable, the `knowledge_bases` subdirectory is created relative to that path.
To change the default `knowledge_bases` directory path, set the `LANGFLOW_KNOWLEDGE_BASES_DIR` environment variable:
```bash
export LANGFLOW_KNOWLEDGE_BASES_DIR="/path/to/parent/directory"
```
### Create a knowledge base
In this example, you'll create a knowledge base of chunked customer orders.
To follow along with this example, download [`customer-orders.csv`](/files/customer_orders.csv) to your local machine, or adapt the steps for your own structured data.
1. On the [**Projects** page](/concepts-flows#projects), click <Icon name="Library" aria-hidden="true"/> **Knowledge** below the list of projects to view and manage your knowledge bases.
2. To create a new knowledge base, click <Icon name="Plus" aria-hidden="true"/>**Add Knowledge**.
3. In the **Create Knowledge Base** pane, enter a name for your knowledge base, and select an embedding model.
<PartialGlobalModelProviders />
4. To configure sources for your knowledge base, click **Configure Sources**.
Optionally, to create an empty knowledge base, click **Create**.
5. In the **Configure Sources** pane, configure the sources for your knowledge base's data, as well as how the embedded data is chunked for vector search retrieval.
For this example, click <Icon name="Upload" aria-hidden="true"/>**Add Sources**, and then select the downloaded [`customer-orders.csv`](/files/customer_orders.csv) file from your local machine.
The default settings for **Chunk Size**, **Chunk Overlap**, and **Separator** are fine.
To continue, click **Next Step**.
6. The **Review & Build** pane allows you to preview your first chunk before you commit to spending tokens to embed all of the data into the knowledge base.
If the chunk isn't what you want to embed, click **Back** to configure your chunking strategy.
To embed this data, click **Create**.
7. Langflow embeds your data into the new knowledge base.
When it is available to use, the **Status** changes to **Ready**.
To use the new knowledge base in a flow, see [Use the Knowledge Base component in a flow](/knowledge-base).
### Manage knowledge bases
On the [**Projects** page](/concepts-flows#projects), click <Icon name="Library" aria-hidden="true"/> **Knowledge** below the list of projects to view and manage your knowledge bases.
For each knowledge base, you can see the following information:
* Name
* Embedding model
* Size on disk
* Number of words, characters, and chunks
* The average length and size of chunks
* The knowledge base's status
Chunking behavior is determined by the embedding model, and the embedding model is set when you create the knowledge base.
If you need to change the embedding model, you must delete and recreate the knowledge base.
To update a knowledge base, click <Icon name="EllipsisVertical" aria-hidden="true"/> **More**, and then select <Icon name="RefreshCW" aria-hidden="true"/> **Update Knowledge Base**.
To view a knowledge base's chunks, click <Icon name="EllipsisVertical" aria-hidden="true"/> **More**, and then select <Icon name="Layers" aria-hidden="true"/> **View Chunks**.
To delete a knowledge base, click <Icon name="EllipsisVertical" aria-hidden="true"/> **More**, and then click <Icon name="Trash2" aria-hidden="true"/> **Delete**.
If any flows use the deleted knowledge base, you must update them to use a different knowledge base.
For more information on using knowledge bases in a flow, see the [**Knowledge Base** component](/knowledge-base) documentation.
## See also
* [Use Langflow agents](/agents)
* [Language model components](/components-models)

To customize log storage locations and behaviors, set the following Langflow environment variables:
| `LANGFLOW_LOG_ROTATION` | String | `1 day` | Controls when the log file is rotated, either based on time or file size. For time-based rotation, set to `1 day`, `12 hours`, or `1 week`. For size-based rotation, set to `10 MB` or `1 GB`. To disable rotation, set to `None`. If disabled, log files grow without limit. |
| `LANGFLOW_ENABLE_LOG_RETRIEVAL` | Boolean | `False` | Enables retrieval of logs from your Langflow instance with [Logs endpoints](/api-logs). |
| `LANGFLOW_LOG_RETRIEVER_BUFFER_SIZE` | Integer | `10000` | Set the buffer size for log retrieval if `LANGFLOW_ENABLE_LOG_RETRIEVAL=True`. Must be greater than `0` for log retrieval to function. |
| `LANGFLOW_NATIVE_TRACING` | Boolean | `true` | Enables the tracer to record execution traces directly in the Langflow database for use in Trace View. Set to `false` to disable tracing. |
## View logs in real-time

The following tables are stored in `langflow.db`:
- **Message**: Stores chat messages and interactions that occur between components. For more information, see [Message objects](/data-types#message) and [Store chat memory](#store-chat-memory).
- **Trace** and **Span**: Stores traces and spans for flows and components. For more information, see [Traces](/traces).
- **Transactions**: Records execution history and results of flow runs. This information is used for [logging](/logging).
- **User**: Stores user account information including credentials, permissions, profiles, and user management settings. For more information, see [API keys and authentication](/api-keys-and-authentication).

---
title: Traces
slug: /traces
---
import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Langflow's **Traces** feature records detailed execution traces for your flows and components so that you can debug issues, measure latency, and track token usage without relying on external observability services.
Trace data is stored in the Langflow database in the `trace` and `span` tables.
Trace data is presented in the **Flow Activity** and **Trace Details** pages in the UI, and can be retrieved from the `/monitor/traces` API endpoint.
Traces are enabled by default.
To disable Langflow tracing and use a different tracing provider, set `LANGFLOW_NATIVE_TRACING` to `false`.
## What traces capture
The tracer records:
- **Flow-level traces**: A trace for each flow run, including total runtime and status.
- **Component spans**: Spans for each component in the flow, including inputs, outputs, latency, and errors.
- **LangChain spans**: Deeper spans for chains, tools, retrievers, and LLM calls, including model name and token usage where available.
Each span includes:
- **Name** and **type** (for example, chain, LLM, tool, retriever)
- **Start and end time** and **latency (ms)**
- **Inputs and outputs** (serialized)
- **Error details**, if the span failed
- **Attributes** such as token counts and model metadata
## View traces in the UI
To view traces in the Langflow UI, do the following:
1. Run a flow, such as the Simple Agent starter flow in the [Quickstart](/get-started-quickstart).
2. Click <Icon name="Activity" aria-hidden="true"/> **Traces**.
The **Flow Activity** page opens.
Each flow run is displayed as a single trace of all of its spans.
Flow runs can be sorted further by session ID, status, or time range.
Optionally, click <Icon name="Download" aria-hidden="true"/> **Download** to download a JSON file of that flow's trace to your local machine.
3. Click a flow run to open the **Trace Details** pane.
The **Trace Details** pane displays spans for your flow run, including a flow-level span for the entire run, and a span for each component.
Individual component spans include the component's inputs and outputs, timing, and token usage.
## Retrieve traces with the API
To programmatically query traces, use the `/monitor/traces` endpoints.
For full parameter details and code examples in Python, TypeScript, and curl, see [Monitor endpoints: Get traces](/api-monitor#get-traces).
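A minimal request sketch follows; the `/api/v1` prefix, the `flow_id` query parameter, and the `x-api-key` header are assumptions to adapt from the Monitor endpoints reference:

```shell
curl -X GET "$LANGFLOW_URL/api/v1/monitor/traces?flow_id=$FLOW_ID" \
  -H "x-api-key: $LANGFLOW_API_KEY"
```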
## See also
- [Logs](/logging)
- [Monitor endpoints](/api-monitor)

If your flow has a **Chat Input** component, you can use the **Playground** to run your flow, chat with your flow, view inputs and outputs, and modify the LLM's memories to tune the flow's responses in real time.
To try this for yourself, create a flow based on the **Simple Agent** template, and then click <Icon name="Play" aria-hidden="true"/> **Playground** when editing the flow in the workspace.
![Playground](/img/playground.png)
If you have an **Agent** component in your flow, the **Playground** displays its tool calls and outputs so you can monitor the agent's tool use and understand the reasoning behind its responses.
To try an agent flow in the **Playground**, use the **Simple Agent** template or the [Langflow quickstart](/get-started-quickstart).
![Playground with agent response](/img/playground-with-agent.png)

The **Playground** allows you to quickly iterate over your flow's logic and behavior.
To run a flow in the **Playground**, open the flow, and then click <Icon name="Play" aria-hidden="true"/> **Playground**.
Then, if your flow has a [**Chat Input** component](/chat-input-and-output), enter a prompt or [use voice mode](/concepts-voice-mode) to trigger the flow and start a chat session.
To expand the **Playground** view, click <Icon name="Expand" aria-hidden="true"/> **Enter fullscreen** within the **Playground** panel.
![The Langflow visual builder with the Playground active](/img/playground.png)
:::tip
If there is no message input field in the **Playground**, make sure your flow has a **Chat Input** component that is connected, directly or indirectly, to the **Input** port of a **Language Model** or **Agent** component.
You can set custom session IDs in the visual editor and programmatically.
In your [input and output components](/chat-input-and-output), use the **Session ID** field:
1. Click the component where you want to set a custom session ID.
2. In the [component inspection panel](/concepts-components#component-menus), enable **Session ID**.
3. Click **Close**.
4. Enter a custom session ID.
If the field is empty, the flow uses the default session ID.
5. Open the **Playground** to start a chat under your custom session ID.
Make sure to change the **Session ID** when you want to start a new chat session or continue an earlier chat session with a different session ID.

To use the **Webhook** component in a flow, do the following:
Alternatively, to get a complete `POST /v1/webhook/$FLOW_ID` code snippet, open the flow's [**API access** pane](/concepts-publish#api-access), and then click the **Webhook curl** tab.
You can also modify the default curl command in the **Webhook** component's **curl** field.
If this field isn't visible by default, click the **Webhook** component to expose the [component inspection panel](/concepts-components#component-menus).
7. Send a POST request with `data` to the flow's `webhook` endpoint to trigger the flow.

---
slug: /get-started-quickstart
---
import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
Get started with Langflow by loading a template flow, running it, and then serving it at the `/run` API endpoint.
The **Simple Agent** template consists of an [**Agent** component](/agents) connected to **URL** and **Calculator** tools.
Many components can be tools for agents, including [Model Context Protocol (MCP) servers](/mcp-server). The agent decides which tools to call based on the context of a given query.
2. In the **Agent** component, click **Setup Provider** to select your language model provider.
<PartialGlobalModelProviders />
3. In the **Agent** component, select your configured model from the **Language Model** dropdown.
<details>
<summary>Access more models and providers</summary>
There are two ways to access more models and providers:
* Edit Langflow's global <Icon name="BrainCog" aria-hidden="true" /> **Models** configuration. These providers and models are part of Langflow's core functionality. Use the **Ollama** provider to connect to any model hosted on a local or remote Ollama instance.
* Connect any [additional language model component](/components-models#additional-language-models) to the **Agent** component's **Language Model** port.
</details>
4. To run the flow, click <Icon name="Play" aria-hidden="true"/> **Playground**.
5. To test the **Calculator** tool, ask the agent a simple math question, such as `I want to add 4 and 4.`
To help you test and evaluate your flows, the **Playground** shows the agent's reasoning process as it analyzes the prompt, selects a tool, and then uses the tool to generate a response.
In this case, a math question causes the agent to select the **Calculator** tool and use an action like `evaluate_expression`.
![Playground with Agent tool](/img/quickstart-simple-agent-playground.png)
6. To test the **URL** tool, ask the agent about current events.
For this request, the agent selects the **URL** tool's `fetch_content` action, and then returns a summary of current news headlines.
7. When you are done testing the flow, click <Icon name="X" aria-hidden="true"/> **Close**.
:::tip Next steps
Now that you've run your first flow, try these next steps:
To assist with formatting, you can define tweaks in Langflow's **Input Schema** pane.
1. To open the **Input Schema** pane, from the **API access** pane, click **Input Schema**.
2. In the **Input Schema** pane, select the parameter you want to modify in your next request.
Enabling parameters in the **Input Schema** pane doesn't permanently change the listed parameters. It only adds them to the sample code snippets.
3. For example, to change the agent's LLM model from OpenAI to Anthropic and include your Anthropic API key with the request, select the **Agent** component in the **Input Schema** pane and enable the **Language Model** field.
Langflow updates the `tweaks` object in the code snippets based on your input parameters, and includes default values to guide you.
Use the updated code snippets in your script to run your flow with your overrides.
```python
payload = {
    "input_value": "hello world!",
    "tweaks": {
        "Agent-ZOknz": {
            "agent_llm": "Anthropic",
            "api_key": "ANTHROPIC_API_KEY",
            "model_name": "claude-opus-4-5-20251101"
        }
    }
}
```

To avoid the impact of potential breaking changes and test new versions, the Langflow team recommends testing upgrades in an isolated installation before upgrading your primary installation.
If you made changes to your flows in the isolated installation, you might want to export and import those flows back to your upgraded primary installation so you don't have to repeat the component upgrade process.
## 1.8.x
Highlights of this release include the following changes.
For all changes, see the [Changelog](https://github.com/langflow-ai/langflow/releases).
### Breaking changes
- `langflow-base` dependency structure refactored
The `langflow-base` package now uses granular optional dependency groups. As a result, many dependencies that were previously included in the `langflow-base` installation were moved to optional extras.
If you installed Langflow with `uv pip install langflow`, this isn't a breaking change. Installing `langflow` automatically installs `langflow-base[complete]`, which includes all optional dependencies and maintains the same functionality as before.
However, if you installed Langflow with `uv pip install langflow-base` without specifying extra dependencies, this _is_ a breaking change.
Some dependencies that were previously included by default are now available only through optional extras.
Therefore, installing `langflow-base` directly only installs the [core base dependencies](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/pyproject.toml).
If you installed `langflow-base`, there are two ways to resolve dependency errors that result from this breaking change:
* If you need the full set of dependencies, you must install `langflow-base` with the `complete` extra:
```bash
uv pip install "langflow-base[complete]"
```
* If you need specific dependencies, you must install `langflow-base` with those optional dependency groups. For example:
```bash
uv pip install "langflow-base[postgresql,openai,chroma]"
```
For more information about available optional dependency groups, see [Install optional dependency groups for `langflow-base`](/install-custom-dependencies#install-optional-dependency-groups-for-langflow-base).
### New features and enhancements
- Global model provider configuration
Model providers for language models, embedding models, and agents are now configured globally in the **Model Providers** pane, instead of within individual components.
For more information, see the [Language Model component](/components-models).
- Component inspection panel
The component inspection panel replaces the component header menu for managing component parameters and settings.
For more information, see [Component inspection panel](/concepts-components#component-inspection-panel).
- Developer API: `/v2/workflows` synchronous endpoints (Beta)
The Developer API is part of a larger effort to improve Langflow's APIs with enhanced capabilities and a better developer experience.
The Developer API now includes `/v2/workflows` endpoints for executing flows with enhanced error handling, timeout protection, and structured responses.
The synchronous execution endpoint is `POST /api/v2/workflows`.
For more information, see [Workflow API (Beta)](/workflow-api).
- Traces and trace view
Langflow now records execution traces for flows and components.
View your traces in the **Trace Details** pane, and inspect span trees, latencies, and errors.
For more information, see [Traces](/traces).
- Knowledge bases
Knowledge bases let you organize documents and other reference data into reusable vector databases that can be attached to multiple flows.
This makes it easier to centralize domain knowledge and reuse the same data across agents and retrieval workflows.
For more information, see [Manage vector data](/knowledge).
- Mustache templating support for Prompt Template component
The **Prompt Template** component now supports Mustache templating syntax.
Mustache templating eliminates the need to escape curly braces when including JSON structures in your prompts. For more information, see [Prompt Template](/components-prompts#use-mustache-templating-in-prompt-templates).
- More configuration options for JWT-based session authentication
Langflow 1.8 offers additional configuration options for JWT algorithms, including support for RS256/RS512 algorithms, configurable keys, and token lifetimes. For more information, see [JWT authentication](/jwt-authentication).
- Global variables in MCP server headers
You can now use [global variables](/configuration-global-variables) in MCP server header values to securely store and reference sensitive values. For more information, see [Use global variables in MCP server headers](/mcp-client#use-global-variables-in-mcp-server-headers).
- Pass environment variables to flows in API headers and CLI
The ability to pass environment variables in HTTP headers (previously available for the [`/responses` endpoint](/api-openai-responses#global-var)) is now also available for the [`/run` endpoint](/api-flows-run#pass-global-variables-in-headers).
- Guardrails component
The **Guardrails** component validates input text against security and safety guardrails by using a connected language model to check for content such as PII, tokens/passwords, or offensive content. For more information, see [Guardrails](/guardrails).
- Token usage tracking for OpenAI Responses API
The OpenAI Responses API endpoint now tracks and returns token usage statistics when your flow uses language model APIs that provide token usage information.
For more information, see [Token usage tracking](/api-openai-responses#token-usage-tracking).
- Docker AMD64 and ARM64 image sizes
Langflow 1.8.0 addresses the size gap between AMD64 and ARM64 Docker images.
We reconfigured our Python dependencies to use CPU-only PyTorch wheels through `uv` sources, which removes large CUDA-related dependencies from the AMD64 images.
With this change, both AMD64 and ARM64 images are now smaller than 2 GB.
- New [**Agentics** bundle](/bundles-agentics)
Uses LLMs to transform tabular data, including mapping, reducing, and generating DataFrame rows based on a defined schema.
- New [**LiteLLM** bundle](/bundles-lite-llm)
Connects to models through a LiteLLM proxy so you can route requests to multiple LLM providers and switch providers without changing flow credentials.
- New [**Openlayer** observability integration](/integrations-openlayer)
Configures Langflow to send tracing data to Openlayer for analysis, monitoring, and evaluation of your flow executions.
## 1.7.x
:::warning Version yanked


@@ -261,6 +261,36 @@ To fully remove a Langflow Desktop macOS installation, you must also delete `~/.
The following issues can occur when using Langflow as an MCP server or client.
### Default project MCP server only works when authentication is None
If the default project MCP server works without authentication, but fails after adding an API key to the server configuration, the API key might have been added to the wrong section of the configuration.
The default MCP server uses streamable HTTP transport.
The API key must be added to the `args` array that is passed to `mcp-proxy`, not the `env` object.
Your `args` array must include `"--headers"`, `"x-api-key"`, and your key value. For example:
```json
{
  "mcpServers": {
    "PROJECT_NAME": {
      "command": "uvx",
      "args": [
        "mcp-proxy",
        "--transport",
        "streamablehttp",
        "--headers",
        "x-api-key",
        "YOUR_API_KEY",
        "http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
      ]
    }
  }
}
```
For more information, see [Connect clients to your Langflow MCP server](/mcp-server#connect-clients-to-use-the-servers-actions).
### Claude for Desktop doesn't use MCP server tools correctly
If Claude for Desktop doesn't use your server's tools correctly, try explicitly defining the path to your local `uvx` or `npx` executable file in the `claude_desktop_config.json` configuration file:


@@ -0,0 +1,57 @@
## Prerequisites
Before using the API, complete the following prerequisites:
* [Install and start Langflow](/get-started-installation) with the developer API enabled.
The Workflow API endpoints require the `developer_api_enabled` setting. If this setting is disabled, these endpoints return a `404 Not Found` error.
To enable the developer API endpoints, do the following:
1. In the Langflow `.env` file, set the `LANGFLOW_DEVELOPER_API_ENABLED` environment variable to `true`:
```
LANGFLOW_DEVELOPER_API_ENABLED=true
```
2. Start your Langflow server, passing the `.env` file:
```
uv run langflow run --env-file .env
```
For more information about configuring environment variables, see [Environment variables](/environment-variables).
* [Create a Langflow API key](/api-keys-and-authentication)
* [Create a flow](/concepts-flows) that you want to execute
* [Get the flow ID](/concepts-publish#api-access) or endpoint name of the flow you want to execute
### Set environment variables
All code examples in this documentation assume you have set the following environment variables:
**Python:**
```python
import os
LANGFLOW_SERVER_URL = os.getenv("LANGFLOW_SERVER_URL")
LANGFLOW_API_KEY = os.getenv("LANGFLOW_API_KEY")
```
**TypeScript/JavaScript:**
```typescript
const LANGFLOW_SERVER_URL = process.env.LANGFLOW_SERVER_URL;
const LANGFLOW_API_KEY = process.env.LANGFLOW_API_KEY;
```
Set these environment variables before running the examples, or replace the variable references in the code examples with your actual Langflow server URL and API key.
The default `LANGFLOW_SERVER_URL` for a local Langflow deployment is `http://localhost:7860`.
For remote deployments, the domain is set by your hosting service, such as `https://UUID.ngrok.app`.
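For example, in a POSIX shell you can export both variables before running the examples (the values below are placeholders; substitute your own server URL and API key):

```shell
# Placeholder values; replace with your deployment's URL and your API key.
export LANGFLOW_SERVER_URL="http://localhost:7860"
export LANGFLOW_API_KEY="sk-example-key"
```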
### Authentication and headers
All Workflow API requests require authentication using a Langflow API key. The API key is passed in the `x-api-key` header.
For more information, see [Create a Langflow API key](/api-keys-and-authentication).
| Header | Description | Example |
|--------|-------------|---------|
| `Content-Type` | Specifies the JSON format. | `application/json` |
| `x-api-key` | Your Langflow API key. | `sk-...` |
| `accept` | Optional. Specifies the response format. | `application/json` |
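As a minimal sketch of these headers in practice (standard library only; the server URL, API key, and flow ID are placeholders, and the request isn't actually sent here), the following builds a synchronous execution request for `POST /api/v2/workflows`:

```python
import json
import urllib.request

def workflow_headers(api_key: str) -> dict:
    """Headers required for Workflow API requests."""
    return {
        "Content-Type": "application/json",
        "x-api-key": api_key,
        "accept": "application/json",  # optional
    }

def build_execution_request(server_url, api_key, flow_id, inputs) -> urllib.request.Request:
    """Build a POST request for the synchronous workflow execution endpoint."""
    body = {"flow_id": flow_id, "inputs": inputs}
    return urllib.request.Request(
        f"{server_url}/api/v2/workflows",
        data=json.dumps(body).encode("utf-8"),
        headers=workflow_headers(api_key),
        method="POST",
    )

request = build_execution_request(
    "http://localhost:7860",
    "sk-example-key",
    "FLOW_ID",
    inputs={},
)
# Send with: urllib.request.urlopen(request)
```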


@@ -1,2 +1,5 @@
If your template includes literal text and variables, you can use double curly braces to escape literal curly braces in the template and prevent interpretation of that text as a variable.
For example: `This is a template with {{literal text in curly braces}} and a {variable}`.
If your template contains many literal curly braces, such as JSON structures, consider using Mustache templating instead.
For more information, see [Use Mustache templating in prompt templates](/components-prompts#use-mustache-templating-in-prompt-templates).
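For reference, the double-brace escaping matches Python's own `str.format` convention, so you can preview how such a template renders (a sketch only, not Langflow's actual rendering code):

```python
template = "This is a template with {{literal text in curly braces}} and a {variable}"

# Double braces render as literal braces; single braces are substituted.
rendered = template.format(variable="example value")
print(rendered)
# -> This is a template with {literal text in curly braces} and a example value

# With many literal braces, such as JSON structures, the escaping gets noisy,
# which is where Mustache templating helps.
json_template = '{{"name": "{name}"}}'
print(json_template.format(name="Ada"))
# -> {"name": "Ada"}
```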


@@ -0,0 +1,17 @@
import Icon from "@site/src/components/icon";
To edit Langflow's global model provider configuration, do the following:
1. To open the **Model Providers** pane, click your profile icon, select **Settings**, and then click <Icon name="Brain" aria-hidden="true"/> **Model Providers**.
2. In the **Model Providers** pane, select a provider.
3. In the **API Key** field, add your provider's API key.
The key must have permission to call the models you want to use in your flow, and your account must have sufficient credits for the actions you want to perform.
You can only add one key for each provider. Make sure the key has access to _all_ models that you want to use in Langflow.
4. Enable the specific models that you want to use in Langflow.
The available models depend on the provider and your API key's permissions.
Models that generate text are listed under **Language Models**.
Models that generate embeddings are listed under **Embedding Models**.
After you enable a model in Langflow's global model configuration, you can use that model in any model-driven component in your flows.


@@ -1,4 +1,2 @@
import Icon from "@site/src/components/icon";
Some parameters are hidden by default in the visual editor.
-You can modify all parameters through the <Icon name="SlidersHorizontal" aria-hidden="true"/> **Controls** in the [component's header menu](/concepts-components#component-menus).
+You can modify all component parameters through the [component inspection panel](/concepts-components#component-inspection-panel) that appears when you select a component.


@@ -0,0 +1,17 @@
A Langflow knowledge base is a local vector database that is stored in Langflow storage.
Because knowledge bases are local, the data isn't remotely requested and re-ingested with every flow run.
This can be more efficient than using a remote vector database, and it is a good choice for flows that use custom, domain-specific datasets, like slices of customer and product data.
You can use knowledge base components in much the same way that you use vector store components.
However, there are several key differences:
* **Local storage**: Langflow knowledge bases are exclusively local.
In contrast, only some vector store components support local databases.
* **Built-in embedding models**: Langflow knowledge bases include built-in support for several embedding models.
Other models aren't supported for use with knowledge bases.
To use a different provider or model, you must use a vector store component along with your preferred embedding model component.
* **Basic similarity search**: When querying Langflow knowledge bases, only standard similarity search is supported.
For more advanced searches, you must use a vector store component for a vector database provider that supports your desired functionality.
* **Structured data**: Langflow knowledge bases only support structured data.
For unstructured data, you must use a compatible vector store component.


@@ -154,6 +154,11 @@ const config = {
spec: "openapi/openapi.json",
route: "/api",
},
{
id: "workflow",
spec: "openapi/langflow-workflows-openapi.json",
route: "/api/workflow",
},
],
theme: {
primaryColor: "#7528FC",


@@ -0,0 +1,57 @@
#!/usr/bin/env python3
"""Pull OpenAPI spec files from the langflow-ai/sdk repository.

Usage:
    python3 fetch_openapi_spec.py                    # Download all files
    python3 fetch_openapi_spec.py --file <filename>  # Download specific file
    python3 fetch_openapi_spec.py --branch <branch>  # Use different branch
"""
import argparse
import base64
import json
import sys
import urllib.error
import urllib.request
from pathlib import Path

REPO = "langflow-ai/sdk"
BRANCH = "main"
SPECS_DIR = "specs"
FILES = ["langflow-workflows-openapi.json", "langflow-openapi.json"]


def fetch_file(repo: str, filepath: str, branch: str) -> str:
    """Fetch and decode a file from GitHub."""
    url = f"https://api.github.com/repos/{repo}/contents/{filepath}?ref={branch}"
    with urllib.request.urlopen(url) as r:  # noqa: S310
        data = json.loads(r.read().decode())
    return base64.b64decode(data["content"]).decode("utf-8")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--file", action="append", dest="files")
    parser.add_argument("--branch", default=BRANCH)
    args = parser.parse_args()
    files = args.files or FILES
    local_dir = Path(__file__).parent
    for filename in files:
        if filename not in FILES:
            sys.stderr.write(f"Error: {filename} not in {FILES}\n")
            sys.exit(1)
        try:
            content = fetch_file(REPO, f"{SPECS_DIR}/{filename}", args.branch)
            (local_dir / filename).write_text(content, encoding="utf-8")
            sys.stdout.write(f"{filename}\n")
        except (urllib.error.HTTPError, urllib.error.URLError, KeyError, json.JSONDecodeError) as e:
            sys.stderr.write(f"{filename}: {e}\n")
            sys.exit(1)


if __name__ == "__main__":
    main()


@@ -0,0 +1,621 @@
{
"openapi": "3.1.0",
"info": {
"title": "Langflow V2 Workflow API",
"description": "Filtered API for Langflow V2 workflow operations (3 endpoints)",
"version": "1.8.0"
},
"paths": {
"/api/v2/workflows": {
"post": {
"tags": [
"Workflow"
],
"summary": "Execute Workflow",
"description": "Execute a workflow with support for sync, stream, and background modes",
"operationId": "execute_workflow_api_v2_workflows_post",
"security": [
{
"API key query": []
},
{
"API key header": []
}
],
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/WorkflowExecutionRequest"
}
}
}
},
"responses": {
"200": {
"description": "Workflow execution response",
"content": {
"application/json": {
"schema": {
"anyOf": [
{
"$ref": "#/components/schemas/WorkflowExecutionResponse"
},
{
"$ref": "#/components/schemas/WorkflowJobResponse"
}
],
"title": "Response Execute Workflow Api V2 Workflows Post",
"oneOf": [
{
"$ref": "#/components/schemas/WorkflowExecutionResponse"
},
{
"$ref": "#/components/schemas/WorkflowJobResponse"
}
],
"discriminator": {
"propertyName": "object",
"mapping": {
"response": "#/components/schemas/WorkflowExecutionResponse",
"job": "#/components/schemas/WorkflowJobResponse"
}
}
}
},
"text/event-stream": {
"schema": {
"$ref": "#/components/schemas/WorkflowStreamEvent"
},
"description": "Server-sent events for streaming execution"
}
}
},
"422": {
"description": "Validation Error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/HTTPValidationError"
}
}
}
}
}
},
"get": {
"tags": [
"Workflow"
],
"summary": "Get Workflow Status",
"description": "Get status of workflow job by job ID",
"operationId": "get_workflow_status_api_v2_workflows_get",
"security": [
{
"API key query": []
},
{
"API key header": []
}
],
"parameters": [
{
"name": "job_id",
"in": "query",
"required": false,
"schema": {
"anyOf": [
{
"type": "string"
},
{
"type": "string",
"format": "uuid"
},
{
"type": "null"
}
],
"description": "Job ID to query",
"title": "Job Id"
},
"description": "Job ID to query"
}
],
"responses": {
"200": {
"description": "Workflow status response",
"content": {
"application/json": {
"schema": {
"anyOf": [
{
"$ref": "#/components/schemas/WorkflowExecutionResponse"
},
{
"$ref": "#/components/schemas/WorkflowJobResponse"
}
],
"title": "Response Get Workflow Status Api V2 Workflows Get",
"$ref": "#/components/schemas/WorkflowExecutionResponse"
}
},
"text/event-stream": {
"schema": {
"$ref": "#/components/schemas/WorkflowStreamEvent"
},
"description": "Server-sent events for streaming status"
}
}
},
"422": {
"description": "Validation Error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/HTTPValidationError"
}
}
}
}
}
}
},
"/api/v2/workflows/stop": {
"post": {
"tags": [
"Workflow"
],
"summary": "Stop Workflow",
"description": "Stop a running workflow execution",
"operationId": "stop_workflow_api_v2_workflows_stop_post",
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/WorkflowStopRequest"
}
}
},
"required": true
},
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/WorkflowStopResponse"
}
}
}
},
"422": {
"description": "Validation Error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/HTTPValidationError"
}
}
}
}
},
"security": [
{
"API key query": []
},
{
"API key header": []
}
]
}
}
},
"components": {
"schemas": {
"ComponentOutput": {
"properties": {
"type": {
"type": "string",
"title": "Type",
"description": "Type of the component output (e.g., 'message', 'data', 'tool', 'text')"
},
"status": {
"$ref": "#/components/schemas/JobStatus"
},
"content": {
"anyOf": [
{},
{
"type": "null"
}
],
"title": "Content"
},
"metadata": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"title": "Metadata"
}
},
"type": "object",
"required": [
"type",
"status"
],
"title": "ComponentOutput",
"description": "Component output schema."
},
"ErrorDetail": {
"properties": {
"error": {
"type": "string",
"title": "Error"
},
"code": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"title": "Code"
},
"details": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"title": "Details"
}
},
"type": "object",
"required": [
"error"
],
"title": "ErrorDetail",
"description": "Error detail schema."
},
"HTTPValidationError": {
"properties": {
"detail": {
"items": {
"$ref": "#/components/schemas/ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
},
"JobStatus": {
"type": "string",
"enum": [
"queued",
"in_progress",
"completed",
"failed",
"cancelled",
"timed_out"
],
"title": "JobStatus",
"description": "Job execution status."
},
"ValidationError": {
"properties": {
"loc": {
"items": {
"anyOf": [
{
"type": "string"
},
{
"type": "integer"
}
]
},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"WorkflowExecutionRequest": {
"properties": {
"background": {
"type": "boolean",
"title": "Background",
"default": false
},
"stream": {
"type": "boolean",
"title": "Stream",
"default": false
},
"flow_id": {
"type": "string",
"title": "Flow Id"
},
"inputs": {
"anyOf": [
{
"additionalProperties": true,
"type": "object"
},
{
"type": "null"
}
],
"title": "Inputs",
"description": "Component-specific inputs in flat format: 'component_id.param_name': value"
}
},
"additionalProperties": false,
"type": "object",
"required": [
"flow_id"
],
"title": "WorkflowExecutionRequest",
"description": "Request schema for workflow execution.",
"examples": [
{
"background": false,
"flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"inputs": {
"ChatInput-abc.input_value": "Hello, how can you help me today?",
"ChatInput-abc.session_id": "session-123",
"LLM-xyz.max_tokens": 100,
"LLM-xyz.temperature": 0.7,
"OpenSearch-def.opensearch_url": "https://opensearch:9200"
},
"stream": false
},
{
"background": true,
"flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"inputs": {
"ChatInput-abc.input_value": "Process this in the background"
},
"stream": false
},
{
"background": false,
"flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"inputs": {
"ChatInput-abc.input_value": "Stream this conversation"
},
"stream": true
}
]
},
"WorkflowExecutionResponse": {
"properties": {
"flow_id": {
"type": "string",
"title": "Flow Id"
},
"job_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "string",
"format": "uuid"
},
{
"type": "null"
}
],
"title": "Job Id"
},
"object": {
"type": "string",
"const": "response",
"title": "Object",
"default": "response"
},
"created_timestamp": {
"type": "string",
"title": "Created Timestamp"
},
"status": {
"$ref": "#/components/schemas/JobStatus"
},
"errors": {
"items": {
"$ref": "#/components/schemas/ErrorDetail"
},
"type": "array",
"title": "Errors",
"default": []
},
"inputs": {
"additionalProperties": true,
"type": "object",
"title": "Inputs",
"default": {}
},
"outputs": {
"additionalProperties": {
"$ref": "#/components/schemas/ComponentOutput"
},
"type": "object",
"title": "Outputs",
"default": {}
}
},
"type": "object",
"required": [
"flow_id",
"status"
],
"title": "WorkflowExecutionResponse",
"description": "Synchronous workflow execution response."
},
"WorkflowJobResponse": {
"properties": {
"job_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "string",
"format": "uuid"
}
],
"title": "Job Id"
},
"flow_id": {
"type": "string",
"title": "Flow Id"
},
"object": {
"type": "string",
"const": "job",
"title": "Object",
"default": "job"
},
"created_timestamp": {
"type": "string",
"title": "Created Timestamp"
},
"status": {
"$ref": "#/components/schemas/JobStatus"
},
"links": {
"additionalProperties": {
"type": "string"
},
"type": "object",
"title": "Links"
},
"errors": {
"items": {
"$ref": "#/components/schemas/ErrorDetail"
},
"type": "array",
"title": "Errors",
"default": []
}
},
"type": "object",
"required": [
"job_id",
"flow_id",
"status"
],
"title": "WorkflowJobResponse",
"description": "Background job response."
},
"WorkflowStopRequest": {
"properties": {
"job_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "string",
"format": "uuid"
}
],
"title": "Job Id"
}
},
"type": "object",
"required": [
"job_id"
],
"title": "WorkflowStopRequest",
"description": "Request schema for stopping workflow."
},
"WorkflowStopResponse": {
"properties": {
"job_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "string",
"format": "uuid"
}
],
"title": "Job Id"
},
"message": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"title": "Message"
}
},
"type": "object",
"required": [
"job_id"
],
"title": "WorkflowStopResponse",
"description": "Response schema for stopping workflow."
}
},
"securitySchemes": {
"OAuth2PasswordBearerCookie": {
"type": "oauth2",
"flows": {
"password": {
"scopes": {},
"tokenUrl": "api/v1/login"
}
}
},
"API key query": {
"type": "apiKey",
"in": "query",
"name": "x-api-key"
},
"API key header": {
"type": "apiKey",
"in": "header",
"name": "x-api-key"
}
}
}
}



@@ -95,6 +95,7 @@ module.exports = {
label: "Develop",
items: [
"Develop/api-keys-and-authentication",
"Develop/jwt-authentication",
"Develop/install-custom-dependencies",
"Develop/configuration-global-variables",
"Develop/environment-variables",
@@ -123,6 +124,7 @@ module.exports = {
id: "Develop/enterprise-database-guide",
label: "Database guide for enterprise administrators"
},
"Develop/knowledge",
],
},
{
@@ -130,6 +132,7 @@ module.exports = {
label: "Observability",
items: [
"Develop/logging",
"Develop/traces",
{
type: "category",
label: "Monitoring",
@@ -138,6 +141,7 @@ module.exports = {
"Develop/integrations-langfuse",
"Develop/integrations-langsmith",
"Develop/integrations-langwatch",
"Develop/integrations-openlayer",
"Develop/integrations-opik",
"Develop/integrations-instana-traceloop",
],
@@ -299,9 +303,10 @@ module.exports = {
},
{
type: "category",
label: "Files",
label: "Files and Knowledge",
items: [
"Components/directory",
"Components/knowledge-base",
"Components/read-file",
"Components/write-file",
]
@@ -321,6 +326,7 @@ module.exports = {
label: "LLM Operations",
items: [
"Components/batch-run",
"Components/guardrails",
"Components/llm-selector",
"Components/smart-router",
"Components/smart-transform",
@@ -392,6 +398,7 @@ module.exports = {
"Components/bundles-ibm",
"Components/bundles-icosacomputing",
"Components/bundles-langchain",
"Components/bundles-lite-llm",
"Components/bundles-lmstudio",
"Components/bundles-maritalk",
"Components/bundles-mem0",
@@ -444,6 +451,22 @@ module.exports = {
id: "API-Reference/api-flows-run",
label: "Flow trigger endpoints",
},
{
type: "category",
label: "Developer API (Beta)",
items: [
{
type: "doc",
id: "API-Reference/workflows-api",
label: "Workflow API (Beta)",
},
{
type: "link",
label: "Workflow API specification (Beta)",
href: "/api/workflow",
},
],
},
{
type: "doc",
id: "API-Reference/api-openai-responses",



@@ -1,5 +0,0 @@
<svg width="470" height="470" viewBox="0 0 470 470" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M342.604 243.34H389.75C398.998 243.34 406.489 250.831 406.489 260.079V287.892C406.489 297.14 398.998 304.631 389.75 304.631H348.629C344.186 304.631 339.928 306.4 336.787 309.54L266.225 380.091C263.084 383.232 258.827 385 254.383 385H220.463C211.39 385 203.956 377.765 203.724 368.691L202.991 340.297C202.747 330.886 210.308 323.115 219.73 323.115H248.927C253.371 323.115 257.629 321.347 260.769 318.206L330.739 248.237C333.879 245.097 338.137 243.328 342.58 243.328L342.604 243.34Z" fill="black"/>
<path d="M202.619 85H249.765C259.013 85 266.504 92.4913 266.504 101.739V129.552C266.504 138.8 259.013 146.291 249.765 146.291H208.644C204.201 146.291 199.943 148.06 196.802 151.2L126.24 221.763C123.099 224.904 118.842 226.672 114.398 226.672H80.4777C71.4044 226.672 63.9712 219.436 63.7386 210.363L63.0058 181.968C62.7615 172.558 70.3226 164.799 79.7449 164.799H108.942C113.386 164.799 117.643 163.031 120.784 159.89L190.753 89.9205C193.894 86.7798 198.152 85.0116 202.595 85.0116L202.619 85Z" fill="black"/>
<path d="M342.603 120.829H389.75C398.997 120.829 406.489 128.32 406.489 137.568V165.381C406.489 174.629 398.997 182.12 389.75 182.12H348.629C344.185 182.12 339.928 183.888 336.787 187.029L266.225 257.591C263.084 260.732 258.826 262.5 254.383 262.5H213.169C208.853 262.5 204.701 264.164 201.583 267.153L122.366 343.067C119.248 346.056 115.096 347.72 110.78 347.72H81.9083C72.6605 347.72 65.1692 340.217 65.1692 330.981V302.4C65.1692 293.152 72.6605 285.661 81.9083 285.661H110.571C115.014 285.661 119.272 283.892 122.413 280.752L197.64 205.525C200.78 202.384 205.038 200.616 209.481 200.616H248.927C253.371 200.616 257.628 198.848 260.769 195.707L330.738 125.738C333.879 122.597 338.136 120.829 342.58 120.829H342.603Z" fill="black"/>
</svg>



@@ -1,21 +0,0 @@
<svg width="1520" height="470" viewBox="0 0 1520 470" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_666_1801)">
<path d="M1520 0H0V470H1520V0Z" fill="white"/>
<path d="M1327.43 229.28C1327.02 229.28 1326.66 229.56 1326.55 229.95C1319.12 256.1 1313.3 276.33 1309.11 290.65C1308.38 293.13 1306.11 294.82 1303.53 294.82H1274.7C1272.14 294.82 1269.88 293.15 1269.14 290.7L1236.81 184.82C1235.67 181.08 1238.47 177.3 1242.38 177.3H1264.83C1267.44 177.3 1269.73 179.04 1270.44 181.56L1289.31 249.31C1289.42 249.7 1289.78 249.98 1290.19 249.98C1290.6 249.98 1290.96 249.71 1291.07 249.31C1294.97 235.54 1297.97 225.18 1300.08 218.24L1310.6 181.52C1311.32 179.02 1313.6 177.3 1316.19 177.3H1340.98C1343.59 177.3 1345.87 179.03 1346.58 181.54L1365.67 249.31C1365.78 249.7 1366.14 249.98 1366.55 249.98C1366.96 249.98 1367.32 249.71 1367.43 249.31C1375.76 220.15 1382.13 197.57 1386.53 181.58C1387.23 179.05 1389.52 177.3 1392.14 177.3H1412.5C1416.41 177.3 1419.21 181.08 1418.07 184.82L1385.75 290.7C1385 293.15 1382.74 294.82 1380.18 294.82H1351.81C1349.23 294.82 1346.96 293.12 1346.23 290.65L1328.34 229.94C1328.23 229.55 1327.87 229.28 1327.46 229.28H1327.43Z" fill="black"/>
<path d="M1234.55 239.4C1234.55 251.21 1232.02 261.48 1226.96 270.22C1222.05 278.96 1215.23 285.63 1206.49 290.23C1197.75 294.83 1187.94 297.13 1177.05 297.13H1173.37C1162.48 297.13 1152.67 294.83 1143.93 290.23C1135.19 285.63 1128.29 278.96 1123.23 270.22C1118.32 261.48 1115.87 251.21 1115.87 239.4V232.73C1115.87 220.92 1118.32 210.65 1123.23 201.91C1128.29 193.17 1135.19 186.5 1143.93 181.9C1152.67 177.3 1162.48 175 1173.37 175H1177.05C1187.93 175 1197.75 177.3 1206.49 181.9C1215.23 186.5 1222.05 193.17 1226.96 201.91C1232.02 210.65 1234.55 220.92 1234.55 232.73V239.4ZM1200.51 226.75C1200.51 217.4 1198.13 210.27 1193.38 205.36C1188.78 200.45 1182.73 198 1175.21 198C1167.69 198 1161.56 200.45 1156.81 205.36C1152.21 210.27 1149.91 217.4 1149.91 226.75V245.38C1149.91 254.73 1152.21 261.86 1156.81 266.77C1161.56 271.68 1167.7 274.13 1175.21 274.13C1182.72 274.13 1188.78 271.68 1193.38 266.77C1198.13 261.86 1200.51 254.73 1200.51 245.38V226.75Z" fill="black"/>
<path d="M1050.84 130.89C1048.38 130.09 1045.49 129.38 1042.16 128.78C1037.1 127.86 1032.65 127.4 1028.82 127.4C1005.21 127.4 993.4 138.9 993.4 161.9V177.31H985.61C982.4 177.31 979.79 179.91 979.79 183.13V194.02C979.79 197.23 982.39 199.84 985.61 199.84H993.4V289.01C993.4 292.22 996.01 294.83 999.22 294.83H1021.16C1024.37 294.83 1026.98 292.22 1026.98 289.01V206.12C1026.98 202.91 1029.58 200.3 1032.8 200.3H1048.53C1051.74 200.3 1054.35 197.7 1054.35 194.48V183.12C1054.35 179.91 1051.74 177.3 1048.53 177.3H1032.8C1029.59 177.3 1026.98 174.69 1026.98 171.48V163.96C1026.98 158.44 1028.21 154.76 1030.66 152.92C1033.27 151.08 1036.41 150.16 1040.09 150.16C1041.78 150.16 1044.08 150.47 1046.99 151.08C1050.69 151.66 1054.35 148.9 1054.35 145.15V135.98C1054.35 133.69 1053.01 131.59 1050.84 130.88V130.89Z" fill="black"/>
<path d="M1096.07 129.24H1074.13C1070.92 129.24 1068.31 131.846 1068.31 135.06V289.01C1068.31 292.224 1070.92 294.83 1074.13 294.83H1096.07C1099.28 294.83 1101.89 292.224 1101.89 289.01V135.06C1101.89 131.846 1099.28 129.24 1096.07 129.24Z" fill="black"/>
<path d="M755.92 181.92C756.97 186.82 763.54 188.36 767.39 185.15C775.49 178.39 785.06 175.01 796.11 175.01C808.07 175.01 817.88 178.23 825.55 184.67C833.37 191.11 837.28 200.46 837.28 212.73V289.01C837.28 292.22 834.68 294.83 831.46 294.83H809.52C806.31 294.83 803.7 292.22 803.7 289.01V220.08C803.7 214.1 801.94 209.19 798.41 205.36C795.04 201.37 790.67 199.38 785.3 199.38C779.32 199.38 774.19 201.68 769.89 206.28C765.6 210.73 763.45 216.17 763.45 222.61V289.01C763.45 292.22 760.84 294.83 757.63 294.83H735.69C732.48 294.83 729.87 292.22 729.87 289.01V183.13C729.87 179.92 732.48 177.31 735.69 177.31H750.23C752.97 177.31 755.35 179.24 755.92 181.92Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M676.88 202.83C680.25 206.05 681.94 210.27 681.94 215.48V217.77L681.92 218.01C681.81 219.29 681.7 220.53 681.7 221.58C681.7 222.53 680.93 223.3 679.98 223.31C663.39 223.38 649.79 224.38 639.17 226.3C628.13 228.29 619.47 232.2 613.18 238.03C607.05 243.7 603.98 251.98 603.98 262.87C603.98 276.21 607.74 285.25 615.25 290.01C622.76 294.76 632.35 297.14 644 297.14C655.65 297.14 665 293.81 673.64 287.14C676.7 284.78 681.7 286.77 681.7 290.64C681.7 292.96 683.58 294.84 685.91 294.84H709.46C712.67 294.84 715.28 292.23 715.28 289.02V212.74C715.28 200.47 711.37 191.12 703.55 184.68C697.44 179.55 690.79 177.53 682.01 176.57C676.71 175.54 670.78 175.02 664.23 175.02H659.4C649.28 175.02 640.31 176.48 632.49 179.39C624.67 182.3 618.54 186.37 614.09 191.58C609.8 196.79 607.65 202.7 607.65 209.29C607.65 211.7 609.87 213.43 612.28 213.43H638.7C639.72 213.43 640.54 212.61 640.54 211.59C640.54 209.75 641.31 207.76 642.84 205.61C644.53 203.46 646.9 201.7 649.97 200.32C653.04 198.79 656.72 198.02 661.01 198.02C668.37 198.02 673.66 199.63 676.88 202.85V202.83ZM675.87 242.25C679.08 242.15 681.7 244.77 681.7 247.98V251.43C681.7 251.71 681.68 251.98 681.64 252.25C681.09 255.64 679.82 258.79 677.8 261.7C675.19 265.69 671.9 268.75 667.91 270.9C663.92 273.05 659.94 274.12 655.95 274.12C650.12 274.12 645.75 273.12 642.84 271.13C640.08 268.98 638.7 265.38 638.7 260.32C638.7 253.88 642.23 249.28 649.28 246.52C655.45 244.03 664.31 242.6 675.87 242.24V242.25Z" fill="black"/>
<path d="M510.44 294.82C507.23 294.82 504.62 292.21 504.62 289V142.87C504.62 139.66 507.23 137.05 510.44 137.05H534.45C537.66 137.05 540.27 139.66 540.27 142.87V259.56C540.27 262.77 542.88 265.38 546.09 265.38H586.44C589.15 265.38 591.51 267.26 592.11 269.9L596.17 287.7C597 291.34 594.23 294.81 590.5 294.81H510.44V294.82Z" fill="black"/>
<path d="M963.6 277.34C957.93 271.21 949.72 268.14 938.99 268.14H898.05C893.14 268.14 889.46 267.3 887.01 265.61C884.71 263.92 883.56 261.55 883.56 258.48C883.56 256.03 884.79 253.8 887.24 251.81C889.85 249.82 893.45 248.82 898.05 248.82H913.92C922.97 248.82 930.94 247.21 937.84 243.99C944.89 240.62 950.33 236.17 954.17 230.65C958 224.98 959.92 218.77 959.92 212.02C959.92 207.87 959.16 203.97 957.66 200.33H964.52C967.73 200.33 970.34 197.72 970.34 194.51V183.62C970.34 180.41 967.74 177.8 964.52 177.8H929.05C928.99 177.78 928.93 177.76 928.87 177.75C923.2 175.91 916.76 174.99 909.55 174.99H905.87C889.46 174.99 876.97 178.44 868.38 185.34C859.95 192.24 855.73 201.13 855.73 212.02C855.73 217.69 857.19 223.06 860.1 228.12C860.9 229.47 861.8 230.76 862.78 231.99C865.89 235.88 864.99 243.1 861.21 246.33C860.08 247.3 859.02 248.36 858.03 249.51C854.35 253.65 852.51 258.56 852.51 264.23C852.51 268.22 853.43 271.9 855.27 275.27C855.47 275.63 855.68 275.99 855.89 276.34C858.31 280.24 857.89 286.5 854.57 289.67C853.16 291.01 851.86 292.5 850.66 294.13C847.44 298.58 845.83 303.56 845.83 309.08C845.83 318.13 848.97 325.18 855.26 330.24C861.7 335.45 869.59 338.06 878.95 338.06H927.48C935.76 338.06 943.27 336.45 950.02 333.23C956.92 330.16 962.36 325.79 966.35 320.12C970.34 314.45 972.33 308.08 972.33 301.03C972.33 291.22 969.42 283.32 963.59 277.34H963.6ZM893.68 199.61C897.51 196.54 902.34 195.01 908.17 195.01C914 195.01 919.06 196.54 922.89 199.61C926.72 202.52 928.64 206.66 928.64 212.03C928.64 217.4 926.72 221.61 922.89 224.68C919.06 227.59 914.15 229.05 908.17 229.05C902.19 229.05 897.51 227.59 893.68 224.68C889.85 221.61 887.93 217.4 887.93 212.03C887.93 206.66 889.85 202.52 893.68 199.61ZM936.92 313.68C934.01 316.13 930.56 317.36 926.57 317.36H887.93C884.56 317.36 881.64 316.21 879.19 313.91C876.74 311.61 875.51 308.93 875.51 305.86C875.51 302.79 876.74 300.11 879.19 297.81C881.64 295.66 884.56 294.59 887.93 294.59H927.95C931.94 294.59 935.16 295.59 937.61 297.58C940.06 
299.73 941.29 302.33 941.29 305.4C941.29 308.62 939.83 311.38 936.92 313.68Z" fill="black"/>
<path d="M342.05 242.17H382.58C390.53 242.17 396.97 248.61 396.97 256.56V280.47C396.97 288.42 390.53 294.86 382.58 294.86H347.23C343.41 294.86 339.75 296.38 337.05 299.08L276.39 359.73C273.69 362.43 270.03 363.95 266.21 363.95H237.05C229.25 363.95 222.86 357.73 222.66 349.93L222.03 325.52C221.82 317.43 228.32 310.75 236.42 310.75H261.52C265.34 310.75 269 309.23 271.7 306.53L331.85 246.38C334.55 243.68 338.21 242.16 342.03 242.16L342.05 242.17Z" fill="#7528FC"/>
<path d="M221.71 106.05H262.24C270.19 106.05 276.63 112.49 276.63 120.44V144.35C276.63 152.3 270.19 158.74 262.24 158.74H226.89C223.07 158.74 219.41 160.26 216.71 162.96L156.05 223.62C153.35 226.32 149.69 227.84 145.87 227.84H116.71C108.91 227.84 102.52 221.62 102.32 213.82L101.69 189.41C101.48 181.32 107.98 174.65 116.08 174.65H141.18C145 174.65 148.66 173.13 151.36 170.43L211.51 110.28C214.21 107.58 217.87 106.06 221.69 106.06L221.71 106.05Z" fill="#FF3276"/>
<path d="M342.05 136.85H382.58C390.53 136.85 396.97 143.29 396.97 151.24V175.15C396.97 183.1 390.53 189.54 382.58 189.54H347.23C343.41 189.54 339.75 191.06 337.05 193.76L276.39 254.42C273.69 257.12 270.03 258.64 266.21 258.64H230.78C227.07 258.64 223.5 260.07 220.82 262.64L152.72 327.9C150.04 330.47 146.47 331.9 142.76 331.9H117.94C109.99 331.9 103.55 325.45 103.55 317.51V292.94C103.55 284.99 109.99 278.55 117.94 278.55H142.58C146.4 278.55 150.06 277.03 152.76 274.33L217.43 209.66C220.13 206.96 223.79 205.44 227.61 205.44H261.52C265.34 205.44 269 203.92 271.7 201.22L331.85 141.07C334.55 138.37 338.21 136.85 342.03 136.85H342.05Z" fill="#F480FF"/>
</g>
<defs>
<clipPath id="clip0_666_1801">
<rect width="1520" height="470" fill="white"/>
</clipPath>
</defs>
</svg>

Before: 9.3 KiB

@@ -1,21 +0,0 @@
<svg width="1520" height="470" viewBox="0 0 1520 470" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_666_1801)">
<path d="M1520 0H0V470H1520V0Z" fill="rgb(12, 16, 21)"/>
<path d="M1327.43 229.28C1327.02 229.28 1326.66 229.56 1326.55 229.95C1319.12 256.1 1313.3 276.33 1309.11 290.65C1308.38 293.13 1306.11 294.82 1303.53 294.82H1274.7C1272.14 294.82 1269.88 293.15 1269.14 290.7L1236.81 184.82C1235.67 181.08 1238.47 177.3 1242.38 177.3H1264.83C1267.44 177.3 1269.73 179.04 1270.44 181.56L1289.31 249.31C1289.42 249.7 1289.78 249.98 1290.19 249.98C1290.6 249.98 1290.96 249.71 1291.07 249.31C1294.97 235.54 1297.97 225.18 1300.08 218.24L1310.6 181.52C1311.32 179.02 1313.6 177.3 1316.19 177.3H1340.98C1343.59 177.3 1345.87 179.03 1346.58 181.54L1365.67 249.31C1365.78 249.7 1366.14 249.98 1366.55 249.98C1366.96 249.98 1367.32 249.71 1367.43 249.31C1375.76 220.15 1382.13 197.57 1386.53 181.58C1387.23 179.05 1389.52 177.3 1392.14 177.3H1412.5C1416.41 177.3 1419.21 181.08 1418.07 184.82L1385.75 290.7C1385 293.15 1382.74 294.82 1380.18 294.82H1351.81C1349.23 294.82 1346.96 293.12 1346.23 290.65L1328.34 229.94C1328.23 229.55 1327.87 229.28 1327.46 229.28H1327.43Z" fill="white"/>
<path d="M1234.55 239.4C1234.55 251.21 1232.02 261.48 1226.96 270.22C1222.05 278.96 1215.23 285.63 1206.49 290.23C1197.75 294.83 1187.94 297.13 1177.05 297.13H1173.37C1162.48 297.13 1152.67 294.83 1143.93 290.23C1135.19 285.63 1128.29 278.96 1123.23 270.22C1118.32 261.48 1115.87 251.21 1115.87 239.4V232.73C1115.87 220.92 1118.32 210.65 1123.23 201.91C1128.29 193.17 1135.19 186.5 1143.93 181.9C1152.67 177.3 1162.48 175 1173.37 175H1177.05C1187.93 175 1197.75 177.3 1206.49 181.9C1215.23 186.5 1222.05 193.17 1226.96 201.91C1232.02 210.65 1234.55 220.92 1234.55 232.73V239.4ZM1200.51 226.75C1200.51 217.4 1198.13 210.27 1193.38 205.36C1188.78 200.45 1182.73 198 1175.21 198C1167.69 198 1161.56 200.45 1156.81 205.36C1152.21 210.27 1149.91 217.4 1149.91 226.75V245.38C1149.91 254.73 1152.21 261.86 1156.81 266.77C1161.56 271.68 1167.7 274.13 1175.21 274.13C1182.72 274.13 1188.78 271.68 1193.38 266.77C1198.13 261.86 1200.51 254.73 1200.51 245.38V226.75Z" fill="white"/>
<path d="M1050.84 130.89C1048.38 130.09 1045.49 129.38 1042.16 128.78C1037.1 127.86 1032.65 127.4 1028.82 127.4C1005.21 127.4 993.4 138.9 993.4 161.9V177.31H985.61C982.4 177.31 979.79 179.91 979.79 183.13V194.02C979.79 197.23 982.39 199.84 985.61 199.84H993.4V289.01C993.4 292.22 996.01 294.83 999.22 294.83H1021.16C1024.37 294.83 1026.98 292.22 1026.98 289.01V206.12C1026.98 202.91 1029.58 200.3 1032.8 200.3H1048.53C1051.74 200.3 1054.35 197.7 1054.35 194.48V183.12C1054.35 179.91 1051.74 177.3 1048.53 177.3H1032.8C1029.59 177.3 1026.98 174.69 1026.98 171.48V163.96C1026.98 158.44 1028.21 154.76 1030.66 152.92C1033.27 151.08 1036.41 150.16 1040.09 150.16C1041.78 150.16 1044.08 150.47 1046.99 151.08C1050.69 151.66 1054.35 148.9 1054.35 145.15V135.98C1054.35 133.69 1053.01 131.59 1050.84 130.88V130.89Z" fill="white"/>
<path d="M1096.07 129.24H1074.13C1070.92 129.24 1068.31 131.846 1068.31 135.06V289.01C1068.31 292.224 1070.92 294.83 1074.13 294.83H1096.07C1099.28 294.83 1101.89 292.224 1101.89 289.01V135.06C1101.89 131.846 1099.28 129.24 1096.07 129.24Z" fill="white"/>
<path d="M755.92 181.92C756.97 186.82 763.54 188.36 767.39 185.15C775.49 178.39 785.06 175.01 796.11 175.01C808.07 175.01 817.88 178.23 825.55 184.67C833.37 191.11 837.28 200.46 837.28 212.73V289.01C837.28 292.22 834.68 294.83 831.46 294.83H809.52C806.31 294.83 803.7 292.22 803.7 289.01V220.08C803.7 214.1 801.94 209.19 798.41 205.36C795.04 201.37 790.67 199.38 785.3 199.38C779.32 199.38 774.19 201.68 769.89 206.28C765.6 210.73 763.45 216.17 763.45 222.61V289.01C763.45 292.22 760.84 294.83 757.63 294.83H735.69C732.48 294.83 729.87 292.22 729.87 289.01V183.13C729.87 179.92 732.48 177.31 735.69 177.31H750.23C752.97 177.31 755.35 179.24 755.92 181.92Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M676.88 202.83C680.25 206.05 681.94 210.27 681.94 215.48V217.77L681.92 218.01C681.81 219.29 681.7 220.53 681.7 221.58C681.7 222.53 680.93 223.3 679.98 223.31C663.39 223.38 649.79 224.38 639.17 226.3C628.13 228.29 619.47 232.2 613.18 238.03C607.05 243.7 603.98 251.98 603.98 262.87C603.98 276.21 607.74 285.25 615.25 290.01C622.76 294.76 632.35 297.14 644 297.14C655.65 297.14 665 293.81 673.64 287.14C676.7 284.78 681.7 286.77 681.7 290.64C681.7 292.96 683.58 294.84 685.91 294.84H709.46C712.67 294.84 715.28 292.23 715.28 289.02V212.74C715.28 200.47 711.37 191.12 703.55 184.68C697.44 179.55 690.79 177.53 682.01 176.57C676.71 175.54 670.78 175.02 664.23 175.02H659.4C649.28 175.02 640.31 176.48 632.49 179.39C624.67 182.3 618.54 186.37 614.09 191.58C609.8 196.79 607.65 202.7 607.65 209.29C607.65 211.7 609.87 213.43 612.28 213.43H638.7C639.72 213.43 640.54 212.61 640.54 211.59C640.54 209.75 641.31 207.76 642.84 205.61C644.53 203.46 646.9 201.7 649.97 200.32C653.04 198.79 656.72 198.02 661.01 198.02C668.37 198.02 673.66 199.63 676.88 202.85V202.83ZM675.87 242.25C679.08 242.15 681.7 244.77 681.7 247.98V251.43C681.7 251.71 681.68 251.98 681.64 252.25C681.09 255.64 679.82 258.79 677.8 261.7C675.19 265.69 671.9 268.75 667.91 270.9C663.92 273.05 659.94 274.12 655.95 274.12C650.12 274.12 645.75 273.12 642.84 271.13C640.08 268.98 638.7 265.38 638.7 260.32C638.7 253.88 642.23 249.28 649.28 246.52C655.45 244.03 664.31 242.6 675.87 242.24V242.25Z" fill="white"/>
<path d="M510.44 294.82C507.23 294.82 504.62 292.21 504.62 289V142.87C504.62 139.66 507.23 137.05 510.44 137.05H534.45C537.66 137.05 540.27 139.66 540.27 142.87V259.56C540.27 262.77 542.88 265.38 546.09 265.38H586.44C589.15 265.38 591.51 267.26 592.11 269.9L596.17 287.7C597 291.34 594.23 294.81 590.5 294.81H510.44V294.82Z" fill="white"/>
<path d="M963.6 277.34C957.93 271.21 949.72 268.14 938.99 268.14H898.05C893.14 268.14 889.46 267.3 887.01 265.61C884.71 263.92 883.56 261.55 883.56 258.48C883.56 256.03 884.79 253.8 887.24 251.81C889.85 249.82 893.45 248.82 898.05 248.82H913.92C922.97 248.82 930.94 247.21 937.84 243.99C944.89 240.62 950.33 236.17 954.17 230.65C958 224.98 959.92 218.77 959.92 212.02C959.92 207.87 959.16 203.97 957.66 200.33H964.52C967.73 200.33 970.34 197.72 970.34 194.51V183.62C970.34 180.41 967.74 177.8 964.52 177.8H929.05C928.99 177.78 928.93 177.76 928.87 177.75C923.2 175.91 916.76 174.99 909.55 174.99H905.87C889.46 174.99 876.97 178.44 868.38 185.34C859.95 192.24 855.73 201.13 855.73 212.02C855.73 217.69 857.19 223.06 860.1 228.12C860.9 229.47 861.8 230.76 862.78 231.99C865.89 235.88 864.99 243.1 861.21 246.33C860.08 247.3 859.02 248.36 858.03 249.51C854.35 253.65 852.51 258.56 852.51 264.23C852.51 268.22 853.43 271.9 855.27 275.27C855.47 275.63 855.68 275.99 855.89 276.34C858.31 280.24 857.89 286.5 854.57 289.67C853.16 291.01 851.86 292.5 850.66 294.13C847.44 298.58 845.83 303.56 845.83 309.08C845.83 318.13 848.97 325.18 855.26 330.24C861.7 335.45 869.59 338.06 878.95 338.06H927.48C935.76 338.06 943.27 336.45 950.02 333.23C956.92 330.16 962.36 325.79 966.35 320.12C970.34 314.45 972.33 308.08 972.33 301.03C972.33 291.22 969.42 283.32 963.59 277.34H963.6ZM893.68 199.61C897.51 196.54 902.34 195.01 908.17 195.01C914 195.01 919.06 196.54 922.89 199.61C926.72 202.52 928.64 206.66 928.64 212.03C928.64 217.4 926.72 221.61 922.89 224.68C919.06 227.59 914.15 229.05 908.17 229.05C902.19 229.05 897.51 227.59 893.68 224.68C889.85 221.61 887.93 217.4 887.93 212.03C887.93 206.66 889.85 202.52 893.68 199.61ZM936.92 313.68C934.01 316.13 930.56 317.36 926.57 317.36H887.93C884.56 317.36 881.64 316.21 879.19 313.91C876.74 311.61 875.51 308.93 875.51 305.86C875.51 302.79 876.74 300.11 879.19 297.81C881.64 295.66 884.56 294.59 887.93 294.59H927.95C931.94 294.59 935.16 295.59 937.61 297.58C940.06 
299.73 941.29 302.33 941.29 305.4C941.29 308.62 939.83 311.38 936.92 313.68Z" fill="white"/>
<path d="M342.05 242.17H382.58C390.53 242.17 396.97 248.61 396.97 256.56V280.47C396.97 288.42 390.53 294.86 382.58 294.86H347.23C343.41 294.86 339.75 296.38 337.05 299.08L276.39 359.73C273.69 362.43 270.03 363.95 266.21 363.95H237.05C229.25 363.95 222.86 357.73 222.66 349.93L222.03 325.52C221.82 317.43 228.32 310.75 236.42 310.75H261.52C265.34 310.75 269 309.23 271.7 306.53L331.85 246.38C334.55 243.68 338.21 242.16 342.03 242.16L342.05 242.17Z" fill="#7528FC"/>
<path d="M221.71 106.05H262.24C270.19 106.05 276.63 112.49 276.63 120.44V144.35C276.63 152.3 270.19 158.74 262.24 158.74H226.89C223.07 158.74 219.41 160.26 216.71 162.96L156.05 223.62C153.35 226.32 149.69 227.84 145.87 227.84H116.71C108.91 227.84 102.52 221.62 102.32 213.82L101.69 189.41C101.48 181.32 107.98 174.65 116.08 174.65H141.18C145 174.65 148.66 173.13 151.36 170.43L211.51 110.28C214.21 107.58 217.87 106.06 221.69 106.06L221.71 106.05Z" fill="#FF3276"/>
<path d="M342.05 136.85H382.58C390.53 136.85 396.97 143.29 396.97 151.24V175.15C396.97 183.1 390.53 189.54 382.58 189.54H347.23C343.41 189.54 339.75 191.06 337.05 193.76L276.39 254.42C273.69 257.12 270.03 258.64 266.21 258.64H230.78C227.07 258.64 223.5 260.07 220.82 262.64L152.72 327.9C150.04 330.47 146.47 331.9 142.76 331.9H117.94C109.99 331.9 103.55 325.45 103.55 317.51V292.94C103.55 284.99 109.99 278.55 117.94 278.55H142.58C146.4 278.55 150.06 277.03 152.76 274.33L217.43 209.66C220.13 206.96 223.79 205.44 227.61 205.44H261.52C265.34 205.44 269 203.92 271.7 201.22L331.85 141.07C334.55 138.37 338.21 136.85 342.03 136.85H342.05Z" fill="#F480FF"/>
</g>
<defs>
<clipPath id="clip0_666_1801">
<rect width="1520" height="470" fill="white"/>
</clipPath>
</defs>
</svg>

Before: 9.4 KiB

Binary file not shown. Before: 403 KiB · After: 269 KiB

Binary file not shown. Before: 747 KiB · After: 538 KiB

Binary file not shown. Before: 576 KiB · After: 550 KiB

Binary file not shown. Before: 576 KiB · After: 550 KiB

Binary file not shown. Before: 1.0 MiB · After: 970 KiB

Binary file not shown. Before: 146 KiB · After: 310 KiB

Binary file not shown. Before: 32 KiB · After: 699 KiB

Binary file not shown. Before: 17 KiB

Binary file not shown. Before: 1.5 KiB

Binary file not shown. Before: 1.0 MiB · After: 972 KiB

Binary file not shown. Before: 693 KiB · After: 745 KiB

@@ -128,6 +128,7 @@ uv run lfx run my_flow.json "What is AI?"
- `--flow-json`: Inline JSON flow content as a string
- `--stdin`: Read JSON flow from stdin
- `--check-variables/--no-check-variables`: Check global variables for environment compatibility (default: check)
- `--env-var`: Pass environment variables to the flow in the format `KEY=VALUE`. These variables take precedence over OS environment variables.
**Examples:**
@@ -149,6 +150,9 @@ echo '{"data": {"nodes": [...], "edges": [...]}}' | uv run lfx run --stdin --inp
# Inline JSON
uv run lfx run --flow-json '{"data": {"nodes": [...], "edges": [...]}}' --input-value "Test"
# Pass dynamic environment variables (overrides OS environment variables)
uv run lfx run my_flow.json "Hello" --env-var API_KEY=my-api-key --env-var MODEL_NAME=gpt-4
```
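The precedence rule documented above (values passed via `--env-var` override OS environment variables) can be sketched with a small, hypothetical helper. This is an illustration of the described merge behavior under stated assumptions, not lfx's actual implementation:

```python
import os

def merge_env_vars(cli_pairs):
    """Hypothetical sketch: parse --env-var style KEY=VALUE strings
    and merge them over the OS environment.

    CLI-supplied values win on conflict, mirroring the documented
    precedence of --env-var over OS environment variables.
    """
    merged = dict(os.environ)
    for pair in cli_pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"expected KEY=VALUE, got {pair!r}")
        merged[key] = value  # CLI value overrides any OS value
    return merged

# An OS-level value that the CLI flag should override:
os.environ["MODEL_NAME"] = "gpt-3.5-turbo"
env = merge_env_vars(["API_KEY=my-api-key", "MODEL_NAME=gpt-4"])
print(env["MODEL_NAME"])  # → gpt-4 (CLI value wins)
```

The same shape applies to repeated flags on the command line: each `--env-var KEY=VALUE` contributes one pair, and later CLI values simply overwrite earlier ones in the merged mapping.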
### Complete Agent Example