diff --git a/docs/docs/API-Reference/api-flows-run.mdx b/docs/docs/API-Reference/api-flows-run.mdx
index fbc78631b..0a421be0a 100644
--- a/docs/docs/API-Reference/api-flows-run.mdx
+++ b/docs/docs/API-Reference/api-flows-run.mdx
@@ -3,6 +3,9 @@ title: Flow trigger endpoints
slug: /api-flows-run
---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
Use the `/run` and `/webhook` endpoints to run flows.
To create, read, update, and delete flows, see [Flow management endpoints](/api-flows).
@@ -20,6 +23,67 @@ Flow IDs can be found on the code snippets on the [**API access** pane](/concept
The following example runs the **Basic Prompting** template flow with flow parameters passed in the request body.
This flow requires a chat input string (`input_value`), and uses default values for all other parameters.
+
+
+
+```python
+import requests
+
+url = "http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID"
+
+# Request payload
+payload = {
+ "input_value": "Tell me about something interesting!",
+ "session_id": "chat-123",
+ "input_type": "chat",
+ "output_type": "chat",
+ "output_component": ""
+}
+
+# Request headers
+headers = {
+ "Content-Type": "application/json",
+ "x-api-key": "LANGFLOW_API_KEY"
+}
+
+try:
+ response = requests.post(url, json=payload, headers=headers)
+ response.raise_for_status()
+ print(response.json())
+except requests.exceptions.RequestException as e:
+ print(f"Error making API request: {e}")
+```
+
+
+
+
+```js
+const payload = {
+ input_value: "Tell me about something interesting!",
+ session_id: "chat-123",
+ input_type: "chat",
+ output_type: "chat",
+ output_component: ""
+};
+
+const options = {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'x-api-key': 'LANGFLOW_API_KEY'
+ },
+ body: JSON.stringify(payload)
+};
+
+fetch('http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID', options)
+ .then(response => response.json())
+ .then(data => console.log(data))
+ .catch(err => console.error(err));
+```
+
+
+
+
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v1/run/$FLOW_ID" \
@@ -30,11 +94,13 @@ curl -X POST \
"session_id": "chat-123",
"input_type": "chat",
"output_type": "chat",
- "output_component": "",
- "tweaks": null
+ "output_component": ""
}'
```
+
+
+
The response from `/v1/run/$FLOW_ID` includes metadata, inputs, and outputs for the run.
@@ -84,6 +150,77 @@ With `/v1/run/$FLOW_ID`, the flow is executed as a batch with optional LLM token
To stream LLM token responses, append the `?stream=true` query parameter to the request:
+
+
+
+```python
+import requests
+
+url = "http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID?stream=true"
+
+# Request payload
+payload = {
+ "message": "Tell me something interesting!",
+ "session_id": "chat-123"
+}
+
+# Request headers
+headers = {
+ "accept": "application/json",
+ "Content-Type": "application/json",
+ "x-api-key": "LANGFLOW_API_KEY"
+}
+
+try:
+ response = requests.post(url, json=payload, headers=headers, stream=True)
+ response.raise_for_status()
+
+ # Process streaming response
+ for line in response.iter_lines():
+ if line:
+ print(line.decode('utf-8'))
+except requests.exceptions.RequestException as e:
+ print(f"Error making API request: {e}")
+```
+
+
+
+
+```js
+const payload = {
+ message: "Tell me something interesting!",
+ session_id: "chat-123"
+};
+
+const options = {
+ method: 'POST',
+ headers: {
+ 'accept': 'application/json',
+ 'Content-Type': 'application/json',
+ 'x-api-key': 'LANGFLOW_API_KEY'
+ },
+ body: JSON.stringify(payload)
+};
+
+fetch('http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID?stream=true', options)
+ .then(async response => {
+ const reader = response.body?.getReader();
+ const decoder = new TextDecoder();
+
+ if (reader) {
+ while (true) {
+ const { done, value } = await reader.read();
+ if (done) break;
+ console.log(decoder.decode(value));
+ }
+ }
+ })
+ .catch(err => console.error(err));
+```
+
+
+
+
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v1/run/$FLOW_ID?stream=true" \
@@ -96,6 +233,9 @@ curl -X POST \
}'
```
+
+
+
LLM chat responses are streamed back as `token` events, culminating in a final `end` event that closes the connection.
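One way to consume that stream is to accumulate `token` events until the `end` event arrives. This sketch assumes each streamed line is a JSON object with an `event` field and token text under `data.chunk`; verify the payload shape against your server's actual output before relying on it.

```python
import json

def accumulate_tokens(lines):
    """Concatenate token chunks from a stream of JSON event lines,
    stopping at the final "end" event.

    The {"event": ..., "data": {"chunk": ...}} layout is an assumed
    shape for illustration.
    """
    text = []
    for line in lines:
        event = json.loads(line)
        if event.get("event") == "end":
            break
        if event.get("event") == "token":
            text.append(event.get("data", {}).get("chunk", ""))
    return "".join(text)

# Simulated stream lines standing in for response.iter_lines() output:
stream = [
    '{"event": "token", "data": {"chunk": "Hello"}}',
    '{"event": "token", "data": {"chunk": ", world"}}',
    '{"event": "end", "data": {}}',
]
print(accumulate_tokens(stream))  # Hello, world
```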
@@ -132,8 +272,9 @@ The following example is truncated to illustrate a series of `token` events as w
| Header | Info | Example |
|--------|------|---------|
| Content-Type | Required. Specifies the JSON format. | "application/json" |
-| accept | Optional. Specifies the response format. | "application/json" |
-| x-api-key | Optional. Required only if authentication is enabled. | "sk-..." |
+| accept | Optional. Specifies the response format. Defaults to JSON if not specified. | "application/json" |
+| x-api-key | Required. Your Langflow API key for authentication. Can be passed as a header or query parameter. | "sk-..." |
+| `X-LANGFLOW-GLOBAL-VAR-*` | Optional. Pass global variables to the flow. Variable names are automatically converted to uppercase. These variables take precedence over OS environment variables and are only available during this specific request execution. | `"X-LANGFLOW-GLOBAL-VAR-API_KEY: sk-..."` |
### Run endpoint parameters
@@ -150,6 +291,93 @@ The following example is truncated to illustrate a series of `token` events as w
### Request example with all headers and parameters
+
+
+
+```python
+import requests
+
+url = "http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID?stream=true"
+
+# Request payload with tweaks
+payload = {
+ "input_value": "Tell me a story",
+ "input_type": "chat",
+ "output_type": "chat",
+ "output_component": "chat_output",
+ "session_id": "chat-123",
+ "tweaks": {
+ "component_id": {
+ "parameter_name": "value"
+ }
+ }
+}
+
+# Request headers
+headers = {
+ "Content-Type": "application/json",
+ "accept": "application/json",
+ "x-api-key": "LANGFLOW_API_KEY"
+}
+
+try:
+ response = requests.post(url, json=payload, headers=headers, stream=True)
+ response.raise_for_status()
+
+ # Process streaming response
+ for line in response.iter_lines():
+ if line:
+ print(line.decode('utf-8'))
+except requests.exceptions.RequestException as e:
+ print(f"Error making API request: {e}")
+```
+
+
+
+
+```js
+const payload = {
+ input_value: "Tell me a story",
+ input_type: "chat",
+ output_type: "chat",
+ output_component: "chat_output",
+ session_id: "chat-123",
+ tweaks: {
+ component_id: {
+ parameter_name: "value"
+ }
+ }
+};
+
+const options = {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'accept': 'application/json',
+ 'x-api-key': 'LANGFLOW_API_KEY'
+ },
+ body: JSON.stringify(payload)
+};
+
+fetch('http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID?stream=true', options)
+ .then(async response => {
+ const reader = response.body?.getReader();
+ const decoder = new TextDecoder();
+
+ if (reader) {
+ while (true) {
+ const { done, value } = await reader.read();
+ if (done) break;
+ console.log(decoder.decode(value));
+ }
+ }
+ })
+ .catch(err => console.error(err));
+```
+
+
+
+
```bash
curl -X POST \
"$LANGFLOW_SERVER_URL/api/v1/run/$FLOW_ID?stream=true" \
@@ -170,6 +398,103 @@ curl -X POST \
}'
```
+
+
+
+### Pass global variables in request headers {#pass-global-variables-in-headers}
+
+You can pass global variables to your flow using HTTP headers with the format `X-LANGFLOW-GLOBAL-VAR-{VARIABLE_NAME}`.
+
+Variables passed in headers take precedence over OS environment variables. If a variable is provided in both a header and an environment variable, the header value is used. Variables are only available during this specific request execution and aren't persisted.
+
+Variable names are automatically converted to uppercase. For example, `X-LANGFLOW-GLOBAL-VAR-api-key` becomes `API_KEY` in your flow.
+
+You don't need to create these variables in Langflow's Global Variables section first. Pass any variable name using this header format.
+
+
+
+
+```python
+import requests
+
+url = "http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID"
+
+# Request payload
+payload = {
+ "input_value": "Tell me about something interesting!",
+ "input_type": "chat",
+ "output_type": "chat"
+}
+
+# Request headers with global variables
+headers = {
+ "Content-Type": "application/json",
+ "x-api-key": "LANGFLOW_API_KEY",
+ "X-LANGFLOW-GLOBAL-VAR-OPENAI_API_KEY": "sk-...",
+ "X-LANGFLOW-GLOBAL-VAR-USER_ID": "user123",
+ "X-LANGFLOW-GLOBAL-VAR-ENVIRONMENT": "production"
+}
+
+try:
+ response = requests.post(url, json=payload, headers=headers)
+ response.raise_for_status()
+ print(response.json())
+except requests.exceptions.RequestException as e:
+ print(f"Error making API request: {e}")
+```
+
+
+
+
+```js
+const payload = {
+ input_value: "Tell me about something interesting!",
+ input_type: "chat",
+ output_type: "chat"
+};
+
+const options = {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'x-api-key': 'LANGFLOW_API_KEY',
+ 'X-LANGFLOW-GLOBAL-VAR-OPENAI_API_KEY': 'sk-...',
+ 'X-LANGFLOW-GLOBAL-VAR-USER_ID': 'user123',
+ 'X-LANGFLOW-GLOBAL-VAR-ENVIRONMENT': 'production'
+ },
+ body: JSON.stringify(payload)
+};
+
+fetch('http://LANGFLOW_SERVER_URL/api/v1/run/FLOW_ID', options)
+ .then(response => response.json())
+ .then(data => console.log(data))
+ .catch(err => console.error(err));
+```
+
+
+
+
+```bash
+curl -X POST \
+ "$LANGFLOW_SERVER_URL/api/v1/run/$FLOW_ID" \
+ -H "Content-Type: application/json" \
+ -H "x-api-key: $LANGFLOW_API_KEY" \
+ -H "X-LANGFLOW-GLOBAL-VAR-OPENAI_API_KEY: sk-..." \
+ -H "X-LANGFLOW-GLOBAL-VAR-USER_ID: user123" \
+ -H "X-LANGFLOW-GLOBAL-VAR-ENVIRONMENT: production" \
+ -d '{
+ "input_value": "Tell me about something interesting!",
+ "input_type": "chat",
+ "output_type": "chat"
+ }'
+```
+
+
+
+
+If your flow components reference variables that aren't provided in headers or your Langflow database, the flow fails by default. To avoid this, you can set `LANGFLOW_FALLBACK_TO_ENV_VAR=True` in your `.env` file, which allows the flow to use values from OS environment variables if they aren't otherwise specified.
+
+
## Webhook run flow
Use the `/webhook` endpoint to start a flow by sending an HTTP `POST` request.
diff --git a/docs/docs/API-Reference/api-monitor.mdx b/docs/docs/API-Reference/api-monitor.mdx
index 0f42edc6a..ed320b083 100644
--- a/docs/docs/API-Reference/api-monitor.mdx
+++ b/docs/docs/API-Reference/api-monitor.mdx
@@ -3,6 +3,9 @@ title: Monitor endpoints
slug: /api-monitor
---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
The `/monitor` endpoints are for internal Langflow functionality, primarily related to running flows in the **Playground**, storing chat history, and generating flow logs.
This information is primarily for those who are building custom components or contributing to the Langflow codebase in a way that requires calling or understanding these endpoints.
@@ -630,6 +633,122 @@ HTTP/1.1 204 No Content
+## Get traces
+
+Retrieve trace metadata and span trees for a specific flow.
+
+### Example request
+
+Use `GET /monitor/traces` and filter by `flow_id`:
+
+
+
+
+```python
+import os
+
+import requests
+
+base_url = os.getenv("LANGFLOW_SERVER_URL", "http://localhost:7860")
+api_key = os.getenv("LANGFLOW_API_KEY")
+flow_id = "YOUR_FLOW_ID"
+
+response = requests.get(
+ f"{base_url}/api/v1/monitor/traces",
+ params={"flow_id": flow_id, "page": 1, "size": 50},
+ headers={"x-api-key": api_key, "accept": "application/json"},
+ timeout=10,
+)
+response.raise_for_status()
+traces = response.json()
+print(traces)
+```
+
+
+
+
+```ts
+const baseUrl = process.env.LANGFLOW_SERVER_URL ?? "http://localhost:7860";
+const apiKey = process.env.LANGFLOW_API_KEY!;
+const flowId = "YOUR_FLOW_ID";
+
+async function listTraces() {
+ const url = new URL("/api/v1/monitor/traces", baseUrl);
+ url.searchParams.set("flow_id", flowId);
+ url.searchParams.set("page", "1");
+ url.searchParams.set("size", "50");
+
+ const res = await fetch(url.toString(), {
+ headers: {
+ accept: "application/json",
+ "x-api-key": apiKey,
+ },
+ });
+
+ if (!res.ok) {
+ throw new Error(`Request failed with status ${res.status}`);
+ }
+
+ const data = await res.json();
+ console.log(data);
+}
+
+listTraces().catch(console.error);
+```
+
+
+
+
+```bash
+export LANGFLOW_SERVER_URL="http://localhost:7860"
+export LANGFLOW_API_KEY="YOUR_LANGFLOW_API_KEY"
+export FLOW_ID="YOUR_FLOW_ID"
+
+curl -s "$LANGFLOW_SERVER_URL/api/v1/monitor/traces?flow_id=$FLOW_ID&page=1&size=50" \
+ -H "accept: application/json" \
+ -H "x-api-key: $LANGFLOW_API_KEY" \
+ | jq .
+```
+
+
+
+
+### Example response
+
+```json
+{
+ "traces": [
+ {
+ "id": "426656db-fc3c-4a3a-acf8-c60acf099543",
+ "name": "Simple Agent - 9e774f60-857b-44b4-bbcd-87bd23848ee8",
+ "status": "ok",
+ "startTime": "2026-03-03T19:13:30.692628Z",
+ "totalLatencyMs": 18693,
+ "totalTokens": 2050,
+ "flowId": "9e774f60-857b-44b4-bbcd-87bd23848ee8",
+ "sessionId": "9e774f60-857b-44b4-bbcd-87bd23848ee8",
+ "input": {
+ "input_value": "Use tools to teach me about vertex graphs"
+ },
+ "output": {
+ "message": {
+ "text_key": "text",
+ "data": {
+ "timestamp": "2026-03-03 19:13:30 UTC",
+ "sender": "Machine",
+ "sender_name": "AI",
+ "session_id": "9e774f60-857b-44b4-bbcd-87bd23848ee8",
+ "text": "I can teach you the concept, but I couldn’t pull the Wikipedia pages with the tool ... (truncated)"
+ }
+ }
+ }
+ }
+ ],
+ "total": 1,
+ "pages": 1
+}
+```
+
## Get transactions
Retrieve all transactions, which are interactions between components, for a specific flow.
diff --git a/docs/docs/API-Reference/api-openai-responses.mdx b/docs/docs/API-Reference/api-openai-responses.mdx
index 5ffb828cb..18f117595 100644
--- a/docs/docs/API-Reference/api-openai-responses.mdx
+++ b/docs/docs/API-Reference/api-openai-responses.mdx
@@ -191,6 +191,7 @@ Fields set dynamically by Langflow:
| `model` | `string` | The flow ID that was executed. |
| `output` | `list[dict]` | Array of output items (messages, tool calls, etc.). |
| `previous_response_id` | `string` | ID of previous response if continuing conversation. |
+| `usage` | `dict` | Token usage statistics if the `usage` field is available. Contains `prompt_tokens`, `completion_tokens`, and `total_tokens`. |
Fields with OpenAI-compatible default values
@@ -212,7 +213,7 @@ Fields set dynamically by Langflow:
| `tools` | `list[dict]` | `[]` | Available tools. |
| `top_p` | `float` | `1.0` | Top-p setting. |
| `truncation` | `string` | `"disabled"` | Truncation setting. |
-| `usage` | `dict` | `null` | Usage statistics (if any). |
+| `usage` | `dict` | `null` | Token usage statistics. Set dynamically when available from flow components, otherwise `null`. See [Token usage tracking](#token-usage-tracking). |
| `user` | `string` | `null` | User identifier (if any). |
| `metadata` | `dict` | `{}` | Additional metadata. |
@@ -596,4 +597,123 @@ To avoid this, you can set the `FALLBACK_TO_ENV_VARS` environment variable is `t
In the above example, `OPENAI_API_KEY` will fall back to the database variable if not provided in the header.
`USER_ID` and `ENVIRONMENT` will fall back to environment variables if `FALLBACK_TO_ENV_VARS` is enabled.
-Otherwise, the flow fails.
\ No newline at end of file
+Otherwise, the flow fails.
+
+## Token usage tracking {#token-usage-tracking}
+
+The OpenAI Responses API endpoint tracks token usage when your flow uses language model components that report it. The `usage` field in the response contains the number of tokens used for the request and response.
+
+Token usage is automatically extracted from the flow execution results when the `usage` field is available.
+The `usage` field follows OpenAI's format with `prompt_tokens`, `completion_tokens`, and `total_tokens` fields.
+If token usage information is not available from the flow components, the `usage` field is `null`.
+
+The `usage` field is always present in the response, either with token counts or as `null`. The conditional checks shown in the examples below are optional defensive programming to handle cases where usage might not be available.
+
+
+
+
+```python
+from openai import OpenAI
+
+client = OpenAI(
+ base_url="LANGFLOW_SERVER_URL/api/v1/",
+ default_headers={"x-api-key": "LANGFLOW_API_KEY"},
+ api_key="dummy-api-key"
+)
+
+response = client.responses.create(
+ model="FLOW_ID",
+ input="Explain quantum computing in simple terms"
+)
+
+# Access token usage if available
+if response.usage:
+ print(f"Prompt tokens: {response.usage.get('prompt_tokens', 0)}")
+ print(f"Completion tokens: {response.usage.get('completion_tokens', 0)}")
+ print(f"Total tokens: {response.usage.get('total_tokens', 0)}")
+else:
+ print("Token usage not available for this flow")
+```
+
+
+
+
+```typescript
+import OpenAI from "openai";
+
+const client = new OpenAI({
+ baseURL: "LANGFLOW_SERVER_URL/api/v1/",
+ defaultHeaders: {
+ "x-api-key": "LANGFLOW_API_KEY"
+ },
+ apiKey: "dummy-api-key"
+});
+
+const response = await client.responses.create({
+ model: "FLOW_ID",
+ input: "Explain quantum computing in simple terms"
+});
+
+// Access token usage if available
+if (response.usage) {
+ console.log(`Prompt tokens: ${response.usage.prompt_tokens || 0}`);
+ console.log(`Completion tokens: ${response.usage.completion_tokens || 0}`);
+ console.log(`Total tokens: ${response.usage.total_tokens || 0}`);
+} else {
+ console.log("Token usage not available for this flow");
+}
+```
+
+
+
+
+```bash
+curl -X POST \
+ "$LANGFLOW_SERVER_URL/api/v1/responses" \
+ -H "x-api-key: $LANGFLOW_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "model": "FLOW_ID",
+ "input": "Explain quantum computing in simple terms",
+ "stream": false
+ }'
+```
+
+
+**Response with token usage:**
+
+```json
+{
+ "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
+ "object": "response",
+ "created_at": 1756837941,
+ "status": "completed",
+ "model": "ced2ec91-f325-4bf0-8754-f3198c2b1563",
+ "output": [
+ {
+ "type": "message",
+ "id": "msg_a1b2c3d4-e5f6-7890-abcd-ef1234567890",
+ "status": "completed",
+ "role": "assistant",
+ "content": [
+ {
+ "type": "output_text",
+ "text": "Quantum computing is a type of computing that uses quantum mechanical phenomena...",
+ "annotations": []
+ }
+ ]
+ }
+ ],
+ "usage": {
+ "prompt_tokens": 12,
+ "completion_tokens": 145,
+ "total_tokens": 157
+ },
+ "previous_response_id": null
+}
+```
+
+
+
+
+
\ No newline at end of file
diff --git a/docs/docs/API-Reference/workflows-api.mdx b/docs/docs/API-Reference/workflows-api.mdx
new file mode 100644
index 000000000..2ba2f27c0
--- /dev/null
+++ b/docs/docs/API-Reference/workflows-api.mdx
@@ -0,0 +1,528 @@
+---
+title: Workflow API (Beta)
+slug: /workflow-api
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialAPISetup from '@site/docs/_partial-api-setup.mdx';
+
+:::warning Beta Feature
+The Workflow API is currently in **Beta**.
+The API endpoints and response formats may change in future releases.
+:::
+
+The Workflow API provides programmatic access to execute Langflow workflows synchronously or asynchronously.
+Synchronous requests receive complete results immediately upon completion.
+Asynchronous requests are queued in the background and run until they complete or until you stop them with the [Stop workflow endpoint](#stop-workflow-endpoint).
+
+The Workflow API is part of the Langflow Developer v2 API and offers enhanced workflow execution capabilities compared to the v1 `/run` endpoint.
+
+
+
+## Execute workflows endpoint (synchronous or asynchronous)
+
+**Endpoint:**
+
+```
+POST /api/v2/workflows
+```
+
+**Description:** Execute a workflow. Set `background=false` (the default) to run synchronously and receive complete results in the response, or `background=true` to queue the workflow and receive a `job_id` immediately.
+
+### Example synchronous request
+
+Execute a workflow synchronously and receive complete results immediately:
+
+
+
+
+```python
+import requests
+
+url = f"{LANGFLOW_SERVER_URL}/api/v2/workflows"
+headers = {
+ "Content-Type": "application/json",
+ "x-api-key": LANGFLOW_API_KEY
+}
+
+payload = {
+ "flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ "background": False,
+ "inputs": {
+ "ChatInput-abc.input_type": "chat",
+ "ChatInput-abc.input_value": "what is 2+2",
+ "ChatInput-abc.session_id": "session-123"
+ }
+}
+
+response = requests.post(url, json=payload, headers=headers)
+print(response.json())
+```
+
+
+
+
+```typescript
+import axios from 'axios';
+
+const url = `${LANGFLOW_SERVER_URL}/api/v2/workflows`;
+
+const payload = {
+ flow_id: "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ background: false,
+ inputs: {
+ "ChatInput-abc.input_type": "chat",
+ "ChatInput-abc.input_value": "what is 2+2",
+ "ChatInput-abc.session_id": "session-123"
+ }
+};
+
+const runWorkflow = async () => {
+ try {
+ const response = await axios.post(url, payload, {
+ headers: {
+ 'Content-Type': 'application/json',
+ 'x-api-key': LANGFLOW_API_KEY
+ }
+ });
+ console.log(response.data);
+ } catch (error) {
+ console.error('Error triggering workflow:', error);
+ }
+};
+
+runWorkflow();
+```
+
+
+
+
+```bash
+curl -X POST \
+ "$LANGFLOW_SERVER_URL/api/v2/workflows" \
+ -H "Content-Type: application/json" \
+ -H "x-api-key: $LANGFLOW_API_KEY" \
+ -d '{
+ "flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ "background": false,
+ "inputs": {
+ "ChatInput-abc.input_type": "chat",
+ "ChatInput-abc.input_value": "what is 2+2",
+ "ChatInput-abc.session_id": "session-123"
+ }
+ }'
+```
+
+
+
+
+### Example asynchronous request
+
+For long-running workflows, set `background=true` to get a `job_id` immediately, and then poll the status [using the GET endpoint](#get-workflow-status-endpoint) until the job is complete.
+
+To stop a job, send a POST request to the [Stop workflow endpoint](#stop-workflow-endpoint).
+
+:::tip
+The asynchronous request includes a `stream` parameter, but streaming is not yet supported; the parameter is included for future compatibility.
+:::
+
+**Example request:**
+
+
+
+
+```python
+import requests
+
+url = f"{LANGFLOW_SERVER_URL}/api/v2/workflows"
+headers = {
+ "Content-Type": "application/json",
+ "x-api-key": LANGFLOW_API_KEY
+}
+
+payload = {
+ "flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ "background": True,
+ "stream": False,
+ "inputs": {
+ "ChatInput-abc.input_type": "chat",
+ "ChatInput-abc.input_value": "Process this in the background",
+ "ChatInput-abc.session_id": "session-456"
+ }
+}
+
+response = requests.post(url, json=payload, headers=headers)
+print(response.json()) # Returns job_id immediately
+```
+
+
+
+
+```typescript
+import axios from 'axios';
+
+const url = `${LANGFLOW_SERVER_URL}/api/v2/workflows`;
+
+const payload = {
+ flow_id: "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ background: true,
+ stream: false,
+ inputs: {
+ "ChatInput-abc.input_type": "chat",
+ "ChatInput-abc.input_value": "Process this in the background",
+ "ChatInput-abc.session_id": "session-456"
+ }
+};
+
+const runWorkflow = async () => {
+ try {
+ const response = await axios.post(url, payload, {
+ headers: {
+ 'Content-Type': 'application/json',
+ 'x-api-key': LANGFLOW_API_KEY
+ }
+ });
+ console.log(response.data); // Returns job_id immediately
+ } catch (error) {
+ console.error('Error triggering workflow:', error);
+ }
+};
+
+runWorkflow();
+```
+
+
+
+
+```bash
+curl -X POST \
+ "$LANGFLOW_SERVER_URL/api/v2/workflows" \
+ -H "Content-Type: application/json" \
+ -H "x-api-key: $LANGFLOW_API_KEY" \
+ -d '{
+ "flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ "background": true,
+ "stream": false,
+ "inputs": {
+ "ChatInput-abc.input_type": "chat",
+ "ChatInput-abc.input_value": "Process this in the background",
+ "ChatInput-abc.session_id": "session-456"
+ }
+ }'
+```
+
+
+
+
+**Response:**
+
+```json
+{
+ "job_id": "job_id_1234567890",
+ "created_timestamp": "2025-01-15T10:30:00Z",
+ "status": "queued",
+ "errors": []
+}
+```
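The queue-then-poll pattern above can be sketched as follows. `fetch_status` is any callable returning the status-endpoint JSON, so the same logic works with `requests`, `httpx`, or a test stub; the terminal status values come from the status table in the next section.

```python
import time

def poll_job(fetch_status, interval_s=1.0, max_attempts=60):
    """Poll a job until it reaches a terminal state.

    fetch_status is a zero-argument callable returning the parsed
    JSON from the workflow status endpoint.
    """
    terminal = {"completed", "failed", "error"}
    for _ in range(max_attempts):
        job = fetch_status()
        if job.get("status") in terminal:
            return job
        time.sleep(interval_s)
    raise TimeoutError("job did not finish in time")

# Example with a stubbed status endpoint:
responses = iter([
    {"status": "queued"},
    {"status": "in_progress"},
    {"status": "completed", "outputs": {}},
])
result = poll_job(lambda: next(responses), interval_s=0.0)
print(result["status"])  # completed
```

With a real server, `fetch_status` might be `lambda: requests.get(url, params={"job_id": job_id}, headers=headers).json()`.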
+
+### Request body
+
+| Field | Type | Required | Default | Description |
+|-------|------|----------|---------|-------------|
+| `flow_id` | `string` | Yes | - | The ID or endpoint name of the flow to execute. |
+| `flow_version` | `string` | No | - | Optional version hash to pin to a specific flow version. |
+| `background` | `boolean` | No | `false` | Set to `false` for synchronous execution or `true` for asynchronous execution. |
+| `stream` | `boolean` | No | `false` | Whether to stream results. Streaming is not yet supported; the parameter is included for future compatibility. |
+| `inputs` | `object` | No | `{}` | Inputs for the workflow execution. Uses component identifiers with dot notation (e.g., `ChatInput-abc.input_value`). See [Component identifiers and input structure](#component-identifiers-and-input-structure) for detailed information. |
+
+### Example response
+
+```json
+{
+ "flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ "job_id": "job_id_1234567890",
+ "object": "response",
+ "created_at": 1741476542,
+ "status": "completed",
+ "errors": [],
+ "inputs": {
+ "ChatInput-abc.input_type": "chat",
+ "ChatInput-abc.input_value": "what is 2+2",
+ "ChatInput-abc.session_id": "session-123"
+ },
+ "outputs": {
+ "ChatOutput-xyz": {
+ "type": "message",
+ "component_id": "ChatOutput-xyz",
+ "status": "completed",
+ "content": "2 + 2 equals 4."
+ }
+ },
+ "metadata": {}
+}
+```
+
+### Response body
+
+The response includes an `outputs` field containing component-level results. Each output has a `type` field indicating the type of content:
+
+| Type | Description | Example |
+|------|-------------|---------|
+| `message` | Text message content. | Chat responses, summaries |
+| `image` | Image URL or data. | Generated images, processed images |
+| `sql` | SQL query results. | Database query outputs |
+| `data` | Structured data. | JSON objects, arrays |
+| `file` | File reference. | Generated documents, reports |
+
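As a sketch of consuming these typed outputs, the following collects text from `message` outputs and skips the other types; the sample data mirrors the example response above.

```python
def collect_messages(outputs: dict) -> list[str]:
    """Gather text from components whose output type is "message".

    outputs is the "outputs" object from a workflow response; image,
    sql, data, and file outputs are ignored in this sketch.
    """
    return [
        out["content"]
        for out in outputs.values()
        if out.get("type") == "message" and out.get("status") == "completed"
    ]

sample_outputs = {
    "ChatOutput-xyz": {
        "type": "message",
        "component_id": "ChatOutput-xyz",
        "status": "completed",
        "content": "2 + 2 equals 4.",
    }
}
print(collect_messages(sample_outputs))  # ['2 + 2 equals 4.']
```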
+## Get workflow status endpoint
+
+**Endpoint:** `GET /api/v2/workflows`
+
+**Description:** Retrieve the status and results of a workflow execution by job ID.
+
+### Example request
+
+
+
+
+```python
+import requests
+
+url = f"{LANGFLOW_SERVER_URL}/api/v2/workflows"
+params = {
+ "job_id": "job_id_1234567890"
+}
+headers = {
+ "accept": "application/json",
+ "x-api-key": LANGFLOW_API_KEY
+}
+
+response = requests.get(url, params=params, headers=headers)
+print(response.json())
+```
+
+
+
+
+```typescript
+import axios from 'axios';
+
+const jobId = 'job_id_1234567890';
+const url = `${LANGFLOW_SERVER_URL}/api/v2/workflows`;
+
+const getWorkflowStatus = async () => {
+ try {
+ const response = await axios.get(url, {
+ params: {
+ job_id: jobId
+ },
+ headers: {
+ 'accept': 'application/json',
+ 'x-api-key': LANGFLOW_API_KEY
+ }
+ });
+ console.log(response.data);
+ } catch (error) {
+ console.error('Error getting workflow status:', error);
+ }
+};
+
+getWorkflowStatus();
+```
+
+
+
+
+```bash
+curl -X GET \
+ "$LANGFLOW_SERVER_URL/api/v2/workflows?job_id=job_id_1234567890" \
+ -H "accept: application/json" \
+ -H "x-api-key: $LANGFLOW_API_KEY"
+```
+
+
+
+
+### Query parameters
+
+| Parameter | Type | Required | Description |
+|-----------|------|----------|-------------|
+| `job_id` | `string` | Yes | The job ID returned from a workflow execution. |
+| `stream` | `boolean` | No | If `true`, returns server-sent events stream. Default: `false`. |
+| `sequence_id` | `integer` | No | Optional sequence ID to resume streaming from a specific point. |
+
+### Example response
+
+```json
+{
+ "flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ "job_id": "job_id_1234567890",
+ "object": "response",
+ "created_at": 1741476542,
+ "status": "completed",
+ "errors": [],
+ "outputs": {
+ "ChatOutput-xyz": {
+ "type": "message",
+ "component_id": "ChatOutput-xyz",
+ "status": "completed",
+ "content": "Processing complete..."
+ }
+ },
+ "input": [
+ {
+ "type": "text",
+ "data": "Input text prompt for the workflow execution",
+ "role": "User"
+ }
+ ],
+ "metadata": {}
+}
+```
+
+### Response body
+
+The response includes a `status` field that indicates the current state of the workflow execution:
+
+| Status | Description |
+|--------|-------------|
+| `queued` | Job is queued and waiting to start. |
+| `in_progress` | Job is currently executing. |
+| `completed` | Job completed successfully. |
+| `failed` | Job failed during execution. |
+| `error` | Job encountered an error. |
+
+## Stop workflow endpoint
+
+**Endpoint:** `POST /api/v2/workflows/stop`
+
+**Description:** Stop a running workflow execution by job ID.
+
+### Example request
+
+
+
+
+```python
+import requests
+
+url = f"{LANGFLOW_SERVER_URL}/api/v2/workflows/stop"
+headers = {
+ "Content-Type": "application/json",
+ "x-api-key": LANGFLOW_API_KEY
+}
+payload = {
+ "job_id": "job_id_1234567890"
+}
+
+response = requests.post(url, json=payload, headers=headers)
+print(response.json())
+```
+
+
+
+
+```typescript
+import axios from 'axios';
+
+const url = `${LANGFLOW_SERVER_URL}/api/v2/workflows/stop`;
+
+const payload = {
+ job_id: "job_id_1234567890"
+};
+
+const stopWorkflow = async () => {
+ try {
+ const response = await axios.post(url, payload, {
+ headers: {
+ 'Content-Type': 'application/json',
+ 'x-api-key': LANGFLOW_API_KEY
+ }
+ });
+ console.log(response.data);
+ } catch (error) {
+ console.error('Error stopping workflow:', error);
+ }
+};
+
+stopWorkflow();
+```
+
+
+
+
+```bash
+curl -X POST \
+ "$LANGFLOW_SERVER_URL/api/v2/workflows/stop" \
+ -H "Content-Type: application/json" \
+ -H "x-api-key: $LANGFLOW_API_KEY" \
+ -d '{
+ "job_id": "job_id_1234567890"
+ }'
+```
+
+
+
+
+### Request body
+
+| Field | Type | Required | Default | Description |
+|-------|------|----------|---------|-------------|
+| `job_id` | `string` | Yes | - | The job ID of the workflow to stop. |
+
+### Example response
+
+```json
+{
+ "job_id": "job_id_1234567890",
+ "message": "Job job_id_1234567890 cancelled successfully."
+}
+```
+
+## Component identifiers and input structure
+
+The Workflows API uses component identifiers with dot notation to specify inputs for individual components in your workflow. This allows you to pass values to specific components and override component parameters.
+
+Component identifiers use the format `{component_id}.{parameter_name}`.
+When making requests to the Workflows API, include component identifiers in the `inputs` object.
+For example, the following request targets multiple components and their parameters in a single request:
+
+```json
+{
+ "flow_id": "your-flow-id",
+ "inputs": {
+ "ChatInput-abc.input_type": "chat",
+ "ChatInput-abc.input_value": "what is 2+2",
+ "ChatInput-abc.session_id": "session-123",
+ "OpenSearchComponent-xyz.opensearch_url": "https://opensearch:9200",
+ "LLMComponent-123.temperature": 0.7,
+ "LLMComponent-123.max_tokens": 100
+ }
+}
+```
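If you assemble these requests in code, a small helper can build the flattened `{component_id}.{parameter_name}` keys from per-component dicts. This helper is illustrative only, not part of Langflow.

```python
def flatten_inputs(components: dict) -> dict:
    """Build a Workflow API inputs object from per-component dicts,
    producing "{component_id}.{parameter_name}" keys."""
    flat = {}
    for component_id, params in components.items():
        for name, value in params.items():
            flat[f"{component_id}.{name}"] = value
    return flat

inputs = flatten_inputs({
    "ChatInput-abc": {"input_value": "what is 2+2", "session_id": "session-123"},
    "LLMComponent-123": {"temperature": 0.7, "max_tokens": 100},
})
print(inputs)
```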
+
+To find a component's ID, open your flow in Langflow, click the component, and then click **Controls**. The component ID is at the top of the **Controls** pane.
+
+You can override any component's parameters.
+
+## Error handling
+
+The API uses standard HTTP status codes to indicate success or failure:
+
+| Status Code | Description |
+|-------------|-------------|
+| `200 OK` | Request successful. |
+| `400 Bad Request` | Invalid request parameters. |
+| `401 Unauthorized` | Invalid or missing API key. |
+| `404 Not Found` | Flow not found or developer API disabled. |
+| `500 Internal Server Error` | Server error during execution. |
+| `501 Not Implemented` | Endpoint not yet implemented. |
+
+### Error response format
+
+```json
+{
+ "detail": "Error message describing what went wrong"
+}
+```
diff --git a/docs/docs/Agents/agents-tools.mdx b/docs/docs/Agents/agents-tools.mdx
index d3c269ab2..5dc189767 100644
--- a/docs/docs/Agents/agents-tools.mdx
+++ b/docs/docs/Agents/agents-tools.mdx
@@ -151,7 +151,7 @@ An agent can use [custom components](/components-custom-components) as tools.
3. Enable **Tool Mode** in the custom component.
4. Connect the custom component's tool output to the **Agent** component's **Tools** input.
-5. Open the **Playground** and instruct the agent, `Use the text analyzer on this text: "Agents really are thinking machines!"`
+5. Open the **Playground** and instruct the agent, `Use the text analyzer on this text: "Agents really are thinking machines!"`
Based on your instruction, the agent should call the `analyze_text` action and return the result.
For example:
diff --git a/docs/docs/Agents/agents.mdx b/docs/docs/Agents/agents.mdx
index bf8cfdf1f..c3c8b5ff0 100644
--- a/docs/docs/Agents/agents.mdx
+++ b/docs/docs/Agents/agents.mdx
@@ -6,6 +6,7 @@ slug: /agents
import Icon from "@site/src/components/icon";
import PartialParams from '@site/docs/_partial-hidden-params.mdx';
import PartialAgentsWork from '@site/docs/_partial-agents-work.mdx';
+import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
Langflow's [**Agent** component](/components-agents) is critical for building agent flows.
This component provides everything you need to create an agent, including multiple Large Language Model (LLM) providers, tool calling, and custom instructions.
@@ -19,22 +20,14 @@ The following steps explain how to create an agent flow in Langflow from a blank
For a prebuilt example, use the **Simple Agent** template or the [Langflow quickstart](/get-started-quickstart).
1. Click **New Flow**, and then click **Blank Flow**.
-
2. Add an **Agent** component to your flow.
-
-3. Select the provider and model that you want to use.
-The default model for the **Agent** component is an OpenAI model.
-If you want to use a different provider, edit the **Model Provider** and **Model Name** fields accordingly.
-If your preferred model isn't listed, type the complete model name into the **Model Name** field, and then select it from the **Model Name** menu.
-Make sure that the model is enabled/verified in your model provider account.
+3. Select the model that you want to use from the **Language Model** dropdown.
+If your preferred model isn't listed, make sure it's enabled in the **Models** configuration.
For more information, see [Agent component parameters](#agent-component-parameters).
-
-4. Enter a valid credential for your selected model provider.
-Make sure that the credential has permission to call the selected model.
-
5. Add [**Chat Input** and **Chat Output** components](/chat-input-and-output) to your flow, and then connect them to the **Agent** component.
- At this point, you have created a basic LLM-based chat flow that you can test in the **Playground**.
+ At this point, you have created a basic LLM-based chat flow that you can test in the **Playground**.
However, this flow only chats with the LLM.
To enhance this flow and make it truly agentic, add some tools, as explained in the next steps.
@@ -56,7 +49,7 @@ Make sure that the credential has permission to call the selected model.

-8. Open the **Playground**, and then ask the agent, `What tools are you using to answer my questions?`
+8. Open the **Playground**, and then ask the agent, `What tools are you using to answer my questions?`
The agent should respond with a list of the connected tools.
It may also include built-in tools.
@@ -89,28 +82,22 @@ You can configure the **Agent** component to use your preferred provider and mod
### Provider and model
-Use the **Model Provider** (`agent_llm`) and **Model Name** (`llm_model`) settings to select the model provider and LLM that you want the agent to use.
+Use the **Language Model** (`agent_llm`) setting to select the LLM that you want the agent to use.
+
+<PartialGlobalModelProviders />
+
+To use a model with the **Agent** component, select the model in the **Agent** component's **Language Model** field.
+
+The **Language Model** field lists all language models that you've configured globally. If a provider has no language models available, that provider isn't listed.
+For example, if a provider offers only embedding models, none of its models are listed on the **Agent** component.
-The **Agent** component includes many models from several popular model providers.
To access other providers or models, you can do either of the following:
-* Set **Model Provider** to **Connect other models**, and then connect any [language model component](/components-models).
-* Select your preferred provider, type the complete model name into the **Model Name** field, and then select your custom option from the **Model Name** menu.
-Make sure that the model is enabled/verified in your model provider account.
+* Connect any [language model component](/components-models) to the **Agent** component's **Language Model** port. This option lets you use models that aren't available in the global model providers list, including custom language model components.
+* Configure additional providers in the **Models** pane, and then select the model from the **Language Model** dropdown.
If you need to generate embeddings in your flow, use an [embedding model component](/components-embedding-models).
-### Model provider API key
-
-In the **API Key** field, enter a valid authentication key for your selected model provider, if you are using a built-in provider.
-For example, to use the default OpenAI model, you must provide a valid OpenAI API key for an OpenAI account that has credits and permission to call OpenAI LLMs.
-
-You can enter the key directly, but it is recommended that you follow industry best practices for storing and referencing API keys.
-For example, you can use a [global variable](/configuration-global-variables) or [environment variables](/environment-variables).
-For more information, see [Add component API keys to Langflow](/api-keys-and-authentication#component-api-keys).
-
-If you select **Connect other models** as the model provider, authentication is handled in the incoming language model component.
-
### Agent instructions and input
In the **Agent Instructions** (`system_prompt`) field, you can provide custom instructions that you want the **Agent** component to use for every conversation.
diff --git a/docs/docs/Agents/mcp-client.mdx b/docs/docs/Agents/mcp-client.mdx
index 24838a878..a7ba31a7a 100644
--- a/docs/docs/Agents/mcp-client.mdx
+++ b/docs/docs/Agents/mcp-client.mdx
@@ -6,6 +6,8 @@ slug: /mcp-client
import Icon from "@site/src/components/icon";
import McpIcon from '@site/static/logos/mcp-icon.svg';
import PartialMcpNodeTip from '@site/docs/_partial-mcp-node-tip.mdx';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
Langflow integrates with the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) as both an MCP server and an MCP client.
@@ -24,6 +26,8 @@ This component has two modes, depending on the type of server you want to access
### Connect to a non-Langflow MCP server {#mcp-stdio-mode}
+
+
1. Add an **MCP Tools** component to your flow.
2. In the **MCP Server** field, select a previously connected server or click **Add MCP Server**.
@@ -36,35 +40,31 @@ This component has two modes, depending on the type of server you want to access
* **HTTP/SSE**: Enter your MCP server's **Name**, **URL**, and any **Headers** and **Environment Variables** the server uses, and then click **Add Server**.
The default **URL** for Langflow MCP servers is `http://localhost:7860/api/v1/mcp/project/PROJECT_ID/streamable` or `http://localhost:7860/api/v1/mcp/streamable`. For more information, see [Connect to a Langflow MCP server](#mcp-http-mode).
-
+3. To configure headers for your MCP server, enter each header in the **Headers** fields as key-value pairs.
+ You can use [global variables](/configuration-global-variables) in header values by entering the global variable name as the header value.
+ For more information, see [Use global variables in MCP server headers](#use-global-variables-in-mcp-server-headers).
-3. To use environment variables in your server command, enter each variable in the **Env** fields as key-value pairs.
+4. To use environment variables in your server command, enter each variable in the **Env** fields as key-value pairs.
- :::tip
- Langflow passes environment variables from the `.env` file to MCP, but it doesn't pass global variables declared in your Langflow **Settings**.
- To define an MCP server environment variable as a global variable, add it to Langflow's `.env` file at startup.
- For more information, see [global variables](/configuration-global-variables).
- :::
-
-4. In the **Tool** field, select a tool that you want this component to use, or leave the field blank to allow access to all tools provided by the MCP server.
+5. In the **Tool** field, select a tool that you want this component to use, or leave the field blank to allow access to all tools provided by the MCP server.
If you select a specific tool, you might need to configure additional tool-specific fields. For information about tool-specific fields, see your MCP server's documentation.
At this point, the **MCP Tools** component is serving a tool from the connected server, but nothing is using the tool. The next steps explain how to make the tool available to an [**Agent** component](/components-agents) so that the agent can use the tool in its responses.
-5. In the [component's header menu](/concepts-components#component-menus), enable **Tool mode** so you can use the component with an agent.
+6. In the [component's header menu](/concepts-components#component-menus), enable **Tool mode** so you can use the component with an agent.
-6. Connect the **MCP Tools** component's **Toolset** port to an **Agent** component's **Tools** port.
+7. Connect the **MCP Tools** component's **Toolset** port to an **Agent** component's **Tools** port.
If not already present in your flow, make sure you also attach **Chat Input** and **Chat Output** components to the **Agent** component.

-7. Test your flow to make sure the MCP server is connected and the selected tool is used by the agent. Open the **Playground**, and then enter a prompt that uses the tool you connected through the **MCP Tools** component.
+8. Test your flow to make sure the MCP server is connected and the selected tool is used by the agent. Open the **Playground**, and then enter a prompt that uses the tool you connected through the **MCP Tools** component.
For example, if you use `mcp-server-fetch` with the `fetch` tool, you could ask the agent to summarize recent tech news. The agent calls the MCP server function `fetch`, and then returns the response.
-8. If you want the agent to be able to use more tools, repeat these steps to add more tools components with different servers or tools.
+9. If you want the agent to be able to use more tools, repeat these steps to add more tools components with different servers or tools.
### Connect a Langflow MCP server {#mcp-http-mode}
@@ -110,6 +110,167 @@ To add a new MCP server, click **Add MCP Server**, and then follow the steps in
Click **More** to edit or delete an MCP server connection.
+## Modify MCP server environment variables with the API {#mcp-api-tweaks}
+
+You can modify MCP server environment variables at runtime when running flows through the [Langflow API](/api-reference-api-examples) by tweaking the **MCP Tools** component.
+
+You can include tweaks with any Langflow API request that supports the `tweaks` parameter, such as POST requests to the `/run` or `/webhook` endpoints.
+For more information, see [Input schema (tweaks)](/concepts-publish#input-schema).
+
+To modify the **MCP Tools** component's environment variables with tweaks, do the following:
+
+1. Open the flow that contains your **MCP Tools** component.
+2. To find the **MCP Tools** component's unique ID, click the **MCP Tools** component, and then click **Controls**.
+The component's ID, such as `MCPTools-Bzahc`, is displayed at the top of the **Controls** pane.
+3. Send a POST request to the Langflow server's `/run` endpoint, and include tweaks to the **MCP Tools** component.
+
+ The following examples demonstrate a request structure with the `env` object nested under `mcp_server` in the `tweaks` payload:
+
+
+
+
+ ```python
+ import requests
+ import os
+
+ LANGFLOW_SERVER_ADDRESS = "http://localhost:7860"
+ FLOW_ID = "your-flow-id"
+ LANGFLOW_API_KEY = os.getenv("LANGFLOW_API_KEY")
+ MCP_TOOLS_COMPONENT_ID = "MCPTools-Bzahc"
+
+ url = f"{LANGFLOW_SERVER_ADDRESS}/api/v1/run/{FLOW_ID}?stream=false"
+ headers = {
+ "Content-Type": "application/json",
+ "x-api-key": LANGFLOW_API_KEY
+ }
+ payload = {
+ "output_type": "chat",
+ "input_type": "chat",
+ "input_value": "What sales data is available to me?",
+ "tweaks": {
+ MCP_TOOLS_COMPONENT_ID: {
+ "mcp_server": {
+ "env": {
+ "API_URL": "https://api.example.com",
+ "API_KEY": "your-mcp-server-api-key",
+ "ENVIRONMENT": "production"
+ }
+ }
+ }
+ }
+ }
+
+ response = requests.post(url, json=payload, headers=headers)
+ print(response.json())
+ ```
+
+
+
+
+ ```typescript
+ const LANGFLOW_SERVER_ADDRESS = "http://localhost:7860";
+ const FLOW_ID = "your-flow-id";
+ const LANGFLOW_API_KEY = process.env.LANGFLOW_API_KEY || "";
+ const MCP_TOOLS_COMPONENT_ID = "MCPTools-Bzahc";
+
+ const url = `${LANGFLOW_SERVER_ADDRESS}/api/v1/run/${FLOW_ID}?stream=false`;
+
+ const response = await fetch(url, {
+ method: "POST",
+ headers: {
+ "Content-Type": "application/json",
+ "x-api-key": LANGFLOW_API_KEY,
+ },
+ body: JSON.stringify({
+ output_type: "chat",
+ input_type: "chat",
+ input_value: "What sales data is available to me?",
+ tweaks: {
+ [MCP_TOOLS_COMPONENT_ID]: {
+ mcp_server: {
+ env: {
+ API_URL: "https://api.example.com",
+ API_KEY: "your-mcp-server-api-key",
+ ENVIRONMENT: "production",
+ },
+ },
+ },
+ },
+ }),
+ });
+
+ const data = await response.json();
+ console.log(data);
+ ```
+
+
+
+
+ ```bash
+ curl --request POST \
+ --url "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID?stream=false" \
+ --header "Content-Type: application/json" \
+ --header "x-api-key: LANGFLOW_API_KEY" \
+ --data '{
+ "output_type": "chat",
+ "input_type": "chat",
+ "input_value": "What sales data is available to me?",
+ "tweaks": {
+ "MCP_TOOLS_COMPONENT_ID": {
+ "mcp_server": {
+ "env": {
+ "API_URL": "https://api.example.com",
+ "API_KEY": "your-mcp-server-api-key",
+ "ENVIRONMENT": "production"
+ }
+ }
+ }
+ }
+ }'
+ ```
+
+
+
+
+ Replace `MCP_TOOLS_COMPONENT_ID`, `LANGFLOW_API_KEY`, `LANGFLOW_SERVER_ADDRESS`, and `FLOW_ID` with the actual values from your Langflow deployment.
+
+    The **MCP Tools** component doesn't automatically discover or expose the environment variables that your MCP server accepts.
+    To determine which environment variables your MCP server accepts, see the MCP server's documentation. For example, the [Astra DB MCP server](https://github.com/datastax/astra-db-mcp) requires `ASTRA_DB_APPLICATION_TOKEN` and `ASTRA_DB_API_ENDPOINT`, and accepts an optional `ASTRA_DB_KEYSPACE` variable, as documented in its repository.
+
+## Use global variables in MCP server headers {#use-global-variables-in-mcp-server-headers}
+
+You can use [global variables](/configuration-global-variables) in MCP server header values to securely store and reference API keys, authentication tokens, and other sensitive values. This is particularly useful for deployment scenarios where you need to pass user-specific credentials at runtime.
+
+Enter a global variable name as the header value, and Langflow resolves the global variable name to its actual value before making the MCP server request. Langflow only passes the token value to your server; it doesn't validate tokens on behalf of your MCP server.
+
+For example, to create a global variable named `TEST_BEARER_TOKEN` for MCP server bearer authentication, do the following:
+
+1. To open the **Global Variables** pane, click your profile icon, select **Settings**, and then click **Global Variables**.
+2. Create a **Credential** global variable named `TEST_BEARER_TOKEN`.
+3. In the **Value** field, enter your MCP server's bearer token value. The value must include the `Bearer` prefix with a space, for example: `Bearer eyJhbG...`.
+4. Click **Save Variable**.
+5. To manage MCP server connections for your Langflow client, click **MCP servers** in the visual editor, or click your profile icon, select **Settings**, and then click **MCP Servers**.
+6. Click **Add MCP Server**.
+7. Select the following:
+    * **Name**: `test-mcp-server`
+ * **Streamable HTTP/SSE URL**: Your MCP server's URL, such as `http://127.0.0.1:8000/mcp`.
+    * **Headers**: In the key field, enter the literal string `Authorization`. In the value field, enter the exact name of your global variable, such as `TEST_BEARER_TOKEN`.
+8. Click **Create Server**.
+
+ If the connection succeeds, Langflow shows the number of tools exposed by the server.
+
+ After creating the server and global variable, you can connect to the server with the **MCP Tools** component, as explained in the next steps.
+
+9. Add the **MCP Tools** component to a flow.
+10. In the **MCP Tools** component, select the **MCP Server** you created.
+The MCP server configuration already includes the headers you configured earlier, so no further configuration is needed in the component. The global variable `TEST_BEARER_TOKEN` is automatically resolved when the component makes requests to the MCP server.
+
+11. Optional: To override headers or add additional headers in the **MCP Tools** component, click the component to view the **Headers** parameter in the [component inspection panel](/concepts-components#component-menus), and then add header key-value pairs. Headers configured in the component take precedence over the headers configured in the MCP server settings.
+
+12. Test your flow to make sure the agent uses your server to respond to queries. Open the **Playground**, and then enter a prompt that uses a tool that you connected through the **MCP Tools** component.
+
+ Langflow automatically resolves `TEST_BEARER_TOKEN` to its actual value before sending the request to the MCP server. When your MCP server receives the request, the `Authorization` header contains the resolved token value.
+
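+Conceptually, the resolution step behaves like the following sketch. This is an illustration of the behavior, not Langflow's internal implementation:
+
+```python
+def resolve_headers(headers, global_variables):
+    """Replace header values that name a global variable with the variable's value."""
+    return {
+        key: global_variables.get(value, value)  # literal values pass through unchanged
+        for key, value in headers.items()
+    }
+
+resolved = resolve_headers(
+    {"Authorization": "TEST_BEARER_TOKEN"},
+    {"TEST_BEARER_TOKEN": "Bearer eyJhbG..."},
+)
+# The MCP server receives the resolved Authorization header value.
+```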
## See also
- [Use Langflow as an MCP server](/mcp-server)
diff --git a/docs/docs/Agents/mcp-component-astra.mdx b/docs/docs/Agents/mcp-component-astra.mdx
index f51edb494..8bf6d46c2 100644
--- a/docs/docs/Agents/mcp-component-astra.mdx
+++ b/docs/docs/Agents/mcp-component-astra.mdx
@@ -28,7 +28,7 @@ This guide demonstrates how to [use Langflow as an MCP client](/mcp-client) by u
1. In the **MCP Server** field, click **Add MCP Server**.
2. Select **Stdio** mode.
- 3. EIn the **Name** field, enter a name for the MCP server.
+ 3. In the **Name** field, enter a name for the MCP server.
-    4. In the **Commmand** field, add the following code to connect to an Astra DB MCP server:
+    4. In the **Command** field, add the following code to connect to an Astra DB MCP server:
```bash
diff --git a/docs/docs/Agents/mcp-server.mdx b/docs/docs/Agents/mcp-server.mdx
index b4ff6836b..6fa96dd83 100644
--- a/docs/docs/Agents/mcp-server.mdx
+++ b/docs/docs/Agents/mcp-server.mdx
@@ -12,6 +12,7 @@ Langflow integrates with the [Model Context Protocol (MCP)](https://modelcontext
This page describes how to use Langflow as an MCP server that exposes your flows as [tools](https://modelcontextprotocol.io/docs/concepts/tools) that [MCP clients](https://modelcontextprotocol.io/clients) can use when generating responses.
Langflow MCP servers support both the **streamable HTTP** transport and **Server-Sent Events (SSE)** as a fallback.
+The default project MCP server configuration uses streamable HTTP transport at the URL path `/streamable`.
For information about using Langflow as an MCP client and managing MCP server connections within flows, see [Use Langflow as an MCP client](/mcp-client).
@@ -146,6 +147,8 @@ For example:
"command": "uvx",
"args": [
"mcp-proxy",
+ "--transport",
+ "streamablehttp",
"http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
]
}
@@ -161,7 +164,7 @@ For example:
If your Langflow server requires authentication, you must include your Langflow API key or OAuth settings in the configuration.
For more information, see [MCP server authentication](#authentication).
-6. To include other environment variables with your MCP server command, add an `env` object with key-value pairs of environment variables:
+6. To include other environment variables with your MCP server command, add an `env` object with key-value pairs of environment variables. For example:
```json
{
@@ -170,6 +173,8 @@ For example:
"command": "uvx",
"args": [
"mcp-proxy",
+ "--transport",
+ "streamablehttp",
"http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
],
"env": {
@@ -180,6 +185,10 @@ For example:
}
```
+ Don't add API keys in the `env` object, as these variables are specifically for the `mcp-proxy` process.
+ Instead, add API keys under `args`.
+ For an example, see [MCP server authentication](#authentication).
+
7. Save and close your client's MCP configuration file.
8. Confirm that your Langflow MCP server is on the client's list of MCP servers.
@@ -226,11 +235,32 @@ To configure authentication for a Langflow MCP server, go to the **Projects** pa
-When authenticating your MCP server with a Langflow API key, your project's MCP server **JSON** code snippets and **Auto install** configuration automatically include the `--headers` and `x-api-key` arguments.
+When authenticating your MCP server with a Langflow API key, your project's MCP server **JSON** code snippets and **Auto install** configuration automatically include the `--headers` and `x-api-key` arguments in the `args` array for streamable transport.
Click **Generate API key** to automatically insert a new Langflow API key into the code template.
Alternatively, you can replace `YOUR_API_KEY` with an existing Langflow API key.
+To add your API key to the configuration, use three separate entries in `args`: `"--headers"`, `"x-api-key"`, and your key value. For example:
+
+```json
+{
+ "mcpServers": {
+ "PROJECT_NAME": {
+ "command": "uvx",
+ "args": [
+ "mcp-proxy",
+ "--transport",
+ "streamablehttp",
+ "--headers",
+ "x-api-key",
+ "YOUR_API_KEY",
+ "http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
+ ]
+ }
+ }
+}
+```
+
diff --git a/docs/docs/Components/batch-run.mdx b/docs/docs/Components/batch-run.mdx
index 8d813bc0c..5a65d02c4 100644
--- a/docs/docs/Components/batch-run.mdx
+++ b/docs/docs/Components/batch-run.mdx
@@ -34,7 +34,7 @@ For example, if you want to extract text from a `name` column in a CSV file, ent
4. Connect the **Batch Run** component's **Batch Results** output to a **Parser** component's **DataFrame** input.
-5. Optional: In the **Batch Run** [component's header menu](/concepts-components#component-menus), click **Controls**, enable the **System Message** parameter, click **Close**, and then enter an instruction for how you want the LLM to process each cell extracted from the file.
+5. Optional: In the **Batch Run** [component menu](/concepts-components#component-menus), enable the **System Message** parameter, click **Close**, and then enter an instruction for how you want the LLM to process each cell extracted from the file.
For example, `Create a business card for each name.`
6. In the **Parser** component's **Template** field, enter a template for processing the **Batch Run** component's new `DataFrame` columns (`text_input`, `model_response`, and `batch_index`):
diff --git a/docs/docs/Components/bundles-agentics.mdx b/docs/docs/Components/bundles-agentics.mdx
index c7ef6fc4a..612a0143e 100644
--- a/docs/docs/Components/bundles-agentics.mdx
+++ b/docs/docs/Components/bundles-agentics.mdx
@@ -62,8 +62,8 @@ For example, this schema definition creates the following DataFrame output:
|------|------|-------------|
| Language Model | Dropdown | Select the LLM provider and model. Use guided experience. |
| Input DataFrame | DataFrame | Optional. Example DataFrame to learn from; only first 50 rows used. If not provided, Schema is used. |
-| Schema | Table | Define columns to generate when no Input DataFrame is provided. See [Output Schema Format](#output-schema-format). |
-| Instructions | String | **Advanced.** Optional instructions for generation. |
+| Schema | Table | Define columns to generate when no Input DataFrame is provided. See the component's schema definition. |
+| Instructions | String | Optional instructions for generation. |
| Number of Rows to Generate | Integer | How many synthetic rows to create. Default: 10. |
## aMap component
@@ -98,7 +98,7 @@ For example, **aMap** keeps each input row and fills in `sentiment`, `confidence
|------|------|-------------|
| Language Model | Dropdown | Select the LLM provider and model. Use guided experience. |
| Input DataFrame | DataFrame | Input DataFrame (list of dicts or DataFrame). Each row is processed independently. |
-| Schema | Table | Define the structure and types for generated columns. See [Output Schema Format](#output-schema-format). |
+| Schema | Table | Define the structure and types for generated columns. See the component's schema definition. |
| Instructions | String | Natural language instructions for transforming each row into the output schema. |
| As List | Boolean | If true, generate multiple instances of the schema per row and concatenate. |
| Keep Source Columns | Boolean | If `true`, append new columns to original data; if false, return only generated columns. Ignored if As List is true. Default: `true`. |
@@ -139,7 +139,7 @@ It sums revenue into `total_revenue`, identifies the best-selling product in `be
|------|------|-------------|
| Language Model | Dropdown | Select the LLM provider and model. |
| Input DataFrame | DataFrame | Input DataFrame (list of dicts or DataFrame). Required. |
-| Schema | Table | Define the structure and types for the aggregated output. See [Output Schema Format](#output-schema-format). |
+| Schema | Table | Define the structure and types for the aggregated output. See the component's schema definition. |
| As List | Boolean | If true, output is a list of instances of the schema. |
| Instructions | String | Optional instructions for aggregation. If omitted, the LLM infers from field descriptions. |
diff --git a/docs/docs/Components/bundles-composio.mdx b/docs/docs/Components/bundles-composio.mdx
index 089f102c3..f4e98ae8d 100644
--- a/docs/docs/Components/bundles-composio.mdx
+++ b/docs/docs/Components/bundles-composio.mdx
@@ -206,7 +206,7 @@ All single-service Composio components have the same parameters, and the **Compo
| Name | Type | Description |
|------|------|-------------|
-| entity_id | String | Input parameter. The entity ID for the Composio account. Default: `default`. This parameter is hidden by default in the visual editor. If you need to set this parameter, you can access it through the **Controls** in the [component's header menu](/concepts-components#component-menus). |
+| entity_id | String | Input parameter. The entity ID for the Composio account. Default: `default`. This parameter is hidden by default in the visual editor. If you need to set this parameter, you can access it through the [component inspection panel](/concepts-components#component-menus). |
| api_key | SecretString | Input parameter. The Composio API key for authentication with the Composio platform. Make sure the key authorizes the specific service that you want to use. For more information, see [Composio authentication](#composio-authentication). |
| tool_name | Connection | Input parameter for the **Composio Tools** component only. Select the Composio service (tool) to connect to. |
| action | List | Input parameter. Select actions to use. Available actions vary by service. Some actions might require premium access to a particular service. |
diff --git a/docs/docs/Components/bundles-cuga.mdx b/docs/docs/Components/bundles-cuga.mdx
index f17c98380..2a1241ff0 100644
--- a/docs/docs/Components/bundles-cuga.mdx
+++ b/docs/docs/Components/bundles-cuga.mdx
@@ -19,7 +19,7 @@ Like the core **Agent** component, the **CUGA** component can use tools connecte
It also includes some additional features:
* Browser automation for web scraping with [Playwright](https://playwright.dev/docs/intro).
-To enable web scraping, set the component's `browser_enabled` parameter to `true`, and specify a single URL in the `web_apps` parameter, in the format `https://example.com`.
+To enable web scraping, set the component's `browser_enabled` parameter to `true`.
* Load custom instructions for the agent to execute.
To use this feature, use the component's **Instructions** input to attach markdown files containing agent instructions.
@@ -90,6 +90,6 @@ This example asked about the sales data provided by the MCP Server, such as `Whi
| add_current_date_tool | Boolean | If true, adds a tool that returns the current date. Default: `true`. |
| lite_mode | Boolean | Set to `true` to enable CugaLite mode for faster execution when using a smaller number of tools. Default: `true`. |
| lite_mode_tool_threshold | Integer | The threshold to automatically enable CugaLite. If the CUGA component has fewer tools connected than this threshold, CugaLite is activated. Default: `25`. |
+| shortlisting_tool_threshold | Integer | The threshold for tool shortlisting. When the total number of tools exceeds this threshold, the CUGA component enables its `find_tools` feature to filter tools down to a smaller subset before making tool selection decisions. This helps reduce token usage and improve performance when working with large numbers of tools. Default: `35`. |
| decomposition_strategy | Dropdown | Strategy for task decomposition. `flexible` allows multiple subtasks per app. `exact` enforces one subtask per app. Default: `flexible`. |
| browser_enabled | Boolean | Enable a built-in browser for web scraping and search. Allows the agent to use general web search in its responses. Disable (`false`) to restrict the agent to the context provided in the flow. Default: `false`. |
-| web_apps | Multiline String | When `browser_enabled` is `true`, specify a single URL such as `https://example.com` that the agent can open with the built-in browser. The CUGA component can access both public and private internet resources. There is no built-in mechanism in the CUGA component to restrict access to only public internet resources. |
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-datastax.mdx b/docs/docs/Components/bundles-datastax.mdx
index d5307b9bd..3624f7f06 100644
--- a/docs/docs/Components/bundles-datastax.mdx
+++ b/docs/docs/Components/bundles-datastax.mdx
@@ -112,7 +112,7 @@ This input only appears after connecting a collection that support hybrid search
7. Update the **Structured Output** template:
- 1. Click the **Structured Output** component to expose the [component's header menu](/concepts-components#component-menus), and then click **Controls**.
+ 1. Click the **Structured Output** component to expose the [component inspection panel](/concepts-components#component-menus).
2. Find the **Format Instructions** row, click **Expand**, and then replace the prompt with the following text:
```text
diff --git a/docs/docs/Components/bundles-lite-llm.mdx b/docs/docs/Components/bundles-lite-llm.mdx
new file mode 100644
index 000000000..bd6e4051a
--- /dev/null
+++ b/docs/docs/Components/bundles-lite-llm.mdx
@@ -0,0 +1,39 @@
+---
+title: LiteLLM
+slug: /bundles-lite-llm
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+[**Bundles**](/components-bundle-components) contain custom components that support specific third-party integrations with Langflow.
+
+The **LiteLLM** bundle component connects to models through a LiteLLM proxy, which routes requests to multiple LLM providers.
+Using a proxy lets you change model providers without changing credentials in your flows.
+You authenticate to the proxy with a single virtual key, and the proxy then uses its own configured credentials to call providers.
+Virtual keys are created by the proxy administrator. For more information about managing virtual keys, see [Virtual Keys](https://docs.litellm.ai/docs/proxy/virtual_keys) in the LiteLLM documentation.
+
+## LiteLLM Proxy text generation
+
+The **LiteLLM Proxy** component generates text using an LLM provider.
+
+It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
+
+Use the **Language Model** output when you want to use a LiteLLM proxy-backed model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
+
+For more information, see [Language model components](/components-models).
+
+### LiteLLM Proxy parameters
+
+
+
+| Name | Type | Description |
+|------|------|-------------|
+| api_base | String | Input parameter. Base URL of the LiteLLM proxy. Default: `"http://localhost:4000/v1"`. |
+| api_key | String | Input parameter. Virtual key for authentication with the LiteLLM proxy. |
+| model_name | String | Input parameter. Model name to use, such as `gpt-4o` or `claude-3-opus`. |
+| temperature | Float | Input parameter. Controls randomness. Lower values are more deterministic. Range: `[0.0, 2.0]`. Default: `0.7`. |
+| max_tokens | Integer | Input parameter. Maximum number of tokens to generate. Set to `0` for no limit. Range: `[0, 128000]`. Advanced. |
+| timeout | Integer | Input parameter. Request timeout in seconds. Default: `60`. |
+| max_retries | Integer | Input parameter. Maximum number of retries on failure. Default: `2`. |
+| stream | Boolean | Input parameter. Whether to stream the response. |
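A LiteLLM proxy exposes an OpenAI-compatible chat completions API, so the parameters in the table above map directly onto a standard request. The following minimal sketch shows that mapping; the proxy URL, virtual key, and model name are placeholders, and the exact request the component builds may differ:

```python
import json
import urllib.error
import urllib.request

# Placeholder values mirroring the component's parameters; replace with
# your own proxy URL and virtual key.
api_base = "http://localhost:4000/v1"   # api_base (default)
api_key = "LITELLM_VIRTUAL_KEY"         # api_key (virtual key from the proxy admin)

payload = {
    "model": "gpt-4o",      # model_name
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,     # temperature (default)
    "max_tokens": 256,      # max_tokens
    "stream": False,        # stream
}

req = urllib.request.Request(
    f"{api_base}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
)

try:
    # The proxy routes this OpenAI-style request to the configured provider.
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
except OSError as e:  # URLError subclasses OSError; covers connection failures
    print(f"Request to the LiteLLM proxy failed: {e}")
```

Because the proxy holds the provider credentials, swapping providers only requires changing `model` on the proxy side, not the key in this request.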
diff --git a/docs/docs/Components/bundles-ollama.mdx b/docs/docs/Components/bundles-ollama.mdx
index 8ffff1fa0..9388b4483 100644
--- a/docs/docs/Components/bundles-ollama.mdx
+++ b/docs/docs/Components/bundles-ollama.mdx
@@ -28,7 +28,7 @@ To use the **Ollama** component in a flow, connect Langflow to your locally runn
To refresh the server's list of models, click **Refresh**.
-4. Optional: To configure additional parameters, such as temperature or max tokens, click **Controls** in the [component's header menu](/concepts-components#component-menus).
+4. Optional: To configure additional parameters, such as temperature or max tokens, click the component to open the [component inspection panel](/concepts-components#component-menus).
5. Connect the **Ollama** component to other components in the flow, depending on how you want to use the model.
@@ -55,7 +55,7 @@ To use this component in a flow, connect Langflow to your locally running Ollama
To refresh the server's list of models, click **Refresh**.
-4. Optional: To configure additional parameters, such as temperature or max tokens, click **Controls** in the [component's header menu](/concepts-components#component-menus).
+4. Optional: To configure additional parameters, such as temperature or max tokens, click the component to open the [component inspection panel](/concepts-components#component-menus).
Available parameters depend on the selected model.
5. Connect the **Ollama Embeddings** component to other components in the flow.
diff --git a/docs/docs/Components/components-bundles.mdx b/docs/docs/Components/components-bundles.mdx
index 5ff036e9b..01bb9eb32 100644
--- a/docs/docs/Components/components-bundles.mdx
+++ b/docs/docs/Components/components-bundles.mdx
@@ -22,7 +22,7 @@ Some bundles have no documentation.
To find documentation for a specific bundled component, browse the Langflow docs and your provider's documentation.
If available, you can also find links to relevant documentation, such as API endpoints, through the component itself:
-1. Click the component to expose the [component's header menu](/concepts-components#component-menus).
+1. Click the component to expose the [component inspection panel](/concepts-components#component-menus).
2. Click **More**.
3. Select **Docs**.
diff --git a/docs/docs/Components/components-embedding-models.mdx b/docs/docs/Components/components-embedding-models.mdx
index 873029cf8..35b5d0025 100644
--- a/docs/docs/Components/components-embedding-models.mdx
+++ b/docs/docs/Components/components-embedding-models.mdx
@@ -4,6 +4,7 @@ slug: /components-embedding-models
---
import Icon from "@site/src/components/icon";
+import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
Embedding model components in Langflow generate text embeddings using a specified Large Language Model (LLM).
@@ -21,33 +22,34 @@ This flow loads a text file, splits the text into chunks, generates embeddings f
1. Create a flow, add a **Read File** component, and then select a file containing text data, such as a PDF, that you can use to test the flow.
-2. Add the **Embedding Model** core component, and then provide a valid OpenAI API key.
-You can enter the API key directly or use a [global variable](/configuration-global-variables).
+2. <PartialGlobalModelProviders />
:::tip My preferred provider or model isn't listed
- If your preferred embedding model provider or model isn't supported by the **Embedding Model** core component, you can use any [additional embedding models](#additional-embedding-models) in place of the core component.
+ If your preferred embedding model provider or model isn't available in Langflow's global **Models**, you can use any [additional embedding models](#additional-embedding-models) in place of the core component.
Browse [**Bundles**](/components-bundle-components) or **Search** for your preferred provider to find additional embedding models, such as the [**Hugging Face Embeddings Inference** component](/bundles-huggingface#hugging-face-embeddings-inference).
:::
-3. Add a [**Split Text** component](/split-text) to your flow.
+3. Add the **Embedding Model** core component to your flow, and then select your configured embedding model from the **Embedding Model** dropdown.
+
+4. Add a [**Split Text** component](/split-text) to your flow.
This component splits text input into smaller chunks to be processed into embeddings.
-4. Add a vector store component, such as the **Chroma DB** component, to your flow, and then configure the component to connect to your vector database.
+5. Add a vector store component, such as the **Chroma DB** component, to your flow, and then configure the component to connect to your vector database.
This component stores the generated embeddings so they can be used for similarity search.
-5. Connect the components:
+6. Connect the components:
* Connect the **Read File** component's **Loaded Files** output to the **Split Text** component's **Data or DataFrame** input.
* Connect the **Split Text** component's **Chunks** output to the vector store component's **Ingest Data** input.
* Connect the **Embedding Model** component's **Embeddings** output to the vector store component's **Embedding** input.
-6. To query the vector store, add [**Chat Input and Output** components](/chat-input-and-output):
+7. To query the vector store, add [**Chat Input and Output** components](/chat-input-and-output):
* Connect the **Chat Input** component to the vector store component's **Search Query** input.
* Connect the vector store component's **Search Results** output to the **Chat Output** component.
-7. Click **Playground**, and then enter a search query to retrieve text chunks that are most semantically similar to your query.
+8. Click **Playground**, and then enter a search query to retrieve text chunks that are most semantically similar to your query.
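Conceptually, the **Split Text** step in this flow breaks the loaded text into fixed-size, overlapping chunks before each chunk is embedded and stored. A minimal sketch of that idea, with illustrative sizes (the component's actual splitting logic may differ):

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks; each chunk repeats the last
    `overlap` characters of the previous one to preserve context."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping the overlap
    return chunks

sample = "word " * 500  # 2500 characters of filler text
chunks = split_text(sample, chunk_size=1000, overlap=200)
print(len(chunks), len(chunks[0]))  # → 4 1000
```

The overlap keeps sentences that straddle a chunk boundary from being lost to similarity search.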
## Embedding Model parameters
@@ -60,9 +62,8 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
| Name | Display Name | Type | Description |
|------|--------------|------|-------------|
-| provider | Model Provider | List | Input parameter. Select the embedding model provider. |
-| model | Model Name | List | Input parameter. Select the embedding model to use.|
-| api_key | OpenAI API Key | Secret[String] | Input parameter. The API key required for authenticating with the provider. |
+| provider | Model Provider | List | Input parameter. Select the embedding model provider. Models are configured globally in the **Models** pane. |
+| model | Model Name | List | Input parameter. Select the embedding model to use. Options depend on the selected provider and are configured globally in the **Models** pane. |
| api_base | API Base URL | String | Input parameter. Base URL for the API. Leave empty for default. |
| dimensions | Dimensions | Integer | Input parameter. The number of dimensions for the output embeddings. |
| chunk_size | Chunk Size | Integer | Input parameter. The size of text chunks to process. Default: `1000`. |
@@ -74,7 +75,7 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
## Additional embedding models
-If your provider or model isn't supported by the **Embedding Model** core component, you can replace this component with any other component that generates embeddings.
+If your provider or model isn't available in Langflow's global **Models**, you can replace the **Embedding Model** core component with any other component that generates embeddings.
To find additional embedding model components, browse [**Bundles**](/components-bundle-components) or **Search** for your preferred provider.
diff --git a/docs/docs/Components/components-models.mdx b/docs/docs/Components/components-models.mdx
index 7128ef3e4..5d0ce0813 100644
--- a/docs/docs/Components/components-models.mdx
+++ b/docs/docs/Components/components-models.mdx
@@ -6,6 +6,7 @@ slug: /components-models
import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
Language model components in Langflow generate text using a specified Large Language Model (LLM).
These components accept inputs like chat messages, files, and instructions in order to generate a text response.
@@ -24,18 +25,23 @@ One of the most common use cases of language model components is to chat with LL
The following example uses a language model component in a chatbot flow similar to the **Basic Prompting** template.
-1. Add the **Language Model** core component to your flow, and then enter your OpenAI API key.
+1. <PartialGlobalModelProviders />
- This example uses the **Language Model** core component's default OpenAI model.
- If you want to use a different provider or model, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
:::tip My preferred provider or model isn't listed
- If you want to use a provider or model that isn't built-in to the **Language Model** core component, you can replace this component with any [additional language model](#additional-language-models).
+ If you want to use a provider or model that isn't built-in to Langflow's global **Models**, you can replace the **Language Model** component with any [additional language model component](#additional-language-models).
Browse [**Bundles**](/components-bundle-components) or **Search** for your preferred provider to find additional language models.
+
+ Alternatively, you can use Ollama to host your preferred model, and then configure your Ollama service in Langflow's global **Models**.
+ Or, create your own custom component to support any provider and model of your choice, and then use your custom component in place of the **Language Model** core component. As a shortcut, use an existing language model component as the basis for your custom component.
:::
-3. In the [component's header menu](/concepts-components#component-menus), click **Controls**, enable the **System Message** parameter, and then click **Close**.
+2. Add the **Language Model** core component to your flow, and then select your model from the **Language Model** field.
+
+ Optionally, to configure API keys and enable or disable models, click **Manage Model Providers** to open the **Model Providers** pane.
+
+3. In the [component inspection panel](/concepts-components#component-menus), enable the **System Message** parameter.
4. Add a [**Prompt Template** component](/components-prompts) to your flow.
@@ -65,8 +71,8 @@ These components are required for direct chat interaction with an LLM.
10. Optional: Try a different model or provider to see how the response changes.
-For example, if you are using the **Language Model** core component, you could try an Anthropic model.
+ If you enabled multiple models in Langflow's global **Model Providers** pane, select a different model in the **Language Model** field. To open the **Model Providers** pane, click your profile icon, select **Settings**, and then click **Model Providers**.
Then, open the **Playground**, ask the same question as you did before, and then compare the content and format of the responses.
This helps you understand how different models handle the same request so you can choose the best model for your use case.
@@ -103,7 +109,7 @@ For more information, see [Language Model output types](#language-model-output-t
-If you don't want to use the **Agent** component's built-in LLMs, you can use a language model component to connect your preferred model:
+If you don't want to use the **Agent** component's built-in LLM, you can use a language model component to connect your preferred model:
1. Add a language model component to your flow.
@@ -111,17 +117,14 @@ If you don't want to use the **Agent** component's built-in LLMs, you can use a
Components in bundles may not have `language model` in the name.
For example, Azure OpenAI LLMs are provided through the [**Azure OpenAI** component](/bundles-azure#azure-openai).
-2. Configure the language model component as needed to connect to your preferred model.
+2. Select your preferred model from the **Language Model** dropdown. The model must be configured globally in the **Models** pane.
3. Change the language model component's output type from **Model Response** to **Language Model**.
The output port changes to a `LanguageModel` port.
This is required to connect the language model component to the **Agent** component.
For more information, see [Language Model output types](#language-model-output-types).
-4. Add an **Agent** component to the flow, and then set **Model Provider** to **Connect other models**.
-
- The **Model Provider** field changes to a **Language Model** (`LanguageModel`) input.
-
+4. Add an **Agent** component to the flow.
5. Connect the language model component's output to the **Agent** component's **Language Model** input.
The **Agent** component now inherits the language model settings from the connected language model component instead of using any of the built-in models.
@@ -139,9 +142,8 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
| Name | Type | Description |
|------|------|-------------|
-| provider | String | Input parameter. The model provider to use. |
-| model_name | String | Input parameter. The name of the model to use. Options depend on the selected provider. |
-| api_key | SecretString | Input parameter. The API Key for authentication with the selected provider. |
+| provider | String | Input parameter. The model provider to use. Options depend on your global **Models** configuration. |
+| model_name | String | Input parameter. The name of the model to use. Options depend on the selected provider and your global **Models** configuration. |
| input_value | String | Input parameter. The input text to send to the model. |
| system_message | String | Input parameter. A system message that helps set the behavior of the assistant. |
| stream | Boolean | Input parameter. Whether to stream the response. Default: `false`. |
diff --git a/docs/docs/Components/components-prompts.mdx b/docs/docs/Components/components-prompts.mdx
index 1974e85c5..1ae357baf 100644
--- a/docs/docs/Components/components-prompts.mdx
+++ b/docs/docs/Components/components-prompts.mdx
@@ -19,10 +19,10 @@ The **Prompt Template** component can also output variable instructions to other
## Prompt Template parameters
-| Name | Display Name | Description |
-|----------|----------------|-------------------------------------------------------------------|
-| template | Template | Input parameter. Create a prompt template with dynamic variables in curly braces, such as `{VARIABLE_NAME}`. |
-| prompt | Prompt Message | Output parameter. The built prompt message returned by the `build_prompt` method. |
+| Name | Display Name | Description |
+|---------------------|---------------------|-------------------------------------------------------------------|
+| template | Template | Input parameter. Create a prompt template with dynamic variables in curly braces, such as `{VARIABLE_NAME}`. |
+| use_double_brackets | Use Double Brackets | Input parameter. When enabled, the template uses Mustache syntax `{{variable}}` instead of f-string syntax `{variable}`. For more information, see [Use Mustache templating in prompt templates](#use-mustache-templating-in-prompt-templates). |
## Define variables in prompts
@@ -70,6 +70,42 @@ The following steps demonstrate how to add variables to a **Prompt Template** co
You can add as many variables as you like in your template.
For example, you could add variables for `{references}` and `{instructions}`, and then feed that information in from other components, such as **Text Input**, **URL**, or **Read File** components.
+### Use Mustache templating in prompt templates
+
+F-string escaping can become confusing when you mix escaped braces with variables in the same template.
+For example:
+
+```text
+Generate a response in this JSON format:
+{{"name": "{name}", "age": {age}, "city": "{city}"}}
+
+The user's name is {name}, age is {age}, and they live in {city}.
+```
+
+The characters `{{` and `}}` are escaped literal braces for the JSON structure, but `{name}` is a variable.
+This can make prompts error-prone and difficult to parse.
+Use [Mustache](https://mustache.github.io) in your prompt templates to make the distinction between literal braces and variables clearer.
+
+To enable Mustache templating, do the following:
+
+1. In the **Prompt Template** component, enable **Use Double Brackets**.
+2. In your prompt template, change the variables from `{variable}` to `{{variable}}`.
+ Mustache uses `{` `}` for literal braces and `{{variable}}` for variables.
+
+ ```text
+ Generate a response in this JSON format:
+ {"name": "{{name}}", "age": {{age}}, "city": "{{city}}"}
+
+ The user's name is {{name}}, age is {{age}}, and they live in {{city}}.
+ ```
+
+3. Click **Check & Save**.
+ The component lints the template code and returns **Prompt is ready** if there are no errors.
+ Your prompt is now ready to use in a flow.
+
+Langflow supports variable replacement with double brackets, but does not support the full Mustache engine.
+The prompt component validation rejects syntax for other Mustache features such as loops and conditionals.
+
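As a rough illustration of why the double-bracket convention is easier to parse (this is not Langflow's actual rendering code), a minimal substitution function only replaces `{{variable}}` placeholders and leaves single braces untouched:

```python
import re

def render_double_brackets(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders; single braces pass through as literals.
    A minimal sketch of double-bracket variable replacement only, with no
    support for other Mustache features such as loops or conditionals."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"No value provided for variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

template = 'Respond as JSON: {"name": "{{name}}", "age": {{age}}}'
print(render_double_brackets(template, {"name": "Ada", "age": 36}))
# → Respond as JSON: {"name": "Ada", "age": 36}
```

With f-strings, the same template would need every literal brace doubled; here the JSON braces need no escaping at all.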
## See also
* [**LangChain Prompt Hub** component](/bundles-langchain#prompt-hub)
diff --git a/docs/docs/Components/concepts-components.mdx b/docs/docs/Components/concepts-components.mdx
index 2df03f7f4..2b1a4ce28 100644
--- a/docs/docs/Components/concepts-components.mdx
+++ b/docs/docs/Components/concepts-components.mdx
@@ -36,7 +36,13 @@ After adding a component to a flow, configure the component's parameters and con
Each component has inputs, outputs, parameters, and controls related to the component's purpose.
By default, components show only required and common options.
-To access additional settings and controls, including meta settings, use the [component's header menu](#component-header-menus).
+To access additional settings and controls, including meta settings, use the [component inspection panel](#component-inspection-panel).
+
+### Component inspection panel {#component-inspection-panel}
+
+When you select a component in the workspace, a component inspection panel appears on the right side of the screen.
+
+The inspection panel displays all of a component's parameters, including hidden or advanced parameters.
### Component header menus
@@ -44,11 +50,10 @@ To access a component's header menu, click the component in your workspace.

-A few options are available directly on the header menu.
-For example:
+The following options are available directly on the header menu:
- **Code**: Modify component settings by directly editing the component's Python code.
-- **Controls**: Adjust all component parameters, including optional settings that are hidden by default.
+- **Freeze**: Freeze a component and all upstream components to prevent re-running. For more information, see [Freeze a component](#freeze-a-component).
- **Tool Mode**: Enable this option when combining a component with an **Agent** component.
For all other options, including **Delete** and **Duplicate** controls, click **Show More**.
@@ -80,7 +85,7 @@ Use the freeze option if you expect consistent output from a component _and all
Freezing a component prevents that component and all upstream components from re-running, and it preserves the last output state for those components.
Any future flow runs use the preserved output.
-To freeze a component, click the component in the workspace to expose the component's header menu, click **Show More**, and then select **Freeze**.
+To freeze a component, click the component in the workspace to expose the component's header menu, and then click **Freeze**.
## Component ports
diff --git a/docs/docs/Components/guardrails.mdx b/docs/docs/Components/guardrails.mdx
new file mode 100644
index 000000000..f17781e33
--- /dev/null
+++ b/docs/docs/Components/guardrails.mdx
@@ -0,0 +1,64 @@
+---
+title: Guardrails
+slug: /guardrails
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+The **Guardrails** component validates input text against security and safety guardrails by issuing prompts to a language model (LLM) to check for violations.
+
+The component can validate input against the following guardrails:
+
+- **PII**: Detects personally identifiable information such as names, addresses, phone numbers, email addresses, social security numbers, credit card numbers, or other personal data.
+- **Tokens/Passwords**: Detects API tokens, passwords, API keys, access keys, secret keys, authentication credentials, or other sensitive credentials.
+- **Jailbreak**: Detects attempts to bypass AI safety guidelines, manipulate the model's behavior, or make it ignore its instructions.
+- **Offensive Content**: Detects offensive, hateful, discriminatory, violent, or inappropriate content.
+- **Malicious Code**: Detects potentially malicious code, scripts, exploits, or harmful commands.
+- **Prompt Injection**: Detects attempts to inject malicious prompts, override system instructions, or manipulate the AI's behavior through embedded instructions.
+
+When validation passes, the input continues through the **Pass** output.
+When validation fails, the input is blocked and sent through the **Fail** output with a justification explaining why it failed.
+
+The **Jailbreak** and **Prompt Injection** guardrails include additional heuristic detection first, and then fall back to LLM validation if needed. This additional stage identifies obvious patterns quickly and reduces API costs by avoiding unnecessary LLM calls for clear violations.
+
+The **Guardrails** component uses a language model to analyze input and can produce false positives or miss some violations.
+Use this component **in addition to** other data-sanitization best practices, such as personnel training and scripts that check for literal values or regex patterns, rather than as a sole safeguard.
+
+## Use the Guardrails component in a flow
+
+1. Connect a **Chat Input** or other text source to the **Guardrails** component's **Input Text** port.
+2. Select a **Language Model** to use for validation. The component uses the connected LLM to analyze the input text against the enabled guardrails.
+3. From the **Guardrails** dropdown, select one or more guardrails to enable.
+ For example, select **Tokens/Passwords** to block API keys and credentials.
+4. Connect the **Pass** output to components to receive validated input.
+5. Optional: Connect the **Fail** output to a component that handles blocked inputs, such as a [**Chat Output** component](/chat-input-and-output) or [**Write File** component](/write-file).
+
+## Create custom guardrails
+
+Use the **Enable Custom Guardrail** parameter to create your own guardrail validations.
+In the **Custom Guardrail Description** field, enter a natural language description of the disallowed data that you want to detect.
+
+Custom guardrails can work simultaneously with the built-in guardrails, and follow the same validation process.
+
+For example, to block inputs that mention competitor names or products, enter the following in the **Custom Guardrail Description** field:
+
+```
+competitor company names, competitor product names, or references to competing services
+```
+
+When this custom guardrail is enabled, the LLM analyzes the input text against your criteria. If it detects content matching your description, such as mentions of competitors, validation fails and the input is blocked. Otherwise, validation passes and the input continues through the **Pass** output.
+
+## Guardrails parameters
+
+
+
+| Name | Type | Description |
+|------|------|-------------|
+| Language Model (`model`) | `LanguageModel` | Input parameter. Connect a **Language Model** component to use as the driver for this component. The model reviews the data, compares it against the guardrails, and determines if any data is in violation of the guardrails. |
+| API Key (`api_key`) | Secret String | Input parameter. Model provider API key. Required if the model provider needs authentication. |
+| Guardrails (`enabled_guardrails`) | Multiselect | Input parameter. Select one or more security guardrails to validate the input against. Options: `PII`, `Tokens/Passwords`, `Jailbreak`, `Offensive Content`, `Malicious Code`, `Prompt Injection`. Default: `["PII", "Tokens/Passwords", "Jailbreak"]`. |
+| Input Text (`input_text`) | Multiline String | Input parameter. The text to validate against guardrails. Accepts `Message` input types. |
+| Enable Custom Guardrail (`enable_custom_guardrail`) | Boolean | Input parameter. Enable a custom guardrail with your own validation criteria. Default: `false`. |
+| Custom Guardrail Description (`custom_guardrail_explanation`) | Multiline String | Input parameter. Describe what the custom guardrail should check for. This description is used by the LLM to validate the input. Be specific and clear about what you want to detect. Only used when `enable_custom_guardrail` is `true`. |
+| Heuristic Detection Threshold (`heuristic_threshold`) | Slider | Input parameter. Score threshold (0.0-1.0) for heuristic jailbreak/prompt injection detection. Strong patterns such as "ignore instructions" and "jailbreak" have high weights, while weak patterns such as "bypass" and "act as" have low weights. If the cumulative score meets or exceeds this threshold, the input fails immediately. Lower values are more strict. Higher values defer more cases to LLM validation. Default: `0.7`. |
\ No newline at end of file
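The cumulative scoring described for the **Heuristic Detection Threshold** parameter can be sketched as follows. The patterns and weights here are purely illustrative, not Langflow's actual lists:

```python
# Illustrative pattern weights; Langflow's real patterns and weights differ.
PATTERN_WEIGHTS = {
    "ignore instructions": 0.8,  # strong signal
    "jailbreak": 0.8,            # strong signal
    "bypass": 0.2,               # weak signal
    "act as": 0.2,               # weak signal
}

def heuristic_score(text: str) -> float:
    """Sum the weights of all patterns found in the input."""
    lowered = text.lower()
    return sum(w for pattern, w in PATTERN_WEIGHTS.items() if pattern in lowered)

def fails_heuristic(text: str, threshold: float = 0.7) -> bool:
    """Fail immediately when the cumulative score meets or exceeds the
    threshold; otherwise the input is deferred to LLM validation."""
    return heuristic_score(text) >= threshold

print(fails_heuristic("Please ignore instructions and jailbreak."))  # strong patterns
print(fails_heuristic("Can you act as a tour guide?"))               # weak pattern only
```

The first input accumulates two strong weights and fails instantly, with no LLM call; the second scores below the threshold, so the real component would pass it on to LLM validation.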
diff --git a/docs/docs/Components/if-else.mdx b/docs/docs/Components/if-else.mdx
index 7f459076e..1fcd0c1cc 100644
--- a/docs/docs/Components/if-else.mdx
+++ b/docs/docs/Components/if-else.mdx
@@ -39,7 +39,7 @@ The following example uses the **If-Else** component to check incoming chat mess
* **Operator**: Select **regex**.
- * **Case True**: In the [component's header menu](/concepts-components#component-menus), click **Controls**, enable the **Case True** parameter, click **Close**, and then enter `New Message Detected` in the field.
+ * **Case True**: In the [component inspection panel](/concepts-components#component-menus), enable the **Case True** parameter, and then enter `New Message Detected` in the field.
The **Case True** message is sent from the **True** output port when the match condition evaluates to true.
diff --git a/docs/docs/Components/knowledge-base.mdx b/docs/docs/Components/knowledge-base.mdx
new file mode 100644
index 000000000..b50c2a79f
--- /dev/null
+++ b/docs/docs/Components/knowledge-base.mdx
@@ -0,0 +1,46 @@
+---
+title: Knowledge Base
+slug: /knowledge-base
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialKbSummary from '@site/docs/_partial-kb-summary.mdx';
+
+
+
+The **Knowledge Base** component reads from an existing knowledge base using semantic search.
+
+The output is a [`DataFrame`](/data-types#dataframe) containing the top matching results from the queried knowledge base.
+
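Conceptually, semantic search ranks the stored rows by the similarity of their embeddings to the query embedding and returns the `top_k` best matches. A minimal sketch with toy three-dimensional embeddings (real embeddings have hundreds or thousands of dimensions, and the component's ranking internals may differ):

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k_search(query_vec: list, rows: list, top_k: int = 5) -> list:
    """Return the content of the top_k rows most similar to the query."""
    scored = [(cosine(query_vec, row["embedding"]), row["content"]) for row in rows]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [content for _, content in scored[:top_k]]

# Toy knowledge base rows with precomputed embeddings.
rows = [
    {"content": "laptop with 16GB RAM", "embedding": [0.9, 0.1, 0.0]},
    {"content": "wireless mouse",       "embedding": [0.2, 0.9, 0.1]},
    {"content": "office chair",         "embedding": [0.0, 0.1, 0.9]},
]
print(top_k_search([0.8, 0.2, 0.0], rows, top_k=2))
# → ['laptop with 16GB RAM', 'wireless mouse']
```

In the component, the query string is first embedded with the knowledge base's configured embedding model; the ranking step then works on vectors like this one.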
+## Knowledge Base parameters
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| knowledge_base | Knowledge | Input parameter. Select the knowledge base to retrieve data from. |
+| api_key | Embedding Provider API Key | Input parameter. Optional API key for the embedding provider to override a previously-provided key. The embedding provider and model are chosen when you create a knowledge base. |
+| search_query | Search Query | Input parameter. Optional search query to filter knowledge base data using semantic similarity. If omitted, the top results are returned from an arbitrary sort. |
+| top_k | Top K Results | Input parameter. Number of search results to return. Default: `5`. |
+| include_metadata | Include Metadata | Input parameter. Whether to include all metadata and embeddings in the output. If enabled, each output row includes all metadata, embeddings, and content. If disabled, only the content is returned. Default: Enabled (true). |
+
+## Use the Knowledge Base component in a flow
+
+After you create and load data to a [knowledge base](/knowledge), you can use the **Knowledge Base** component in any flow to retrieve data from your knowledge base using semantic search:
+
+1. Add a **Knowledge Base** component to your flow.
+
+2. In the **Knowledge** field, select the knowledge base you want to search, such as the customer sales data knowledge base created in the previous steps.
+
+3. To view the search results as chat messages, connect the **Results** output to a **Chat Output** component.
+
+4. In **Search query**, enter a query that relates to your embedded data.
+
+ For the customer sales data example, enter a product name like `laptop` or `wireless devices`.
+
+5. Click **Run component** on the **Knowledge Base** component, and then open the **Playground** to view the output.
+
+## See also
+
+* [Manage vector data](/knowledge)
\ No newline at end of file
diff --git a/docs/docs/Components/message-history.mdx b/docs/docs/Components/message-history.mdx
index fc6071e3f..12b508ab6 100644
--- a/docs/docs/Components/message-history.mdx
+++ b/docs/docs/Components/message-history.mdx
@@ -42,9 +42,9 @@ The following steps explain how to create a chat-based flow that uses **Message
2. At the beginning of the flow, add a **Message History** component, and then set it to **Retrieve** mode.
-3. Optional: In the **Message History** [component's header menu](/concepts-components#component-menus), click **Controls** to enable parameters for memory sorting, filtering, and limits.
+3. Optional: To enable parameters for memory sorting, filtering, and limits, click the **Message History** component to expose the [component inspection panel](/concepts-components#component-menus).
-3. Add a **Prompt Template** component, add a `{memory}` variable to the **Template** field, and then connect the **Message History** output to the **memory** input.
+4. Add a **Prompt Template** component, add a `{memory}` variable to the **Template** field, and then connect the **Message History** output to the **memory** input.
The **Prompt Template** component supplies instructions and context to LLMs, separate from chat messages passed through a **Chat Input** component.
The template can include any text and variables that you want to supply to the LLM, for example:
@@ -64,19 +64,19 @@ The following steps explain how to create a chat-based flow that uses **Message
In this example, the `{memory}` variable is populated by the retrieved chat memories, which are then passed to a **Language Model** or **Agent** component to provide additional context to the LLM.
-4. Connect the **Prompt Template** component's output to a **Language Model** component's **System Message** input.
+5. Connect the **Prompt Template** component's output to a **Language Model** component's **System Message** input.
This example uses the **Language Model** core component as the central chat driver, but you can also use another language model component or the **Agent** component.
-5. Add a **Chat Input** component, and then connect it to the **Language Model** component's **Input** field.
+6. Add a **Chat Input** component, and then connect it to the **Language Model** component's **Input** field.
-6. Connect the **Language Model** component's output to a **Chat Output** component.
+7. Connect the **Language Model** component's output to a **Chat Output** component.
-7. At the end of the flow, add another **Message History** component, and then set it to **Store** mode.
+8. At the end of the flow, add another **Message History** component, and then set it to **Store** mode.
Configure any additional parameters in the second **Message History** component as needed, taking into consideration that this particular component will store chat messages rather than retrieve them.
-8. Connect the **Chat Output** component's output to the **Message History** component's **Message** input.
+9. Connect the **Chat Output** component's output to the **Message History** component's **Message** input.
Each response from the LLM is output from the **Language Model** component to the **Chat Output** component, and then stored in chat memory by the final **Message History** component.
@@ -94,7 +94,7 @@ Other options include the [**Mem0 Chat Memory** component](/bundles-mem0) and [*
1. Configure the **Redis Chat Memory** component to connect to your Redis database. For more information, see the [Redis documentation](https://redis.io/docs/latest/).
2. Set the **Message History** component to **Retrieve** mode.
- 3. In the **Message History** [component's header menu](/concepts-components#component-menus), click **Controls**, enable **External Memory**, and then click **Close**.
+ 3. In the **Message History** [component inspection pane](/concepts-components#component-menus), enable **External Memory**.
-   In **Controls**, you can also enable parameters for memory sorting, filtering, and limits.
+   In the component inspection pane, you can also enable parameters for memory sorting, filtering, and limits.
@@ -132,7 +132,7 @@ Other options include the [**Mem0 Chat Memory** component](/bundles-mem0) and [*
1. Configure the **Redis Chat Memory** component to connect to your Redis database.
2. Set the **Message History** component to **Store** mode.
- 3. In the **Message History** [component's header menu](/concepts-components#component-menus), click **Controls**, enable **External Memory**, and then click **Close**.
+ 3. In the **Message History** [component inspection pane](/concepts-components#component-menus), enable **External Memory**.
Configure any additional parameters in this component as needed, taking into consideration that this particular component will store chat messages rather than retrieve them.
diff --git a/docs/docs/Components/read-file.mdx b/docs/docs/Components/read-file.mdx
index c16e0d955..b4d9fdb91 100644
--- a/docs/docs/Components/read-file.mdx
+++ b/docs/docs/Components/read-file.mdx
@@ -114,7 +114,7 @@ To use advanced parsing, do the following:
3. Enable **Advanced Parsing**.
-4. Click **Controls** in the [component's header menu](/concepts-components#component-menus) to configure advanced parsing parameters, which are hidden by default:
+4. To configure advanced parsing parameters, click the component to open the [component inspection panel](/concepts-components#component-menus).
| Name | Display Name | Info |
|------|--------------|------|
diff --git a/docs/docs/Components/smart-transform.mdx b/docs/docs/Components/smart-transform.mdx
index c06fc1a33..592495845 100644
--- a/docs/docs/Components/smart-transform.mdx
+++ b/docs/docs/Components/smart-transform.mdx
@@ -12,15 +12,17 @@ import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
This component has been renamed multiple times.
Its previous names include **Lambda Filter** and **Smart Function**.
-The **Smart Transform** component uses an LLM to generate a Lambda function to filter or transform structured data based on natural language instructions.
+The **Smart Transform** component uses an LLM and natural language instructions to generate a Lambda function that can filter or transform [`Data`](/data-types#data), [`DataFrame`](/data-types#dataframe), or [`Message`](/data-types#message) input.
You must connect this component to a [language model component](/components-models), which is used to generate a function based on the natural language instructions you provide in the **Instructions** parameter.
-The LLM runs the function against the data input, and then outputs the results as [`Data`](/data-types#data).
+The LLM runs the function against the input, and then outputs the results as [`Data`](/data-types#data), [`DataFrame`](/data-types#dataframe), or [`Message`](/data-types#message).
:::tip
Provide brief, clear instructions, focusing on the desired outcome or specific actions, such as `Filter the data to only include items where the 'status' is 'active'`.
One sentence or less is preferred because end punctuation, like periods, can cause errors or unexpected behavior.
-If you need to provide more details instructions that aren't directly relevant to the Lambda function, you can input them in the **Language Model** component's **Input** field or through a **Prompt Template** component.
+If you need to provide more detailed instructions that aren't directly relevant to the Lambda function, you can input them in the **Language Model** component's **Input** field or through a **Prompt Template** component.
+
+For the most reliable results, the **Smart Transform** component's output type must match the input type. For example, select **Message** output for [`Message`](/data-types#message) input.
:::
The following example uses the **API Request** endpoint to pass JSON data from the `https://jsonplaceholder.typicode.com/users` endpoint to the **Smart Transform** component.
@@ -35,9 +37,8 @@ From there, the LLM generates a filter function that extracts email addresses fr
| Name | Display Name | Info |
|------|--------------|------|
-| data | Data | Input parameter. The structured data to filter or transform using a Lambda function. |
-| llm | Language Model | Input parameter. Connect [`LanguageModel`](/data-types#languagemodel) output from a **Language Model** component. |
+| data | Data | Input parameter. The [`Data`](/data-types#data), [`DataFrame`](/data-types#dataframe), or [`Message`](/data-types#message) input to filter or transform using the generated Lambda function. |
+| model | Language Model | Input parameter. Connect [`LanguageModel`](/data-types#languagemodel) output from a **Language Model** component. |
| filter_instruction | Instructions | Input parameter. The natural language instructions for how to filter or transform the data. The LLM uses these instructions to create a Lambda function. |
| sample_size | Sample Size | Input parameter. For large datasets, the number of characters to sample from the dataset head and tail. Only applied if the dataset meets or exceeds `max_size`. Default: `1000`. |
-| max_size | Max Size | Input parameter. The number of characters for the dataset to be considered large, which triggers sampling by the `sample_size` value. Default: `30000`. |
-
+| max_size | Max Size | Input parameter. The number of characters for the dataset to be considered large, which triggers sampling by the `sample_size` value. Default: `30000`. |
\ No newline at end of file
diff --git a/docs/docs/Components/url.mdx b/docs/docs/Components/url.mdx
index 67f527d08..af1522a71 100644
--- a/docs/docs/Components/url.mdx
+++ b/docs/docs/Components/url.mdx
@@ -10,7 +10,7 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
The **URL** component fetches content from one or more URLs, processes the content, and returns it in various formats.
-It follows links recursively to a given depth, and it supports output in plain text or raw HTML.
+It follows links recursively to a given depth, and it supports output in plain text, Markdown, or raw HTML.
## URL parameters
@@ -24,7 +24,7 @@ Some of the available parameters include the following:
| max_depth | Depth | Input parameter. Controls link traversal: how many "clicks" away from the initial page the crawler will go. A depth of 1 limits the crawl to the first page at the given URL only. A depth of 2 means the crawler crawls the first page plus each page directly linked from the first page, then stops. This setting exclusively controls link traversal; it doesn't limit the number of URL path segments or the domain. |
| prevent_outside | Prevent Outside | Input parameter. If enabled, only crawls URLs within the same domain as the root URL. This prevents the crawler from accessing sites outside the given URL's domain, even if they are linked from one of the crawled pages. |
| use_async | Use Async | Input parameter. If enabled, uses asynchronous loading which can be significantly faster but might use more system resources. |
-| format | Output Format | Input parameter. Sets the desired output format as **Text** or **HTML**. The default is **Text**. For more information, see [URL output](#url-output).|
+| format | Output Format | Input parameter. Sets the desired output format as **Text**, **Markdown**, or **HTML**. The default is **Text**. For more information, see [URL output](#url-output).|
| timeout | Timeout | Input parameter. Timeout for the request in seconds. |
| headers | Headers | Input parameter. The headers to send with the request if needed for authentication or otherwise. |
@@ -37,12 +37,13 @@ There are two settings that control the output of the **URL** component at diffe
* **Output Format**: This optional parameter controls the content extracted from the crawled pages:
* **Text (default)**: The component extracts only the text from the HTML of the crawled pages.
+ * **Markdown**: The component converts the HTML content to Markdown format using [MarkItDown](https://github.com/microsoft/markitdown).
* **HTML**: The component extracts the entire raw HTML content of the crawled pages.
* **Output data type**: In the component's output field (near the output port) you can select the structure of the outgoing data when it is passed to other components:
* **Extracted Pages**: Outputs a [`DataFrame`](/data-types#dataframe) that breaks the crawled pages into columns for the entire page content (`text`) and metadata like `url` and `title`.
- * **Raw Content**: Outputs a [`Message`](/data-types#message) containing the entire text or HTML from the crawled pages, including metadata, in a single block of text.
+ * **Raw Content**: Outputs a [`Message`](/data-types#message) containing the entire text, Markdown, or HTML from the crawled pages, including metadata, in a single block of text.
When used as a standard component in a flow, the **URL** component must be connected to a component that accepts the selected output data type (`DataFrame` or `Message`).
You can connect the **URL** component directly to a compatible component, or you can use a [**Type Convert** component](/type-convert) to convert the output to another type before passing the data to other components if the data types aren't directly compatible.
diff --git a/docs/docs/Develop/api-keys-and-authentication.mdx b/docs/docs/Develop/api-keys-and-authentication.mdx
index 83f445d69..826ac3d0f 100644
--- a/docs/docs/Develop/api-keys-and-authentication.mdx
+++ b/docs/docs/Develop/api-keys-and-authentication.mdx
@@ -150,6 +150,8 @@ This section describes the available authentication configuration variables.
You can use the [`.env.example`](https://github.com/langflow-ai/langflow/blob/main/.env.example) file in the Langflow repository as a template for your own `.env` file.
+For JWT authentication configuration, including algorithm selection and key management, see [JWT authentication](/jwt-authentication).
+
### LANGFLOW_AUTO_LOGIN {#langflow-auto-login}
This variable controls whether authentication is required to access your Langflow server, including the visual editor, API, and Langflow CLI:
@@ -207,8 +209,9 @@ These defaults don't apply when using the Langflow CLI command [`langflow superu
### LANGFLOW_SECRET_KEY {#langflow-secret-key}
-This environment variable stores a secret key used for encrypting sensitive data like API keys.
+This environment variable stores a secret key used for encrypting sensitive data like API keys and for JWT signing when using the HS256 algorithm.
Langflow uses the [Fernet](https://pypi.org/project/cryptography/) library for secret key encryption.
+For JWT-specific configuration, see [JWT authentication](/jwt-authentication).
If no secret key is provided, Langflow automatically generates one.
@@ -273,6 +276,13 @@ To generate a secret encryption key for `LANGFLOW_SECRET_KEY`, do the following:
- LANGFLOW_SECRET_KEY=${LANGFLOW_SECRET_KEY}
```
+#### Rotate the secret key {#rotating-the-secret-key}
+
+Rotate `LANGFLOW_SECRET_KEY` if the key might have been compromised and as part of your routine credential management practices.
+Langflow provides a migration script that re-encrypts stored credentials and other sensitive data with a new key so you can rotate without losing access.
+
+For more information, see [Secret Key Rotation](https://github.com/langflow-ai/langflow/blob/main/SECURITY.md#secret-key-rotation) in the Langflow Security Policy.
+
### LANGFLOW_NEW_USER_IS_ACTIVE {#langflow-new-user-is-active}
When `LANGFLOW_NEW_USER_IS_ACTIVE=False` (default), accounts created by superusers are inactive by default and must be explicitly activated before users can sign in to the visual editor.
@@ -553,3 +563,4 @@ Next, you can add users to your Langflow server to collaborate with others on fl
## See also
* [Langflow environment variables](/environment-variables)
+* [Langflow Security Policy](https://github.com/langflow-ai/langflow/blob/main/SECURITY.md) — reporting vulnerabilities, security configuration, and [secret key rotation](https://github.com/langflow-ai/langflow/blob/main/SECURITY.md#secret-key-rotation)
diff --git a/docs/docs/Develop/install-custom-dependencies.mdx b/docs/docs/Develop/install-custom-dependencies.mdx
index 0ade1c779..c354a02b1 100644
--- a/docs/docs/Develop/install-custom-dependencies.mdx
+++ b/docs/docs/Develop/install-custom-dependencies.mdx
@@ -5,10 +5,11 @@ slug: /install-custom-dependencies
Langflow provides optional dependency groups and support for custom dependencies to extend Langflow functionality. This guide covers how to add dependencies for different Langflow installations, including Langflow Desktop and Langflow OSS.
-The Langflow codebase uses two `pyproject.toml` files to manage dependencies, with one for `base` and one for `main`:
+The Langflow codebase uses three packages, each with its own `pyproject.toml` file:
-* The `main` package is managed by the root level `pyproject.toml`, and it includes end-user features and main application code, such as Langchain and OpenAI.
-* The `base` package is managed at `src/backend/base/pyproject.toml`, and it includes core infrastructure, such as the FastAPI web framework.
+* The `main` package (`langflow`) is managed by the root level `pyproject.toml`, and it includes end-user features and main application code, such as LangChain and OpenAI. The `main` package depends on the `base` package.
+* The `base` package (`langflow-base`) is managed at `src/backend/base/pyproject.toml`, and it includes core infrastructure, such as the FastAPI web framework. The `base` package depends on the `lfx` package.
+* The `lfx` package is managed at `src/lfx/pyproject.toml`. LFX is a lightweight CLI tool for executing and serving Langflow flows. The `lfx` package does not provide optional dependency groups for end users.
## Install custom dependencies in Langflow Desktop {#langflow-desktop}
@@ -33,13 +34,15 @@ If you're working within a cloned Langflow repository, add dependencies with `uv
uv add DEPENDENCY
```
-### Install optional dependency groups
+### Install optional dependency groups for `langflow`
-Langflow OSS provides optional dependency groups that extend its functionality.
+The `langflow` package (main) provides optional dependency groups that extend its functionality.
-These dependencies are listed in the [pyproject.toml](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml#L194) file under `[project.optional-dependencies]`.
+Installing `langflow` without any extras includes all dependencies listed in the `[project.dependencies]` section; optional dependency groups are not installed unless explicitly requested.
-Install dependency groups using pip's `[extras]` syntax. For example, to install Langflow with the `postgresql` dependency group, enter the following command:
+These optional dependencies are listed in the [langflow `pyproject.toml`](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml) file under `[project.optional-dependencies]`.
+
+Install dependency groups using pip's `[extras]` syntax. For example, to install `langflow` with the `postgresql` dependency group, enter the following command:
```bash
uv pip install "langflow[postgresql]"
@@ -48,14 +51,42 @@ uv pip install "langflow[postgresql]"
To install multiple extras, use commas to separate each dependency group:
```bash
-uv pip install "langflow[local,postgresql]"
+uv pip install "langflow[postgresql,openai]"
+```
+
+### Install optional dependency groups for `langflow-base`
+
+`langflow-base` is recommended when you want to deploy Langflow with specific dependencies only.
+It provides the core Langflow codebase; the `langflow` package includes `langflow-base` as a dependency and adds many additional dependencies on top of it.
+
+The `langflow-base` package provides its own optional dependency groups that are separate from those in the `langflow` package. The `langflow-base` package can be installed as a standalone package with these optional dependency groups.
+
+Installing `langflow-base` without any extras includes all dependencies listed in the `[project.dependencies]` section; optional dependency groups are not installed unless explicitly requested.
+These optional dependency groups are listed in the [langflow-base `pyproject.toml`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/pyproject.toml) file under `[project.optional-dependencies]`.
+
+Install `langflow-base` with optional dependency groups using pip's `[extras]` syntax. For example, to install `langflow-base` with the `postgresql` dependency group:
+
+```bash
+uv pip install "langflow-base[postgresql]"
+```
+
+To install multiple extras, use commas to separate each dependency group:
+
+```bash
+uv pip install "langflow-base[postgresql,openai]"
+```
+
+To install all optional dependencies for `langflow-base`, use the `complete` extra:
+
+```bash
+uv pip install "langflow-base[complete]"
```
### Use a virtual environment to test custom dependencies
When testing locally, use a virtual environment to isolate your dependencies and prevent conflicts with other Python projects.
-For example, if you want to experiment with `matplotlib` with Langflow:
+For example, if you want to experiment with a custom dependency like `matplotlib` with Langflow:
```bash
# Create and activate a virtual environment
@@ -66,20 +97,25 @@ source YOUR_LANGFLOW_VENV/bin/activate
uv pip install langflow matplotlib
```
+You can also install `langflow-base` with specific optional dependency groups in your virtual environment:
+
+```bash
+# Install langflow-base with only the dependencies you need
+uv pip install "langflow-base[postgresql,openai]" matplotlib
+```
+
If you're working within a cloned Langflow repository, add dependencies with `uv add` to reference the existing `pyproject.toml` files:
```bash
uv add matplotlib
```
-The `uv add` commands automatically update the `uv.lock` file in the appropriate location.
+The `uv add` command automatically updates the `uv.lock` file in the appropriate location.
## Add dependencies to the Langflow codebase
When contributing to the Langflow codebase, you might need to add dependencies to Langflow.
-Langflow uses a workspace with two packages, each with different types of dependencies.
-
To add a dependency to the `main` package, run `uv add DEPENDENCY` from the project root.
For example:
@@ -91,7 +127,7 @@ Dependencies can be added to the `main` package as regular dependencies at `[pro
To add a dependency to the `base` package, navigate to `src/backend/base` and run:
```bash
-cd src/backend/base && uv add DEPENDENCY
+uv add DEPENDENCY
```
To add a development dependency for testing, linting, or debugging, navigate to `src/backend/base` and run:
diff --git a/docs/docs/Develop/jwt-authentication.mdx b/docs/docs/Develop/jwt-authentication.mdx
new file mode 100644
index 000000000..9faf162b2
--- /dev/null
+++ b/docs/docs/Develop/jwt-authentication.mdx
@@ -0,0 +1,331 @@
+---
+title: JWT authentication
+slug: /jwt-authentication
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+Langflow supports symmetric and asymmetric JSON Web Token (JWT) signing for user authentication and authorization.
+
+JWT is an [open standard](https://tools.ietf.org/html/rfc7519) for securely transmitting information between parties as a JSON object.
+Use JWT to create credentials that automatically expire, enable stateless authentication without database storage, and work across distributed systems.
+
+JWT authentication with the HS256 algorithm is enabled by default, but can be configured further with the `LANGFLOW_ALGORITHM` environment variable.
+
+
+About the JWT structure and contents
+
+When a user logs in with their username and password at the `/api/v1/login` endpoint, Langflow validates the credentials and creates a JWT containing the user's identity and expiration time. This token is then used for subsequent API requests instead of sending credentials with each request.
+
+A JWT consists of three parts separated by dots (`.`):
+
+```
+eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
+```
+
+* The header contains the token type and signing algorithm.
+* The payload contains _claims_, which are token data for user information and expiration time.
+* The signature is computed from the header and payload with the signing key, and it ensures the token hasn't been tampered with.
+
+Each part of the JWT is Base64URL-encoded.
+You can paste this example JWT into [jwt.io](https://jwt.io/) to decode the JSON data.
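+
+To inspect the claims locally instead, you can decode the token with only the Python standard library. This is a minimal sketch using the example token above; note that decoding only reveals the claims, it does not verify the signature:
+
+```python
+import base64
+import json
+
+token = (
+    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
+    "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ."
+    "SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c"
+)
+header_b64, payload_b64, signature_b64 = token.split(".")
+
+# Base64URL data must be padded to a multiple of 4 before decoding
+def b64url_decode(part: str) -> bytes:
+    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))
+
+print(json.loads(b64url_decode(header_b64)))   # {'alg': 'HS256', 'typ': 'JWT'}
+print(json.loads(b64url_decode(payload_b64)))  # {'sub': '1234567890', 'name': 'John Doe', 'iat': 1516239022}
+```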
+
+
+
+## Configure JWT environment variables
+
+Configure JWT authentication in Langflow using the following environment variables:
+
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `LANGFLOW_ALGORITHM` | JWT signing algorithm (`HS256`, `RS256`, or `RS512`) | `HS256` |
+| `LANGFLOW_SECRET_KEY` | Secret key for HS256 signing | Auto-generated |
+| `LANGFLOW_PRIVATE_KEY` | RSA private key for RS256/RS512 signing | Auto-generated |
+| `LANGFLOW_PUBLIC_KEY` | RSA public key for RS256/RS512 verification | Derived from private key |
+| `LANGFLOW_ACCESS_TOKEN_EXPIRE_SECONDS` | Access token expiration time | `3600` (1 hour) |
+| `LANGFLOW_REFRESH_TOKEN_EXPIRE_SECONDS` | Refresh token expiration time | `604800` (7 days) |
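+
+For example, a minimal `.env` sketch that keeps the default HS256 algorithm but shortens the token lifetimes might look like the following. The key value is a placeholder:
+
+```bash
+LANGFLOW_ALGORITHM=HS256
+LANGFLOW_SECRET_KEY="your-custom-secret-key"
+LANGFLOW_ACCESS_TOKEN_EXPIRE_SECONDS=1800     # 30 minutes
+LANGFLOW_REFRESH_TOKEN_EXPIRE_SECONDS=259200  # 3 days
+```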
+
+## Configure signing algorithms
+
+Langflow supports multiple signing algorithms and both symmetric (HS256) and asymmetric (RS256, RS512) JWTs.
+
+Which algorithm you choose depends on your deployment's requirements.
+
+### HS256 (default)
+
+HS256 is the default JWT algorithm and provides good security for single-server deployments.
+Langflow automatically generates and persists a secret key.
+No configuration is necessary, but if you want to explicitly set it in the Langflow `.env`, the default value is `LANGFLOW_ALGORITHM=HS256`.
+
+To generate a custom secure key instead of using the Langflow-generated secret key, do the following:
+
+1. Generate a secure secret key with the Python secrets module or OpenSSL.
+ The key must be at least 32 characters long.
+
+ **Using Python:**
+
+ ```bash
+ python -c "import secrets; print(secrets.token_urlsafe(32))"
+ ```
+
+ **Using OpenSSL:**
+
+ ```bash
+ openssl rand -base64 32
+ ```
+
+2. Set the value for `LANGFLOW_SECRET_KEY` in your `.env` file.
+ ```bash
+ LANGFLOW_ALGORITHM="HS256"
+ LANGFLOW_SECRET_KEY="your-custom-secret-key"
+ ```
+
+### RS256
+
+The RS256 signing algorithm provides better security for production deployments by using a pair of private and public keys.
+The private key signs tokens, and the public key verifies them.
+The private key must be kept secret, while the public key can be safely shared.
+
+To automatically generate a private and public key pair and store it in the Langflow [`LANGFLOW_CONFIG_DIR`](/logging), set `LANGFLOW_ALGORITHM="RS256"` in your Langflow `.env`.
+When Langflow starts, it does the following:
+1. Checks if RSA keys exist in the configuration directory.
+2. If not, generates a new 2048-bit RSA key pair.
+3. Saves the keys to `private_key.pem` and `public_key.pem`.
+4. Reuses the same keys on subsequent startups.
+
+To use a custom private key instead of the auto-generated keys, set the following in your `.env` file.
+The `LANGFLOW_PUBLIC_KEY` value is automatically derived from the private key.
+
+```bash
+LANGFLOW_ALGORITHM=RS256
+LANGFLOW_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
+MIIEvgIBADANBgkqhkiG9w0BAQEF...
+-----END PRIVATE KEY-----"
+```
+
+To use a custom key pair, set both keys in your Langflow `.env` file.
+
+```bash
+LANGFLOW_ALGORITHM=RS256
+LANGFLOW_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
+MIIEvgIBADANBgkqhkiG9w0BAQEF...
+-----END PRIVATE KEY-----"
+LANGFLOW_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----
+MIIBIjANBgkqhkiG9w0BAQEFAAOC...
+-----END PUBLIC KEY-----"
+```
+
+To generate an RSA key pair manually, do the following:
+
+1. Generate a 2048-bit private key:
+ ```bash
+ openssl genrsa -out private_key.pem 2048
+ ```
+
+2. Extract the public key from the private key:
+ ```bash
+ openssl rsa -in private_key.pem -pubout -out public_key.pem
+ ```
+
+3. Verify the keys were created:
+ ```bash
+ cat private_key.pem
+ cat public_key.pem
+ ```
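+
+   To confirm that the files form a matching key pair, you can also compare the key moduli. Identical digests indicate that the public key corresponds to the private key:
+
+   ```bash
+   openssl rsa -in private_key.pem -noout -modulus | openssl md5
+   openssl rsa -pubin -in public_key.pem -noout -modulus | openssl md5
+   ```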
+
+### RS512
+
+RS512 uses the same RSA private and public key format as RS256, but uses the SHA-512 hashing algorithm for greater security.
+The private key signs tokens, and the public key verifies them.
+The private key must be kept secret, while the public key can be safely shared.
+
+To automatically generate a private and public key pair and store it in the Langflow [`LANGFLOW_CONFIG_DIR`](/logging), set `LANGFLOW_ALGORITHM="RS512"` in your Langflow `.env`.
+When Langflow starts, it does the following:
+1. Checks if RSA keys exist in the configuration directory.
+2. If not, generates a new 2048-bit RSA key pair.
+3. Saves the keys to `private_key.pem` and `public_key.pem`.
+4. Reuses the same keys on subsequent startups.
+
+To use a custom private key instead of the auto-generated keys, set the following in your `.env` file.
+The `LANGFLOW_PUBLIC_KEY` value is automatically derived from the private key.
+
+```bash
+LANGFLOW_ALGORITHM=RS512
+LANGFLOW_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
+MIIEvgIBADANBgkqhkiG9w0BAQEF...
+-----END PRIVATE KEY-----"
+```
+
+To use a custom key pair, set both keys in your Langflow `.env` file.
+
+```bash
+LANGFLOW_ALGORITHM=RS512
+LANGFLOW_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
+MIIEvgIBADANBgkqhkiG9w0BAQEF...
+-----END PRIVATE KEY-----"
+LANGFLOW_PUBLIC_KEY="-----BEGIN PUBLIC KEY-----
+MIIBIjANBgkqhkiG9w0BAQEFAAOC...
+-----END PUBLIC KEY-----"
+```
+
+To generate an RSA key pair manually, do the following:
+
+1. Generate a 2048-bit private key:
+ ```bash
+ openssl genrsa -out private_key.pem 2048
+ ```
+
+2. Extract the public key from the private key:
+ ```bash
+ openssl rsa -in private_key.pem -pubout -out public_key.pem
+ ```
+
+3. Verify the keys were created:
+ ```bash
+ cat private_key.pem
+ cat public_key.pem
+ ```
+
+## Configure Docker and Kubernetes deployments
+
+Use Docker with HS256 (symmetric) for single-server deployments or development environments where simplicity is preferred.
+
+Use Docker or Kubernetes with RS256 (asymmetric) for production deployments requiring enhanced security with private/public key pairs.
+
+### Docker with HS256
+
+1. Add the value for your JWT secret key to the Langflow `.env` file.
+ ```bash
+ JWT_SECRET_KEY=your-secret-key
+ ```
+
+2. Set the signing algorithm and include a variable for the secret key in the Docker Compose file.
+ ```yaml
+ version: "3.8"
+ services:
+ langflow:
+ image: langflowai/langflow:latest
+ environment:
+ - LANGFLOW_ALGORITHM=HS256
+ - LANGFLOW_SECRET_KEY=${JWT_SECRET_KEY} # Set in .env file
+ volumes:
+ - langflow_data:/app/langflow
+
+ volumes:
+ langflow_data:
+ ```
+
+### Docker with RS256
+
+To use Langflow's automatically generated key pair, set the `RS256` signing algorithm in the Docker Compose file.
+
+```yaml
+# docker-compose.yml
+version: "3.8"
+services:
+ langflow:
+ image: langflowai/langflow:latest
+ environment:
+ - LANGFLOW_ALGORITHM=RS256
+ volumes:
+ - langflow_data:/app/langflow # Keys stored here
+
+volumes:
+ langflow_data:
+```
+
+To mount an existing key pair, set the `RS256` signing algorithm and mount the private and public keys as volumes.
+
+```yaml
+# docker-compose.yml
+version: "3.8"
+services:
+ langflow:
+ image: langflowai/langflow:latest
+ environment:
+ - LANGFLOW_ALGORITHM=RS256
+ volumes:
+ - ./keys/private_key.pem:/app/langflow/private_key.pem:ro
+ - ./keys/public_key.pem:/app/langflow/public_key.pem:ro
+ - langflow_data:/app/langflow
+
+volumes:
+ langflow_data:
+```
+
+### Kubernetes with RS256
+
+Store JWT keys as Kubernetes Secrets and reference them in your Langflow deployment configuration.
+
+```yaml
+# jwt-secret.yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: langflow-jwt-keys
+type: Opaque
+stringData:
+ algorithm: "RS256"
+ private-key: |
+ -----BEGIN PRIVATE KEY-----
+ MIIEvgIBADANBgkqhkiG9w0BAQEF...
+ -----END PRIVATE KEY-----
+ public-key: |
+ -----BEGIN PUBLIC KEY-----
+ MIIBIjANBgkqhkiG9w0BAQEFAAOC...
+ -----END PUBLIC KEY-----
+---
+# langflow-deployment.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: langflow
+spec:
+ template:
+ spec:
+ containers:
+ - name: langflow
+ image: langflowai/langflow:latest
+ env:
+ - name: LANGFLOW_ALGORITHM
+ valueFrom:
+ secretKeyRef:
+ name: langflow-jwt-keys
+ key: algorithm
+ - name: LANGFLOW_PRIVATE_KEY
+ valueFrom:
+ secretKeyRef:
+ name: langflow-jwt-keys
+ key: private-key
+ - name: LANGFLOW_PUBLIC_KEY
+ valueFrom:
+ secretKeyRef:
+ name: langflow-jwt-keys
+ key: public-key
+```
+
+## Configure token expiration
+
+To configure access and refresh token expiration times, set the values in the Langflow `.env`.
+
+```bash
+LANGFLOW_ACCESS_TOKEN_EXPIRE_SECONDS=3600 # 1 hour
+LANGFLOW_REFRESH_TOKEN_EXPIRE_SECONDS=604800 # 7 days
+```
+
+Access tokens authenticate API requests and typically expire within 15 minutes to 1 hour to limit security risks.
+
+Refresh tokens obtain new access tokens without requiring the user to log in again.
+Refresh tokens typically expire within 7 to 30 days.
+
+When an access token expires, the client can use the refresh token to get a new access token from the `/api/v1/refresh` endpoint.
+This maintains the user's session without prompting for credentials again.
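+
+For example, a token refresh with `curl` might look like the following sketch. The request shape is an assumption based on a typical form-based login; adjust it to match your deployment:
+
+```bash
+# Log in with username and password; save the session cookies locally
+curl -X POST "http://LANGFLOW_SERVER_URL/api/v1/login" \
+  -H "Content-Type: application/x-www-form-urlencoded" \
+  -d "username=YOUR_USERNAME&password=YOUR_PASSWORD" \
+  -c cookies.txt
+
+# When the access token expires, exchange the refresh token for a new one
+curl -X POST "http://LANGFLOW_SERVER_URL/api/v1/refresh" -b cookies.txt
+```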
+
+## See also
+
+- [Langflow API keys and authentication](/api-keys-and-authentication)
+- [JWT.io](https://jwt.io/)
+- [RFC 7519 specification](https://tools.ietf.org/html/rfc7519)
+- [OWASP JWT Security Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/JSON_Web_Token_for_Java_Cheat_Sheet.html)
+- [Langflow Security Best Practices](/security)
\ No newline at end of file
diff --git a/docs/docs/Develop/knowledge.mdx b/docs/docs/Develop/knowledge.mdx
new file mode 100644
index 000000000..fc6a143f1
--- /dev/null
+++ b/docs/docs/Develop/knowledge.mdx
@@ -0,0 +1,131 @@
+---
+title: Manage vector data
+slug: /knowledge
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import Icon from "@site/src/components/icon";
+import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
+
+Vector data is critical to AI applications.
+Langflow provides several components to help you store and retrieve vector data in your flows, including embedding models, vector stores, and knowledge bases.
+
+## Embedding models
+
+Embedding model components generate text embeddings using a specified Large Language Model (LLM).
+
+There are two common use cases for these components:
+
+* **Store vectors**: Generate embeddings for content written to a vector database.
+* **Search vectors**: Generate an embedding from a query to run a similarity search.
+
+In both cases, the embedding model component is attached to a vector store component.
+For more information, examples, and available options, see [Embedding model components](/components-embedding-models).
+
+Alternatively, you can use [knowledge bases](#knowledge-bases), which include built-in support for several embedding models.
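
The two use cases above meet in similarity search: the query embedding is compared against stored embeddings. A minimal sketch of that comparison, using hand-made three-dimensional vectors (real embedding models return hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": doc_a points in the same direction as the query,
# so its similarity is (approximately) 1.0; doc_b is less similar.
query = [0.1, 0.9, 0.0]
doc_a = [0.1, 0.9, 0.0]
doc_b = [0.9, 0.1, 0.0]

print(cosine_similarity(query, doc_a))
print(cosine_similarity(query, doc_b))
```

Vector stores perform this comparison at scale with approximate nearest-neighbor indexes rather than a linear scan.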
+
+## Vector stores
+
+Vector store components read and write to vector databases.
+Typically, these components connect to remote databases, but some vector store components support local databases.
+
+import PartialVectorRagBlurb from '@site/docs/_partial-vector-rag-blurb.mdx';
+
+
+<PartialVectorRagBlurb />
+
+<details>
+<summary>Example: Vector search flow</summary>
+
+import PartialVectorRagFlow from '@site/docs/_partial-vector-rag-flow.mdx';
+
+<PartialVectorRagFlow />
+
+</details>
+
+## Knowledge bases
+
+import PartialKbSummary from '@site/docs/_partial-kb-summary.mdx';
+
+<PartialKbSummary />
+
+### Knowledge base storage locations
+
+Each knowledge base is a [ChromaDB](https://docs.trychroma.com/docs/overview/introduction) vector database.
+Each database is stored in a separate directory that contains the following:
+
+- **Vector embeddings**: Embeddings are stored using the Chroma vector database.
+- **Metadata files**: Configuration and embedding model information.
+- **Source data**: The original data used to create the knowledge base.
+
+Knowledge bases are stored locally on your Langflow instance.
+The default storage location depends on your operating system and installation method:
+
+- **Langflow Desktop**:
+  - **macOS**: `/Users/USERNAME/.langflow/knowledge_bases`
+  - **Windows**: `C:\Users\USERNAME\AppData\Roaming\com.LangflowDesktop\knowledge_bases`
+- **Langflow OSS**:
+  - **macOS/Windows/Linux/WSL with `uv pip install`**: `VENV_ROOT/lib/python3.12/site-packages/langflow/knowledge_bases` (The Python version can vary, and knowledge bases aren't shared between virtual environments.)
+  - **macOS/Windows/Linux/WSL with `git clone`**: `REPO_ROOT/src/backend/base/langflow/knowledge_bases`
+
+If you set the `LANGFLOW_CONFIG_DIR` environment variable, the `knowledge_bases` subdirectory is created relative to that path.
+
+To change the default `knowledge_bases` directory path, set the `LANGFLOW_KNOWLEDGE_BASES_DIR` environment variable:
+
+```bash
+export LANGFLOW_KNOWLEDGE_BASES_DIR="/path/to/parent/directory"
+```
+
+### Create a knowledge base
+
+In this example, you'll create a knowledge base of chunked customer orders.
+To follow along with this example, download [`customer-orders.csv`](/files/customer_orders.csv) to your local machine, or adapt the steps for your own structured data.
+
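If you prefer not to download the file, you can generate a small stand-in dataset with Python's `csv` module. The column names below are hypothetical; any structured columns work for this walkthrough:

```python
import csv

# Hypothetical columns for a stand-in orders dataset.
rows = [
    {"order_id": "1001", "customer": "Ada", "item": "Laptop", "total": "1200.00"},
    {"order_id": "1002", "customer": "Grace", "item": "Monitor", "total": "300.00"},
]

# Write the rows, with a header line, to a local CSV file.
with open("customer_orders.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)

print(f"Wrote {len(rows)} orders to customer_orders.csv")
```
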
+1. On the [**Projects** page](/concepts-flows#projects), click **Knowledge** below the list of projects to view and manage your knowledge bases.
+
+2. To create a new knowledge base, click **Add Knowledge**.
+3. In the **Create Knowledge Base** pane, enter a name for your knowledge base, and select an embedding model.
+
+4. To configure sources for your knowledge base, click **Configure Sources**.
+Optionally, to create an empty knowledge base, click **Create**.
+5. In the **Configure Sources** pane, configure your knowledge base's data sources and how the embedded data is chunked for vector search retrieval.
+ For this example, click **Add Sources**, and then select the downloaded [`customer-orders.csv`](/files/customer_orders.csv) file from your local machine.
+ The default settings for **Chunk Size**, **Chunk Overlap**, and **Separator** are fine.
+ To continue, click **Next Step**.
+6. The **Review & Build** pane allows you to preview your first chunk before you commit to spending tokens to embed all of the data into the knowledge base.
+ If the chunk isn't what you want to embed, click **Back** to configure your chunking strategy.
+ To embed this data, click **Create**.
+7. Your data is embedded into the knowledge base.
+   When the knowledge base is available to use, its **Status** changes to **Ready**.
+
+To use the new knowledge base in a flow, see [Use the Knowledge Base component in a flow](/knowledge-base).
+
+### Manage knowledge bases
+
+On the [**Projects** page](/concepts-flows#projects), click **Knowledge** below the list of projects to view and manage your knowledge bases.
+
+For each knowledge base, you can see the following information:
+
+* Name
+* Embedding model
+* Size on disk
+* Number of words, characters, and chunks
+* The average length and size of chunks
+* The knowledge base's status
+
+Chunking behavior is determined by the embedding model, and the embedding model is set when you create the knowledge base.
+If you need to change the embedding model, you must delete and recreate the knowledge base.
+
+To update a knowledge base with new or changed sources, click **More**, and then select **Update Knowledge Base**.
+
+To view a knowledge base's chunks, click **More**, and then select **View Chunks**.
+
+To delete a knowledge base, click **More**, and then click **Delete**.
+If any flows use the deleted knowledge base, you must update them to use a different knowledge base.
+
+For more information on using knowledge bases in a flow, see the [**Knowledge Base** component](/knowledge-base) documentation.
+
+## See also
+
+* [Use Langflow agents](/agents)
+* [Language model components](/components-models)
\ No newline at end of file
diff --git a/docs/docs/Develop/logging.mdx b/docs/docs/Develop/logging.mdx
index 4cc438bb1..a69118acd 100644
--- a/docs/docs/Develop/logging.mdx
+++ b/docs/docs/Develop/logging.mdx
@@ -43,6 +43,7 @@ To customize log storage locations and behaviors, set the following [Langflow en
| `LANGFLOW_LOG_ROTATION` | String | `1 day` | Controls when the log file is rotated, either based on time or file size. For time-based rotation, set to `1 day`, `12 hours`, or `1 week`. For size-based rotation, set to `10 MB` or `1 GB`. To disable rotation, set to `None`. If disabled, log files grow without limit. |
| `LANGFLOW_ENABLE_LOG_RETRIEVAL` | Boolean | `False` | Enables retrieval of logs from your Langflow instance with [Logs endpoints](/api-logs). |
| `LANGFLOW_LOG_RETRIEVER_BUFFER_SIZE` | Integer | `10000` | Set the buffer size for log retrieval if `LANGFLOW_ENABLE_LOG_RETRIEVAL=True`. Must be greater than `0` for log retrieval to function. |
+| `LANGFLOW_NATIVE_TRACING` | Boolean | `true` | Enables the tracer to record execution traces directly in the Langflow database for use in Trace View. Set to `false` to disable tracing. |
## View logs in real-time
diff --git a/docs/docs/Develop/memory.mdx b/docs/docs/Develop/memory.mdx
index e3c19f8e4..49743931f 100644
--- a/docs/docs/Develop/memory.mdx
+++ b/docs/docs/Develop/memory.mdx
@@ -55,6 +55,8 @@ The following tables are stored in `langflow.db`:
• **Message**: Stores chat messages and interactions that occur between components. For more information, see [Message objects](/data-types#message) and [Store chat memory](#store-chat-memory).
+• **Trace** and **Span**: Stores traces and spans for flows and components. For more information, see [Traces](/traces).
+
• **Transactions**: Records execution history and results of flow runs. This information is used for [logging](/logging).
• **User**: Stores user account information including credentials, permissions, profiles, and user management settings. For more information, see [API keys and authentication](/api-keys-and-authentication).
diff --git a/docs/docs/Develop/traces.mdx b/docs/docs/Develop/traces.mdx
new file mode 100644
index 000000000..9f33733b8
--- /dev/null
+++ b/docs/docs/Develop/traces.mdx
@@ -0,0 +1,55 @@
+---
+title: Traces
+slug: /traces
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+Langflow’s **Traces** feature records detailed execution traces for your flows and components so that you can debug issues, measure latency, and track token usage without relying on external observability services.
+
+Trace data is stored in the Langflow database in the `trace` and `span` tables.
+Trace data is presented in the **Flow Activity** and **Trace Details** pages in the UI, and can be retrieved from the `/monitor/traces` API endpoint.
+
+Traces are enabled by default.
+To disable Langflow tracing and use a different tracing provider, set `LANGFLOW_NATIVE_TRACING` to `false`.
+
+## What traces capture
+
+The tracer records:
+
+- **Flow-level traces**: A trace for each flow run, including total runtime and status.
+- **Component spans**: Spans for each component in the flow, including inputs, outputs, latency, and errors.
+- **LangChain spans**: Deeper spans for chains, tools, retrievers, and LLM calls, including model name and token usage where available.
+
+Each span includes:
+
+- **Name** and **type** (for example, chain, LLM, tool, retriever)
+- **Start and end time** and **latency (ms)**
+- **Inputs and outputs** (serialized)
+- **Error details**, if the span failed
+- **Attributes** such as token counts and model metadata
+
+## View traces in the UI
+
+To view traces in the Langflow UI, do the following:
+
+1. Run a flow, such as the Simple Agent starter flow in the [Quickstart](/get-started-quickstart).
+2. Click **Traces**.
+ The **Flow Activity** page opens.
+   Each flow run is displayed as a single trace containing all of its spans.
+   Flow runs can be sorted by session ID, status, or time range.
+ Optionally, click **Download** to download a JSON file of that flow's trace to your local machine.
+3. Click a flow run to open the **Trace Details** pane.
+ The **Trace Details** pane displays spans for your flow run, including a flow-level span for the entire run, and a span for each component.
+ Individual component spans include the component's inputs and outputs, timing, and token usage.
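
Once downloaded, a trace's JSON file can be analyzed offline, for example to find the slowest span. This sketch assumes a simplified, hypothetical export schema; the real export's field names may differ:

```python
import json

# Hypothetical downloaded trace for a three-component flow.
trace_json = """
{
  "trace": {"flow_id": "FLOW_ID", "status": "success"},
  "spans": [
    {"name": "Chat Input", "latency_ms": 12},
    {"name": "Agent", "latency_ms": 840},
    {"name": "Chat Output", "latency_ms": 9}
  ]
}
"""

trace = json.loads(trace_json)
# Find the span with the highest latency and sum total span time.
slowest = max(trace["spans"], key=lambda s: s["latency_ms"])
total_ms = sum(s["latency_ms"] for s in trace["spans"])

print(f"Slowest span: {slowest['name']} ({slowest['latency_ms']} ms)")
print(f"Total span time: {total_ms} ms")
```
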
+
+## Retrieve traces with the API
+
+To programmatically query traces, use the `/monitor/traces` endpoints.
+For full parameter details and code examples in Python, TypeScript, and curl, see [Monitor endpoints: Get traces](/api-monitor#get-traces).
+
+## See also
+
+- [Logs](/logging)
+- [Monitor endpoints](/api-monitor)
\ No newline at end of file
diff --git a/docs/docs/Flows/concepts-overview.mdx b/docs/docs/Flows/concepts-overview.mdx
index 4de04ff6b..d8f970b9b 100644
--- a/docs/docs/Flows/concepts-overview.mdx
+++ b/docs/docs/Flows/concepts-overview.mdx
@@ -48,12 +48,11 @@ Use these shortcuts, gestures, and functionality to navigate the workspace:
If your flow has a **Chat Input** component, you can use the **Playground** to run your flow, chat with your flow, view inputs and outputs, and modify the LLM's memories to tune the flow's responses in real time.
-To try this for yourself, create a flow based on the **Basic Prompting** template, and then click **Playground** when editing the flow in the workspace.
+To try this for yourself, create a flow based on the **Simple Agent** template, and then click **Playground** when editing the flow in the workspace.

If you have an **Agent** component in your flow, the **Playground** displays its tool calls and outputs so you can monitor the agent's tool use and understand the reasoning behind its responses.
-To try an agent flow in the **Playground**, use the **Simple Agent** template or the [Langflow quickstart](/get-started-quickstart).

diff --git a/docs/docs/Flows/concepts-playground.mdx b/docs/docs/Flows/concepts-playground.mdx
index 6adf66207..33003ade7 100644
--- a/docs/docs/Flows/concepts-playground.mdx
+++ b/docs/docs/Flows/concepts-playground.mdx
@@ -19,6 +19,10 @@ The **Playground** allows you to quickly iterate over your flow's logic and beha
To run a flow in the **Playground**, open the flow, and then click **Playground**.
Then, if your flow has a [**Chat Input** component](/chat-input-and-output), enter a prompt or [use voice mode](/concepts-voice-mode) to trigger the flow and start a chat session.
+To expand the **Playground** view, click **Enter fullscreen** within the **Playground** panel.
+
+
+
:::tip
If there is no message input field in the **Playground**, make sure your flow has a **Chat Input** component that is connected, directly or indirectly, to the **Input** port of a **Language Model** or **Agent** component.
@@ -85,12 +89,11 @@ You can set custom session IDs in the visual editor and programmatically.
In your [input and output components](/chat-input-and-output), use the **Session ID** field:
1. Click the component where you want to set a custom session ID.
-2. In the [component's header menu](/concepts-components#component-menus), click **Controls**.
-3. Enable **Session ID**.
-4. Click **Close**.
-5. Enter a custom session ID.
+2. In the [component inspection panel](/concepts-components#component-menus), enable **Session ID**.
+3. Click **Close**.
+4. Enter a custom session ID.
If the field is empty, the flow uses the default session ID.
-6. Open the **Playground** to start a chat under your custom session ID.
+5. Open the **Playground** to start a chat under your custom session ID.
Make sure to change the **Session ID** when you want to start a new chat session or continue an earlier chat session with a different session ID.
diff --git a/docs/docs/Flows/webhook.mdx b/docs/docs/Flows/webhook.mdx
index 2150636b4..b4a38deb6 100644
--- a/docs/docs/Flows/webhook.mdx
+++ b/docs/docs/Flows/webhook.mdx
@@ -57,7 +57,7 @@ To use the **Webhook** component in a flow, do the following:
Alternatively, to get a complete `POST /v1/webhook/$FLOW_ID` code snippet, open the flow's [**API access** pane](/concepts-publish#api-access), and then click the **Webhook curl** tab.
You can also modify the default curl command in the **Webhook** component's **curl** field.
- If this field isn't visible by default, click the **Webhook** component, and then click **Controls** in the [component's header menu](/concepts-components#component-menus).
+ If this field isn't visible by default, click the **Webhook** component to expose the [component inspection panel](/concepts-components#component-menus).
7. Send a POST request with `data` to the flow's `webhook` endpoint to trigger the flow.
diff --git a/docs/docs/Get-Started/get-started-quickstart.mdx b/docs/docs/Get-Started/get-started-quickstart.mdx
index 81aa3e854..99ca86cb2 100644
--- a/docs/docs/Get-Started/get-started-quickstart.mdx
+++ b/docs/docs/Get-Started/get-started-quickstart.mdx
@@ -6,6 +6,7 @@ slug: /get-started-quickstart
import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+import PartialGlobalModelProviders from '@site/docs/_partial-global-model-providers.mdx';
Get started with Langflow by loading a template flow, running it, and then serving it at the `/run` API endpoint.
@@ -61,24 +62,32 @@ The **Simple Agent** template consists of an [**Agent** component](/agents) conn
Many components can be tools for agents, including [Model Context Protocol (MCP) servers](/mcp-server). The agent decides which tools to call based on the context of a given query.
-2. In the **Agent** component, enter your OpenAI API key directly or use a [global variable](/configuration-global-variables).
+2. In the **Agent** component, click **Setup Provider** to select your language model provider.
+
+3. In the **Agent** component, select your configured model from the **Language Model** dropdown.
- This example uses the **Agent** component's built-in OpenAI model.
- If you want to use a different provider, edit the model provider, model name, and credentials accordingly.
- If your preferred provider or model isn't listed, set **Model Provider** to **Connect other models**, and then connect any [language model component](/components-models#additional-language-models).
+
+    <details>
+    <summary>Access more models and providers</summary>
-3. To run the flow, click **Playground**.
+
+    There are two ways to access more models and providers:
-4. To test the **Calculator** tool, ask the agent a simple math question, such as `I want to add 4 and 4.`
+
+    * Edit Langflow's global **Models** configuration. These providers and models are part of Langflow's core functionality. Use the **Ollama** provider to connect to any model hosted on a local or remote Ollama instance.
+    * Connect any [additional language model component](/components-models#additional-language-models) to the **Agent** component's **Language Model** port.
+
+    <PartialGlobalModelProviders />
+
+    </details>
+
+4. To run the flow, click **Playground**.
+
+5. To test the **Calculator** tool, ask the agent a simple math question, such as `I want to add 4 and 4.`
To help you test and evaluate your flows, the **Playground** shows the agent's reasoning process as it analyzes the prompt, selects a tool, and then uses the tool to generate a response.
In this case, a math question causes the agent to select the **Calculator** tool and use an action like `evaluate_expression`.

-5. To test the **URL** tool, ask the agent about current events.
+6. To test the **URL** tool, ask the agent about current events.
For this request, the agent selects the **URL** tool's `fetch_content` action, and then returns a summary of current news headlines.
-6. When you are done testing the flow, click **Close**.
+7. When you are done testing the flow, click **Close**.
:::tip Next steps
Now that you've run your first flow, try these next steps:
@@ -535,7 +544,8 @@ To assist with formatting, you can define tweaks in Langflow's **Input Schema**
1. To open the **Input Schema** pane, from the **API access** pane, click **Input Schema**.
2. In the **Input Schema** pane, select the parameter you want to modify in your next request.
Enabling parameters in the **Input Schema** pane doesn't permanently change the listed parameters. It only adds them to the sample code snippets.
-3. For example, to change the LLM provider from OpenAI to Groq, and include your Groq API key with the request, select the values **Model Providers**, **Model**, and **Groq API Key**.
+3. For example, to change the agent's LLM model from OpenAI to Anthropic and include your Anthropic API key with the request, select the **Agent** component in the **Input Schema** pane and enable the **Language Model** field.
+
Langflow updates the `tweaks` object in the code snippets based on your input parameters, and includes default values to guide you.
Use the updated code snippets in your script to run your flow with your overrides.
@@ -546,9 +556,9 @@ payload = {
"input_value": "hello world!",
"tweaks": {
"Agent-ZOknz": {
- "agent_llm": "Groq",
- "api_key": "GROQ_API_KEY",
- "model_name": "llama-3.1-8b-instant"
+ "agent_llm": "Anthropic",
+ "api_key": "ANTHROPIC_API_KEY",
+ "model_name": "claude-opus-4-5-20251101"
}
}
}
diff --git a/docs/docs/Support/release-notes.mdx b/docs/docs/Support/release-notes.mdx
index 2755c452c..e13add367 100644
--- a/docs/docs/Support/release-notes.mdx
+++ b/docs/docs/Support/release-notes.mdx
@@ -47,6 +47,114 @@ To avoid the impact of potential breaking changes and test new versions, the Lan
If you made changes to your flows in the isolated installation, you might want to export and import those flows back to your upgraded primary installation so you don't have to repeat the component upgrade process.
+## 1.8.x
+
+Highlights of this release include the following changes.
+For all changes, see the [Changelog](https://github.com/langflow-ai/langflow/releases).
+
+### Breaking changes
+
+- `langflow-base` dependency structure refactored
+
+ The `langflow-base` package now uses granular optional dependency groups. As a result, many dependencies that were previously included in the `langflow-base` installation were moved to optional extras.
+
+ If you installed Langflow with `uv pip install langflow`, this isn't a breaking change. Installing `langflow` automatically installs `langflow-base[complete]`, which includes all optional dependencies and maintains the same functionality as before.
+
+ However, if you installed Langflow with `uv pip install langflow-base` without specifying extra dependencies, this _is_ a breaking change.
+ Some dependencies that were previously included by default are now available only through optional extras.
+ Therefore, installing `langflow-base` directly only installs the [core base dependencies](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/pyproject.toml).
+
+ If you installed `langflow-base`, there are two ways to resolve dependency errors that result from this breaking change:
+
+ * If you need the full set of dependencies, you must install `langflow-base` with the `complete` extra:
+
+ ```bash
+ uv pip install "langflow-base[complete]"
+ ```
+
+ * If you need specific dependencies, you must install `langflow-base` with those optional dependency groups. For example:
+
+ ```bash
+ uv pip install "langflow-base[postgresql,openai,chroma]"
+ ```
+
+ For more information about available optional dependency groups, see [Install optional dependency groups for `langflow-base`](/install-custom-dependencies#install-optional-dependency-groups-for-langflow-base).
+
+### New features and enhancements
+
+- Global model provider configuration
+
+ Model providers for language models, embedding models, and agents are now configured globally in the **Model providers** pane, instead of within individual components.
+ For more information, see the [Language Model component](/components-models).
+
+- Component inspection panel
+
+ The component inspection panel replaces the component header menu for managing component parameters and settings.
+ For more information, see [Component inspection panel](/concepts-components#component-inspection-panel).
+
+- Developer API: `/workflow` synchronous endpoints (Beta)
+
+ The Developer API is part of a larger effort to improve Langflow's APIs with enhanced capabilities and better developer experience.
+ The Developer API now includes `/v2/workflow` endpoints for executing flows with enhanced error handling, timeout protection, and structured responses.
+ The synchronous execution endpoint is available at `POST /api/v2/workflows`.
+ For more information, see [Workflow API (Beta)](/workflow-api).
+
+- Traces and trace view
+
+ Langflow now records execution traces for flows and components.
+ View your traces in the **Trace Details** pane, and inspect span trees, latencies, and errors.
+ For more information, see [Traces](/traces).
+
+- Knowledge bases
+
+ Knowledge bases let you organize documents and other reference data into reusable vector databases that can be attached to multiple flows.
+ This makes it easier to centralize domain knowledge and reuse the same data across agents and retrieval workflows.
+ For more information, see [Manage vector data](/knowledge).
+
+- Mustache templating support for Prompt Template component
+
+ The **Prompt Template** component now supports Mustache templating syntax.
+ Mustache templating eliminates the need to escape curly braces when including JSON structures in your prompts. For more information, see [Prompt Template](/components-prompts#use-mustache-templating-in-prompt-templates).
+
+- More configuration options for JWT-based session authentication
+
+ Langflow 1.8 offers additional configuration options for JWT algorithms, including support for RS256/RS512 algorithms, configurable keys, and token lifetimes. For more information, see [JWT authentication](/jwt-authentication).
+
+- Global variables in MCP server headers
+
+ You can now use [global variables](/configuration-global-variables) in MCP server header values to securely store and reference sensitive values. For more information, see [Use global variables in MCP server headers](/mcp-client#use-global-variables-in-mcp-server-headers).
+
+- Pass environment variables to flows in API headers and CLI
+
+ The ability to pass environment variables in HTTP headers (previously available for the [`/responses` endpoint](/api-openai-responses#global-var)) is now also available for the [`/run` endpoint](/api-flows-run#pass-global-variables-in-headers).
+
+- Guardrails component
+
+ The **Guardrails** component validates input text against security and safety guardrails by using a connected language model to check for content such as PII, tokens/passwords, or offensive content. For more information, see [Guardrails](/guardrails).
+
+- Token usage tracking for OpenAI Responses API
+
+ The OpenAI Responses API endpoint now tracks and returns token usage statistics when your flow uses language model APIs that provide token usage information.
+ For more information, see [Token usage tracking](/api-openai-responses#token-usage-tracking).
+
+- Docker AMD64 and ARM64 image sizes
+
+  Langflow 1.8.0 addresses the size gap between the AMD64 and ARM64 Docker images.
+ We reconfigured our Python dependencies to use CPU-only PyTorch wheels through `uv` sources, which removes large CUDA-related dependencies from the AMD64 images.
+ With this change, both AMD64 and ARM64 images are now smaller than 2 GB.
+
+- New [**Agentics** bundle](/bundles-agentics)
+
+ Uses LLMs to transform tabular data, including mapping, reducing, and generating DataFrame rows based on a defined schema.
+
+- New [**LiteLLM** bundle](/bundles-lite-llm)
+
+ Connects to models through a LiteLLM proxy so you can route requests to multiple LLM providers and switch providers without changing flow credentials.
+
+- New [**Openlayer** observability integration](/integrations-openlayer)
+
+ Configures Langflow to send tracing data to Openlayer for analysis, monitoring, and evaluation of your flow executions.
+
## 1.7.x
:::warning Version yanked
diff --git a/docs/docs/Support/troubleshooting.mdx b/docs/docs/Support/troubleshooting.mdx
index 3b3c76dba..620ddda9d 100644
--- a/docs/docs/Support/troubleshooting.mdx
+++ b/docs/docs/Support/troubleshooting.mdx
@@ -261,6 +261,36 @@ To fully remove a Langflow Desktop macOS installation, you must also delete `~/.
The following issues can occur when using Langflow as an MCP server or client.
+### Default project MCP server only works when authentication is None
+
+If the default project MCP server works without authentication, but fails after adding an API key to the server configuration, the API key might have been added to the wrong section of the configuration.
+
+The default MCP server uses streamable HTTP transport.
+The API key must be added to the `args` array that is passed to `mcp-proxy`, not the `env` object.
+
+Your `args` array must include `"--headers"`, `"x-api-key"`, and your key value. For example:
+
+```json
+{
+ "mcpServers": {
+ "PROJECT_NAME": {
+ "command": "uvx",
+ "args": [
+ "mcp-proxy",
+ "--transport",
+ "streamablehttp",
+ "--headers",
+ "x-api-key",
+ "YOUR_API_KEY",
+ "http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
+ ]
+ }
+ }
+}
+```
+
+For more information, see [Connect clients to your Langflow MCP server](/mcp-server#connect-clients-to-use-the-servers-actions).
+
### Claude for Desktop doesn't use MCP server tools correctly
If Claude for Desktop doesn't use your server's tools correctly, try explicitly defining the path to your local `uvx` or `npx` executable file in the `claude_desktop_config.json` configuration file:
diff --git a/docs/docs/_partial-api-setup.mdx b/docs/docs/_partial-api-setup.mdx
new file mode 100644
index 000000000..77862fb35
--- /dev/null
+++ b/docs/docs/_partial-api-setup.mdx
@@ -0,0 +1,57 @@
+## Prerequisites
+
+Before using the API, you need:
+
+* [Install and start Langflow](/get-started-installation) with the developer API enabled
+
+  The Workflows API endpoints require the `developer_api_enabled` setting to be enabled. If this setting is disabled, these endpoints return a `404 Not Found` error.
+
+ To enable the developer API endpoint, do the following:
+ 1. In the Langflow `.env` file, set the environment variable to `true`:
+ ```
+ LANGFLOW_DEVELOPER_API_ENABLED=true
+ ```
+ 2. Start your Langflow server with the `.env` file enabled:
+ ```
+ uv run langflow run --env-file .env
+ ```
+
+ For more information about configuring environment variables, see [Environment variables](/environment-variables).
+* [Create a Langflow API key](/api-keys-and-authentication)
+* [Create a flow](/concepts-flows) that you want to execute
+* [Get the flow ID](/concepts-publish#api-access) or endpoint name of the flow you want to execute
+
+### Set environment variables
+
+All code examples in this documentation assume you have set the following environment variables:
+
+**Python:**
+```python
+import os
+
+LANGFLOW_SERVER_URL = os.getenv("LANGFLOW_SERVER_URL")
+LANGFLOW_API_KEY = os.getenv("LANGFLOW_API_KEY")
+```
+
+**TypeScript/JavaScript:**
+```typescript
+const LANGFLOW_SERVER_URL = process.env.LANGFLOW_SERVER_URL;
+const LANGFLOW_API_KEY = process.env.LANGFLOW_API_KEY;
+```
+
+Set these environment variables before running the examples, or replace the variable references in the code examples with your actual Langflow server URL and API key.
+
+The default `LANGFLOW_SERVER_URL` for a local Langflow deployment is `http://localhost:7860`.
+For remote deployments, the domain is set by your hosting service, such as `https://UUID.ngrok.app`.
+
+### Authentication and headers
+
+All Workflows API requests require authentication using a Langflow API key. The API key is passed in the `x-api-key` header.
+
+For more information, see [Create a Langflow API key](/api-keys-and-authentication).
+
+| Header | Description | Example |
+|--------|-------------|---------|
+| `Content-Type` | Specifies the JSON format. | `application/json` |
+| `x-api-key` | Your Langflow API key. | `sk-...` |
+| `accept` | Optional. Specifies the response format. | `application/json` |
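
As a sketch, the headers in this table can be assembled once in a small helper and reused across requests:

```python
import os

def build_headers(api_key: str, accept_json: bool = True) -> dict:
    """Assemble the request headers described in the table above."""
    headers = {
        "Content-Type": "application/json",
        "x-api-key": api_key,
    }
    if accept_json:
        # The accept header is optional.
        headers["accept"] = "application/json"
    return headers

# Read the API key from the environment, with an illustrative fallback.
headers = build_headers(os.getenv("LANGFLOW_API_KEY", "sk-example"))
print(sorted(headers))
```
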
\ No newline at end of file
diff --git a/docs/docs/_partial-escape-curly-braces.mdx b/docs/docs/_partial-escape-curly-braces.mdx
index dc9d33c3e..899c31071 100644
--- a/docs/docs/_partial-escape-curly-braces.mdx
+++ b/docs/docs/_partial-escape-curly-braces.mdx
@@ -1,2 +1,5 @@
If your template includes literal text and variables, you can use double curly braces to escape literal curly braces in the template and prevent interpretation of that text as a variable.
-For example: `This is a template with {{literal text in curly braces}} and a {variable}`.
\ No newline at end of file
+For example: `This is a template with {{literal text in curly braces}} and a {variable}`.
+
+If your template contains many literal curly braces, such as JSON structures, consider using Mustache templating instead.
+For more information, see [Use Mustache templating in prompt templates](/components-prompts#use-mustache-templating-in-prompt-templates).
\ No newline at end of file
diff --git a/docs/docs/_partial-global-model-providers.mdx b/docs/docs/_partial-global-model-providers.mdx
new file mode 100644
index 000000000..7079751a1
--- /dev/null
+++ b/docs/docs/_partial-global-model-providers.mdx
@@ -0,0 +1,17 @@
+import Icon from "@site/src/components/icon";
+
+To edit Langflow's global model provider configuration, do the following:
+
+1. To open the **Model Providers** pane, click your profile icon, select **Settings**, and then click **Model Providers**.
+2. In the **Model Providers** pane, select a provider.
+3. In the **API Key** field, add your provider's API key.
+
+ The key must have permission to call the models you want to use in your flow, and your account must have sufficient credits for the actions you want to perform.
+
+ You can only add one key for each provider. Make sure the key has access to _all_ models that you want to use in Langflow.
+4. Enable the specific models that you want to use in Langflow.
+The available models depend on the provider and your API key's permissions.
+Models that generate text are listed under **Language Models**.
+Models that generate embeddings are listed under **Embedding Models**.
+
+After you enable a model in Langflow's global model configuration, you can use that model in any model-driven component in your flows.
\ No newline at end of file
diff --git a/docs/docs/_partial-hidden-params.mdx b/docs/docs/_partial-hidden-params.mdx
index 8db9a0310..35fddd476 100644
--- a/docs/docs/_partial-hidden-params.mdx
+++ b/docs/docs/_partial-hidden-params.mdx
@@ -1,4 +1,2 @@
-import Icon from "@site/src/components/icon";
-
Some parameters are hidden by default in the visual editor.
-You can modify all parameters through the **Controls** in the [component's header menu](/concepts-components#component-menus).
\ No newline at end of file
+You can modify all component parameters through the [component inspection panel](/concepts-components#component-inspection-panel) that appears when you select a component.
\ No newline at end of file
diff --git a/docs/docs/_partial-kb-summary.mdx b/docs/docs/_partial-kb-summary.mdx
new file mode 100644
index 000000000..0ef3667e2
--- /dev/null
+++ b/docs/docs/_partial-kb-summary.mdx
@@ -0,0 +1,17 @@
+A Langflow knowledge base is a local vector database that is stored in Langflow storage.
+
+Because knowledge bases are local, the data doesn't have to be requested from a remote service and re-ingested on every flow run.
+This can be more efficient than using a remote vector database, and it's a good choice for flows that use custom, domain-specific datasets, such as slices of customer and product data.
+
+You can use knowledge base components in much the same way that you use vector store components.
+However, there are several key differences:
+
+* **Local storage**: Langflow knowledge bases are exclusively local.
+In contrast, only some vector store components support local databases.
+* **Built-in embedding models**: Langflow knowledge bases include built-in support for several embedding models.
+Other models aren't supported for use with knowledge bases.
+To use a different provider or model, you must use a vector store component along with your preferred embedding model component.
+* **Basic similarity search**: When querying Langflow knowledge bases, only standard similarity search is supported.
+For more advanced searches, you must use a vector store component for a vector database provider that supports your desired functionality.
+* **Structured data**: Langflow knowledge bases only support structured data.
+For unstructured data, you must use a compatible vector store component.
\ No newline at end of file
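
The standard similarity search described above boils down to ranking stored embeddings by their similarity to a query embedding. The following is a purely illustrative sketch with toy, hypothetical vectors, not Langflow's internal implementation, using cosine similarity:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "knowledge base": embedding vectors keyed by document ID.
kb = {
    "doc-1": [0.9, 0.1, 0.0],
    "doc-2": [0.1, 0.9, 0.0],
    "doc-3": [0.7, 0.3, 0.0],
}

query = [1.0, 0.0, 0.0]

# Rank documents by similarity to the query, most similar first.
ranked = sorted(kb, key=lambda doc_id: cosine_similarity(query, kb[doc_id]), reverse=True)
print(ranked)  # doc-1 is closest to the query, doc-2 is farthest
```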
diff --git a/docs/docusaurus.config.js b/docs/docusaurus.config.js
index fde36e3c5..c324d6b0d 100644
--- a/docs/docusaurus.config.js
+++ b/docs/docusaurus.config.js
@@ -154,6 +154,11 @@ const config = {
spec: "openapi/openapi.json",
route: "/api",
},
+ {
+ id: "workflow",
+ spec: "openapi/langflow-workflows-openapi.json",
+ route: "/api/workflow",
+ },
],
theme: {
primaryColor: "#7528FC",
diff --git a/docs/openapi/fetch_openapi_spec.py b/docs/openapi/fetch_openapi_spec.py
new file mode 100755
index 000000000..6f0c4a26b
--- /dev/null
+++ b/docs/openapi/fetch_openapi_spec.py
@@ -0,0 +1,57 @@
+#!/usr/bin/env python3
+"""Pull OpenAPI spec files from the langflow-ai/sdk repository.
+
+Usage:
+    python3 fetch_openapi_spec.py                     # Download all files
+    python3 fetch_openapi_spec.py --file FILENAME     # Download a specific file
+    python3 fetch_openapi_spec.py --branch BRANCH     # Use a different branch
+"""
+
+import base64
+import json
+import sys
+import urllib.error
+import urllib.request
+from pathlib import Path
+
+REPO = "langflow-ai/sdk"
+BRANCH = "main"
+SPECS_DIR = "specs"
+FILES = ["langflow-workflows-openapi.json", "langflow-openapi.json"]
+
+
+def fetch_file(repo: str, filepath: str, branch: str) -> str:
+ """Fetch and decode file from GitHub."""
+ url = f"https://api.github.com/repos/{repo}/contents/{filepath}?ref={branch}"
+ with urllib.request.urlopen(url) as r: # noqa: S310
+ data = json.loads(r.read().decode())
+ return base64.b64decode(data["content"]).decode("utf-8")
+
+
+def main():
+ import argparse
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--file", action="append", dest="files")
+ parser.add_argument("--branch", default=BRANCH)
+ args = parser.parse_args()
+
+ files = args.files or FILES
+ local_dir = Path(__file__).parent
+
+ for filename in files:
+ if filename not in FILES:
+            sys.stderr.write(f"Error: {filename} not in {FILES}\n")
+ sys.exit(1)
+
+ try:
+            content = fetch_file(REPO, f"{SPECS_DIR}/{filename}", args.branch)
+            (local_dir / filename).write_text(content, encoding="utf-8")
+            sys.stdout.write(f"✓ {filename}\n")
+        except (urllib.error.HTTPError, urllib.error.URLError, KeyError, json.JSONDecodeError) as e:
+            sys.stderr.write(f"✗ {filename}: {e}\n")
+ sys.exit(1)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/docs/openapi/langflow-workflows-openapi.json b/docs/openapi/langflow-workflows-openapi.json
new file mode 100644
index 000000000..a9d4ceac9
--- /dev/null
+++ b/docs/openapi/langflow-workflows-openapi.json
@@ -0,0 +1,621 @@
+{
+ "openapi": "3.1.0",
+ "info": {
+ "title": "Langflow V2 Workflow API",
+ "description": "Filtered API for Langflow V2 workflow operations (3 endpoints)",
+ "version": "1.8.0"
+ },
+ "paths": {
+ "/api/v2/workflows": {
+ "post": {
+ "tags": [
+ "Workflow"
+ ],
+ "summary": "Execute Workflow",
+ "description": "Execute a workflow with support for sync, stream, and background modes",
+ "operationId": "execute_workflow_api_v2_workflows_post",
+ "security": [
+ {
+ "API key query": []
+ },
+ {
+ "API key header": []
+ }
+ ],
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/WorkflowExecutionRequest"
+ }
+ }
+ }
+ },
+ "responses": {
+ "200": {
+ "description": "Workflow execution response",
+ "content": {
+ "application/json": {
+ "schema": {
+ "anyOf": [
+ {
+ "$ref": "#/components/schemas/WorkflowExecutionResponse"
+ },
+ {
+ "$ref": "#/components/schemas/WorkflowJobResponse"
+ }
+ ],
+ "title": "Response Execute Workflow Api V2 Workflows Post",
+ "oneOf": [
+ {
+ "$ref": "#/components/schemas/WorkflowExecutionResponse"
+ },
+ {
+ "$ref": "#/components/schemas/WorkflowJobResponse"
+ }
+ ],
+ "discriminator": {
+ "propertyName": "object",
+ "mapping": {
+ "response": "#/components/schemas/WorkflowExecutionResponse",
+ "job": "#/components/schemas/WorkflowJobResponse"
+ }
+ }
+ }
+ },
+ "text/event-stream": {
+ "schema": {
+ "$ref": "#/components/schemas/WorkflowStreamEvent"
+ },
+ "description": "Server-sent events for streaming execution"
+ }
+ }
+ },
+ "422": {
+ "description": "Validation Error",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/HTTPValidationError"
+ }
+ }
+ }
+ }
+ }
+ },
+ "get": {
+ "tags": [
+ "Workflow"
+ ],
+ "summary": "Get Workflow Status",
+ "description": "Get status of workflow job by job ID",
+ "operationId": "get_workflow_status_api_v2_workflows_get",
+ "security": [
+ {
+ "API key query": []
+ },
+ {
+ "API key header": []
+ }
+ ],
+ "parameters": [
+ {
+ "name": "job_id",
+ "in": "query",
+ "required": false,
+ "schema": {
+ "anyOf": [
+ {
+ "type": "string"
+ },
+ {
+ "type": "string",
+ "format": "uuid"
+ },
+ {
+ "type": "null"
+ }
+ ],
+ "description": "Job ID to query",
+ "title": "Job Id"
+ },
+ "description": "Job ID to query"
+ }
+ ],
+ "responses": {
+ "200": {
+ "description": "Workflow status response",
+ "content": {
+ "application/json": {
+ "schema": {
+ "anyOf": [
+ {
+ "$ref": "#/components/schemas/WorkflowExecutionResponse"
+ },
+ {
+ "$ref": "#/components/schemas/WorkflowJobResponse"
+ }
+ ],
+ "title": "Response Get Workflow Status Api V2 Workflows Get",
+ "$ref": "#/components/schemas/WorkflowExecutionResponse"
+ }
+ },
+ "text/event-stream": {
+ "schema": {
+ "$ref": "#/components/schemas/WorkflowStreamEvent"
+ },
+ "description": "Server-sent events for streaming status"
+ }
+ }
+ },
+ "422": {
+ "description": "Validation Error",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/HTTPValidationError"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/api/v2/workflows/stop": {
+ "post": {
+ "tags": [
+ "Workflow"
+ ],
+ "summary": "Stop Workflow",
+ "description": "Stop a running workflow execution",
+ "operationId": "stop_workflow_api_v2_workflows_stop_post",
+ "requestBody": {
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/WorkflowStopRequest"
+ }
+ }
+ },
+ "required": true
+ },
+ "responses": {
+ "200": {
+ "description": "Successful Response",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/WorkflowStopResponse"
+ }
+ }
+ }
+ },
+ "422": {
+ "description": "Validation Error",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/HTTPValidationError"
+ }
+ }
+ }
+ }
+ },
+ "security": [
+ {
+ "API key query": []
+ },
+ {
+ "API key header": []
+ }
+ ]
+ }
+ }
+ },
+ "components": {
+ "schemas": {
+ "ComponentOutput": {
+ "properties": {
+ "type": {
+ "type": "string",
+ "title": "Type",
+ "description": "Type of the component output (e.g., 'message', 'data', 'tool', 'text')"
+ },
+ "status": {
+ "$ref": "#/components/schemas/JobStatus"
+ },
+ "content": {
+ "anyOf": [
+ {},
+ {
+ "type": "null"
+ }
+ ],
+ "title": "Content"
+ },
+ "metadata": {
+ "anyOf": [
+ {
+ "additionalProperties": true,
+ "type": "object"
+ },
+ {
+ "type": "null"
+ }
+ ],
+ "title": "Metadata"
+ }
+ },
+ "type": "object",
+ "required": [
+ "type",
+ "status"
+ ],
+ "title": "ComponentOutput",
+ "description": "Component output schema."
+ },
+ "ErrorDetail": {
+ "properties": {
+ "error": {
+ "type": "string",
+ "title": "Error"
+ },
+ "code": {
+ "anyOf": [
+ {
+ "type": "string"
+ },
+ {
+ "type": "null"
+ }
+ ],
+ "title": "Code"
+ },
+ "details": {
+ "anyOf": [
+ {
+ "additionalProperties": true,
+ "type": "object"
+ },
+ {
+ "type": "null"
+ }
+ ],
+ "title": "Details"
+ }
+ },
+ "type": "object",
+ "required": [
+ "error"
+ ],
+ "title": "ErrorDetail",
+ "description": "Error detail schema."
+ },
+ "HTTPValidationError": {
+ "properties": {
+ "detail": {
+ "items": {
+ "$ref": "#/components/schemas/ValidationError"
+ },
+ "type": "array",
+ "title": "Detail"
+ }
+ },
+ "type": "object",
+ "title": "HTTPValidationError"
+ },
+ "JobStatus": {
+ "type": "string",
+ "enum": [
+ "queued",
+ "in_progress",
+ "completed",
+ "failed",
+ "cancelled",
+ "timed_out"
+ ],
+ "title": "JobStatus",
+ "description": "Job execution status."
+ },
+ "ValidationError": {
+ "properties": {
+ "loc": {
+ "items": {
+ "anyOf": [
+ {
+ "type": "string"
+ },
+ {
+ "type": "integer"
+ }
+ ]
+ },
+ "type": "array",
+ "title": "Location"
+ },
+ "msg": {
+ "type": "string",
+ "title": "Message"
+ },
+ "type": {
+ "type": "string",
+ "title": "Error Type"
+ }
+ },
+ "type": "object",
+ "required": [
+ "loc",
+ "msg",
+ "type"
+ ],
+ "title": "ValidationError"
+ },
+ "WorkflowExecutionRequest": {
+ "properties": {
+ "background": {
+ "type": "boolean",
+ "title": "Background",
+ "default": false
+ },
+ "stream": {
+ "type": "boolean",
+ "title": "Stream",
+ "default": false
+ },
+ "flow_id": {
+ "type": "string",
+ "title": "Flow Id"
+ },
+ "inputs": {
+ "anyOf": [
+ {
+ "additionalProperties": true,
+ "type": "object"
+ },
+ {
+ "type": "null"
+ }
+ ],
+ "title": "Inputs",
+ "description": "Component-specific inputs in flat format: 'component_id.param_name': value"
+ }
+ },
+ "additionalProperties": false,
+ "type": "object",
+ "required": [
+ "flow_id"
+ ],
+ "title": "WorkflowExecutionRequest",
+ "description": "Request schema for workflow execution.",
+ "examples": [
+ {
+ "background": false,
+ "flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ "inputs": {
+ "ChatInput-abc.input_value": "Hello, how can you help me today?",
+ "ChatInput-abc.session_id": "session-123",
+ "LLM-xyz.max_tokens": 100,
+ "LLM-xyz.temperature": 0.7,
+ "OpenSearch-def.opensearch_url": "https://opensearch:9200"
+ },
+ "stream": false
+ },
+ {
+ "background": true,
+ "flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ "inputs": {
+ "ChatInput-abc.input_value": "Process this in the background"
+ },
+ "stream": false
+ },
+ {
+ "background": false,
+ "flow_id": "flow_67ccd2be17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
+ "inputs": {
+ "ChatInput-abc.input_value": "Stream this conversation"
+ },
+ "stream": true
+ }
+ ]
+ },
+ "WorkflowExecutionResponse": {
+ "properties": {
+ "flow_id": {
+ "type": "string",
+ "title": "Flow Id"
+ },
+ "job_id": {
+ "anyOf": [
+ {
+ "type": "string"
+ },
+ {
+ "type": "string",
+ "format": "uuid"
+ },
+ {
+ "type": "null"
+ }
+ ],
+ "title": "Job Id"
+ },
+ "object": {
+ "type": "string",
+ "const": "response",
+ "title": "Object",
+ "default": "response"
+ },
+ "created_timestamp": {
+ "type": "string",
+ "title": "Created Timestamp"
+ },
+ "status": {
+ "$ref": "#/components/schemas/JobStatus"
+ },
+ "errors": {
+ "items": {
+ "$ref": "#/components/schemas/ErrorDetail"
+ },
+ "type": "array",
+ "title": "Errors",
+ "default": []
+ },
+ "inputs": {
+ "additionalProperties": true,
+ "type": "object",
+ "title": "Inputs",
+ "default": {}
+ },
+ "outputs": {
+ "additionalProperties": {
+ "$ref": "#/components/schemas/ComponentOutput"
+ },
+ "type": "object",
+ "title": "Outputs",
+ "default": {}
+ }
+ },
+ "type": "object",
+ "required": [
+ "flow_id",
+ "status"
+ ],
+ "title": "WorkflowExecutionResponse",
+ "description": "Synchronous workflow execution response."
+ },
+ "WorkflowJobResponse": {
+ "properties": {
+ "job_id": {
+ "anyOf": [
+ {
+ "type": "string"
+ },
+ {
+ "type": "string",
+ "format": "uuid"
+ }
+ ],
+ "title": "Job Id"
+ },
+ "flow_id": {
+ "type": "string",
+ "title": "Flow Id"
+ },
+ "object": {
+ "type": "string",
+ "const": "job",
+ "title": "Object",
+ "default": "job"
+ },
+ "created_timestamp": {
+ "type": "string",
+ "title": "Created Timestamp"
+ },
+ "status": {
+ "$ref": "#/components/schemas/JobStatus"
+ },
+ "links": {
+ "additionalProperties": {
+ "type": "string"
+ },
+ "type": "object",
+ "title": "Links"
+ },
+ "errors": {
+ "items": {
+ "$ref": "#/components/schemas/ErrorDetail"
+ },
+ "type": "array",
+ "title": "Errors",
+ "default": []
+ }
+ },
+ "type": "object",
+ "required": [
+ "job_id",
+ "flow_id",
+ "status"
+ ],
+ "title": "WorkflowJobResponse",
+ "description": "Background job response."
+ },
+ "WorkflowStopRequest": {
+ "properties": {
+ "job_id": {
+ "anyOf": [
+ {
+ "type": "string"
+ },
+ {
+ "type": "string",
+ "format": "uuid"
+ }
+ ],
+ "title": "Job Id"
+ }
+ },
+ "type": "object",
+ "required": [
+ "job_id"
+ ],
+ "title": "WorkflowStopRequest",
+ "description": "Request schema for stopping workflow."
+ },
+ "WorkflowStopResponse": {
+ "properties": {
+ "job_id": {
+ "anyOf": [
+ {
+ "type": "string"
+ },
+ {
+ "type": "string",
+ "format": "uuid"
+ }
+ ],
+ "title": "Job Id"
+ },
+ "message": {
+ "anyOf": [
+ {
+ "type": "string"
+ },
+ {
+ "type": "null"
+ }
+ ],
+ "title": "Message"
+ }
+ },
+ "type": "object",
+ "required": [
+ "job_id"
+ ],
+ "title": "WorkflowStopResponse",
+ "description": "Response schema for stopping workflow."
+ }
+ },
+ "securitySchemes": {
+ "OAuth2PasswordBearerCookie": {
+ "type": "oauth2",
+ "flows": {
+ "password": {
+ "scopes": {},
+ "tokenUrl": "api/v1/login"
+ }
+ }
+ },
+ "API key query": {
+ "type": "apiKey",
+ "in": "query",
+ "name": "x-api-key"
+ },
+ "API key header": {
+ "type": "apiKey",
+ "in": "header",
+ "name": "x-api-key"
+ }
+ }
+ }
+}
\ No newline at end of file
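
Based on the `WorkflowExecutionRequest` schema in this spec, a synchronous `POST /api/v2/workflows` call can be sketched as follows. The server URL, flow ID, component ID, and API key are placeholders; the flat `component_id.param_name` input format follows the schema's examples.

```python
import requests

url = "http://LANGFLOW_SERVER_URL/api/v2/workflows"

# Per WorkflowExecutionRequest: flow_id is required; background and
# stream default to false, which selects synchronous execution.
payload = {
    "flow_id": "FLOW_ID",
    "background": False,
    "stream": False,
    "inputs": {
        # Flat "component_id.param_name" format from the schema examples.
        "ChatInput-abc.input_value": "Hello, how can you help me today?",
    },
}

# The spec accepts the API key as an x-api-key header or query parameter.
headers = {"Content-Type": "application/json", "x-api-key": "LANGFLOW_API_KEY"}

try:
    response = requests.post(url, json=payload, headers=headers, timeout=60)
    response.raise_for_status()
    result = response.json()
    # A synchronous run returns object="response"; a background run returns object="job".
    print(result.get("object"), result.get("status"))
except requests.exceptions.RequestException as e:
    print(f"Error executing workflow: {e}")
```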
diff --git a/docs/openapi/openapi.json b/docs/openapi/openapi.json
index df9df156b..75957225a 100644
--- a/docs/openapi/openapi.json
+++ b/docs/openapi/openapi.json
@@ -2,114 +2,9 @@
"openapi": "3.1.0",
"info": {
"title": "Langflow",
- "version": "1.7.3"
+ "version": "1.8.0"
},
"paths": {
- "/api/v1/build/{flow_id}/vertices": {
- "post": {
- "tags": [
- "Chat"
- ],
- "summary": "Retrieve Vertices Order",
-        "description": "Retrieve the vertices order for a given flow.\n\nArgs: flow_id (str): The ID of the flow. background_tasks (BackgroundTasks): The background tasks. data (Optional[FlowDataRequest], optional): The flow data. Defaults to None. stop_component_id (str, optional): The ID of the stop component. Defaults to None. start_component_id (str, optional): The ID of the start component. Defaults to None. session (AsyncSession, optional): The session dependency.\n\nReturns: VerticesOrderResponse: The response containing the ordered vertex IDs and the run ID.\n\nArgs: flow_id (str): The ID of the flow. vertex_id (str): The ID of the vertex to build. background_tasks (BackgroundTasks): The background tasks dependency. inputs (Optional[InputValueRequest], optional): The input values for the vertex. Defaults to None. files (List[str], optional): The files to use. Defaults to None. current_user (Any, optional): The current user dependency. Defaults to Depends(get_current_active_user).\n\nReturns: VertexBuildResponse: The response containing the built vertex information.\n\nThis function is responsible for building a single vertex instead of the entire graph. It takes the `flow_id` and `vertex_id` as required parameters, and an optional `session_id`. It also depends on the `ChatService` and `SessionService` services.\n\nIf `session_id` is not provided, it retrieves the graph from the cache using the `chat_service`. If `session_id` is provided, it loads the session data using the `session_service`.\n\nOnce the graph is obtained, it retrieves the specified vertex using the `vertex_id`. If the vertex does not support streaming, an error is raised. If the vertex has a built result, it sends the result as a chunk. If the vertex is not frozen or not built, it streams the vertex data. If the vertex has a result, it sends the result as a chunk. If none of the above conditions are met, an error is raised.\n\nIf any exception occurs during the process, an error message is sent. Finally, the stream is closed.\n\nReturns: A `StreamingResponse` object with the streamed vertex data in text/event-stream format.\n\nThis endpoint executes a flow identified by ID or name, with options for streaming the response and tracking execution metrics. It handles both streaming and non-streaming execution modes. This endpoint uses session-based authentication (cookies).\n\nArgs: background_tasks (BackgroundTasks): FastAPI background task manager flow (FlowRead | None): The flow to execute, loaded via dependency input_request (SimplifiedAPIRequest | None): Input parameters for the flow stream (bool): Whether to stream the response api_key_user (User): Authenticated user from session context (dict | None): Optional context to pass to the flow http_request (Request): The incoming HTTP request for extracting global variables\n\nReturns: Union[StreamingResponse, RunResponse]: Either a streaming response for real-time results or a RunResponse with the complete execution results\n\nRaises: HTTPException: For flow not found (404) or invalid input (400) APIException: For internal execution errors (500)\n\nArgs: flow_id_or_name (str): The flow ID or endpoint name. flow (Flow): The flow to be executed. request (Request): The incoming HTTP request. background_tasks (BackgroundTasks): The background tasks manager.\n\nReturns: dict: A dictionary containing the status of the task.\n\nRaises: HTTPException: If the flow is not found or if there is an error processing the request.",
+        "description": "Run a flow using a webhook request.\n\nArgs: flow_id_or_name: The flow ID or endpoint name (used by dependency). flow: The flow to be executed. request: The incoming HTTP request.\n\nReturns: A dictionary containing the status of the task.",
Processes the provided code and template updates, applies parameter changes (including those loaded from the database), updates the component's build configuration, and validates outputs. Returns the updated component node as a JSON-serializable dictionary.