feat(agent): Implement recursion limit handling and emergency synthesis for search process
Parent: 18fdb192d8
Commit: 1e40244183
13 changed files with 249 additions and 70 deletions
@@ -75,7 +75,6 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.

3. After cloning, navigate to the directory containing the project files.

4. Rename the `sample.config.toml` file to `config.toml`. For Docker setups, you need only fill in the following fields (a minimal sketch follows the list):

   - `OPENAI`: Your OpenAI API key. **You only need to fill this if you wish to use OpenAI's models**.
   - `OLLAMA`: Your Ollama API URL. You should enter it as `http://host.docker.internal:PORT_NUMBER`. If you installed Ollama on port 11434, use `http://host.docker.internal:11434`. For other ports, adjust accordingly. **You need to fill this if you wish to use Ollama's models instead of OpenAI's**.
   - `GROQ`: Your Groq API key. **You only need to fill this if you wish to use Groq's hosted models**.
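A minimal sketch of those fields in `config.toml`, following the `[MODELS.*]` layout documented in the configuration reference below; the key values shown are placeholders:

```toml
[MODELS.OPENAI]
API_KEY = "sk-..."    # only if you use OpenAI's models

[MODELS.OLLAMA]
API_URL = "http://host.docker.internal:11434"

[MODELS.GROQ]
API_KEY = "gsk_..."   # only if you use Groq's hosted models
```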
@@ -113,7 +112,6 @@ If you're encountering an Ollama connection error, it is likely due to the backe

1. **Check your Ollama API URL:** Ensure that the API URL is correctly set in the settings menu.
2. **Update API URL Based on OS:**

   - **Windows:** Use `http://host.docker.internal:11434`
   - **Mac:** Use `http://host.docker.internal:11434`
   - **Linux:** Use `http://<private_ip_of_host>:11434`

@@ -121,7 +119,6 @@ If you're encountering an Ollama connection error, it is likely due to the backe

Adjust the port number if you're using a different one.

3. **Linux Users - Expose Ollama to Network:**

   - Inside `/etc/systemd/system/ollama.service`, add `Environment="OLLAMA_HOST=0.0.0.0"` (see the excerpt below), then restart Ollama with `systemctl restart ollama`. For more information, see the [Ollama docs](https://github.com/ollama/ollama/blob/main/docs/faq.md#setting-environment-variables-on-linux).
   - Ensure that the port (default is 11434) is not blocked by your firewall.
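A sketch of the systemd change, assuming you edit the unit file in place (a drop-in override under `/etc/systemd/system/ollama.service.d/` works too):

```ini
# /etc/systemd/system/ollama.service (excerpt)
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```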
@@ -150,11 +147,9 @@ Perplexica runs on Next.js and handles all API requests. It works right away on

When running Perplexica behind a reverse proxy (like Nginx, Apache, or Traefik), follow these steps to ensure proper functionality:

1. **Configure the BASE_URL setting**:

   - In `config.toml`, set the `BASE_URL` parameter under the `[GENERAL]` section to your public-facing URL (e.g., `https://perplexica.yourdomain.com`)

2. **Ensure proper headers forwarding**:

   - Your reverse proxy should forward the following headers (see the Nginx sketch below):
     - `X-Forwarded-Host`
     - `X-Forwarded-Proto`
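A minimal Nginx sketch, assuming Perplexica listens on `localhost:3000` behind the proxy; adjust the server name and upstream to your deployment:

```nginx
server {
    server_name perplexica.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        # Forward the headers Perplexica uses to detect its public URL.
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```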
@@ -41,7 +41,6 @@ The API accepts a JSON object in the request body, where you define the focus mo

### Request Parameters

- **`chatModel`** (object, optional): Defines the chat model to be used for the query. For model details, you can send a GET request to `http://localhost:3000/api/models`. Make sure to use the key value (for example, `gpt-4o-mini` instead of the display name "GPT 4 omni mini").

  - `provider`: Specifies the provider for the chat model (e.g., `openai`, `ollama`).
  - `name`: The specific model from the chosen provider (e.g., `gpt-4o-mini`).
  - Optional fields for custom OpenAI configuration:

@@ -49,16 +48,13 @@ The API accepts a JSON object in the request body, where you define the focus mo

    - `customOpenAIKey`: The API key for a custom OpenAI instance.

- **`embeddingModel`** (object, optional): Defines the embedding model for similarity-based searching. For model details, you can send a GET request to `http://localhost:3000/api/models`. Make sure to use the key value (for example, `text-embedding-3-large` instead of the display name "Text Embedding 3 Large").

  - `provider`: The provider for the embedding model (e.g., `openai`).
  - `name`: The specific embedding model (e.g., `text-embedding-3-large`).

- **`focusMode`** (string, required): Specifies which focus mode to use. Available modes:

  - `webSearch`, `academicSearch`, `localResearch`, `chat`, `wolframAlphaSearch`, `youtubeSearch`, `redditSearch`.

- **`optimizationMode`** (string, optional): Specifies the optimization mode to control the balance between performance and quality. Available modes:

  - `speed`: Prioritizes speed and returns the quickest possible answer, with minimal effort spent retrieving web content (uses only SearXNG result previews).
  - `agent`: Uses an agentic workflow to answer complex multi-part questions. This mode requires a model that is trained for tool use.
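Putting these parameters together, an illustrative request body; the `query` field is an assumption here, inferred from the API's purpose rather than shown in this excerpt:

```json
{
  "chatModel": { "provider": "openai", "name": "gpt-4o-mini" },
  "embeddingModel": { "provider": "openai", "name": "text-embedding-3-large" },
  "focusMode": "webSearch",
  "optimizationMode": "speed",
  "query": "What is Perplexica?"
}
```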
@@ -17,22 +17,26 @@ cp sample.config.toml config.toml

General application settings.

#### SIMILARITY_MEASURE

- **Type**: String
- **Options**: `"cosine"` or `"dot"`
- **Default**: `"cosine"`
- **Description**: The similarity measure used for embedding comparisons in search results ranking.

#### KEEP_ALIVE

- **Type**: String
- **Default**: `"5m"`
- **Description**: How long to keep Ollama models loaded into memory. Use time suffixes like `"5m"` for 5 minutes, `"1h"` for 1 hour, or `"-1m"` for indefinite.

#### BASE_URL

- **Type**: String
- **Default**: `""` (empty)
- **Description**: Optional base URL override. When set, overrides the detected URL for OpenSearch and other public URLs.

#### HIDDEN_MODELS

- **Type**: Array of Strings
- **Default**: `[]` (empty array)
- **Description**: Array of model names to hide from the user interface and API responses. Hidden models will not appear in model selection lists but can still be used if directly specified.
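Taken together, a sketch of the `[GENERAL]` section with illustrative values (the hidden model key is a placeholder):

```toml
[GENERAL]
SIMILARITY_MEASURE = "cosine"
KEEP_ALIVE = "5m"
BASE_URL = "https://perplexica.yourdomain.com"
HIDDEN_MODELS = ["gpt-4o-mini"]
```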
@@ -47,30 +51,39 @@ General application settings.

Model provider configurations. Each provider has its own subsection.

#### [MODELS.OPENAI]

- **API_KEY**: Your OpenAI API key

#### [MODELS.GROQ]

- **API_KEY**: Your Groq API key

#### [MODELS.ANTHROPIC]

- **API_KEY**: Your Anthropic API key

#### [MODELS.GEMINI]

- **API_KEY**: Your Google Gemini API key

#### [MODELS.CUSTOM_OPENAI]

Configuration for OpenAI-compatible APIs (like LMStudio, vLLM, etc.)

- **API_KEY**: API key for the custom endpoint
- **API_URL**: Base URL for the OpenAI-compatible API
- **MODEL_NAME**: Name of the model to use

#### [MODELS.OLLAMA]

- **API_URL**: Ollama server URL (e.g., `"http://host.docker.internal:11434"`)

#### [MODELS.DEEPSEEK]

- **API_KEY**: Your DeepSeek API key

#### [MODELS.LM_STUDIO]

- **API_URL**: LM Studio server URL (e.g., `"http://host.docker.internal:1234"`)
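For example, a `[MODELS.CUSTOM_OPENAI]` block pointing at a local OpenAI-compatible server might look like the sketch below; the model name is a placeholder, and the `/v1` suffix is an assumption typical of OpenAI-compatible servers, so adjust if yours differs:

```toml
[MODELS.CUSTOM_OPENAI]
API_KEY = "not-needed-for-local"                  # many local servers ignore the key
API_URL = "http://host.docker.internal:1234/v1"   # base URL of the compatible API
MODEL_NAME = "llama-3.1-8b-instruct"              # placeholder model name
```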
### [API_ENDPOINTS]

@@ -78,6 +91,7 @@ Configuration for OpenAI-compatible APIs (like LMStudio, vLLM, etc.)

External service endpoints.

#### SEARXNG

- **Type**: String
- **Description**: SearXNG API URL for web search functionality
- **Example**: `"http://localhost:32768"`
@@ -97,16 +111,19 @@ Some configurations can also be set via environment variables, which take prece

The `HIDDEN_MODELS` setting allows server administrators to control which models are visible to users:

### How It Works

1. Models listed in `HIDDEN_MODELS` are filtered out of API responses
2. The settings UI shows all models (including hidden ones) for management
3. Hidden models can still be used if explicitly specified in API calls

### Managing Hidden Models

1. **Via Configuration File**: Edit the `HIDDEN_MODELS` array in `config.toml`
2. **Via Settings UI**: Use the "Model Visibility" section in the settings page
3. **Via API**: Use the `/api/config` endpoint to update the configuration (see the sketch below)
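A hypothetical sketch of that API call; the `hiddenModels` payload key is an assumption inferred from the settings page's `saveConfig('hiddenModels', ...)` call elsewhere in this commit, so verify it against the actual `/api/config` handler:

```ts
// Hypothetical: persist the hidden-model list via the config endpoint.
await fetch('/api/config', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ hiddenModels: ['gpt-4o-mini'] }), // placeholder model key
});
```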
### API Behavior

- **Default**: `/api/models` returns only visible models
- **Include Hidden**: `/api/models?include_hidden=true` returns all models (for admin use)
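A quick client-side sketch of the difference; the response shape is not shown in this excerpt, so the results are left untyped:

```ts
// Visible models only (what regular clients should use).
const visible = await fetch('http://localhost:3000/api/models').then((r) =>
  r.json(),
);

// All models, including hidden ones (admin use).
const all = await fetch(
  'http://localhost:3000/api/models?include_hidden=true',
).then((r) => r.json());
```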
@@ -129,6 +146,7 @@ The `HIDDEN_MODELS` setting allows server administrators to control which models

### Configuration Validation

The application validates configuration on startup and will log errors for:

- Invalid TOML syntax
- Missing required fields
- Invalid URLs or API endpoints
@@ -254,7 +254,9 @@ export default function SettingsPage() {
embedding: Record<string, Record<string, any>>;
}>({ chat: {}, embedding: {} });
const [hiddenModels, setHiddenModels] = useState<string[]>([]);
const [expandedProviders, setExpandedProviders] = useState<Set<string>>(new Set());
const [expandedProviders, setExpandedProviders] = useState<Set<string>>(
new Set(),
);

// Default Search Settings state variables
const [searchOptimizationMode, setSearchOptimizationMode] =
@@ -565,20 +567,23 @@ export default function SettingsPage() {
localStorage.setItem(key, value);
};

const handleModelVisibilityToggle = async (modelKey: string, isVisible: boolean) => {
const handleModelVisibilityToggle = async (
modelKey: string,
isVisible: boolean,
) => {
let updatedHiddenModels: string[];

if (isVisible) {
// Model should be visible, remove from hidden list
updatedHiddenModels = hiddenModels.filter(m => m !== modelKey);
updatedHiddenModels = hiddenModels.filter((m) => m !== modelKey);
} else {
// Model should be hidden, add to hidden list
updatedHiddenModels = [...hiddenModels, modelKey];
}

// Update local state immediately
setHiddenModels(updatedHiddenModels);

// Persist changes to backend
try {
await saveConfig('hiddenModels', updatedHiddenModels);
@@ -590,7 +595,7 @@ export default function SettingsPage() {
};

const toggleProviderExpansion = (providerId: string) => {
setExpandedProviders(prev => {
setExpandedProviders((prev) => {
const newSet = new Set(prev);
if (newSet.has(providerId)) {
newSet.delete(providerId);
@@ -1472,7 +1477,7 @@ export default function SettingsPage() {
)}
</SettingsSection>

<SettingsSection
<SettingsSection
title="Model Visibility"
tooltip="Hide models from the API to prevent them from appearing in model lists.\nHidden models will not be available for selection in the interface.\nThis allows server admins to disable models that may incur large costs or won't work with the application."
>
@@ -1481,35 +1486,41 @@ export default function SettingsPage() {
{(() => {
// Combine all models from both chat and embedding providers
const allProviders: Record<string, Record<string, any>> = {};

// Add chat models
Object.entries(allModels.chat).forEach(([provider, models]) => {
if (!allProviders[provider]) {
allProviders[provider] = {};
}
Object.entries(models).forEach(([modelKey, model]) => {
allProviders[provider][modelKey] = model;
});
});

Object.entries(allModels.chat).forEach(
([provider, models]) => {
if (!allProviders[provider]) {
allProviders[provider] = {};
}
Object.entries(models).forEach(([modelKey, model]) => {
allProviders[provider][modelKey] = model;
});
},
);

// Add embedding models
Object.entries(allModels.embedding).forEach(([provider, models]) => {
if (!allProviders[provider]) {
allProviders[provider] = {};
}
Object.entries(models).forEach(([modelKey, model]) => {
allProviders[provider][modelKey] = model;
});
});
Object.entries(allModels.embedding).forEach(
([provider, models]) => {
if (!allProviders[provider]) {
allProviders[provider] = {};
}
Object.entries(models).forEach(([modelKey, model]) => {
allProviders[provider][modelKey] = model;
});
},
);

return Object.keys(allProviders).length > 0 ? (
Object.entries(allProviders).map(([provider, models]) => {
const providerId = `provider-${provider}`;
const isExpanded = expandedProviders.has(providerId);
const modelEntries = Object.entries(models);
const hiddenCount = modelEntries.filter(([modelKey]) => hiddenModels.includes(modelKey)).length;
const hiddenCount = modelEntries.filter(([modelKey]) =>
hiddenModels.includes(modelKey),
).length;
const totalCount = modelEntries.length;

return (
<div
key={providerId}
@@ -1521,13 +1532,21 @@ export default function SettingsPage() {
>
<div className="flex items-center space-x-3">
{isExpanded ? (
<ChevronDown size={16} className="text-black/70 dark:text-white/70" />
<ChevronDown
size={16}
className="text-black/70 dark:text-white/70"
/>
) : (
<ChevronRight size={16} className="text-black/70 dark:text-white/70" />
<ChevronRight
size={16}
className="text-black/70 dark:text-white/70"
/>
)}
<h4 className="text-sm font-medium text-black/80 dark:text-white/80">
{(PROVIDER_METADATA as any)[provider]?.displayName ||
provider.charAt(0).toUpperCase() + provider.slice(1)}
{(PROVIDER_METADATA as any)[provider]
?.displayName ||
provider.charAt(0).toUpperCase() +
provider.slice(1)}
</h4>
</div>
<div className="flex items-center space-x-2 text-xs text-black/60 dark:text-white/60">
@@ -1539,7 +1558,7 @@ export default function SettingsPage() {
)}
</div>
</button>

{isExpanded && (
<div className="p-3 bg-light-100 dark:bg-dark-100 border-t border-light-200 dark:border-dark-200">
<div className="grid grid-cols-1 md:grid-cols-2 gap-2">
@@ -1554,7 +1573,10 @@ export default function SettingsPage() {
<Switch
checked={!hiddenModels.includes(modelKey)}
onChange={(checked) => {
handleModelVisibilityToggle(modelKey, checked);
handleModelVisibilityToggle(
modelKey,
checked,
);
}}
className={cn(
!hiddenModels.includes(modelKey)
@@ -74,4 +74,8 @@ export const AgentState = Annotation.Root({
reducer: (x, y) => y ?? x,
default: () => '',
}),
recursionLimitReached: Annotation<boolean>({
reducer: (x, y) => y ?? x,
default: () => false,
}),
});
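The `y ?? x` reducer keeps the channel's previous value unless a node supplies a new one, which lets the recovery path below set `recursionLimitReached: true` once without later updates clearing it. A minimal sketch of the semantics:

```ts
// Reducer used by recursionLimitReached: the incoming update wins
// unless it is null/undefined, in which case the prior value is kept.
const reducer = (x: boolean, y?: boolean | null): boolean => y ?? x;

reducer(false, true); // true: a node set the flag
reducer(true, undefined); // true: no update, prior value kept
```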
@@ -51,11 +51,38 @@ export class SynthesizerAgent {
})
.join('\n');

const recursionLimitMessage = state.recursionLimitReached
? `# ⚠️ IMPORTANT NOTICE - LIMITED INFORMATION
**The search process was interrupted due to complexity limits. You MUST start your response with a warning about incomplete information and qualify all statements appropriately.**
## ⚠️ CRITICAL: Incomplete Information Response Requirements
**You MUST:**
1. **Start your response** with a clear warning that the information may be incomplete or conflicting
2. **Acknowledge limitations** throughout your response where information gaps exist
3. **Be transparent** about what you cannot determine from the available sources
4. **Suggest follow-up actions** for the user to get more complete information
5. **Qualify your statements** with phrases like "based on available information" or "from the limited sources gathered"

**Example opening for incomplete information responses:**
"⚠️ **Please note:** This response is based on incomplete information due to search complexity limits. The findings below may be missing important details or conflicting perspectives. I recommend verifying this information through additional research or rephrasing your query for better results.

`
: '';

// If we have limited documents due to recursion limit, acknowledge this
const documentsAvailable = state.relevantDocuments?.length || 0;
const limitedInfoNote =
state.recursionLimitReached && documentsAvailable === 0
? '**CRITICAL: No source documents were gathered due to search limitations.**\n\n'
: state.recursionLimitReached
? `**NOTICE: Search was interrupted with ${documentsAvailable} documents gathered.**\n\n`
: '';

const formattedPrompt = await template.format({
personaInstructions: this.personaInstructions,
conversationHistory: conversationHistory,
relevantDocuments: relevantDocuments,
query: state.originalQuery || state.query,
recursionLimitReached: recursionLimitMessage + limitedInfoNote,
});

// Stream the response in real-time using LLM streaming capabilities
@@ -142,9 +142,13 @@ export class TaskManagerAgent {
});

// Use structured output for task breakdown
const structuredLlm = withStructuredOutput(this.llm, TaskBreakdownSchema, {
name: 'break_down_tasks',
});
const structuredLlm = withStructuredOutput(
this.llm,
TaskBreakdownSchema,
{
name: 'break_down_tasks',
},
);

const taskBreakdownResult = (await structuredLlm.invoke([prompt], {
signal: this.signal,
@@ -61,31 +61,37 @@ const loadConfig = () => {
const config = toml.parse(
fs.readFileSync(path.join(process.cwd(), `${configFileName}`), 'utf-8'),
) as any as Config;

// Ensure GENERAL section exists
if (!config.GENERAL) {
config.GENERAL = {} as any;
}

// Handle HIDDEN_MODELS - fix malformed table format to proper array
if (!config.GENERAL.HIDDEN_MODELS) {
config.GENERAL.HIDDEN_MODELS = [];
} else if (typeof config.GENERAL.HIDDEN_MODELS === 'object' && !Array.isArray(config.GENERAL.HIDDEN_MODELS)) {
} else if (
typeof config.GENERAL.HIDDEN_MODELS === 'object' &&
!Array.isArray(config.GENERAL.HIDDEN_MODELS)
) {
// Convert malformed table format to array
const hiddenModelsObj = config.GENERAL.HIDDEN_MODELS as any;
const hiddenModelsArray: string[] = [];

// Extract values from numeric keys and sort by key
const keys = Object.keys(hiddenModelsObj).map(k => parseInt(k)).filter(k => !isNaN(k)).sort((a, b) => a - b);
const keys = Object.keys(hiddenModelsObj)
.map((k) => parseInt(k))
.filter((k) => !isNaN(k))
.sort((a, b) => a - b);
for (const key of keys) {
if (typeof hiddenModelsObj[key] === 'string') {
hiddenModelsArray.push(hiddenModelsObj[key]);
}
}

config.GENERAL.HIDDEN_MODELS = hiddenModelsArray;
}

return config;
}
@@ -7,6 +7,8 @@ Your task is to provide answers that are:
- **Cited and credible**: Use inline citations with [number] notation to refer to the context source(s) for each fact or detail included
- **Explanatory and Comprehensive**: Strive to explain the topic in depth, offering detailed analysis, insights, and clarifications wherever applicable

{recursionLimitReached}

# Formatting Instructions
## System Formatting Instructions
- **Structure**: Use a well-organized format with proper headings (e.g., "## Example heading 1" or "## Example heading 2"). Present information in paragraphs or concise bullet points where appropriate
@@ -92,7 +92,7 @@ export const embeddingModelProviders: Record<
};

export const getAvailableChatModelProviders = async (
options: { includeHidden?: boolean } = {}
options: { includeHidden?: boolean } = {},
) => {
const { includeHidden = false } = options;
const models: Record<string, Record<string, ChatModel>> = {};

@@ -154,7 +154,7 @@ export const getAvailableChatModelProviders = async (
};

export const getAvailableEmbeddingModelProviders = async (
options: { includeHidden?: boolean } = {}
options: { includeHidden?: boolean } = {},
) => {
const { includeHidden = false } = options;
const models: Record<string, Record<string, EmbeddingModel>> = {};
@@ -8,6 +8,7 @@ import {
import {
BaseLangGraphError,
END,
GraphRecursionError,
MemorySaver,
START,
StateGraph,
@@ -181,21 +182,121 @@ export class AgentSearch {
focusMode: this.focusMode,
};

const threadId = `agent_search_${Date.now()}`;
const config = {
configurable: { thread_id: threadId },
recursionLimit: 18,
signal: this.signal,
};

try {
await workflow.invoke(initialState, {
configurable: { thread_id: `agent_search_${Date.now()}` },
recursionLimit: 20,
signal: this.signal,
});
} catch (error: BaseLangGraphError | any) {
if (error instanceof BaseLangGraphError) {
console.error('LangGraph error occurred:', error.message);
if (error.lc_error_code === 'GRAPH_RECURSION_LIMIT') {
const result = await workflow.invoke(initialState, config);
} catch (error: any) {
if (error instanceof GraphRecursionError) {
console.warn(
'Graph recursion limit reached, attempting best-effort synthesis with gathered information',
);

// Emit agent action to explain what happened
this.emitter.emit(
'data',
JSON.stringify({
type: 'agent_action',
data: {
action: 'recursion_limit_recovery',
message:
'Search process reached complexity limits. Attempting to provide best-effort response with gathered information.',
details:
'The agent workflow exceeded the maximum number of steps allowed. Recovering by synthesizing available data.',
},
}),
);
try {
// Get the latest state from the checkpointer to access gathered information
const latestState = await workflow.getState({
configurable: { thread_id: threadId },
});

if (latestState && latestState.values) {
// Create emergency synthesis state using gathered information
const stateValues = latestState.values;
const emergencyState = {
messages: stateValues.messages || initialState.messages,
query: stateValues.query || initialState.query,
relevantDocuments: stateValues.relevantDocuments || [],
bannedSummaryUrls: stateValues.bannedSummaryUrls || [],
bannedPreviewUrls: stateValues.bannedPreviewUrls || [],
searchInstructionHistory:
stateValues.searchInstructionHistory || [],
searchInstructions: stateValues.searchInstructions || '',
next: 'synthesizer',
analysis: stateValues.analysis || '',
fullAnalysisAttempts: stateValues.fullAnalysisAttempts || 0,
tasks: stateValues.tasks || [],
currentTaskIndex: stateValues.currentTaskIndex || 0,
originalQuery:
stateValues.originalQuery ||
stateValues.query ||
initialState.query,
fileIds: stateValues.fileIds || initialState.fileIds,
focusMode: stateValues.focusMode || initialState.focusMode,
urlsToSummarize: stateValues.urlsToSummarize || [],
summarizationIntent: stateValues.summarizationIntent || '',
recursionLimitReached: true,
};
const documentsCount =
emergencyState.relevantDocuments?.length || 0;
console.log(
`Attempting emergency synthesis with ${documentsCount} gathered documents`,
);

// Emit detailed agent action about the recovery attempt
this.emitter.emit(
'data',
JSON.stringify({
type: 'agent_action',
data: {
action: 'emergency_synthesis',
message: `Proceeding with available information: ${documentsCount} documents gathered${emergencyState.analysis ? ', analysis available' : ''}`,
details: `Recovered state contains: ${documentsCount} relevant documents, ${emergencyState.searchInstructionHistory?.length || 0} search attempts, ${emergencyState.analysis ? 'analysis data' : 'no analysis'}`,
},
}),
);
// Only proceed with synthesis if we have some useful information
if (documentsCount > 0 || emergencyState.analysis) {
await this.synthesizerAgent.execute(emergencyState);
} else {
// If we don't have any gathered information, provide a helpful message
this.emitter.emit(
'data',
JSON.stringify({
type: 'response',
data: "⚠️ **Search Process Incomplete** - The search process reached complexity limits before gathering sufficient information to provide a meaningful response. Please try:\n\n- Using more specific keywords\n- Breaking your question into smaller parts\n- Rephrasing your query to be more focused\n\nI apologize that I couldn't provide the information you were looking for.",
}),
);
this.emitter.emit('end');
}
} else {
// Fallback if we can't retrieve state
this.emitter.emit(
'data',
JSON.stringify({
type: 'response',
data: '⚠️ **Limited Information Available** - The search process encountered complexity limits and was unable to gather sufficient information. Please try rephrasing your question or breaking it into smaller, more specific parts.',
}),
);
this.emitter.emit('end');
}
} catch (synthError) {
console.error('Emergency synthesis failed:', synthError);
this.emitter.emit(
'data',
JSON.stringify({
type: 'response',
data: "I've been working on this for a while and can't find a solution. Please try again with a different query.",
data: '⚠️ **Search Process Interrupted** - The search encountered complexity limits and could not complete successfully. Please try a simpler query or break your question into smaller parts.',
}),
);
this.emitter.emit('end');
@@ -14,10 +14,10 @@ interface StructuredOutputOptions {
export function withStructuredOutput<T extends z.ZodType>(
llm: BaseChatModel,
schema: T,
options: StructuredOutputOptions = {}
options: StructuredOutputOptions = {},
) {
const isGroqModel = llm instanceof ChatGroq;

if (isGroqModel) {
return llm.withStructuredOutput(schema, {
name: options.name,
@@ -52,9 +52,13 @@ export const summarizeWebContent = async (

try {
// Create structured LLM with Zod schema
const structuredLLM = withStructuredOutput(llm, RelevanceCheckSchema, {
name: 'check_content_relevance',
});
const structuredLLM = withStructuredOutput(
llm,
RelevanceCheckSchema,
{
name: 'check_content_relevance',
},
);

const relevanceResult = await structuredLLM.invoke(
`${systemPrompt}You are a content relevance checker. Your task is to determine if the given content is relevant to the user's query.