Hello,

Thank you for maintaining this great project. I’m developing an Agentic RAG application using the DocumentKnowledgeBase with PGVector.
In cases where the knowledge base contains no relevant information for a given prompt, I’d like the LLM to generate answers independently without relying on retrieved documents. While this can be partially addressed through prompting, is there a programmatic way to control this behavior, for example by specifying a similarity threshold or distance cutoff for retrieved documents?
Specifically:

- Could we introduce a threshold parameter (e.g., `min_similarity` or `max_distance`) to determine whether to use the knowledge base or bypass it entirely?
- If such a parameter already exists, could you clarify how to configure it?
This would allow dynamic switching between knowledge-grounded and independent LLM responses based on retrieval relevance; a rough sketch of the intended behavior follows.
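For illustration, here is a minimal sketch of the gating behavior being requested. Everything in it is hypothetical: `retrieve`, `generate`, and `MIN_SIMILARITY` are stand-in names for this sketch, not part of the library's current API.

```python
from typing import Callable, List, Optional, Tuple

MIN_SIMILARITY = 0.75  # assumed cosine-similarity cutoff in [0, 1]; tune per corpus

def answer(
    prompt: str,
    retrieve: Callable[[str], List[Tuple[str, float]]],
    generate: Callable[[str, Optional[str]], str],
) -> str:
    """Ground the answer in retrieved documents only when retrieval is relevant."""
    # `retrieve` is assumed to return (document_text, similarity) pairs,
    # sorted by similarity, highest first.
    results = retrieve(prompt)

    if results and results[0][1] >= MIN_SIMILARITY:
        # At least one sufficiently relevant document: answer with context.
        context = "\n\n".join(text for text, _ in results)
        return generate(prompt, context)

    # Nothing relevant enough: let the LLM answer on its own.
    return generate(prompt, None)
```

The same shape works with a distance metric instead of similarity; the comparison simply flips to `results[0][1] <= MAX_DISTANCE`.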
Hey @ozbekburak! We do have a reranker, but I totally understand that it doesn’t offer the level of granularity you're looking for. I really love your suggestion, though! I’ve added it as a feature request, and I’m excited to get it shipped. We’ll keep you in the loop and let you know as we make progress on this. Thanks so much for your valuable input!
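Until a built-in parameter ships, one interim workaround is to enforce the cutoff at the database level, since pgvector exposes a cosine-distance operator (`<=>`). Below is a minimal sketch under stated assumptions: the table name `documents`, the columns `content` and `embedding`, and the `MAX_DISTANCE` value are all illustrative, and producing `query_embedding` with your embedding model is not shown.

```python
import numpy as np
import psycopg2
from pgvector.psycopg2 import register_vector  # pip install pgvector

MAX_DISTANCE = 0.25  # assumed cosine-distance cutoff; tune per corpus

def fetch_relevant(conn, query_embedding: np.ndarray, limit: int = 5):
    """Return documents within MAX_DISTANCE of the query embedding, closest first."""
    register_vector(conn)  # lets psycopg2 adapt numpy arrays to pgvector values
    with conn.cursor() as cur:
        # pgvector's <=> operator computes cosine distance (0 = identical).
        cur.execute(
            """
            SELECT content, embedding <=> %s AS distance
            FROM documents
            WHERE embedding <=> %s < %s
            ORDER BY distance
            LIMIT %s
            """,
            (query_embedding, query_embedding, MAX_DISTANCE, limit),
        )
        return cur.fetchall()
```

If this query returns no rows, the agent can skip retrieved context entirely and let the model answer on its own, which matches the switching behavior described above.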