Troubleshooting and FAQ
This page addresses frequently asked questions and common troubleshooting topics for Langfuse Prompt Management.
If you don't find a solution to your issue here, try using Ask AI for instant answers. For bug reports, please open a ticket on GitHub Issues. For general questions or support, visit our support page.
FAQ
Folders
Organize prompts into virtual folders to group related prompts by purpose. Use folder hierarchies to manage prompt libraries at scale.
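As a sketch of how such a hierarchy can be derived: Langfuse folders are virtual, based on `/` separators in prompt names (e.g. a prompt named `support/greeting` appears in a `support` folder). The helper below groups a flat list of prompt names into top-level folders; the sample names are hypothetical.

```python
# Assumption: virtual folders come from "/" separators in prompt names,
# so "support/greeting" lives in the "support" folder.
from collections import defaultdict

def group_by_folder(prompt_names):
    """Map each top-level folder to the prompt names inside it."""
    folders = defaultdict(list)
    for name in prompt_names:
        folder, _, _ = name.rpartition("/")
        folders[folder or "(root)"].append(name)  # "(root)" = no folder
    return dict(folders)

prompts = ["support/greeting", "support/escalation", "billing/invoice", "welcome"]
print(group_by_folder(prompts))
# → {'support': ['support/greeting', 'support/escalation'],
#    'billing': ['billing/invoice'], '(root)': ['welcome']}
```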
Overview
With Langfuse you can capture all your LLM evaluations in one place. You can combine a variety of evaluation methods, such as model-based evaluations (LLM-as-a-Judge), human annotations, or fully custom evaluation workflows via the API/SDKs. This lets you measure quality, tonality, factual accuracy, completeness, and other dimensions of your LLM application.
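A custom evaluation workflow can be as simple as computing a numeric score in your own code and attaching it to a trace. The sketch below uses a hypothetical keyword-coverage heuristic as a stand-in for a completeness metric; the keywords and output are illustrative, and the Langfuse call is shown only as a hedged comment since it requires a configured client.

```python
# Sketch of a custom evaluation metric (hypothetical heuristic, not a
# Langfuse built-in): score completeness as the fraction of required
# keywords that appear in an LLM output.

def completeness_score(output: str, required_keywords: list[str]) -> float:
    """Fraction of required keywords present in the output (0.0 to 1.0)."""
    text = output.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in text)
    return hits / len(required_keywords) if required_keywords else 1.0

output = "Your refund has been processed and a confirmation email was sent."
score = completeness_score(output, ["refund", "confirmation", "timeline"])
print(score)  # 2 of 3 keywords found

# Reporting the value back to Langfuse would look roughly like this
# (assumes the Langfuse Python SDK and an initialized client):
#   langfuse.score(trace_id=trace_id, name="completeness", value=score)
```

Because the score is just a number attached to a trace, the same pattern works for any metric you can compute in code.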