Mastering AI Agent Settings: A Guide to Every Option

2026-02-19

tutorials, product

Creating an agent on Herm.Chat is just the beginning. To truly make your AI agent perform at its best, you need to understand the settings that control its "brain."

In this guide, we'll break down every option available in your AI agent configuration.

1. System Prompt

The System Prompt is the most important setting. It defines your agent's identity, knowledge, and limitations.

  • Bad Prompt: "You are a customer support agent."
  • Good Prompt: "You are a friendly customer support agent for Herm.Chat. You only answer questions about our product and pricing. Use a professional but helpful tone. Never provide medical or financial advice."
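In a typical chat-completions-style API, the system prompt is the first message in the request body, so it frames every turn of the conversation. Here is a minimal sketch; the field names follow the common `role`/`content` convention and are illustrative, not Herm.Chat's actual API:

```python
# Illustrative request body; field names follow the common chat-completions
# convention and are NOT guaranteed to match Herm.Chat's real API.
SYSTEM_PROMPT = (
    "You are a friendly customer support agent for Herm.Chat. "
    "You only answer questions about our product and pricing. "
    "Use a professional but helpful tone. "
    "Never provide medical or financial advice."
)

def build_request(user_message: str) -> dict:
    """Place the system prompt first so it governs the whole conversation."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ]
    }

req = build_request("How much does the Scale plan cost?")
print(req["messages"][0]["role"])  # system
```

Note that the system message stays fixed while user messages change each turn, which is why investing in a strong prompt pays off across every conversation.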

2. LLM Provider & Model

Under Model Settings, you can choose which AI model powers your agent:

  • OpenAI GPT-4o: Best all-rounder for complex reasoning.
  • Anthropic Claude 3.5 Sonnet: Excellent for nuanced writing and following strict instructions.
  • Google Gemini 2.5 Pro: Great for processing large amounts of context quickly.

3. Temperature

Think of Temperature as a "creativity" slider.

  • 0.0 - 0.3: Focused and deterministic. Best for factual support or data extraction.
  • 0.7 - 1.0: Creative and varied. Best for writing assistants or roleplay agents.
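Under the hood, temperature divides the model's next-token scores before they are turned into probabilities: low values sharpen the distribution toward the single most likely token, while high values flatten it so more tokens get a realistic chance. A small sketch with made-up scores:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/T, then normalize into a probability distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores

low = softmax_with_temperature(logits, 0.2)   # sharp: top token dominates
high = softmax_with_temperature(logits, 1.0)  # flatter: more variety

print(low[0] > high[0])  # True: low temperature concentrates mass on the top token
```

This is why 0.0-0.3 gives near-deterministic answers and 0.7-1.0 gives varied ones: the same scores, reshaped into a sharper or flatter distribution.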

4. Max Tokens

Max Tokens limits the length of the agent's individual responses.

  • Lower limits (150-300): Good for quick answers and mobile-friendly widgets.
  • Higher limits (800+): Necessary for summarizing long documents or generating content.
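When picking a limit, a common rule of thumb is roughly 4 characters per English token. The sketch below uses that heuristic (an approximation, not a real tokenizer) to check whether a draft reply fits a budget:

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per English token.
    Real tokenizers vary by model; use this only for ballpark sizing."""
    return max(1, len(text) // 4)

def fits_budget(text: str, max_tokens: int) -> bool:
    """True if the estimated token count is within the response limit."""
    return estimate_tokens(text) <= max_tokens

short_reply = "Your invoice is available under Settings > Billing."
print(fits_budget(short_reply, 150))  # True: well within a 150-token widget limit
```

If your agent's replies keep getting cut off mid-sentence, the limit is too low for the content you are asking it to produce; raise it rather than rewriting the prompt.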

5. Visual Styling

Beyond the logic, you can customize how the widget looks:

  • Theme Color: Match your brand's primary color.
  • Position: Choose whether the bubble appears on the left or right of the screen.
  • White-labeling: (Available on Scale plans) Remove the "Powered by Herm.Chat" branding.

Conclusion

The best configuration depends on your specific use case. We recommend starting with a balanced temperature (0.5), a strong system prompt, and testing frequently in the dashboard.