AI Speed and Providers on CivNode
If you're using AI features on CivNode, response times depend on which provider you're using. This isn't a bug; it's the nature of the different services.

**Fast providers (recommended for most users):**

- **Claude (Anthropic)** — consistently fast, excellent writing quality. The default recommendation.
- **GPT-4 / GPT-4o (OpenAI)** — fast, good general quality.
- **Gemini (Google)** — fast, handles long context well.

These providers respond in 2-10 seconds for most requests. Writing tools feel snappy, and exploration mode is responsive.

**Slower providers:**

Other providers work fine but may take longer. Some regional or smaller providers can take 15-30 seconds per response. This is normal. The AI status widget (bottom of the screen) shows what's running and how long it's been going. If you see a spinner, the request is still in progress: it hasn't failed, it's just thinking.

**Local AI (Ollama):**

Ollama runs on your own machine, so speed depends entirely on your hardware. A good GPU gives you fast responses; CPU-only is slower but works. See the Ollama setup thread for details.

**The AI status widget:**

The small indicator at the bottom of the screen shows all active AI operations. If something seems stuck, check there. You can see what's running, how long it's been going, and whether anything has failed. It's your dashboard for AI activity.

**If things are slow:**

- Check the AI status widget for errors.
- Try a different provider if available.
- Consider Ollama for offline work with no network latency.
- Cloud providers occasionally have outages — this is their problem, not CivNode's.
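The "try a different provider" advice can be sketched in code. This is a minimal illustration of a client-side timeout-and-fallback pattern, not CivNode's actual implementation; the provider names, latencies, and function names below are all placeholders:

```python
import time

# Simulated provider backends: name -> (latency in seconds, reply).
# These are stand-ins for real API calls; names and timings are illustrative.
FAKE_PROVIDERS = {
    "claude": (0.01, "reply from claude"),
    "gpt-4o": (0.01, "reply from gpt-4o"),
    "slow-regional": (5.0, "reply from slow-regional"),
}

def call_provider(name: str, prompt: str, timeout_s: float) -> str:
    """Pretend to call a provider; raise TimeoutError if it is too slow."""
    latency, reply = FAKE_PROVIDERS[name]
    if latency > timeout_s:
        raise TimeoutError(f"{name} exceeded {timeout_s}s")
    time.sleep(latency)  # simulate the network round-trip
    return reply

def ask_with_fallback(prompt: str, providers: list[str],
                      timeout_s: float = 10.0) -> str:
    """Try providers in order, moving to the next when one is too slow."""
    errors = []
    for name in providers:
        try:
            return call_provider(name, prompt, timeout_s)
        except TimeoutError as exc:
            errors.append(str(exc))
    raise RuntimeError("all providers timed out: " + "; ".join(errors))

if __name__ == "__main__":
    # The slow provider misses the 2-second deadline, so the call
    # falls through to the fast one.
    print(ask_with_fallback("Summarize this scene.",
                            ["slow-regional", "claude"], timeout_s=2.0))
```

The same idea applies whatever client library you use: set a deadline per request, and keep an ordered list of providers so a slow or down service degrades to a retry rather than a hang.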