How to use Ollama with the useAI hook
👉 Ensure you have installed the 🧩 Ollama plugin
📥 Download Ollama: Ollama Download
Before using the hook, pull a model to run locally:
ollama pull llama3.2
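You can confirm that the model is available locally by listing your installed models:
ollama list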
💡 Important: the //exec: node comment at the top of each code block is required for proper streaming.
//exec: node
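// Stream a short story from the local llama3.2 model as it is generated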
const streamResponse = true;
await useLocalAI("llama3.2", streamResponse, "Create a short story");
Set streamResponse = false to wait for the full response before proceeding.
//exec: node
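// Disable streaming and wait for the full translation before printing it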
const streamResponse = false;
const result = await useLocalAI("llama3.2", streamResponse, "Translate this sentence into French: Hello everyone.");
print(result);
You can also call remote models through any OpenAI-compatible API, such as OpenRouter.
🔑 Get your API key here: OpenRouter API Keys
//exec: node
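// Call a remote model through OpenRouter's OpenAI-compatible API
// Replace "API_KEY" below with your own OpenRouter API key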
const streamResponse = true;
const baseURL = "https://openrouter.ai/api/v1";
const apiKey = "API_KEY";
await useOpenAIApi(baseURL, apiKey, "sao10k/l3.3-euryale-70b", streamResponse, "Create a short story");
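If you need the complete reply from a remote model before continuing, the streamResponse = false pattern should carry over; this is a sketch that assumes useOpenAIApi, like useLocalAI, returns the full reply when streaming is disabled.
//exec: node
// Assumption: useOpenAIApi returns the full reply when streamResponse is false, like useLocalAI above
const streamResponse = false;
const baseURL = "https://openrouter.ai/api/v1";
const apiKey = "API_KEY"; // replace with your OpenRouter API key
const result = await useOpenAIApi(baseURL, apiKey, "sao10k/l3.3-euryale-70b", streamResponse, "Translate this sentence into French: Good morning.");
print(result);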
✅ You're now set up to use local and remote AI models! 🚀