How to use Ollama with the useAI hook
👉 Make sure the Ollama plugin is installed
Download the LLM you want to run:
ollama pull llama3.2
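Once the pull completes, you can confirm the model is available locally with Ollama's list command:
ollama list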
The //exec: node comment is always required in order to stream the response correctly.
Stream the response to the console
//exec: node
const streamResponse = true;
await useLocalAI("llama3.2", streamResponse, "Create a short story");
Wait for the result to chain it into another task
//exec: node
const streamResponse = false;
const result = await useLocalAI("llama3.2", streamResponse, "Translate in French this sentence: Hello everyone.");
print(result);
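Building on that, here is a minimal chaining sketch (assuming the same useLocalAI signature and print helper shown above), where the first result feeds the next prompt:
//exec: node
const streamResponse = false;
// First task: generate a story, waiting for the full result
const story = await useLocalAI("llama3.2", streamResponse, "Create a short story");
// Second task: reuse the first result inside a follow-up prompt
const summary = await useLocalAI("llama3.2", streamResponse, "Summarize this story in one sentence: " + story);
print(summary);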
Use a remote LLM with OpenRouter (get an API key at 👉 https://openrouter.ai/settings/keys):
//exec: node
const streamResponse = true;
const baseURL = "https://openrouter.ai/api/v1";
const apiKey = "API_KEY"; // replace with your OpenRouter API key
await useOpenAIApi(baseURL, apiKey, "sao10k/l3.3-euryale-70b", streamResponse, "Create a short story");
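The same wait-then-chain pattern should also work remotely; a sketch, assuming useOpenAIApi resolves to the full text when streaming is disabled, as useLocalAI does:
//exec: node
const streamResponse = false;
const baseURL = "https://openrouter.ai/api/v1";
const apiKey = "API_KEY"; // replace with your OpenRouter API key
// With streaming off, the call resolves to the complete response
const result = await useOpenAIApi(baseURL, apiKey, "sao10k/l3.3-euryale-70b", streamResponse, "Create a short story");
print(result);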