📌 How to use an LLM other than OpenAI or a local LLM (how to configure the YAML)?

AutoRAG can use all LLMs supported by LlamaIndex.

1. Use the vllm module

We recommend using vLLM for fast inference!

We developed this module to run inference in parallel, so you can experiment faster than with the llama_index_llm module :)
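A minimal generator node configuration using the vllm module might look like the sketch below (the model name and sampling values here are illustrative, not a recommendation):

nodes:
  - node_line_name: node_line_1
    nodes:
      - node_type: generator
        modules:
          - module_type: vllm
            llm: mistralai/Mistral-7B-Instruct-v0.2
            temperature: [0.1, 1.0]
            max_tokens: 512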

2. Use the llama_index_llm module

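The llama_index_llm module wraps any LLM class that LlamaIndex supports. As a minimal sketch (assuming the OpenAI integration is installed; the model and temperature values are illustrative), the generator node can be configured like this:

nodes:
  - node_line_name: node_line_1
    nodes:
      - node_type: generator
        modules:
          - module_type: llama_index_llm
            llm: openai
            model: gpt-3.5-turbo
            temperature: [0.5, 1.0]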

❗ How to use LLMs other than the 3 LLM model types (e.g., ollama, groq)

[Tutorial] Use Ollama

  1. Register Ollama with the Python code below
import autorag
from llama_index.llms.ollama import Ollama

# Register LlamaIndex's Ollama class under the name "ollama"
autorag.generator_models["ollama"] = Ollama
  2. Configure the YAML file
nodes:
  - node_line_name: node_line_1
    nodes:
      - node_type: generator
        modules:
          - module_type: llama_index_llm
            llm: ollama
            model: [llama3, qwen, mistral]

Additional parameters can be passed directly in the YAML file.
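For example, keyword arguments accepted by LlamaIndex's Ollama class, such as temperature or request_timeout, can be written next to the model name; the parameter values below are illustrative:

nodes:
  - node_line_name: node_line_1
    nodes:
      - node_type: generator
        modules:
          - module_type: llama_index_llm
            llm: ollama
            model: llama3
            temperature: 0.7
            request_timeout: 120.0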