Currently, evaluations in AutoRAG are chunk-dependent.
So it is important to build a good corpus before you evaluate.
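As an illustration, a minimal sketch of chunking raw documents into a corpus before evaluation could look like the following. It uses LangChain's `RecursiveCharacterTextSplitter` purely as an example chunker, and the `doc_id`/`contents` columns are illustrative assumptions, so check the AutoRAG docs for the exact corpus schema it expects.

```python
# Minimal sketch: chunk raw documents into a corpus parquet before evaluation.
# The splitter choice and column names are example assumptions, not AutoRAG's
# required schema -- check the docs for the exact format (e.g. metadata fields).
import uuid

import pandas as pd
from langchain.text_splitter import RecursiveCharacterTextSplitter

documents = [
    "Your first raw document text goes here ...",
    "Your second raw document text goes here ...",
]

splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=50)

rows = []
for doc in documents:
    for chunk in splitter.split_text(doc):
        rows.append({
            "doc_id": str(uuid.uuid4()),  # unique id per chunk
            "contents": chunk,            # the chunk text itself
        })

corpus = pd.DataFrame(rows)
corpus.to_parquet("corpus.parquet")  # hand this corpus to the evaluation step
```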
Of course, we are aware of how important chunking is, and we plan to support it in the future.
We do have a support plan, but it's a topic that requires a lot of thought and research, so we can't say exactly what it will be!
Since AutoRAG makes heavy use of LLMs, many of you are wondering how much an experiment will cost before you start running it. Of course, that's in the support plan as well!
We'll track support for it in that issue :)
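In the meantime, a rough back-of-envelope sketch for estimating input-token cost before an experiment could look like this. The encoding name and the per-token price are placeholder assumptions, so plug in the actual pricing for whatever LLM you use.

```python
# Rough sketch: count prompt tokens with tiktoken and multiply by a per-token
# price. The price below is a placeholder assumption, not an official rate.
import tiktoken

PRICE_PER_1K_INPUT_TOKENS = 0.0005  # placeholder USD price


def estimate_input_cost(prompts: list[str], encoding_name: str = "cl100k_base") -> float:
    enc = tiktoken.get_encoding(encoding_name)
    total_tokens = sum(len(enc.encode(p)) for p in prompts)
    return total_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS


prompts = ["Answer the question using the given passages: ..."] * 100
print(f"~${estimate_input_cost(prompts):.4f} for {len(prompts)} prompts (input tokens only)")
```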
There are no plans to support multiple vector DBs yet.
We concluded that the performance of the available vector DBs has leveled off at a similarly high point.
We're currently using ChromaDB running locally.
AutoRAG finds the best RAG pipeline for your data; since we don't provide a distribution or production code, we have no plans to support different vector DBs at this time :)
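Just to illustrate what the local Chroma setup mentioned above looks like (this is a sketch, not AutoRAG's internal code, and the path and collection name are placeholders):

```python
# Sketch of a locally running ChromaDB instance; the path and collection
# name are placeholders for illustration only.
import chromadb

client = chromadb.PersistentClient(path="./chroma_db")  # data persisted on local disk
collection = client.get_or_create_collection(name="corpus")

collection.add(
    ids=["doc-1"],
    documents=["An example chunk of text to embed and store."],
)

result = collection.query(query_texts=["example chunk"], n_results=1)
print(result["documents"])
```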