Just a simple experiment: the aim was to see how cheaply I could build a RAG app over my list of AI Governance, Regulations and Risks (GRR) resources.
The app is designed to help you learn about AI Governance, Regulations, and Risks. It extracts information from the GRR resource list and uses it as a knowledge base.
Along the way, I added a few features on top of the usual RAG pipeline:
1) pre-generated a series of flashcards over the entire list of AI Governance, Regulations and Risks (GRR) resources
2) used both information chunks and flashcards as context during the generation step
3) returns both a generated answer (with sources) and the associated flashcards in response to a query, to facilitate learning
4) added a user feedback system on both the generated answers and the associated flashcards, to continuously improve the knowledge base
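Steps 2 and 3 can be sketched roughly as follows. This is a hypothetical illustration, not the app's actual code: the function and field names (`build_context`, `build_response`, `text`, `source`, `question`, `answer`) are my assumptions about how chunks and flashcards might be combined into a prompt and returned together.

```python
# Hypothetical sketch: use both retrieved chunks and pre-generated flashcards
# as generation context, then return answer + sources + flashcards together.

def build_context(chunks, flashcards):
    """Concatenate retrieved chunks and related flashcards into one context
    string to be placed in the LLM prompt."""
    parts = ["Source excerpts:"]
    parts += [f"- {c['text']} (source: {c['source']})" for c in chunks]
    parts.append("Related flashcards:")
    parts += [f"- Q: {f['question']} A: {f['answer']}" for f in flashcards]
    return "\n".join(parts)

def build_response(answer, chunks, flashcards):
    """Shape of the payload shown to the user: the generated answer, a
    de-duplicated source list, and the flashcards to aid learning."""
    return {
        "answer": answer,
        "sources": sorted({c["source"] for c in chunks}),
        "flashcards": flashcards,
    }
```

Returning the flashcards alongside the answer is what makes the app a study tool rather than just a Q&A box: the same retrieval step surfaces both.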

In total, fixed costs come to ~$5/month: Django on PythonAnywhere; Supabase PostgreSQL with the pgvector extension as the vector database; and Hugging Face Inference APIs for the LLMs.
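A minimal sketch of what the pgvector retrieval step might look like. The table and column names (`chunks`, `text`, `source`, `embedding`) are assumptions about the schema, not taken from the app; `<=>` is pgvector's cosine-distance operator.

```python
# Hypothetical top-k similarity search against the Supabase Postgres instance.

def knn_query(table="chunks", k=5):
    """Build a parameterized SQL statement for cosine-distance top-k search.
    pgvector's <=> operator computes cosine distance (smaller = more similar)."""
    return (
        f"SELECT text, source, embedding <=> %s::vector AS distance "
        f"FROM {table} ORDER BY distance LIMIT {k}"
    )

# Usage with psycopg2 (connection setup omitted):
#   cur.execute(knn_query(), (query_embedding,))
#   rows = cur.fetchall()
```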
Mixtral-8x7B-Instruct-v0.1 generates the answers and flashcards, while embeddings come from bge-small-en-v1.5. I think it could be a lot better, and still cheap, with GPT-4o mini (especially with function calling), so that's next on my to-do list.
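The two Hugging Face Inference API calls might be wired up roughly like this. The model IDs match the post; the `[INST] ... [/INST]` prompt template, the parameters, and the helper names are my assumptions, and a real deployment would use the embedding to retrieve context from pgvector in between the two calls.

```python
# Hedged sketch of the two Hugging Face Inference API calls:
# bge-small-en-v1.5 for embeddings, Mixtral for generation.

EMBED_MODEL = "BAAI/bge-small-en-v1.5"
GEN_MODEL = "mistralai/Mixtral-8x7B-Instruct-v0.1"

def format_prompt(question, context):
    """Mixtral-Instruct models expect the [INST] ... [/INST] chat format."""
    return (
        f"[INST] Answer using only the context below.\n\n"
        f"{context}\n\nQuestion: {question} [/INST]"
    )

def answer(question, context, token=None):
    # Imported lazily so the rest of this sketch runs without the package.
    from huggingface_hub import InferenceClient

    client = InferenceClient(token=token)
    # Embed the query (bge-small-en-v1.5 returns a 384-dim vector); in the
    # real pipeline this embedding drives the pgvector retrieval step.
    _emb = client.feature_extraction(question, model=EMBED_MODEL)
    return client.text_generation(
        format_prompt(question, context),
        model=GEN_MODEL,
        max_new_tokens=512,
    )
```

Keeping everything behind the hosted Inference API is what keeps the LLM cost near zero at low volume: there is no GPU to rent, only per-call usage.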
The usual disclaimer for LLMs applies: do NOT trust the outputs, and always verify.