prompt
I want to set up GraphRAG within 6 hours. Here are the requirements. How would I do it with LlamaIndex?
POST /ingest -> called every 5 seconds; the client sends whatever is currently on the user's screen
POST /chat_completion -> called when the user asks a question
Example queries
Some POC - a Jupyter notebook or a Python script, decide for yourself. Emphasis on code clarity and writing non-trivial code.
Answer
Here's a step-by-step guide to implementing a GraphRAG system with LlamaIndex that meets your requirements within the 6-hour timeframe.
Let's set up a clear project structure:
graphmem/
├── app.py # FastAPI application
├── ingest.py # Ingestion logic
├── query.py # Query processing
├── parsers/ # Application-specific parsers
│ ├── __init__.py
│ ├── base.py # Base parser class
│ ├── whatsapp.py # WhatsApp parser
│ ├── github.py # GitHub parser
│ └── generic.py # Generic content parser
├── storage/ # Storage management
│ ├── __init__.py
│ ├── graph_store.py # Graph database interface
│ └── vector_store.py # Vector database interface
├── utils/ # Utility functions
│ ├── __init__.py
│ └── helpers.py # Helper functions
├── config.py # Configuration
├── requirements.txt # Dependencies
└── demo.ipynb # Demo notebook
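To make the wiring concrete, here is a minimal sketch of app.py exposing the two required endpoints. It assumes ingest.py exposes an ingest_payload() function and query.py exposes an answer_query() function; those function names and the request fields are placeholder assumptions, not part of the spec, and should be adapted to your actual module interfaces.

```python
# app.py - minimal FastAPI layer (sketch, not a full implementation)
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

# Hypothetical module interfaces matching the project layout above:
# ingest.ingest_payload(...) updates the graph/vector stores,
# query.answer_query(...) runs GraphRAG retrieval + LLM synthesis.
from ingest import ingest_payload
from query import answer_query

app = FastAPI(title="graphmem")


class IngestRequest(BaseModel):
    source: str                      # e.g. "whatsapp", "github", "generic"
    content: str                     # raw text captured from the user's screen
    timestamp: Optional[float] = None


class ChatRequest(BaseModel):
    query: str


@app.post("/ingest")
def ingest(req: IngestRequest) -> dict:
    # Called roughly every 5 seconds by the client; keep this path fast
    # and idempotent so repeated screen captures don't bloat the graph.
    nodes_added = ingest_payload(
        source=req.source, content=req.content, timestamp=req.timestamp
    )
    return {"status": "ok", "nodes_added": nodes_added}


@app.post("/chat_completion")
def chat_completion(req: ChatRequest) -> dict:
    # Answers a user question over the accumulated graph and vector stores.
    answer = answer_query(req.query)
    return {"answer": answer}
```

Run it with `uvicorn app:app --reload` and point the client at /ingest and /chat_completion; the ingestion and query logic themselves live in ingest.py and query.py, covered next.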