I'll help you set up a Graph RAG system for your POC within the 6-hour timeframe. Let me first search for the best current Graph RAG frameworks to recommend the most suitable approach for your requirements.
Based on my research of the latest Graph RAG frameworks in 2025, I'll design a system that can be implemented within 6 hours while meeting your requirements for sensemaking and broader context questions.
System architecture (high level):

User Screen Content → Ingestion API → Knowledge Graph → Query Processing → Response Generation → User
For your 6-hour POC timeline, I recommend LlamaIndex with its graph store capabilities for these reasons: it ships a built-in `KnowledgeGraphIndex` that handles entity and relationship extraction with an LLM, it lets you combine graph and vector retrieval behind one query interface, and it needs very little glue code to stand up as a service.
# app.py - FastAPI ingestion service
from fastapi import FastAPI, Request
from llama_index import VectorStoreIndex, ServiceContext
from llama_index.indices.knowledge_graph import KnowledgeGraphIndex
from llama_index.llms import OpenAI
import os, json, time

from processors import preprocess_content  # app-specific preprocessing (see processors.py)

app = FastAPI()

# Indices are created on the first ingest and updated incrementally afterwards
graph_index = None
vector_index = None

@app.post("/ingest")
async def ingest_data(request: Request):
    content = await request.json()
    screen_content = content.get("screen_content")
    app_type = content.get("app_type", "general")

    # Process the capture, then store it in both the vector and graph indices
    processed_data = preprocess_content(screen_content, app_type)
    update_indices(processed_data)

    return {"status": "success"}
# graph_builder.py
from llama_index import ServiceContext
from llama_index.indices.knowledge_graph import KnowledgeGraphIndex
from llama_index.llms import OpenAI

def build_knowledge_graph(documents):
    llm = OpenAI(model="gpt-4-turbo")
    service_context = ServiceContext.from_defaults(llm=llm)

    # KnowledgeGraphIndex extracts entity/relationship triplets with the LLM
    # and assembles the graph in a single step
    graph_index = KnowledgeGraphIndex.from_documents(
        documents,
        max_triplets_per_chunk=10,
        service_context=service_context,
    )
    return graph_index
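The ingest endpoint also calls `update_indices`, which we haven't defined yet. Here is a minimal sketch of how it could look, assuming `preprocess_content` returns a dict with `text` and `metadata` keys (that shape is my assumption, not a LlamaIndex requirement): wrap each capture in a `Document`, build both indices on the first call, and insert incrementally afterwards.

```python
# app.py (continued) - hypothetical helper; adjust to your storage needs
from llama_index import Document, VectorStoreIndex
from graph_builder import build_knowledge_graph

def update_indices(processed_data):
    global graph_index, vector_index

    # Wrap the processed capture (text + metadata) in a LlamaIndex Document
    doc = Document(
        text=processed_data["text"],
        metadata=processed_data.get("metadata", {}),
    )

    if graph_index is None:
        # First capture: build both indices from scratch
        graph_index = build_knowledge_graph([doc])
        vector_index = VectorStoreIndex.from_documents([doc])
    else:
        # Subsequent captures: insert incrementally into the existing indices
        graph_index.insert(doc)
        vector_index.insert(doc)
```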
# processors.py
def preprocess_content(content, app_type):
    # Base preprocessing: extract the raw text plus whatever metadata you
    # capture (source app, window title, timestamp, etc.)
    processed_data = extract_text_and_metadata(content)

    # App-specific enrichment; extract_chat_metadata and extract_code_and_pr_info
    # are thin helpers you implement for the apps you care about
    if app_type == "messaging":
        processed_data = extract_chat_metadata(processed_data)
    elif app_type == "development":
        processed_data = extract_code_and_pr_info(processed_data)

    return processed_data
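The last piece from the architecture flow is query processing and response generation. Here is a minimal sketch of a `/query` endpoint in the same FastAPI app, using the knowledge graph index's query engine; the endpoint name and parameter choices are reasonable defaults for broad, sensemaking-style questions, not requirements:

```python
# app.py (continued) - query endpoint sketch
@app.post("/query")
async def query(request: Request):
    body = await request.json()
    question = body.get("question", "")

    if graph_index is None:
        return {"answer": "No content has been ingested yet."}

    # Graph-based retrieval with tree summarization suits questions that
    # span many captures rather than a single document
    query_engine = graph_index.as_query_engine(
        include_text=True,
        response_mode="tree_summarize",
    )
    response = query_engine.query(question)

    return {"answer": str(response)}
```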