Summary
- Built to help recruiters and hiring managers explore my skills interactively
- Converts resume data into searchable, explainable content via RAG
- Uses Neon (Postgres) + Upstash Vector for structured + semantic retrieval
- Answers powered by Groq’s fast LLM, with human-readable formatting
- Fully deployed on Vercel, keeping it fast, stable, and accessible
I developed a personal RAG (Retrieval-Augmented Generation) chatbot project based on the knowledge I acquired during a data analytics bootcamp. It is a lightweight resume chatbot that lets users ask natural-language questions about my project experience and receive contextual, AI-generated answers in real time.

This project was designed to make it easier for recruiters, hiring managers, and collaborators to understand my background, skills, and project experience through natural conversation. Instead of reading through a static resume, users can interact with an AI assistant that responds to questions like “What’s your most complex frontend project?” or “Have you worked with cloud deployment?” — providing detailed, contextual answers sourced from structured project data. I believe this is a more engaging and accessible way to present and showcase professional experience.
The frontend is a minimal HTML/JS chat interface deployed to Vercel alongside the API endpoints. It posts user queries to /api/chat and renders the returned answers.
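The request/response contract between the chat UI and /api/chat can be sketched in Python. The field names (`message` in the request body, `answer` in the response) and the deployment URL are illustrative assumptions, not the project's actual schema:

```python
import json
from urllib import request

API_URL = "https://example.vercel.app/api/chat"  # placeholder deployment URL

def make_payload(question: str) -> bytes:
    """Serialize a user question into the JSON body posted to /api/chat."""
    return json.dumps({"message": question}).encode("utf-8")

def extract_answer(body: bytes) -> str:
    """Pull the answer text out of the JSON response (field name assumed)."""
    return json.loads(body)["answer"]

def ask(question: str) -> str:
    """POST the question and return the chatbot's answer."""
    req = request.Request(
        API_URL,
        data=make_payload(question),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Network call: requires a live deployment behind API_URL.
    with request.urlopen(req) as resp:
        return extract_answer(resp.read())
```

The browser-side JS does the equivalent with `fetch`, so the two helpers above mirror what the frontend sends and expects back.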

At runtime, a FastAPI serverless function retrieves semantically relevant vectors and builds prompts for Groq’s LLM (using DeepSeek as the model), enabling precise, memory-aware responses. The system dynamically prioritizes higher‑priority projects during retrieval and only falls back to lower‑priority items when no better matches are found. Additionally, the system prompt is loaded directly from a relational database, allowing seamless updates without redeploying the application.
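The priority fallback described above can be sketched as a pure re-ranking step applied to the vector search results. The match shape (a `score` plus a `priority` field in the metadata) and the similarity threshold are assumptions for illustration, not the project's real schema:

```python
from typing import Any

def rerank(matches: list[dict[str, Any]], min_score: float = 0.75) -> list[dict[str, Any]]:
    """Prefer high-priority projects; fall back to lower-priority items
    only when no high-priority match clears the similarity threshold.

    Each match is assumed to look like:
        {"score": 0.82, "metadata": {"priority": 1, "text": "..."}}
    where priority 1 is highest.
    """
    high = [m for m in matches if m["metadata"].get("priority", 99) == 1]
    good_high = [m for m in high if m["score"] >= min_score]
    if good_high:
        # Strong high-priority matches exist: use only those, best first.
        return sorted(good_high, key=lambda m: m["score"], reverse=True)
    # Otherwise fall back to every retrieved item, ordered by similarity.
    return sorted(matches, key=lambda m: m["score"], reverse=True)
```

The re-ranked matches are then concatenated into the prompt context sent to Groq, with the database-stored system prompt prepended.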
The backend uses a Postgres database (Neon) to store structured resume/project data. A migration script transforms each record into a text + metadata format and generates vector embeddings, which are stored in Upstash Vector.
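The migration step can be sketched as a pure transform from a database row to the text + metadata shape that gets embedded. The column names here (`title`, `stack`, `description`, `priority`) are illustrative, and the Upstash upsert is left as a comment because it needs credentials:

```python
from typing import Any

def to_document(row: dict[str, Any]) -> dict[str, Any]:
    """Flatten one project record into embeddable text plus metadata.

    Assumed row shape (illustrative, not the real schema):
        {"id": 1, "title": "...", "stack": [...], "description": "...", "priority": 1}
    """
    text = f"{row['title']}. Stack: {', '.join(row['stack'])}. {row['description']}"
    return {
        "id": str(row["id"]),
        "text": text,
        "metadata": {"priority": row["priority"], "title": row["title"]},
    }

# In the real migration script, each document is then embedded and upserted
# into Upstash Vector, roughly:
#   index.upsert(vectors=[(doc["id"], embedding, doc["metadata"])])
# where `embedding` comes from an embedding model and `index` is an
# authenticated Upstash Vector client.
```

Keeping the transform pure makes it easy to rerun the migration whenever a project record changes.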

Due to Vercel’s serverless constraints and the low need for real-time updates (new projects aren’t finished every day), the conversion and embedding steps are run manually or via CI rather than at request time.
The system aims to bridge the gap between traditional resumes and real-world skill demonstration by combining vector search, large language models, and a personalized data pipeline.
