AI Chat Assistant

A personal AI chatbot integrated into my portfolio, powered by FastAPI and Google Gemini 2.5 Flash.

Architecture

πŸ’¬
User

Types a message or clicks a quick question

↓
🌐
Frontend (Astro)

Sends the message + conversation history via POST

↓
⚑
Backend (FastAPI)

Injects the system prompt + resumen.txt and builds the full conversation

↓
πŸ€–
Gemini 2.5 Flash

Generates a response in character as Rodrigo

↓
✨
Response

Rendered with markdown in a chat bubble, quick-question buttons reappear

How It Works

Backend β€” app.py

A FastAPI server that acts as the bridge between the frontend and Google Gemini. On startup it loads a resumen.txt file containing my experience, skills, and projects β€” this becomes the AI's knowledge base.
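That startup step can be sketched as follows. Only the resumen.txt filename comes from the project; the helper name, the UTF-8 encoding, and the missing-file check are assumptions:

```python
from pathlib import Path

# Load the knowledge base once at startup so every request can reuse it.
# resumen.txt holds the experience/skills/projects summary described above.
def cargar_resumen(ruta="resumen.txt"):
    archivo = Path(ruta)
    if not archivo.exists():
        raise FileNotFoundError(f"Knowledge base not found: {ruta}")
    return archivo.read_text(encoding="utf-8")
```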

System Prompt

The system prompt instructs Gemini to act as me (Rodrigo). It sets the tone β€” professional and engaging, as if talking to a potential employer or client. The full resumen.txt is appended so the model has all the context it needs.

prompt_sistema = f"""
Estás actuando como {nombre}.
Estás respondiendo preguntas en el sitio web de {nombre},
particularmente preguntas relacionadas con la carrera,
antecedentes, habilidades y experiencia de {nombre}.

Sé profesional y atractivo, como si hablaras con un
cliente potencial o futuro empleador.
Si no sabes la respuesta, dilo.
"""
# In English: "You are acting as {nombre}. You are answering questions on
# {nombre}'s website, particularly questions about {nombre}'s career,
# background, skills, and experience. Be professional and engaging, as if
# talking to a potential client or future employer. If you don't know the
# answer, say so."

prompt_sistema += f"\n\nResumen:\n{resumen}\n\n"

Chat Function

The chatear() function rebuilds the full conversation history on each request, maps it to Gemini's format (user / model roles), and sends it along with the system instruction to Gemini 2.5 Flash.

def chatear(mensaje, historial):
    # Rebuild the conversation in Gemini's format: each stored turn
    # becomes a "user" or "model" role plus the message text.
    contents = []
    for msg in historial:
        role = "user" if msg["role"] == "user" else "model"
        contents.append({"role": role, "parts": [{"text": msg["content"]}]})
    # Append the new user message as the final turn.
    contents.append({"role": "user", "parts": [{"text": mensaje}]})

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=contents,
        config={"system_instruction": prompt_sistema}
    )
    return response.text

Error Handling

The endpoint catches exceptions and returns user-friendly messages depending on the error type:

403 — Content Moderation: blocked by Gemini's safety filters
429 — Rate Limited: too many requests, retry later
Timeout: the API took too long to respond
500/502/503 — Server Error: the AI service is unavailable
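The mapping above could be implemented roughly like this. The function name, the string-based status sniffing, and the exact wording are illustrative sketches, not the project's actual code:

```python
# Map common API failures to the user-friendly messages listed above.
# The real app.py inspects the exceptions raised by the google.genai client;
# matching on the stringified exception here is just for illustration.
def friendly_error(exc):
    text = str(exc).lower()
    if isinstance(exc, TimeoutError) or "timeout" in text:
        return "The AI took too long to respond. Please try again."
    if "403" in text:
        return "Your message was blocked by the AI's safety filters."
    if "429" in text:
        return "Too many requests. Please retry in a moment."
    if any(code in text for code in ("500", "502", "503")):
        return "The AI service is unavailable right now."
    return "Something went wrong. Please try again."
```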

Frontend β€” chat.astro

An Astro page with a full chat UI. The user types a message or clicks a quick-question button. The frontend sends a POST request to the FastAPI /chat endpoint with the message and the full conversation history. While waiting, an animated typing indicator (3 bouncing dots) is displayed.

Responses are parsed with a custom markdown renderer (bold, italic, lists) and displayed as chat bubbles. After each response, quick-question buttons reappear so the user can keep exploring without typing.
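The real renderer is JavaScript inside chat.astro; as a rough, language-neutral sketch of the same idea (a few regex substitutions for bold, italic, and list items), in Python:

```python
import re

# Minimal markdown-to-HTML sketch: handles **bold**, *italic*, and "- " lists,
# mirroring the subset the chat renderer supports.
def render_markdown(texto):
    # Bold: **text** -> <strong>text</strong> (done first, before italics)
    html = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", texto)
    # Italic: remaining single *text* -> <em>text</em>
    html = re.sub(r"(?<!\*)\*([^*]+)\*(?!\*)", r"<em>\1</em>", html)
    # List items: lines starting with "- " -> <li>
    html = re.sub(r"^- (.+)$", r"<li>\1</li>", html, flags=re.M)
    return html
```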

// Frontend sends message + full history
const res = await fetch(API_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ mensaje: text, historial }),
});
const data = await res.json();
addMessage(data.respuesta, false);  // render AI response
showQuickQuestions();               // show buttons again

How They Connect

REST API Endpoint

The backend exposes a single endpoint using FastAPI's @app.post() decorator. The request body is validated with a Pydantic model:

class MensajeChat(BaseModel):
    mensaje: str          # the user's message
    historial: list = []  # full conversation history

@app.post("/chat")
async def api_chat(data: MensajeChat):
    respuesta = chatear(data.mensaje, data.historial)
    return {"respuesta": respuesta}
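The JSON exchanged over this endpoint then has the following shape. The field names come from the Pydantic model above; the message text and the role names inside historial are illustrative:

```python
import json

# Request body the frontend POSTs to /chat.
peticion = {
    "mensaje": "¿Cuál es tu experiencia con FastAPI?",
    "historial": [
        {"role": "user", "content": "Hola"},
        {"role": "assistant", "content": "¡Hola! Soy Rodrigo."},
    ],
}

# Response body the endpoint returns.
respuesta = {"respuesta": "Llevo años construyendo APIs con FastAPI."}

cuerpo = json.dumps(peticion, ensure_ascii=False)
```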

Frontend Fetch

The frontend uses fetch() to send a POST request with the message and history as JSON, and reads the response:

const res = await fetch(API_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ mensaje: text, historial }),
});
const data = await res.json();  // { respuesta: "..." }

Gemini SDK

FastAPI uses the google.genai client to call the model. The chatear() function maps the conversation history to Gemini's user / model role format and calls generate_content() with the system instruction:

client = genai.Client(api_key=os.getenv("GOOGLE_API_KEY"))

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=contents,                          # conversation history
    config={"system_instruction": prompt_sistema}  # resumen.txt injected here
)

Middleware

CORS is enabled via CORSMiddleware so the Astro frontend can call the FastAPI backend from a different origin.
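A typical setup looks like this; the allowed origin is a placeholder, not the real frontend URL:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the Astro frontend (served from a different origin) to call this API.
# The origin below is a placeholder — use the real frontend URL in production.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://example-portfolio.dev"],
    allow_methods=["POST"],
    allow_headers=["Content-Type"],
)
```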

Frontend (Astro) 🌐 → Backend (FastAPI) ⚡ → Gemini 2.5 Flash 🤖


Tech Stack

Python · FastAPI · Astro · Tailwind CSS · Google Gemini API