IdeaSense AI

Built during a 10-week internship and developed further as a product afterward, IdeaSense is the clearest example of my AI workflow work: controlled orchestration, persistent state, review gates, and practical deployment trade-offs.

2025–Present

An AI workflow system for structured early-stage venture evaluation, designed to turn open-ended founder interviews into confirmed, traceable evidence for DVF scoring and report generation.

Overview

IdeaSense AI guides early-stage founders through a 30–60-minute DVF (desirability, viability, feasibility) assessment. Rather than treating chat as the final interface, the product uses staged conversation, explicit confirmation, structured state, scoring, and report generation, so each assessment can be reviewed and reused.

Problem

Founder interviews contain useful signal, but raw conversation is hard to compare or trust. Generic chat can drift, merge assumptions with facts, and produce confident output without a stable evaluation path. The challenge was to keep conversation natural while making the assessment process controlled and reviewable.

Role

AI workflow lead for the internship build. I owned the question-bank structure, orchestration logic, context design, model routing, verification strategy, and integration path from prototype to usable product flow.

Context and constraints

The first MVP had to fit a 10-week internship schedule with prototype, MVP, and final-release checkpoints.
The deployment target favored lightweight infrastructure, so the system could not depend on heavy local models or overly complex retrieval layers.
Latency and cost mattered, which forced trade-offs between fast conversational turns and stronger reasoning for summaries, scoring, and reports.
Non-technical founders needed the flow to feel simple, while the outputs still had to be explicit enough for review.

What I built

A DVF question bank and staged interview flow covering problem, market, technology, and report synthesis.
A deterministic turn orchestrator that replaced an overly flexible agent flow with explicit stage confirmation.
A layered context strategy using system rules, rolling summaries, recent turns, sanitized input, and stage-specific instructions.
Verification and evidence-handling hooks for claims, scoring, and final report generation.
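The deterministic orchestrator above can be sketched as a small state machine in which the model proposes a stage summary but only an explicit user confirmation promotes it and advances the stage. This is an illustrative Python sketch; the class and field names (`Orchestrator`, `pending`, `confirmed`) are assumptions, not the actual implementation.

```python
from enum import Enum, auto


class Stage(Enum):
    """Interview stages in the order the workflow advances through them."""
    PROBLEM = auto()
    MARKET = auto()
    TECHNOLOGY = auto()
    REPORT = auto()


STAGE_ORDER = [Stage.PROBLEM, Stage.MARKET, Stage.TECHNOLOGY, Stage.REPORT]


class Orchestrator:
    """Deterministic turn orchestrator: the model never decides when to
    advance; the stage moves forward only after explicit confirmation."""

    def __init__(self):
        self.stage = STAGE_ORDER[0]
        self.confirmed = {}   # stage -> confirmed summary fields
        self.pending = None   # draft summary awaiting founder confirmation

    def propose_summary(self, summary: dict):
        # The LLM drafts a stage summary; it is held as pending, not stored.
        self.pending = summary

    def confirm(self) -> Stage:
        # Explicit confirmation promotes pending -> confirmed and advances.
        if self.pending is None:
            raise ValueError("nothing to confirm for this stage")
        self.confirmed[self.stage] = self.pending
        self.pending = None
        idx = STAGE_ORDER.index(self.stage)
        if idx + 1 < len(STAGE_ORDER):
            self.stage = STAGE_ORDER[idx + 1]
        return self.stage
```

The key design point is that stage transitions live in ordinary application code, not in the model's output, which is what makes the flow reviewable.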

Technical approach

Implemented a decoupled Next.js frontend and FastAPI backend with PostgreSQL state and SSE chat streaming.
Persisted structured workflow state instead of relying on raw chat alone, using confirmed fields, meta state, and pending-review values to separate facts from suggestions.
Used task-based model routing so conversational turns could stay fast, while stage summaries, scoring, and report generation could use stronger reasoning paths.
Extended the product with auth, project workspaces, sample report flows, export, and organization/admin screens.
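Task-based routing of the kind described above can be as simple as a lookup table keyed by task type, with the fast conversational path as the default. This is a minimal sketch; the model names and task labels here are hypothetical placeholders, not the models actually used.

```python
# Hypothetical model names and task labels, for illustration only.
ROUTES = {
    "chat_turn":     {"model": "fast-chat-model",        "max_tokens": 400},
    "stage_summary": {"model": "strong-reasoning-model", "max_tokens": 1200},
    "dvf_scoring":   {"model": "strong-reasoning-model", "max_tokens": 800},
    "report":        {"model": "strong-reasoning-model", "max_tokens": 4000},
}


def route(task: str) -> dict:
    """Pick model parameters by task type, defaulting to the fast chat path."""
    return ROUTES.get(task, ROUTES["chat_turn"])
```

Keeping the routing table declarative makes the latency/quality trade-off easy to tune: conversational turns stay on the cheap fast path, while summaries, scoring, and reports opt into stronger reasoning.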

Outcomes

Built a full end-to-end MVP with staged workflow control, persistent project state, SSE-based streaming, structured evidence capture, DVF scoring, and exportable reports.
In the internship evaluation, the hybrid routing strategy averaged roughly 1.8s chat latency while maintaining strong instruction adherence for structured tasks.
A retrospective test against four known startup cases produced sensible bands: Notion and Discord were rated as proceed, while Google Glass and Juicero were flagged as risk cases.
The main product lesson was that controlled workflow design made the AI output easier to trust than an unconstrained chat interface.

Reflection

Reliability improved when I stopped trying to make the agent more autonomous and made the workflow more explicit. The state-machine approach was less flashy, but it gave the product clearer boundaries and more defensible outputs.