Intro
For 12 years, I’ve been obsessed with one question: How do you create learning systems that actually produce outcomes — not just content? I validated online learning back in 2014 through test-prep, went deep into behavioral data at Enbyte, and eventually realized the real blocker wasn’t curriculum — it was human motivation. That insight led me to build CodeStudio from zero to 350K MAUs with nearly 50% retention, solving for the learner, the business, and the investor simultaneously. After the Naukri acquisition, I chased the next frontier: using LLMs to build a lifelong Learning OS. That didn’t find market fit, so I shut it down quickly and now consult in AI while searching for the next hard problem worth solving.
Problem:
The constant pattern I saw across EdTech was that most products improved access to learning but didn’t meaningfully improve outcomes.
Insight:
Outcomes fail for two reasons: we don’t personalize based on real behavioral data, and we underestimate the role of human motivation.
Action:
I validated online learning early through measurable test-prep results, used user-level data at Enbyte to tailor learning pathways, and later built CodeStudio to solve the behavioral layer directly. When Coding Ninjas was acquired, I pursued the next frontier: a lifelong Learning OS using LLMs to build content-agnostic, learner-agnostic experiences.
Outcome:
CodeStudio reached 350K MAUs with ~48% retention and became central to the Naukri acquisition. Umbra didn’t find market fit, so we closed it quickly. I now consult on AI implementations while exploring the next big opportunity in learning and outcomes.
Hardest problem
Happy to dive into the hardest product decision — the Code360 pivot — and walk you through the insight, the options, the hard part, and the measurable outcome.
Retention was low, yet everyone believed interview-prep was the hero feature.
“We pivoted the platform to align with the true habit loop, and retention compounded.”
Real insight:
“Interview-prep motivates, but doesn’t retain. Practice produces intrinsic reward loops.”
Why risky:
“We bet the product identity + resources on a counter-narrative insight.”
Validation signals:
“Attempt velocity and repeat visits spiked immediately on the A/B tests.”
Technical constraint:
“Slow compile-feedback loop broke the habit — we redesigned the editor to shrink action → feedback → reward.”
What I got wrong:
“Initially underestimated how critical feedback-loop speed was.”
Transferable to Trilogy:
“Start with the behavioral loop, not assumptions → fast, small iterations → measure → compound.”
What was the hard part?
“Breaking the org narrative and committing to a behavior-first model.”
The real hard part?
“Betting resources on a loop we believed would drive retention before data captured the full outcome.”
Why not smaller bet?
“Split experience = diluted loop. Habits need full reinforcement.”
Why not obvious?
“Interview-prep spikes masked the real long-term behavior.”
Questions
Q1. “Give me the hardest product decision — just the decision and the insight.”
Your Answer:
“The hardest decision was pivoting Code360 from interview-prep to coding-practice as the core product.
The insight was that solving even one coding problem increased 30-day retention by 20%, while interview-prep created spikes but no habit loop.”
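The 30-day retention comparison behind that claim can be sketched roughly as below. All users, dates, and numbers here are hypothetical stand-ins; the real analysis ran on cohort data from our event pipeline.

```python
from datetime import date, timedelta

# Hypothetical signup and activity events (the real analysis used warehouse data).
signups = {"u1": date(2020, 1, 1), "u2": date(2020, 1, 1),
           "u3": date(2020, 1, 2), "u4": date(2020, 1, 2)}
solved_a_problem = {"u1", "u3"}             # users who solved >= 1 coding problem
active_events = {"u1": [date(2020, 2, 5)],  # later return visits per user
                 "u2": [],
                 "u3": [date(2020, 2, 6)],
                 "u4": []}

def retained_day30(user: str, window_days: int = 7) -> bool:
    """True if the user was active in the week starting 30 days after signup."""
    start = signups[user] + timedelta(days=30)
    end = start + timedelta(days=window_days)
    return any(start <= d < end for d in active_events[user])

def retention(users) -> float:
    """Fraction of the cohort retained at day 30."""
    users = list(users)
    return sum(retained_day30(u) for u in users) / len(users)

solvers = [u for u in signups if u in solved_a_problem]
non_solvers = [u for u in signups if u not in solved_a_problem]
print(retention(solvers), retention(non_solvers))  # compare the two cohorts
```

The design point is the cohort split itself: comparing day-30 retention between users who did vs didn't complete the target action is what separated a real habit driver from engagement spikes.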
Q2. “What exactly was the insight? Not the behavior — the insight.”
Your Answer:
“That retention is driven by micro-wins, not peak motivation.
Interview-prep delivers motivation spikes, but practice delivers intrinsic reinforcement — and habits only form when the action → feedback → reward loop is fast and rewarding.”
Q3. “Walk me through the decision tree. The real options.”
Your Answer:
“There were three real options:
- Option A: Keep interview-prep as the hero feature
- Option B: Split the product into prep + practice
- Option C: Fully pivot to a coding-first product with a redesigned editor, homepage, and behavioral loop
I chose Option C because only a full pivot created a consistent habit loop.”
Q4. “Why was that non-obvious? Why did smart people disagree?”
Your Answer:
“Interview-prep had high engagement spikes, and the team mistook spikes for value.
It felt like our differentiator — students cared about jobs.
But cohort data contradicted the intuition.
Smart people were anchored on narrative, not behavior.”
Q5. “What was the single hardest part? Just one thing.”
Your Answer:
“Breaking the internal belief that interview-prep was our value.
The org was emotionally invested in that narrative.”
Q6. “How did you validate the pivot before full retention results came in?”
Your Answer:
“With leading indicators:
- Homepage A/B test → higher click-through on practice
- Attempt velocity increased
- Repeat visits increased
- Time-on-task increased
These moved within days — long before retention did.”
Q7. “What did you get wrong early?”
Your Answer:
“I underestimated how much the editor’s compile/feedback latency hurt the habit loop.
Fixing that was as important as shifting the product narrative.”
Q8. “Give me your one-sentence version of the whole story.”
Your Answer:
“We pivoted Code360 to coding-practice because the data proved practice — not interview-prep — was the only behavior that produced a repeatable habit loop, and aligning the system to that loop grew the product to 350K MAU with ~48% retention.”
Q9. “Why should I believe you made this decision?”
Your Answer:
“Because I ran the cohort analysis myself, built the dashboards, ran the A/B tests, owned the product pod, made the resource trade-offs, and presented the behavior-based model that convinced the founders to bet the platform on practice.”
Q10. “How does this prove you can operate in our model — high standards, tight scope, deep expertise?”
Your Answer:
“I went deep on the domain until the behavioral model was unambiguous, then scoped the smallest set of high-leverage iterations — homepage test, editor fixes, gamified loop — and shipped weekly until leading indicators confirmed the direction.
Depth → insight → tight scope → high-velocity execution.”
Master Domain
Learn User Deeply
Reverse Engineer Real Constraints
Cost-to-Serve vs Retention Death Spiral
The marginal cost of delivering high-touch, outcome-based learning at scale was structurally higher than the marginal willingness-to-pay of our mass-market learners.
If cost-to-serve > value extracted before dropout, then growth = death.
This forces a move toward lower-touch, self-serve, scalable tooling → exactly the pivot that became CodeStudio.
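The inequality above can be made concrete with a small sketch. All figures are illustrative, not our actual unit economics; the structure is the point: when marginal cost-to-serve decays no faster than retention, expected cost outruns expected value.

```python
# Illustrative unit-economics check: does a learner generate more value
# than they cost to serve before they churn? All numbers are hypothetical.

def expected_total(monthly_amount: float, retention_rate: float,
                   months: int = 36) -> float:
    """Expected lifetime sum of a monthly amount, decayed by monthly retention."""
    return sum(monthly_amount * retention_rate**m for m in range(months))

def is_sustainable(monthly_cost_to_serve: float, monthly_revenue: float,
                   retention_rate: float) -> bool:
    """Growth is survivable only if expected value exceeds expected cost-to-serve."""
    value = expected_total(monthly_revenue, retention_rate)
    cost = expected_total(monthly_cost_to_serve, retention_rate)
    return value > cost

# High-touch model: cost-to-serve scales with usage, retention decays fast.
print(is_sustainable(monthly_cost_to_serve=12, monthly_revenue=10,
                     retention_rate=0.6))   # cost > value
# Low-touch, self-serve model: near-zero marginal cost, same retention.
print(is_sustainable(monthly_cost_to_serve=1, monthly_revenue=10,
                     retention_rate=0.6))   # value > cost
```

With ~60% monthly retention, expected lifetime value is only ~2.5x monthly revenue, so any cost-to-serve above that multiple loses money on every new learner — which is why growth under the high-touch model compounded losses.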
Learning Outcomes Require Behavioral Change, Not Just Content
We discovered that the bottleneck in improving learning outcomes wasn’t content quality—it was the learner’s behavior, consistency, and intrinsic motivation. And no content platform can fix that at scale without solving for motivation itself.
Why this hits hard:
- 80% of learners drop off not because content was poor, but because there was:
  - no accountability
  - no feedback loop
  - no learning habit
- Every product feature that tried to “motivate” ended up increasing operational load.
- Scaling behavior change is structurally harder than scaling video/content delivery.
- You can scale servers; you can’t scale human discipline.
How this pushes the pivot:
The only scalable solution was to shift from “teaching” to “enabling practice + feedback loops + discovery”, which is the internal DNA of CodeStudio.
“The real constraint was that high-touch, outcome-oriented learning had a cost-to-serve curve that scaled linearly, but retention scaled sub-linearly. No amount of operational excellence could bend that curve — we needed to redesign the system.”
Mental Model
Behavioural & motivational
Heuristic of what worked
Questions
- What’s an example of an insight or BrainLift that fundamentally changed a product direction here?
- If I join, what is the one outcome you would want me to accomplish that tells you the hire was a success?
- What’s one thing about product work at Trilogy that outsiders usually misunderstand?
- Given my background — fast iteration, retention loops, data-first — where do you see the strongest fit or biggest opportunity for leverage?

