<aside> 💡

Parsnip is building a new kind of AI learning system—one that models skills, progress, and practice explicitly, and understands learners the way great teachers do. We’re looking for a deep partnership to apply it to a new domain.

</aside>

Why don’t AI tutors & coaches actually work?

Any technology-based learning approach — whether it’s built for students, employees, customers, or the general public — runs into the same wall.

You can produce fantastic content. You can add assessments, dashboards, AI-generated explanations, even live AI avatars. And yet, learning outcomes somehow plateau quickly.

This isn’t for lack of effort, but because learning technology still doesn’t understand the learner.

Over 40 years ago, educational psychologist Benjamin Bloom identified the 2-sigma problem: students working with a skilled 1:1 tutor outperformed those in traditional classrooms by two standard deviations. Technology-enabled personalized learning has long promised to close that gap, at scale, for everyone. Why, then, despite decades of progress, does that promise remain unfulfilled?

The answer is subtle: the core of effective tutoring isn’t conversation, presentation, or even feedback. It’s the teacher’s theory of mind about the learner. A great tutor or coach develops and continually updates a mental model of what someone knows, what they can do, how they’re progressing, and what they need to practice next.

Most learning technology, including today’s AI-driven systems, doesn’t build this model. Courses, content libraries, and even conversational agents treat learning as exposure to information, lightly personalized at the surface. Without an underlying model of skills and progression, “AI tutors” will remain fundamentally ineffective as teachers while being deceptively articulate and believable.

Until recently, this limitation was unavoidable. Explicitly modeling skills, structuring knowledge, and tracking learner progress all the way from theoretical knowledge to real-world practice required enormous manual effort. It only made sense in specific, standardized domains, if at all.

But we believe this is where AI will genuinely change the equation for education: not by generating more multiple-choice questions, better explanations, or more convincing avatars, but by finally making it feasible to systematically structure knowledge and model the process of learning itself.

With a structured map of knowledge, AI can become the “GPS”

We’re building Parsnip Knowledge, an AI-native learning platform with two conceptual parts:


  1. A map creator for the knowledge and skills a learner can attain. It turns unstructured knowledge into explicit structure, giving an AI system a theory of mind: a way to model learners and make inferences about them.
  2. A “GPS” system built on that map, which acts as a personalized tutor or coach across multiple modalities: personalized content, assessments, and feedback loops that measure progress, mirroring how human teachers understand a student’s knowledge and personalize their growth (a minimal sketch of both parts follows this list).

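To make the idea concrete, here is a minimal sketch of what a skill map and learner model could look like, assuming the map is represented as a prerequisite graph and the learner model as per-skill mastery estimates. The names (`Skill`, `SkillMap`, `LearnerModel`) and the simple update rule are illustrative assumptions, not Parsnip’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a skill map as a prerequisite graph, plus a learner
# model holding per-skill mastery estimates. Illustrative only.

@dataclass
class Skill:
    id: str
    description: str
    prerequisites: list[str] = field(default_factory=list)  # ids of skills that should come first

@dataclass
class SkillMap:
    skills: dict[str, Skill]

    def prerequisites_of(self, skill_id: str) -> list[Skill]:
        return [self.skills[p] for p in self.skills[skill_id].prerequisites]

@dataclass
class LearnerModel:
    # Estimated probability (0..1) that the learner has mastered each skill id.
    mastery: dict[str, float] = field(default_factory=dict)

    def record_evidence(self, skill_id: str, correct: bool, weight: float = 0.2) -> None:
        """Nudge the mastery estimate toward the observed outcome of one practice attempt."""
        prior = self.mastery.get(skill_id, 0.0)
        target = 1.0 if correct else 0.0
        self.mastery[skill_id] = prior + weight * (target - prior)
```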
Building and maintaining these knowledge repositories, and leveraging them for teaching, has always been possible in theory, but it was so prohibitively labor-intensive that it made sense only for highly standardized, mass-market curricula. In domains without existing structure or schools, it was completely unrealistic. With AI models that organize and structure knowledge, it becomes possible not only to build these maps, but also to build powerful “GPS” personal tutors on top of them, which are especially impactful in domains where learning is messy, tacit, and practice-based.
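Given a map and learner model like the sketch above (reusing the hypothetical `SkillMap` and `LearnerModel` types), one simple way a “GPS” step could work is to recommend skills whose prerequisites look mastered but which the learner has not yet mastered. This is a sketch of the general technique, not Parsnip’s recommendation logic.

```python
def next_skills_to_practice(skill_map: SkillMap, learner: LearnerModel,
                            mastered: float = 0.8, limit: int = 3) -> list[Skill]:
    """Suggest skills the learner appears ready for: prerequisites look mastered,
    but the skill itself does not yet."""
    ready = []
    for skill in skill_map.skills.values():
        if learner.mastery.get(skill.id, 0.0) >= mastered:
            continue  # already mastered, nothing to practice here
        if all(learner.mastery.get(p, 0.0) >= mastered for p in skill.prerequisites):
            ready.append(skill)
    # Surface the skills the learner is closest to mastering first.
    ready.sort(key=lambda s: learner.mastery.get(s.id, 0.0), reverse=True)
    return ready[:limit]
```

A real system would weigh far more signal than this toy version (recency, context, the learner’s goals), but even the sketch shows why an explicit map lets a tutor reason about what to practice next rather than only what to say next.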

How Parsnip Knowledge works