OR·01  ·  Platform · Education · Research

Chalkless.

A system for recovering the logical flow of a lecture — how definitions, examples, and key concepts develop over time, not just what is said.

Status: In development · Research: submission in prep · Multimodal · Sequence modeling
Status: In development
Research: In submission
Track: Pedagogical AI
Domain: Mathematics
Code: OR·01
The board — scroll to reconstruct

A lecture doesn't arrive all at once.

Chalkless treats a lecture the way a student does — as a sequence that only makes sense once you can see the whole arc. Scroll through a reconstruction below.

[Interactive board reconstruction · lecture 07 · harmonic analysis]

Phase legend: Topic · Definition · Example · Key concept · Transition

Recovered board content: the definition of the Fourier series,

f(x) = a₀/2 + Σ [aₙ cos(nx) + bₙ sin(nx)], with aₙ = (1/π) ∫ f(x) cos(nx) dx and bₙ = (1/π) ∫ f(x) sin(nx) dx,

drawn over an orthogonal basis (n = 1, 2, 3); an example reconstructing a square wave from partial sums N = 1, 3, 7, with coefficients bₙ = 4/(nπ) and the Gibbs overshoot of ≈ 9%; the key concept that orthogonality enables unique decomposition, shown visually via ⟨sin(mx), sin(nx)⟩ = ∫ sin(mx) sin(nx) dx = 0 for m ≠ n; and a transition to the next lecture, Parseval's identity: (1/π) ∫ |f(x)|² dx = a₀²/2 + Σ (aₙ² + bₙ²).

Chalkless identifies each segment's role and stabilizes the sequence across the whole lecture.

Overview / 01

The structure is the lesson.

Most lecture tools record audio or capture video. Very few preserve the reasoning process behind the material — the way an instructor introduces a topic, defines a concept, unfolds an example, and lands on the idea that will carry into the next lecture.

Chalkless reconstructs that sequence. It is a multimodal analysis system designed specifically for board-based mathematics lectures, and it answers a question most tools don't ask: what role is this segment playing in the arc of the lesson?

The system is built around one observation: lecture structure is not locally observable. Meaning emerges only from the relationships between segments over time — which is why a static classifier will always misread a lecture.

Lecture understanding is fundamentally a structured sequence problem, not a classification problem.
Problem / 02

Why local methods fail.

The standard approach treats lecture understanding as a local classification task — short segments are labeled independently of their neighbors. In practice this breaks down almost immediately.

  • The same content (equations, formal language) can appear in multiple pedagogical roles.
  • Meaning depends on what came before and what comes next.
  • Lecture structure follows patterns over time, not isolated signals.

As a result, local methods produce fragmented, inconsistent interpretations — the opposite of what a student needs.
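A toy sketch makes the failure mode concrete. The segment contents and the `local_label` heuristic below are hypothetical, but they illustrate the structural point: a classifier that sees only one segment's content must assign identical content the same role, even when the lecture's arc says otherwise.

```python
# Hypothetical segments: the same board content appears twice, first as a
# fresh definition, later recalled inside a worked example.
segments = [
    {"t": 0,  "text": "today: Fourier series"},
    {"t": 4,  "text": "f(x) = a0/2 + sum(an*cos(nx) + bn*sin(nx))"},
    {"t": 12, "text": "f(x) = a0/2 + sum(an*cos(nx) + bn*sin(nx))"},
]

def local_label(seg):
    """A purely local classifier: sees one segment's content, nothing else."""
    return "definition" if "=" in seg["text"] else "topic"

labels = [local_label(s) for s in segments]
# Segments 2 and 3 have identical content, so ANY local classifier must
# give them the same role -- even though segment 3 belongs to an example.
assert labels[1] == labels[2]
```

No amount of tuning fixes this: the information that distinguishes the two roles is simply not present in the segment itself.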

Approach / 03

A two-stage pipeline.

Local inference first, then global stabilization. The second stage is where the work happens.

01 · Input

Lecture capture

Audio + video of a board-based lecture. No labels, no manual preparation.

02 · Local inference

MLSI engine

Whisper transcription, OpenCV board-activity detection, and lightweight LM reasoning. Produces a first-pass label for each segment.
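A minimal sketch of what this first pass might look like, with the Whisper transcript and OpenCV board-activity signal stubbed as plain data. The cue lists and the `first_pass` function are illustrative assumptions, not the system's actual heuristics:

```python
# Stub inputs: in the real pipeline, `transcript` would come from Whisper
# and `board_active` from OpenCV frame differencing.
transcript = [
    (0.0,  "so today we start harmonic analysis"),
    (35.0, "we define the Fourier series of f"),
    (90.0, "let's try this on a square wave"),
]
board_active = {0.0: False, 35.0: True, 90.0: True}  # writing detected?

CUES = {
    "definition": ("we define", "is defined as"),
    "example":    ("let's try", "for example"),
    "topic":      ("today", "we start"),
}

def first_pass(utterances, activity):
    """Noisy per-segment labeling from spoken cues plus board activity."""
    out = []
    for t, text in utterances:
        label = "transition"  # default when no cue fires
        for role, cues in CUES.items():
            if any(c in text for c in cues):
                label = role
                break
        # Board writing with no spoken cue usually means worked content.
        if label == "transition" and activity.get(t):
            label = "example"
        out.append({"start": t, "label": label})
    return out
```

The output is deliberately noisy; making it coherent is the job of the next stage.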

03 · Stabilization

8-pass temporal stabilization

Explicit rules derived from how lectures are structured. Converts noisy local labels into global coherence.

04 · Output

Structured segmentation

Timestamped roles: topic, definition, example, key concept, transition. Usable downstream.
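A plausible shape for that output, sketched as a dataclass; the field names and role strings here are assumptions about the format, chosen to match the five roles the document lists:

```python
from dataclasses import dataclass, asdict
import json

ROLES = ("topic", "definition", "example", "key_concept", "transition")

@dataclass
class Segment:
    start: float  # seconds into the lecture
    end: float
    role: str     # one of ROLES

# A stabilized lecture reduces to a short, timestamped role sequence.
segments = [
    Segment(0.0, 180.0, "topic"),
    Segment(180.0, 540.0, "definition"),
    Segment(540.0, 1500.0, "example"),
    Segment(1500.0, 1740.0, "key_concept"),
    Segment(1740.0, 1800.0, "transition"),
]

# Trivially serializable for downstream tools.
payload = json.dumps([asdict(s) for s in segments], indent=2)
```

Anything downstream, from replay UIs to note-taking systems, can consume this directly.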

Stabilization rules: heuristics derived from lecture form

  • Lectures begin with a topic phase.
  • Examples tend to occur in continuous spans.
  • Key concepts often follow examples.
  • Rapid label switching is usually noise, not signal.

These are not learned — they are encoded. The contribution is that encoding domain knowledge in the inference pass outperforms learning from unlabeled data.
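Two of those rules can be sketched as deterministic passes over a label sequence. The actual eight passes are not described in detail here, so the rule names and thresholds below are illustrative assumptions:

```python
def force_initial_topic(labels):
    """Rule: lectures begin with a topic phase, so relabel the opening
    run if the first pass called it something else."""
    if not labels:
        return []
    out = list(labels)
    first, i = out[0], 0
    while i < len(out) and out[i] == first:
        out[i] = "topic"
        i += 1
    return out

def runs(labels):
    """Collapse a label sequence into [label, length] runs."""
    out = []
    for lab in labels:
        if out and out[-1][0] == lab:
            out[-1][1] += 1
        else:
            out.append([lab, 1])
    return out

def smooth_short_runs(labels, min_run=2):
    """Rule: rapid label switching is noise, so a run shorter than
    min_run is absorbed into the run before it."""
    r = runs(labels)
    for idx in range(1, len(r)):
        if r[idx][1] < min_run:
            r[idx][0] = r[idx - 1][0]
    return [lab for lab, n in r for _ in range(n)]

PASSES = [force_initial_topic, smooth_short_runs]

def stabilize(labels):
    """Apply the deterministic passes in a fixed order."""
    for p in PASSES:
        labels = p(labels)
    return labels
```

Because every pass is an ordinary function over the label sequence, each rule can be tested, reordered, or removed in isolation.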

Contribution / 04

The technical idea.

Lecture structure can be recovered through deterministic temporal constraints, without requiring labeled training data.

This differs from CRFs and neural sequence models, which depend on large annotated datasets. Chalkless instead encodes domain knowledge, structural priors, and interpretable rules as part of the inference process itself.

The method is legible, extensible, and — crucially — debuggable. When it fails, you can read the rule that fired.
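That debuggability claim can be made concrete: because each pass is a plain function, a thin wrapper can record exactly which rule changed which segment. The `smooth_singletons` rule below is a toy stand-in for one of the real passes:

```python
def smooth_singletons(labels):
    """Toy rule: a segment whose label differs from both neighbors
    takes its left neighbor's label."""
    out = list(labels)
    for i in range(1, len(out) - 1):
        if out[i] != out[i - 1] and out[i] != out[i + 1]:
            out[i] = out[i - 1]
    return out

def trace(labels, passes):
    """Apply passes in order; log (index, old, new, rule) per change."""
    log = []
    for p in passes:
        new = p(labels)
        log += [(i, a, b, p.__name__)
                for i, (a, b) in enumerate(zip(labels, new)) if a != b]
        labels = new
    return labels, log

final, log = trace(["example", "topic", "example"], [smooth_singletons])
# Every entry in `log` names the rule that rewrote a segment.
```

When the output looks wrong, the log points at a named rule rather than an opaque weight matrix.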

Direction / 05

What this unlocks.

A reconstructed lecture structure is not the product. It is the substrate for one.

  • Structured lecture replay — jump to the example, skip to the key concept.
  • Concept-based navigation across a semester's worth of material.
  • Automatic identification of definitions and worked examples.
  • Note-taking systems aligned to reasoning flow, not timestamps.
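The first of those uses reduces to a few lines once the structured segmentation exists. This navigation helper is a hypothetical sketch over the kind of timestamped output described above:

```python
# Hypothetical Chalkless-style output: timestamped pedagogical roles.
segments = [
    {"start": 0.0,    "role": "topic"},
    {"start": 180.0,  "role": "definition"},
    {"start": 540.0,  "role": "example"},
    {"start": 1500.0, "role": "key_concept"},
]

def seek(segments, role, n=0):
    """Return the start time of the n-th segment with the given role,
    or None if there is no such segment."""
    matches = [s for s in segments if s["role"] == role]
    return matches[n]["start"] if n < len(matches) else None

assert seek(segments, "example") == 540.0  # "jump to the example"
```

"Skip to the key concept" is the same call with a different role string; none of it requires touching the audio or video again.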
Research paper / 06

The paper.

Paper · Multimodal systems · Sequence modeling

Local Classification is Insufficient: Recovering Lecture Structure Through Temporal Constraint Enforcement

Type: Research paper
Status: In preparation · submission
Authors: ORIA research

The paper introduces Chalkless as a multimodal system for lecture segmentation that replaces local classification with sequence-level inference using deterministic temporal constraints. It argues that pedagogical roles in lectures cannot be identified from local signals alone, and demonstrates that enforcing global structure improves segmentation outcomes — even without labeled training data.

Multimodal Systems · Educational AI · Sequence Modeling · Lecture Analysis