LOGEMICS
Human Intelligence, First.
Why It's Different

Critical thinking cannot be reduced to a single score.

Most assessment tools capture what a student answered. Logemics captures how they reasoned. The difference is architectural — built into every layer of the system, from task design to measurement model.

The foundation is a low-stakes measurement philosophy: no pressure, no ranking, no punitive grading. Students engage with reasoning scenarios rather than knowledge recall tests. The system observes patterns across multiple sessions — not a single performance on a single day.

Traditional assessment          Logemics
One score                       Six independent dimensions
Single snapshot                 Longitudinal tracking
Knowledge recall                Reasoning process
High-stakes pressure            Low-stakes scenarios
Opaque AI grading               Transparent, auditable rubrics
Certainty without evidence      Explicit uncertainty bands

How It Works

Five steps. One scientific chain.

Every score Logemics produces is traceable back to a student response, through five explicit steps. Nothing is hidden.

01

Targeted Reasoning Scenarios

Students engage with short interactive scenarios — evaluating an argument, identifying a logical flaw, interpreting conflicting evidence. These tasks are designed to isolate specific cognitive skills, not test subject-matter knowledge. A student who has never studied WWI can still reason about a primary source from that period.

02

Beyond Right or Wrong

For each response, the system captures three signals — not one:

  • Accuracy: Did the student identify the problem correctly?
  • Justification: Can they explain their reasoning?
  • Confidence: How certain are they? (The gap between confidence and accuracy is itself a cognitive signal — called metacognitive calibration.)

These three signals together are more informative than any single answer.
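As an illustration of how the third signal can be turned into a number (a sketch, not Logemics' actual model; the field names are invented), the gap between confidence and accuracy across a session is a simple difference of means:

```python
from dataclasses import dataclass

@dataclass
class Response:
    accurate: bool      # did the student identify the problem correctly?
    confidence: float   # self-reported certainty, 0.0 to 1.0

def calibration_gap(responses: list[Response]) -> float:
    """Mean confidence minus mean accuracy across a session.

    Positive values suggest overconfidence; negative values,
    underconfidence. Near zero means well calibrated.
    """
    n = len(responses)
    mean_conf = sum(r.confidence for r in responses) / n
    mean_acc = sum(r.accurate for r in responses) / n
    return mean_conf - mean_acc

session = [Response(True, 0.9), Response(False, 0.8), Response(True, 0.6)]
gap = calibration_gap(session)  # ≈ 0.10: mildly overconfident
```

A single response says little; the gap only becomes a meaningful signal when averaged over many responses, which is why the system observes patterns across sessions.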

03

AI as Infrastructure, Not Judge

The system uses AI strictly to extract features from student responses and apply predefined rubrics — evaluation grids designed by pedagogical experts. AI has no decision-making authority. Every inference is bounded by the expert-designed framework. Every score is auditable back to the evidence that produced it.

No black box. No opaque grading. Full transparency.
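One way to picture this division of labor (a minimal sketch; the rubric and feature names are hypothetical, not Logemics' schema): AI extracts features from a response, but the score itself is a lookup in an expert-authored table, returned together with the evidence that produced it.

```python
# Hypothetical rubric for an "evaluate the argument" task.
# The table is authored by pedagogical experts; the code only looks it up.
RUBRIC = {
    # (flaw_identified, cites_evidence) -> score band
    (True, True): 2,    # correct and justified
    (True, False): 1,   # correct but unjustified
    (False, True): 1,   # wrong conclusion, sound process
    (False, False): 0,
}

def score(features: dict) -> dict:
    """Apply the predefined rubric to AI-extracted features.

    The AI never chooses the score; it only supplies the features.
    Returning the features alongside the score keeps it auditable.
    """
    key = (features["flaw_identified"], features["cites_evidence"])
    return {"score": RUBRIC[key], "evidence": features}

result = score({"flaw_identified": True, "cites_evidence": False})
# result["score"] == 1, and result["evidence"] records why
```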

04

A Multidimensional Reasoning Profile

Rather than collapsing all evidence into a single score, the system generates an independent estimate for each of the six cognitive dimensions. Each dimension has its own trajectory. A student can be strong in logical coherence and still developing in metacognitive calibration — and both signals are visible and actionable.

  • Logical Coherence
  • Causal & Probabilistic Reasoning
  • Evaluation of Evidence & Justification
  • Consideration of Alternatives & Counterarguments
  • Metacognitive Calibration
  • Critical Comprehension
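A profile of this kind is simply six independent estimates, each with its own uncertainty band (the values below are invented for illustration, not real student data):

```python
# One student's reasoning profile: each dimension is estimated
# independently, as (estimate, uncertainty band). Values are invented.
profile = {
    "logical_coherence":              (0.78, 0.06),
    "causal_probabilistic":           (0.64, 0.09),
    "evidence_justification":         (0.71, 0.07),
    "alternatives_counterarguments":  (0.55, 0.10),
    "metacognitive_calibration":      (0.49, 0.11),
    "critical_comprehension":         (0.70, 0.08),
}

# Because dimensions are independent, strengths and growth areas
# can be read off separately rather than averaged away.
strongest = max(profile, key=lambda d: profile[d][0])
developing = min(profile, key=lambda d: profile[d][0])
# strongest == "logical_coherence", developing == "metacognitive_calibration"
```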
05

Longitudinal Tracking — Growth, Not Grades

The real value is not a single session result. It is the trajectory across sessions — weeks, months, a full school year. As more evidence accumulates, estimates become more precise and growth becomes visible.
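The statistical intuition behind "more evidence, more precision" can be sketched with a standard error of the mean, which shrinks roughly as 1/√n (a textbook illustration, not the actual measurement model):

```python
import math

def estimate_with_band(scores: list[float]) -> tuple[float, float]:
    """Mean estimate plus a ±band (standard error of the mean).

    With more observations the band shrinks roughly as 1/sqrt(n):
    accumulating evidence narrows the uncertainty around the estimate.
    """
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1) if n > 1 else float("inf")
    return mean, math.sqrt(var / n)

few = [0.6, 0.8, 0.7]       # a single early session
many = few * 4              # the same pattern observed across more sessions
_, band_few = estimate_with_band(few)
_, band_many = estimate_with_band(many)
assert band_many < band_few  # longitudinal evidence narrows uncertainty
```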

The teacher dashboard shows each student's reasoning profile and flags where the gap between confidence and performance is widening. The system explicitly displays its level of certainty — uncertainty is shown, not hidden. Teachers retain full control and access subject-specific activity suggestions to act on what they see.

Built On

Evidence-Centered Design

ECD is the gold standard in educational psychometrics — the framework used by the world's leading assessment bodies. It structures measurement as an explicit chain: from what we want to measure (claims), to the evidence we collect (tasks), to the inferences we draw (dimension estimates).

Every element of Logemics is designed within this chain. Nothing is added that cannot be justified. Nothing is inferred beyond what the evidence supports.

Claims → Tasks → Evidence → Rubrics → Dimension Estimates
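The chain can be read as a linked record: every estimate points back through a rubric and the collected evidence to the claim it supports. A minimal sketch (types and field names are illustrative, not Logemics' schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    text: str            # what we want to measure

@dataclass(frozen=True)
class Evidence:
    task_id: str         # the scenario that elicited the response
    response: str        # what the student actually produced

@dataclass(frozen=True)
class Estimate:
    claim: Claim
    evidence: tuple      # Evidence records backing this estimate
    rubric_id: str       # the expert-authored rubric that was applied
    value: float         # the dimension estimate itself

def audit_trail(est: Estimate) -> list:
    """Walk the chain backwards: estimate -> rubric -> evidence -> claim."""
    lines = [f"estimate {est.value} via rubric {est.rubric_id}"]
    lines += [f"evidence: task {e.task_id}" for e in est.evidence]
    lines.append(f"claim: {est.claim.text}")
    return lines

est = Estimate(
    claim=Claim("evaluates evidence and justification"),
    evidence=(Evidence("T-07", "The source is unreliable: it omits its date."),),
    rubric_id="R-3",
    value=0.72,
)
trail = audit_trail(est)  # every entry traces the score back to a response
```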
In Practice

A tool designed for development — not sorting.

We do not rank students against each other. Every profile is individual.
We do not produce diagnostic labels. A signal is not a verdict.
We do not hide uncertainty. When evidence is thin, we say so.

See it in action.

Try the demo
See the teacher view