ORCLE by kwale

The brain encoder that powers everything we build

A 6-stream multimodal neural prediction model trained on 720+ human subjects' fMRI data. Sub-5-second inference across 20,484 cortical vertices.

Join Waitlist
The Problem

No public API for predicting brain responses to media

Existing content analysis tools use surface-level heuristics -- pixel counts, audio levels, keyword matching, sentiment classification. None of them model how the human brain actually processes multimodal information. They analyze content at the signal level, not the neural level.

Academic neuroscience has produced remarkable models of brain encoding, but they remain locked in research labs, require fMRI scanners, and cannot run at scale. There is no public API that lets developers and researchers predict brain responses to arbitrary media.

ORCLE bridges this gap. It is a purpose-built neural prediction model that processes content through six parallel streams mirroring human cognition, predicting activation across 20,484 cortical vertices in under 5 seconds.
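
In practice, that looks like a single API call. The sketch below is illustrative only: the package name, client interface, and response fields are placeholders for the eventual Python SDK, not its final shape.

    # Hypothetical usage sketch -- package, client, and fields are
    # placeholders, not the shipped SDK.
    from orcle import Client

    client = Client(api_key="YOUR_API_KEY")

    # Submit a video and receive per-second neural predictions.
    result = client.score(video_url="https://example.com/clip.mp4")

    print(result.composite_score)        # single engagement score
    print(result.metrics["attention"])   # one of the interpretable atlas metrics
    print(result.timeseries.shape)       # (seconds, 20484) vertex predictions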

20,484
cortical vertices on the fsaverage5 brain surface, predicted per second for any video input. The most comprehensive neural prediction model available via API.
How It Works

Six streams, one brain prediction

01

6 frozen encoders extract features

Video (V-JEPA 2.1 ViT-G), audio (Whisper-large-v3-turbo), text (ModernBERT-large), OCR (Qwen3-VL-2B), long-context language (Qwen3.5-9B), and reasoning (Qwen3-Omni-30B) -- each processes its modality in parallel.
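
A minimal sketch of this stage, with every encoder stubbed out. The extractor functions and feature dimensions below are placeholders standing in for the real frozen-encoder wrappers; only the structure (six independent extractors running concurrently) reflects the description above.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def _stub_encoder(dim: int):
        """Placeholder for a frozen encoder: returns per-second feature vectors."""
        def extract(video_path: str, seconds: int = 30) -> np.ndarray:
            return np.zeros((seconds, dim), dtype=np.float32)
        return extract

    EXTRACTORS = {                          # dims are illustrative guesses
        "video":     _stub_encoder(1408),   # V-JEPA 2.1 ViT-G
        "audio":     _stub_encoder(1280),   # Whisper-large-v3-turbo
        "text":      _stub_encoder(1024),   # ModernBERT-large
        "ocr":       _stub_encoder(2048),   # Qwen3-VL-2B
        "language":  _stub_encoder(4096),   # Qwen3.5-9B
        "reasoning": _stub_encoder(4096),   # Qwen3-Omni-30B
    }

    def extract_features(video_path: str) -> dict:
        """Run all six frozen encoders concurrently; return {stream: features}."""
        with ThreadPoolExecutor(max_workers=len(EXTRACTORS)) as pool:
            futures = {name: pool.submit(fn, video_path)
                       for name, fn in EXTRACTORS.items()}
            return {name: fut.result() for name, fut in futures.items()}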

02

Fusion transformer integrates streams

A trainable fusion transformer models cross-modal interactions -- how audio enhances visual attention, how text reinforces spoken narrative, how visual composition interacts with soundtrack emotion.
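
A compressed PyTorch sketch of the idea: project each stream to a shared width, tag it with a learned stream embedding, and let self-attention model cross-modal interactions within each second. Layer sizes and design details here are our assumptions; the production fusion module is not public, and temporal attention is elided for brevity.

    import torch
    import torch.nn as nn

    class FusionTransformer(nn.Module):
        """Illustrative cross-modal fusion over per-second feature streams."""
        def __init__(self, stream_dims, d_model=512, n_heads=8, n_layers=4):
            super().__init__()
            self.projections = nn.ModuleList(
                nn.Linear(d, d_model) for d in stream_dims)
            self.stream_embed = nn.Embedding(len(stream_dims), d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)

        def forward(self, streams):
            # streams: list of (batch, seconds, dim_i) tensors, one per modality
            tokens = [proj(x) + self.stream_embed.weight[i]
                      for i, (proj, x) in enumerate(zip(self.projections, streams))]
            x = torch.stack(tokens, dim=2)                # (batch, seconds, streams, d)
            b, t, s, d = x.shape
            fused = self.encoder(x.reshape(b * t, s, d))  # attend across modalities
            return fused.mean(dim=1).reshape(b, t, d)     # one fused vector per second

    # e.g. FusionTransformer([1408, 1280, 1024, 2048, 4096, 4096]) would accept
    # the six per-second feature streams from the extraction sketch above.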

03

Prediction heads map to cortex

Task-specific prediction heads output per-second activation predictions across all 20,484 vertices of the fsaverage5 cortical surface, covering every functionally defined brain region.
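
At its simplest, such a readout is a linear map from each fused per-second vector to the vertex space. The sketch uses the dimensions stated on this page; everything else is assumed.

    import torch.nn as nn

    N_VERTICES = 20_484  # fsaverage5: 10,242 vertices per hemisphere

    class VertexHead(nn.Module):
        """Illustrative readout from fused features to cortical activation."""
        def __init__(self, d_model: int = 512):
            super().__init__()
            self.readout = nn.Linear(d_model, N_VERTICES)

        def forward(self, fused):        # (batch, seconds, d_model)
            return self.readout(fused)   # (batch, seconds, 20484)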

04

Neural atlas distills 20 interpretable metrics

The neural atlas aggregates vertex-level predictions into 20 interpretable metrics: attention, emotion, memory, narrative, reward, social cognition, cognitive load, and more -- each mapped to specific brain regions.
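
Conceptually, the atlas is a set of region masks over the 20,484 vertices, and each metric is the mean predicted activation within its mask. The sketch below uses random placeholder masks; the real parcellation behind ORCLE's metrics is not published here.

    import numpy as np

    N_VERTICES = 20_484
    rng = np.random.default_rng(0)

    # Placeholder atlas: each metric maps to a boolean vertex mask. Real masks
    # would come from a functional parcellation of the named regions.
    ATLAS = {
        "attention": rng.random(N_VERTICES) < 0.05,  # e.g. dorsal attention network
        "emotion":   rng.random(N_VERTICES) < 0.03,  # e.g. limbic regions
        "memory":    rng.random(N_VERTICES) < 0.04,  # e.g. medial temporal lobe
    }

    def atlas_metrics(vertex_preds: np.ndarray) -> dict:
        """vertex_preds: (seconds, 20484) -> {metric: per-second mean activation}."""
        return {name: vertex_preds[:, mask].mean(axis=1)
                for name, mask in ATLAS.items()}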

Key Capabilities

The most comprehensive brain encoder available

6-Stream Multimodal

Video, audio, text, OCR, long-context language, and reasoning streams -- processing content through the same parallel pathways the brain uses.

All modality-specific cortical areas

Demographic Conditioning

Adjust predictions for specific audience segments. Age, gender, and interest profiles modify the neural model to predict how different populations respond, as sketched below.

TPJ -- Social cognition & group differences
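
One common way to implement audience conditioning is feature-wise modulation of the fused representation. The FiLM-style sketch below is our assumption about the mechanism, not a confirmed detail of ORCLE.

    import torch
    import torch.nn as nn

    class DemographicConditioner(nn.Module):
        """Illustrative FiLM-style conditioning: an audience profile vector
        yields a scale and shift applied to the fused features, steering
        predictions toward a population's typical responses."""
        def __init__(self, profile_dim: int = 16, d_model: int = 512):
            super().__init__()
            self.to_scale_shift = nn.Linear(profile_dim, 2 * d_model)

        def forward(self, fused, profile):
            # fused: (batch, seconds, d_model); profile: (batch, profile_dim)
            scale, shift = self.to_scale_shift(profile).chunk(2, dim=-1)
            return fused * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)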

Modality Ablation

Isolate the neural contribution of each sensory channel. Turn off audio and see how visual-only engagement changes, as sketched below. Essential for understanding what drives the experience.

Cross-modal integration regions
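
Mechanically, ablation can be as simple as substituting a neutral baseline for one stream's features before fusion and re-scoring. A sketch under that assumption, reusing the extraction sketch above:

    import numpy as np

    def ablate_stream(features: dict, stream: str) -> dict:
        """Return a copy of the per-stream feature dict with one modality
        (e.g. "audio") zeroed out, so the fused prediction reflects the
        remaining streams only. Zeroing is one simple baseline; substituting
        each stream's mean features is another option."""
        ablated = dict(features)
        ablated[stream] = np.zeros_like(features[stream])
        return ablated

    # Usage sketch: score the full clip, then the same clip without audio.
    # features = extract_features("clip.mp4")
    # full     = model(features)
    # no_audio = model(ablate_stream(features, "audio"))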

Sub-5-Second Inference

A hybrid deployment architecture: frozen feature extractors run on cloud APIs while the trainable adapter runs locally or at the edge. Fast enough for real-time scoring workflows.

Optimized for production latency

WebGPU-Ready

The trainable adapter is small enough to run in-browser via WebGPU. Feature extraction happens on cloud APIs, but the final prediction can run on the client device with zero server cost.

Client-side neural prediction

Hybrid Deployment

Run on our cloud, your infrastructure, or a hybrid of both. ORCLE-Lite (4 streams) for speed, ORCLE-Full (6 streams) for maximum depth. Scale from hobby to enterprise.

Flexible architecture, any scale
The Science

The neuroscience behind ORCLE

ORCLE predicts activation across all 20,484 vertices on the fsaverage5 cortical surface -- the standard brain template used in computational neuroscience. It was trained on 451.6 hours of fMRI data from 720+ human subjects watching naturalistic video content.

All 20,484 vertices on fsaverage5
Trained on 720+ human subjects
451.6 hours of fMRI training data
Group correlation R ~ 0.40 on held-out subjects
6 parallel encoding streams
Sub-5-second inference latency
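
The headline accuracy figure, group correlation R ~ 0.40 on held-out subjects, is the standard encoding-model metric: per-vertex Pearson correlation between predicted and group-averaged measured timecourses, averaged across vertices. A sketch of how such a number is computed (our reading; evaluation details beyond the figure above are not published here):

    import numpy as np

    def group_correlation(pred: np.ndarray, measured: np.ndarray) -> float:
        """Mean per-vertex Pearson r. pred, measured: (seconds, n_vertices)."""
        p = pred - pred.mean(axis=0)
        m = measured - measured.mean(axis=0)
        r = (p * m).sum(axis=0) / (
            np.linalg.norm(p, axis=0) * np.linalg.norm(m, axis=0) + 1e-8)
        return float(r.mean())
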
Selected References
Allen, E.J., et al. (2022). A massive 7T fMRI dataset to bridge cognitive neuroscience and AI. Nature Neuroscience, 25, 116-126.
Gifford, A.T., et al. (2023). The Algonauts Project 2023 Challenge: How the human brain makes sense of natural scenes. arXiv:2301.15469.
Huth, A.G., et al. (2016). Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532, 453-458.
Schrimpf, M., et al. (2021). The neural architecture of language: Integrative modeling converges on predictive processing. PNAS, 118(45).
View full research references →
Pricing

Choose your plan

Research

Free

100 scores per month

  • 10 scores/day, 100/month
  • ORCLE-Lite (4 streams)
  • Composite score only
  • Python SDK
  • Best-effort latency
  • Docs + community support
Join Waitlist

Developer

$0.10/score

Full access, pay per use

  • 100 scores/min rate limit
  • ORCLE-Full (6 streams)
  • All 6 dimensions + 18 metrics + timeseries
  • Batch endpoint (up to 100 videos)
  • Webhooks
  • Python + JavaScript SDKs
  • Email support
  • Volume discounts at 1K+ scores/mo
Join Waitlist

Enterprise

Custom

Volume + custom models

  • 600 scores/min rate limit
  • Full + custom demographic conditioning
  • Raw vertex data + modality ablation
  • Unlimited batch processing
  • Python + JS + Go + custom SDKs
  • Dedicated engineer
  • Custom SLA
Join Waitlist

Ready to build on the brain encoder?

Join the waitlist for ORCLE API access and start predicting brain responses at scale.

Join Waitlist

Built on peer-reviewed neuroscience. Patent pending.