Why I Am Working on Brain-Computer Interfaces

A systems architect looks at BCI and asks the question nobody in the field is asking.

lobirus

I am not a neuroscientist. I am not a physicist. I am not pretending to be either. I have been building software systems for over two decades, and my strength is in seeing how complex systems connect, where data flows break down, and where the architecture is missing. That is the lens I am bringing to BCI.

And from that lens, something about this field looks broken.

The Separation Problem

BCI sits at the intersection of neuroscience, physics, electrical engineering, signal processing, machine learning, and software architecture. In practice, these fields barely talk to each other.

Neuroscientists understand the brain but often work with whatever sensing hardware is available off the shelf. They optimize experiments within the constraints of existing instruments rather than questioning the instruments themselves. Physicists develop extraordinary sensing technologies (quantum sensors, THz spectroscopy, NV-center magnetometry) but rarely think about neural applications because that is not their field. Electrical engineers build signal chains for known modalities. ML researchers optimize decoders for the data they are given.

Nobody is standing back and asking: what is the actual information architecture of this problem? What signals exist? Which ones are we capturing and which ones are we ignoring? What would the sensing, ingestion, and decoding pipeline look like if we designed it from scratch without inheriting the assumptions of any single discipline?

That is a systems question. That is my question.

What I Actually Bring

When I look at a complex system, I see data flows. Sources, transformations, storage, processing, output. I see where information gets lost, where latency accumulates, where the bottleneck actually lives versus where people assume it lives.

In BCI, the conventional assumption is that the bottleneck is the skull. Signals are weak. Tissue attenuates them. Therefore we need better sensors or we need to go invasive.

I think the bottleneck is upstream. The problem is not that the signals are too weak to detect. The problem is that we are detecting the wrong signals. Or more precisely, we are only detecting the signals that the historically available instruments happen to be sensitive to, and assuming that is all there is. That is like saying the only information in a forest is what you can see, because you happened to bring a camera and not a microphone.

My role is not to build the microphone. It is to be the person who says: we should also be listening. And then to build the data platform that can ingest, process, and make sense of whatever the microphone picks up.

The Data Layer

Every BCI system has a data pipeline, but most of them are built around a single modality. EEG rigs come with EEG software. fMRI scanners come with fMRI analysis packages. Each one is a silo. The data formats are different, the processing assumptions are different, the temporal resolutions are different, the output representations are different.

If you want to test whether multiple sensing modalities carry complementary information about neural states, you need a unified data architecture. Something modality-agnostic that can ingest time-series data from any sensor, align it temporally, normalize it, store it efficiently, and feed it into a decoder that learns the mapping between sensor inputs and cognitive states.
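To make the alignment step concrete, here is a minimal sketch in plain Python of resampling two streams with different native rates onto one shared clock. The function and stream names (`interp`, `align_streams`, "electrical", "optical") are illustrative assumptions, not any real platform's API; a production pipeline would also anti-alias before downsampling.

```python
from bisect import bisect_left
import math

def interp(ts, xs, t):
    """Linearly interpolate the samples (ts, xs) at time t; ts is sorted."""
    i = bisect_left(ts, t)
    if i == 0:
        return xs[0]
    if i >= len(ts):
        return xs[-1]              # hold the last value past the end of the stream
    w = (t - ts[i - 1]) / (ts[i] - ts[i - 1])
    return xs[i - 1] * (1 - w) + xs[i] * w

def align_streams(streams, target_hz, duration_s):
    """Resample every (timestamps, samples) stream onto one shared clock."""
    n = int(duration_s * target_hz)
    common_t = [k / target_hz for k in range(n)]
    aligned = {name: [interp(ts, xs, t) for t in common_t]
               for name, (ts, xs) in streams.items()}
    return common_t, aligned

# Two modalities with different native rates over the same 2-second window.
t_fast = [k / 64 for k in range(128)]    # e.g. a 64 Hz electrical channel
t_slow = [k / 16 for k in range(32)]     # e.g. a 16 Hz optical channel
streams = {
    "electrical": (t_fast, [math.sin(2 * math.pi * 10 * t) for t in t_fast]),
    "optical":    (t_slow, [math.cos(2 * math.pi * t) for t in t_slow]),
}
common_t, aligned = align_streams(streams, target_hz=128, duration_s=2.0)
assert len(aligned["electrical"]) == len(aligned["optical"]) == len(common_t) == 256
```

The point of the sketch is that nothing in `align_streams` knows or cares what the sensor is; once every stream lives on the common clock, the decoder sees one multi-channel matrix.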

This is not a neuroscience problem. This is an information architecture problem. It is a data engineering problem. It is exactly the kind of problem I have spent my career solving in other domains.

The platform I am building is designed to be the missing infrastructure layer. It handles ingestion, storage, transformation, and ML/AI processing for multi-modal brain sensing data. It does not care whether the input comes from EEG, radar, impedance tomography, quantum sensors, or something that does not exist yet. It cares about data quality, temporal alignment, feature extraction, and decoder performance.
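One way to picture that modality-agnostic contract: every sensor, existing or future, is described by the same record shape before it enters the pipeline. This is a hypothetical sketch; the class and field names (`SensorRecord`, `modality`, `meta`) are my illustration, not the platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SensorRecord:
    modality: str            # "eeg", "radar", "impedance", "quantum", ...
    sample_rate_hz: float
    timestamps: list         # seconds, strictly increasing
    samples: list            # one value per timestamp (per channel in practice)
    meta: dict = field(default_factory=dict)  # device, units, channel layout

    def validate(self):
        """Minimal quality gate: matched lengths, monotonic clock."""
        if len(self.timestamps) != len(self.samples):
            raise ValueError("timestamp/sample length mismatch")
        if any(b <= a for a, b in zip(self.timestamps, self.timestamps[1:])):
            raise ValueError("timestamps must be strictly increasing")
        return self

# The ingestion layer treats a radar return no differently from an EEG channel.
rec = SensorRecord("radar", 100.0, [0.0, 0.01, 0.02], [0.1, 0.3, 0.2]).validate()
```

The design choice this illustrates: quality checks and temporal bookkeeping live in one place, so adding a new sensing modality means writing an adapter to this record shape, not a new pipeline.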

Connecting the Dots

I have spent months reading across neuroscience, quantum physics, sensing technology, and signal processing. Not to become an expert in any of them, but to understand enough to see the connections that specialists miss because they are inside their own fields.

Some of what I found:

Neurons produce physical changes in at least six distinct domains when they fire: electrical, mechanical, thermal, optical, dielectric, and magnetic. Current non-invasive BCI exploits only one or two of these. The others are essentially untouched.

Active sensing (sending a controlled signal and measuring how the brain modifies it) is a fundamentally different paradigm from passive detection (listening for emissions), with orders of magnitude better SNR in every other field where both have been tried: radar, sonar, MRI, ultrasound.

Quantum sensing technologies that could detect neural correlates through new physical channels already exist in physics labs but have never been pointed at the BCI problem.

Recent evidence suggests the brain may exploit quantum mechanical effects for computation (Fisher 2015, Kerskens and Lopez Perez 2022), which would mean classical sensors are fundamentally unable to capture the full picture, no matter how sensitive they become.

None of these observations require deep expertise in any single field. They require reading across fields and asking: why hasn't anyone connected these?

That is what I do. I connect systems. I see the architecture. And right now, I see a field that is stuck because it is optimizing within the wrong constraints, and nobody is stepping back far enough to see the full picture.

I am stepping back.