What would it take for telepathy to be accessible to everyone?
Let's start by breaking it down into what it means and what it entails.
It's the transmission of any kind of information between two human minds, through any means other than the classical sensory channels we are accustomed to.
That means no sounds, no visual cues, no tactile triggers, nor any other use of our senses.
It's safe to say that unless you are in control of the immaterial connectivity of everything (congratulations if you are), or unless you have secretly discovered the quantum properties and behaviour of consciousness (big congratulations if you have), that's not currently achievable.
Again, without stepping into the less-known potential quantum behaviour of the mind, our current understanding of the world breaks it into two components: matter and energy.
As far as we can tell, a mix of matter (the brain, the hardware) and energy exchanges ("thinking", the software) is what defines what's going on in the mind.
Reading what the brain is doing? That's easy - it's what Brain-Computer Interfaces (BCIs) do!
Making sense of what we read? That's a bit harder. But fortunately we're getting some allied reinforcements.
Until now we've been able to listen to the brain, but without the tools to translate it very well. A bit like arriving on a foreign planet and landing in the middle of a busy market: it's all just noise.
This is where Deep Learning, with its Transformers and Large Language Models, comes to help: like having a device that listens to the noise for a while, slowly starts to make sense of it, and connects it contextually to known concepts and ideas.
Obviously, this pertains to inner speech, the silent monologue (or dialogue, why not?).
We don't need a word-by-word translation. My thinking of a McIntosh apple is very particular - but as a start, let's just detect that it's an apple.
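To make the idea concrete, here's a minimal sketch of what "detect that it's an apple" could look like: a nearest-centroid classifier matching an incoming brain-signal pattern against a small vocabulary of coarse concepts. The feature vectors, labels, and the whole setup are invented for illustration - real imagined-speech decoding involves far richer signals and models.

```python
import math

# Pretend these are averaged signal features recorded while subjects
# imagined each concept (entirely made-up numbers).
CONCEPT_TEMPLATES = {
    "apple": [0.8, 0.2, 0.5],
    "dog":   [0.1, 0.9, 0.3],
    "music": [0.4, 0.4, 0.9],
}

def classify(features):
    """Return the concept whose template is nearest to the incoming pattern."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CONCEPT_TEMPLATES, key=lambda c: dist(CONCEPT_TEMPLATES[c], features))

# A noisy reading that roughly resembles the "apple" template:
print(classify([0.75, 0.25, 0.45]))  # -> apple
```

The point is the coarseness: whether I pictured a McIntosh or a Granny Smith, the noisy pattern only needs to land closer to "apple" than to anything else.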
And this is currently being done by very smart colleagues in the field.
For example, Minds Applied are making great progress with Cognichat and Cognitrol: using imagined words or sentences to communicate or, why not, control things through imagined speech.
This is where things become, or stay, more "Fi" than "Sci".
The transferring part isn't the problem; the receiving and interpretation are.
Getting a destination brain to receive our message over long stretches of distance runs into one big problem where BCIs are concerned: they're good at reading, but they suck at "writing".
There are some devices that might induce some raw states of mind through mild transcranial electrical stimulation, but as far as non-invasive technology is concerned - we're simply not even close yet.
So the best we could do is display the message on a mobile phone, Apple Watch or AR/VR glasses.
Then again, this would stretch the definition of telepathy, and we'd end up at pretty much "accessible communication".
Which isn't bad at all.
ChatGPT, and other GPTs (Generative Pre-trained Transformers), are a very hot subject today.
Without going into the GPT side - please have a look here to learn more about GPT-4 - how can they help telepathy, or at least make sense of our internal speech and visuals?
Well, the hard part of making sense of our brain's electrical patterns is matching them against something meaningful.
Instead of turning noisy data into word-by-word constructs of a whole sentence, we can now use a GPT's abilities to infer, complete and summarize otherwise broken and incomplete starting patterns.
The words "me", "sentence" and "write", which carry much greater meaning and weight inside my cognitive space, can then be reinterpreted and rearranged by a GPT model that can even draw on contextual and historical information.
In short, thinking of the number "2" and the word "front" while playing Call of Duty will definitely be interpreted differently than while playing Poker.
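One way to picture this context-conditioning step: before the decoded concept words ever reach a language model, they get wrapped together with the situation the user is in. The prompt format below is purely an assumption for illustration - not a real decoding pipeline or API.

```python
def build_reconstruction_prompt(concepts, context):
    """Combine raw decoded concept words with situational context into an
    instruction a language model could complete into a full sentence."""
    return (
        f"Context: the user is currently {context}.\n"
        f"Decoded concept words (in order): {', '.join(concepts)}.\n"
        "Reconstruct the most likely intended sentence."
    )

prompt = build_reconstruction_prompt(["2", "front"], "playing Call of Duty")
print(prompt)
```

Fed the same two words with the context "playing Poker", the model would reconstruct a very different sentence - which is exactly the disambiguation power we're after.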
Use your imagination - can you think what's the difference?
Brain-Computer Interfaces and Large Language Models can't enable FULL Telepathy. Not end-to-end.
Yet.
Our thoughts can be read - be it imagined speech or imagined visuals - and they can be transmitted; but we don't yet have the tech to make the destination mind receive them.
But LLMs will make it so much easier, very fast, for the right ideas, concepts and context to be read through Brain-Computer Interfaces.
Some sort of telepathy happens without any tech: Brain waves already synchronise when people interact in a shared social environment.
And encouragingly, studies on the contextual meaning of narrative constructs across multiple minds show that we understand things in the same way, for example when we're watching the same videos or listening to the same stories.
In the end, we're only adding training wheels to what might already be possible: theoretical physicists and philosophers are teasing the idea that consciousness lives in a different dimension, and that the material world is only an interpretation of it.
More Sci, less Fi doesn't always conclude that a sci-fi idea is in fact possible today.
But its purpose is to assess what we can do with current science.
We're not going to touch on the obvious medical use-cases, which do wonders for many people with disabilities. uCat is promising great progress in that area, among many others.
In multiplayer games, we could translate teammates' thoughts into actions: thinking "Danger to the right!" could trigger a haptic reaction in our own controller. This happens much faster than ordinary speech, which benefits both casual and professional gaming.
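The gaming scenario above boils down to a simple dispatch: a decoded intent from a teammate's BCI maps to a vibration pattern on the local controller. The intent names, motor labels, and strengths below are all hypothetical, just to show the shape of such a mapping.

```python
# Hypothetical mapping from decoded teammate intents to haptic events.
HAPTIC_PATTERNS = {
    "danger_right": ("right_motor", 0.9),   # strong pulse on the right side
    "danger_left":  ("left_motor", 0.9),    # strong pulse on the left side
    "regroup":      ("both_motors", 0.4),   # gentle full rumble
}

def on_decoded_intent(intent):
    """Translate a decoded thought into a controller vibration command."""
    motor, strength = HAPTIC_PATTERNS.get(intent, ("both_motors", 0.1))
    return {"motor": motor, "strength": strength}

print(on_decoded_intent("danger_right"))  # {'motor': 'right_motor', 'strength': 0.9}
```

Unknown intents fall back to a barely noticeable rumble, so a misdecoded thought nudges rather than startles the player.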
In a smart home, consider interacting with Alexa or Siri by thought alone. The previously mentioned Minds Applied paradigms might enable just that: I come home with my friends, silently think "Play that new Kanye song", and there it goes - Spotify reacts, my friends are amazed.
There are endless options, come think BIG with us.