After he attempts the world’s first human head transplant, neurosurgeon Sergio Canavero plans to attempt another world first: reawakening a brain that has been cryogenically frozen.
ONE WORLD’S FIRST AFTER ANOTHER
Given the remarkable advances that have been made in medicine in recent years, it’s hard to believe anything is still truly impossible. Artificial intelligences are diagnosing diseases, real-life cyborgs walk among us, and we’re finding promising new clues on our quest for immortality. Even more remarkable breakthroughs are on the way, but if any one research team truly faces seemingly insurmountable odds, it has to be that of Professor Sergio Canavero, Director of the Turin Advanced Neuromodulation Group.
Four years ago, the acclaimed neurosurgeon announced his plan to complete the world’s first human head transplant, and this week, in an interview with OOOM, he confirmed that the controversial operation will take place within the next 10 months. According to Canavero, the operation will occur in Harbin, China, with Xiaoping Ren of Harbin Medical University leading the surgical team, and contrary to previous reports, a Chinese citizen, not Russian Valery Spiridonov, will be the recipient of a donor body.

However, the most remarkable news to come out of Canavero’s interview doesn’t have anything to do with the head transplant at all, but with what he plans to do afterwards: “As soon as the first human head transplant has taken place, i.e., no later than in 2018, we will be able to attempt to reawaken the first frozen head.”
LIFE AFTER DEATH?
Canavero plans to remove the brain from a head that has been frozen at -196 degrees Celsius (-320 degrees Fahrenheit) and submerged in liquid nitrogen. He’ll then place the brain in a donor body in an attempt to effectively bring the patient back from the dead and, in the process, clear up humanity’s questions about the afterlife.
“If we bring this person back to life, we will receive the first real account of what actually happens after death,” said Canavero. “The head transplant gives us the first insight into whether there is an afterlife, a heaven, a hereafter, or whatever you may want to call it or whether death is simply a flicking off of the light switch and that’s it.”
Clearly, this is the stuff of science fiction, and the medical community — and society at large — has every reason to be very skeptical of its potential for success.
“The advocates of cryogenics are unable to cite any study in which a whole mammalian brain … has been resuscitated after storage in liquid nitrogen,” Clive Coen, Professor of Neuroscience at King’s College London, told The Telegraph, adding, “Irreversible damage is caused during the process of taking the mammalian brain into sub-zero temperatures.”
Even if it did work and the frozen brain did “wake up,” there’s no telling what kinds of complications the patient could experience, from decreased mental faculties to unimaginable mental trauma. Though we do now live in a world in which the seemingly impossible is becoming possible, some experiments might be better suited for works of sci-fi than modern hospitals.
At Facebook’s developer conference last week, Oculus Research predicted that AR glasses would replace smartphones in the near future. The ability to augment reality is just one of the futuristic technologies Facebook is working on.
Last week, Facebook’s annual developer conference (F8) gave us a glimpse of the future. While most of the announcements made during the event were meant for developers, it doesn’t take a techie to understand how they will impact the lives of Facebook’s nearly two billion users.
According to Michael Abrash, chief scientist at Facebook-owned Oculus Research, super augmented reality (AR) glasses could replace smartphones as the everyday computing gadget in the next five years.
It’s definitely not an outlandish prediction. Abrash explained that despite all the current hype around AR, the tech hasn’t yet reached its defining moment. “[I]t will be five years at best before we’re really at the start of the ramp to widespread, glasses-based augmented reality, before AR has its Macintosh moment,” he said on Day 2 of F8.
Widespread adoption, however, could take a few more decades. “20 or 30 years from now, I predict that instead of carrying stylish smartphones everywhere, we’ll wear stylish glasses,” claimed Abrash. “Those glasses will offer [virtual reality], AR, and everything in between, and we’ll use them all day.”
If Facebook’s Oculus team has any say, these super AR glasses would be capable of far more than just augmenting reality. They could give the user “superpowers” by enhancing the wearer’s memory, providing them with instant foreign and sign language translation, and isolating and muting distracting sounds and noise.
Facebook isn’t the only company invested in AR. Apple CEO Tim Cook has also been rather bullish about AR as the technology of the future, and with so many tech behemoths involved, five years seems like a completely realistic timeline for tech that will change everything about reality as we know it. After that, it’ll be on to combining these AR glasses with brain-computer interfaces (BCIs), and that’s a truly high-tech future worth waiting for.
With the work of brain-computer interface (BCI) pioneers like Elon Musk and Bryan Johnson, humanity is on the cusp of having the ability to control machines with our minds. The implications for a variety of fields are far-reaching and exciting.
LEAVING LIMITATIONS BEHIND
Just as ancient Greeks fantasized about soaring flight, today’s imaginations dream of melding minds and machines as a remedy to the pesky problem of human mortality. Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?
Over the last 50 years, researchers at university labs and companies around the world have made impressive progress toward achieving such a vision. Recently, successful entrepreneurs such as Elon Musk (Neuralink) and Bryan Johnson (Kernel) have announced new startups that seek to enhance human capabilities through brain-computer interfacing.
How close are we really to successfully connecting our brains to our technologies? And what might the implications be when our minds are plugged in?
Much of the recent work on BCIs aims to improve the quality of life of people who are paralyzed or have severe motor disabilities. You may have seen some recent accomplishments in the news: University of Pittsburgh researchers use signals recorded inside the brain to control a robotic arm. Stanford researchers can extract the movement intentions of paralyzed patients from their brain signals, allowing them to use a tablet wirelessly.
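Much of this decoding work comes down to learning a mapping from recorded neural activity to intended movement. The toy sketch below fits a linear decoder on simulated firing rates; the dimensions, noise levels, and data are invented for illustration and are not the Pittsburgh or Stanford teams’ actual pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: firing rates of 50 neurons over 500 time bins,
# paired with the 2D cursor velocity the subject intended at each bin.
n_neurons, n_bins = 50, 500
true_weights = rng.normal(size=(n_neurons, 2))
rates = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_bins, 2))

# Fit a linear decoder by least squares: velocity ≈ rates @ W
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode an intended velocity from a new burst of neural activity
new_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
decoded = new_rates @ W
print(decoded.shape)  # (1, 2): an x- and y-velocity command for a cursor or robotic arm
```

Real systems face noisier, non-stationary signals, which is part of why laboratory demos have proven hard to translate into everyday use.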
The most sophisticated BCIs are “bi-directional” BCIs (BBCIs), which can both record from and stimulate the nervous system. At our center, we’re exploring BBCIs as a radical new rehabilitation tool for stroke and spinal cord injury. We’ve shown that a BBCI can be used to strengthen connections between two brain regions or between the brain and the spinal cord, and reroute information around an area of injury to reanimate a paralyzed limb.
With all these successes to date, you might think a brain-computer interface is poised to be the next must-have consumer gadget.
STILL EARLY DAYS
But a careful look at some of the current BCI demonstrations reveals we still have a way to go: When BCIs produce movements, they are much slower, less precise and less complex than what able-bodied people do easily every day with their limbs. Bionic eyes offer very low-resolution vision; cochlear implants can electronically carry limited speech information but distort the experience of music. And to make all these technologies work, electrodes have to be surgically implanted – a prospect most people today wouldn’t consider.
But all these demos have been in the laboratory – where the rooms are quiet, the test subjects aren’t distracted, the technical setup is long and methodical, and experiments last only long enough to show that a concept is possible. It’s proved very difficult to make these systems fast and robust enough to be of practical use in the real world.
Even with implanted electrodes, another problem with trying to read minds arises from how our brains are structured. We know that each neuron and its thousands of connected neighbors form an unimaginably large and ever-changing network. What might this mean for neuroengineers?
Imagine you’re trying to understand a conversation between a big group of friends about a complicated subject, but you’re allowed to listen to only a single person. You might be able to figure out the very rough topic of what the conversation is about, but definitely not all the details and nuances of the entire discussion. Because even our best implants only allow us to listen to a few small patches of the brain at a time, we can do some impressive things, but we’re nowhere near understanding the full conversation.
There is also what we think of as a language barrier. Neurons communicate with each other through a complex interaction of electrical signals and chemical reactions. This native electro-chemical language can be interpreted with electrical circuits, but it’s not easy. Similarly, when we speak back to the brain using electrical stimulation, it is with a heavy electrical “accent.” This makes it difficult for neurons to understand what the stimulation is trying to convey in the midst of all the other ongoing neural activity.
Finally, there is the problem of damage. Brain tissue is soft and flexible, while most of our electrically conductive materials – the wires that connect to brain tissue – tend to be very rigid. This means that implanted electronics often cause scarring and immune reactions that cause the implants to lose effectiveness over time. Flexible biocompatible fibers and arrays may eventually help in this regard.
Ultimately, we believe a “co-adaptive” bidirectional BCI, where the electronics learns with the brain and talks back to the brain constantly during the process of learning, may prove to be a necessary step to build the neural bridge. Building such co-adaptive bidirectional BCIs is the goal of our center.
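The co-adaptive loop can be caricatured in a few lines: a decoder that updates its weights on every cycle while feedback is written back through stimulation. Everything below — the simulated signals, the normalized-LMS update, the no-op `stimulate()` — is a simplified stand-in, not the center’s actual hardware or algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 16
weights = np.zeros(n_channels)        # decoder starts knowing nothing
target = rng.normal(size=n_channels)  # the mapping the loop should converge to

def record():
    """Stand-in for sampling voltages from implanted electrodes."""
    return rng.normal(size=n_channels)

def stimulate(feedback):
    """Stand-in for writing an error signal back via electrical stimulation."""
    pass  # a real BBCI would drive stimulation hardware here

mu = 0.5  # normalized-LMS step size
for _ in range(2000):
    x = record()
    error = target @ x - weights @ x     # decoding error on this cycle
    weights += mu * error * x / (x @ x)  # decoder adapts...
    stimulate(error)                     # ...and the brain receives feedback to adapt too
```

After a few thousand cycles the decoder’s weights track the target mapping; the point is that adaptation happens continuously on both sides of the interface rather than in a one-time calibration.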
Elon Musk’s new startup Neuralink has the stated ultimate goal of enhancing humans with BCIs to give our brains a leg up in the ongoing arms race between human and artificial intelligence. He hopes that with the ability to connect to our technologies, the human brain could enhance its own capabilities – possibly allowing us to avoid a potential dystopian future where AI has far surpassed natural human capabilities. Such a vision certainly may seem far-off or fanciful, but we shouldn’t dismiss an idea on strangeness alone. After all, self-driving cars were relegated to the realm of science fiction even a decade and a half ago – and now share our roads.
In a closer future, as brain-computer interfaces move beyond restoring function in disabled people to augmenting able-bodied individuals beyond their human capacity, we need to be acutely aware of a host of issues related to consent, privacy, identity, agency and inequality. At our center, a team of philosophers, clinicians and engineers is working actively to address these ethical, moral and social justice issues and offer neuroethical guidelines before the field progresses too far ahead.
Connecting our brains directly to technology may ultimately be a natural progression of how humans have augmented themselves with technology over the ages, from using wheels to overcome our bipedal limitations to making notations on clay tablets and paper to augment our memories. Much like the computers, smartphones and virtual reality headsets of today, augmentative BCIs, when they finally arrive on the consumer market, will be exhilarating, frustrating, risky and, at the same time, full of promise.
One day, your regular old contact lenses could serve as a non-invasive diagnostic device, thanks to innovative new biosensors.
BIO-SENSING CONTACT LENS
Using ultra-thin transistor technology, researchers from Oregon State University have found a way to design contact lenses capable of registering information about the wearer’s physiological state.
The prototype thus far is only able to detect blood glucose levels, but that alone could be useful for many patients, particularly those with diabetes. If the lenses can actively detect physiological changes — like a rise or drop in blood sugar — and subsequently alert the wearer, the implications as a medical device would be clear.
As a prototype, the researchers developed a biosensor from a transparent sheet of IGZO transistors coated with glucose oxidase. Ideally, the biosensor works like this: when blood sugar comes into contact with the enzyme, the enzyme oxidizes it, causing a shift in pH that in turn changes the electrical current flowing through the transistors. That means the sensors should be able to detect even the very small glucose concentrations present in tears.
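The sensing chain described above — glucose oxidized by the enzyme, a pH shift, a current change — can be sketched as a simple signal model. All numbers here (baseline current, gain, pH sensitivity) are made-up placeholders for illustration, not the Oregon State team’s calibration.

```python
def ph_shift(glucose_mmol_l, sensitivity=0.02):
    """Local pH change caused by glucose oxidase producing gluconic acid."""
    return sensitivity * glucose_mmol_l

def transistor_current(delta_ph, baseline_ua=10.0, gain_ua_per_ph=4.0):
    """IGZO transistor drain current (microamps) responding to the pH change."""
    return baseline_ua + gain_ua_per_ph * delta_ph

def read_glucose(current_ua, baseline_ua=10.0, gain_ua_per_ph=4.0, sensitivity=0.02):
    """Invert the chain: map a current reading back to an estimated glucose level."""
    delta_ph = (current_ua - baseline_ua) / gain_ua_per_ph
    return delta_ph / sensitivity

# Tear glucose runs far lower than blood glucose (fractions of a mmol/L),
# which is why the sensor must resolve very small current changes.
tear_glucose = 0.3  # mmol/L, a plausible tear-film level
current = transistor_current(ph_shift(tear_glucose))
print(round(read_glucose(current), 3))  # recovers 0.3 in this idealized model
```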
NONINVASIVE DIAGNOSTIC DEVICE
These futuristic contact lenses are still very much in the prototype phase, though the researchers hope to complete animal testing in the coming year. Even though they won’t be hitting pharmacy shelves anytime soon, given that researchers have access to the technology they need to create their noninvasive diagnostic device, the turnaround could be quick in the grand scheme of development, clinical trials, and approval. Beyond just making it available with glucose-detecting capabilities, researchers are optimistic that it will be able to one day be used to monitor other medical conditions, and perhaps even detect cancer.
As the researchers explain: “There is a fair amount of information that can be monitored in a teardrop. Of course, there is glucose, but also lactate (sepsis, liver disease), dopamine (glaucoma), urea (renal function), and proteins (cancers). Our goal is to expand from a single sensor to multiple sensors.”
The sensors are still in development, and the researchers haven’t yet managed to attach them to the contact lenses. But the biomedical tech industry is already abuzz with interest: the researchers have published their work in several journals, including Nanoscale, and have presented at the National Meeting & Exposition of the American Chemical Society.
A new virtual reality ad service can use gaze tracking to verify that users have watched commercials.
VR AD SERVICE
HTC’s new virtual reality (VR) platform now allows brands to identify whether or not a user has already seen an ad via its VR headsets.
This new, strictly opt-in VR Ad Service — ads appear only in content whose developers have chosen to include them — means advertisers only pay for an ad after a user has seen it. The platform is capable of carrying ad formats like scene banners, 2D and 3D in-app placements, and app recommendation banners.
Ads that appear in immersive VR environments can not only provide more effective impressions but also track whether users have viewed them or turned their gaze away.
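Pay-per-verified-view comes down to logging, frame by frame, whether the wearer’s gaze falls on the ad and billing once a dwell threshold is cleared. The frame rate and two-second threshold below are assumptions for illustration, not HTC’s actual rules.

```python
FRAME_RATE = 90          # typical VR headset refresh rate, frames per second
MIN_VIEW_SECONDS = 2.0   # assumed dwell time required to count an impression

def count_impressions(gaze_on_ad_frames):
    """Count billable impressions from a per-frame gaze log (True = looking at the ad)."""
    impressions = 0
    streak = 0
    needed = int(MIN_VIEW_SECONDS * FRAME_RATE)
    for on_ad in gaze_on_ad_frames:
        streak = streak + 1 if on_ad else 0
        if streak == needed:  # gaze held long enough: bill exactly once per dwell
            impressions += 1
    return impressions

# 3 seconds looking at the ad, 1 second away, then only 1 second looking:
log = [True] * 270 + [False] * 90 + [True] * 90
print(count_impressions(log))  # 1: only the first dwell clears the threshold
```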
This technology aims to give advertisers the means to effectively reach and pique the interest of their audience while simultaneously enhancing brand image, and attracting more users to directly download their apps in the VR environment.
The technology was launched at the 2017 VIVE Ecosystem Conference.
ADVERTISING IN THE AGE OF VR
In-game advertising, even in the traditional sense, offers developers a meaningful source of revenue to support the development of their games. But ads are also something viewers naturally try to avoid. With VR gaining a strong foothold in mainstream media, companies are now trying to monetize the platform by introducing VR ads — a concept that, while fascinating, is also slightly disconcerting for some.
On one hand, ads viewed within HTC’s immersive VR environment are based on precise re-targeting, which means advertisers can ensure that they are actually showing ads relevant to their viewers. But since the payout is linked to people actually viewing the ads, the tech must verify this — which it does by tracking the viewer’s gaze. It isn’t hard to imagine a future where people wear VR or augmented reality (AR) equipment on a daily basis (perhaps in the form of contact lenses), meaning they quite literally could not look away from a commercial — or any other content, for that matter. That scenario aside, HTC points out that its VR advertising isn’t meant to interrupt the VR or AR experience — it’s actually designed to complement it.
Only time will tell if it will succeed from a consumer perspective. Until then, we can only hope that VR and AR companies find the right balance between creating a viable advertising revenue stream and ensuring a great AR and VR user experience. Ideally, one that doesn’t force us to consume media, commercials or otherwise.