Category: Neuroscience

11 Oct 2018
Manahel Thabet

Can Neuroscience Teach Robot Cars to Be Less Annoying?

Robot cars make for annoying drivers.

Relative to human motorists, the driverless vehicles now undergoing testing on public roads are overly cautious, maddeningly slow, and prone to abrupt halts or bizarre paralysis caused by bikers, joggers, crosswalks or anything else that doesn’t fit within the neat confines of binary robot brains. Self-driving companies are well aware of the problem, but there’s not much they can do at this point. Tweaking the algorithms to produce a smoother ride would compromise safety, undercutting one of the most-often heralded justifications for the technology.

It was just this kind of tuning to minimize excessive braking that led to a fatal crash involving an Uber Technologies Inc. autonomous vehicle in March, according to federal investigators. The company has yet to resume public testing of self-driving cars since shutting down operations in Arizona following the crash.

If driverless cars can’t be safely programmed to mimic risk-taking human drivers, perhaps they can be taught to better understand the way humans act. That’s the goal of Perceptive Automata, a Boston-based startup applying research techniques from neuroscience and psychology to give automated vehicles more human-like intuition on the road: Can software be taught to anticipate human behavior?

“We think about what that other person is doing or has the intent to do,” said Ann Cheng, a senior investment manager at Hyundai Cradle, the South Korean automaker’s venture arm and one of the investors that just helped Perceptive Automata raise $16 million. Toyota Motor Corp. is also backing the two-year-old startup founded by researchers and professors at Harvard University and Massachusetts Institute of Technology.

“We see a lot of AI companies working on more classical problems, like object detection [or] object classification,” Cheng said. “Perceptive is trying to go one layer deeper—what we do intuitively already.”

This predictive aspect of self-driving tech “was either misunderstood or completely underestimated” in the early stages of autonomous development, said Jim Adler, the managing director of Toyota AI Ventures.

With Alphabet Inc.’s Waymo planning to roll out an autonomous taxi service to paying customers in the Phoenix area later this year, and General Motors Co.’s driverless unit racing to deploy a ride-hailing business in 2019, the shortcomings of robot cars interacting with humans are coming under increased scrutiny. Some experts have advocated for education campaigns to train pedestrians to be more mindful of autonomous vehicles. Startups and global automakers are busy testing external display screens to telegraph the intent of a robotic car to bystanders.

But no one believes that will be enough to make autonomous cars move seamlessly among human drivers. For that, the car needs to be able to decipher intent by reading body language and understanding social norms. Perceptive Automata is trying to teach machines to predict human behavior by modeling how humans do it.

Sam Anthony, chief technology officer at Perceptive and a former hacker with a PhD in cognition and brain behavior from Harvard, developed a way to take image recognition tests used in psychology and use them to train so-called neural networks, a kind of machine learning based loosely on how the human brain works. His startup has drafted hundreds of people across diverse age ranges, driving experiences and locales to look at thousands of clips or images from street life—pedestrians chatting on a corner, a cyclist looking at his phone—and decide what they’re doing, or about to do. All those responses then get fed into the neural network, or computer brain, until it has a reference library it can call on to recognize what’s happening in real life situations.
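Perceptive Automata's actual pipeline is not public, but the aggregation step described above — many people judging the same clip, with their pooled answers becoming the training signal — can be illustrated with a minimal sketch. Everything here is invented for illustration: the clip IDs, the ratings, and the `soft_labels` helper are assumptions, not the company's code.

```python
from collections import defaultdict

# Hypothetical crowd responses: (clip_id, judgment) pairs, where each
# judgment is one rater's yes/no answer to a question like
# "Does this pedestrian intend to cross?" All data here is made up.
responses = [
    ("clip_A", 1), ("clip_A", 1), ("clip_A", 0), ("clip_A", 1),
    ("clip_B", 0), ("clip_B", 0), ("clip_B", 1), ("clip_B", 0),
]

def soft_labels(responses):
    """Pool many raters' yes/no judgments per clip into a probability.

    A fraction like 0.75 (rather than a hard 0/1 label) captures the
    ambiguity raters felt, and could serve as a soft training target
    for a neural network.
    """
    totals = defaultdict(lambda: [0, 0])  # clip_id -> [yes_count, total]
    for clip, judgment in responses:
        totals[clip][0] += judgment
        totals[clip][1] += 1
    return {clip: yes / n for clip, (yes, n) in totals.items()}

print(soft_labels(responses))  # {'clip_A': 0.75, 'clip_B': 0.25}
```

The design point is that disagreement among raters is kept, not discarded: a clip three of four people read as "about to cross" trains the model toward 0.75, mirroring the graded intuition a human driver would have.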

Perceptive has found it’s important to incorporate regional differences, since jaywalking is commonplace in New York City, for example, but rare or unheard of in other cities. “No one jaywalks in Tokyo, I’ve never seen it,” says Adler of Toyota. “These social mores and norms of how our culture will evolve and how different cultures will evolve with this tech is incredibly fascinating and also incredibly complex.”

Perceptive is working with startups, suppliers and automakers in the U.S., Europe, and Asia, although it won’t specify which. The company is hoping to have its technology integrated into mass production cars with self-driving features as soon as 2021. Even at the level of partial autonomy, with features such as lane-keeping and hands-off highway driving, deciphering human intent is relevant.

Autonomous vehicles “are going to be slow and clunky and miserable unless they can understand how to deal with humans in a complex environment,” said Mike Ramsey, an analyst at Gartner. Still, he cautioned that Perceptive’s undertaking “is exceptionally difficult.”

Even if Perceptive proves capable of doing what it claims, Ramsey said, it may also surface fresh ethical questions about outsourcing life-or-death decisions to machines. Because the startup is going beyond object identification to mimicking human intuition, it could bear liability for programming the wrong decision if an error occurs.

It’s also not the only company working on this problem. It’s reasonable to assume that major players like Waymo, GM’s Cruise LLC, and Zoox Inc. are trying to solve it internally, said Sasha Ostojic, former head of engineering at Cruise who is now a venture investor at Playground Global in Silicon Valley.

Until anyone makes major headway, however, be prepared to curb your road rage while stuck behind a robot car that drives like a grandma. “The more responsible people in the AV industry optimize for safety rather than comfort,” Ostojic said.

Source: Bloomberg

28 Aug 2018
Manahel Thabet

Brain cell discovery could help scientists understand consciousness

A team of scientists today unveiled the discovery of a new kind of brain neuron called the rosehip cell. What makes this find important? It may be unique to the human brain – and it’s found in the same area thought to be responsible for consciousness.

An international team of dozens of researchers made the discovery after running complex RNA sequencing experiments on tissue samples from the cerebral cortices of two brain donors. The results were then confirmed with live tissue taken from patients who’d undergone brain surgery.

Upon discovering the rosehip cell, the researchers immediately tried to replicate the finding using samples gathered from laboratory mice – to no avail. It appears the cell is specific to humans, or potentially primates, but the researchers point out they’re only speculating these neurons are unique to humans at this time.

What matters is what the rosehip cell does. Unfortunately, the scientists aren’t sure. Neurons are tough nuts to crack, but what they do know is that this one belongs to the inhibitory class of brain neurons. It’s possible the rosehip cell is an integral inhibitor of our brain activity, and at least partially responsible for consciousness.

Some scientists believe that human consciousness has something to do with wrangling reality from the chaos inside our brains. It’s been shown that an infant’s brain functions much like that of someone on LSD – babies are basically tripping all the time. Perhaps these neural inhibitors develop as our brains grow and help us to separate reality from whatever babies are dealing with.

But, of course, the real science isn’t quite as speculative. For the most part, the rosehip cell research is exciting because it’s filling in some missing pages in our atlas of human neural activity.

The brain is one of the most complex constructs in the universe, and the cerebral cortex is its most complicated part. It’s going to take a long time to figure the whole thing out.

The team next intends to look for the rosehip cell in the brains of people who suffer from neurological disorders – work that could lead to a vastly increased understanding of how the brain functions, and what causes it to break down.

Source: https://thenextweb.com/insider/2018/08/27/brain-cell-discovery-could-help-scientists-understand-consciousness/

24 Jul 2018
Manahel Thabet

A New Connection Between Smell and Memory Identified

Summary: A new study reveals how smells we encounter throughout life are encoded in memory. The findings could help develop new smell tests for Alzheimer’s disease.

Source: University of Toronto.

Neurobiologists at the University of Toronto have identified a mechanism that allows the brain to recreate vivid sensory experiences from memory, shedding light on how sensory-rich memories are created and stored in our brains.

Using smell as a model, the findings offer a novel perspective on how the senses are represented in memory, and could explain why the loss of the ability to smell has become recognized as an early symptom of Alzheimer’s disease.

“Our findings demonstrate for the first time how smells we’ve encountered in our lives are recreated in memory,” said Afif Aqrabawi, a PhD candidate in the Department of Cell & Systems Biology in the Faculty of Arts & Science at U of T, and lead author of a study published this month in Nature Communications.

“In other words, we’ve discovered how you are able to remember the smell of your grandma’s apple pie when walking into her kitchen.”

There is a strong connection between memory and olfaction – the process of smelling and recognizing odours – owing to their common evolutionary history. Examining this connection in mice, Aqrabawi and graduate supervisor Professor Junchul Kim in the Department of Psychology at U of T found that information about space and time integrate within a region of the brain important for the sense of smell – yet poorly understood – known as the anterior olfactory nucleus (AON).

