
Category: Neurofeedback

27 Oct 2018

New tool provides real-time glimpse of brain activity in mice

A transparent set of electrodes enables researchers to simultaneously record electrical signals and visualize neurons in the brains of awake mice.

Syncing neuronal signals with videos of neurons helps researchers map those signals to particular sites in the brain. The technology could yield insights into how the brain works and what goes awry in conditions such as autism.

Two-photon calcium imaging and electroencephalography (EEG) are both popular tools for studying the brain, but combining them has proved challenging. In the former technique, researchers introduce fluorescent proteins that light up when they bind calcium. When neurons fire, calcium ions rush into the cells and the microscope picks up the fluorescence. EEG requires inserting a recording electrode into the brain, however, and the electrode blocks light in the area from reaching the microscope.

In the new study, researchers built electrodes that transmit light. They layered a metallic material into a flat plastic mold, roughly the size of a single neuron, that is studded with hundreds of plastic spheres. The material fills the space around the spheres, creating holes that allow light to pass through.

Read more: https://www.spectrumnews.org/news/toolbox/new-tool-provides-real-time-glimpse-brain-activity-mice/

24 Oct 2018
Manahel Thabet

How Neuro-Physiotherapy Imparts Quality to Life

Over the last decade or so, we have been witnessing an upsurge in neurological problems such as stroke, Parkinson’s disease, diabetic neuropathy, and motor neuron disease in our society. An alarming concern is that these problems have started affecting people at a younger age. Worldwide, neurological disorders are associated with higher rates of morbidity and mortality, which in turn inflict a higher cost of rehabilitation upon sufferers. Given the topography, changing lifestyles, and stressors, Kashmiris in particular are especially susceptible to neurological problems.
A belief that still dominates the clinical decision making of most healthcare professionals is that recovery from neurological disorders is strictly a time-bound phenomenon, and that to expect it to happen after a set time frame is unrealistic. Research has nullified this belief and suggests that the brain can modify itself at any point in time, provided the treatment is channelized in the right direction.
Unfortunately, we all come across a section of people who have fallen prey to such dogmas and live a diminished life. Another section of the patient population suffers because of its contentment with menial and clinically irrelevant improvements. Needless to say, it is the acumen of a skilled neuro-physiotherapist that determines the potential for rewiring the central nervous system connections essential for recovery. The concept of recovery has changed over time: earlier, recovery was perceived as a patient’s ability to achieve nominal and insignificant improvements that would enable them to come out of bed and walk a few steps. Recovery now is tantamount to movement with a purpose, helping patients regain functions and eventually fulfill their social responsibilities.
Rehabilitation of patients with neurological problems is a high-cost affair with huge financial and social burdens. Soon after a person is afflicted with a neurological disorder, the family members, besides the patient, start bearing the brunt of the disease. Research reports reveal that caregivers of neurologically impaired patients are exposed to high levels of stress, which affects their productivity and, in turn, compromises the role they play in society. Recovery from neurological disorders, being relatively slow, demands close supervision and assistance from family members, who soon start dedicating their time and money to the patient’s rehabilitation. Moreover, with modern family systems, not every ailing person enjoys the luxury of extended social support, and eventually a number of impediments emerge in the path of recovery.
In a nutshell, neurological problems not only affect patients but pose a massive challenge to family members too. The best strategy for coping with neurological problems is to facilitate patients’ functional independence as rapidly as possible, which eventually relieves family members to a great extent.
Neurorehabilitation has undergone timely refinements to ensure the best possible, evidence-based care for patients. Modern-day neurorehabilitation uses approaches that minimize compensations to ensure complete functional recovery. Functional independence is its essence, and a neuro-physiotherapist is the apt resource to deliver the best care toward short-term and long-term functional milestones. People in the valley have limited knowledge of neuro-physiotherapy and the role a neuro-physiotherapist plays. As a responsible member of the healthcare team, a neuro-physiotherapist plays a vital role right from the onset of a neurological problem to the stage of community rehabilitation.
Since physiotherapists are movement science experts, fellow medical professionals and patients’ families cannot afford to take a neuro-physiotherapist’s consultation and advice for granted. A seemingly insignificant problem, if left unaddressed, can have devastating repercussions later. For instance, a trivial fault in the shoulder after a stroke or brain injury can affect a patient’s ability to drink and eat with that hand. Physiotherapy consultation from the outset therefore remains crucial in determining a patient’s functional outcomes, and it is ignored at one’s peril.
Physiotherapists too need to be well versed in the latest developments in the field of neuro-physiotherapy to ensure quality care delivery. A neuro-physiotherapist can make the best use of treatment methods such as Constraint Induced Movement Therapy (CIMT), Virtual Reality (VR), Functional Electrical Stimulation (FES), Proprioceptive Neuromuscular Facilitation (PNF), Neurodevelopmental Treatment (NDT), Motor Relearning Programme (MRP), Task Specific Training, Partial Body Weight Support Treadmill Training (PBWSTT), and Robotics. Equipped with these tools, neuro-physiotherapists can help patients achieve their set functional objectives and impart quality to their lives.

Source: https://kashmirreader.com/2018/10/24/how-neuro-physiotherapy-imparts-quality-to-life/

23 Oct 2018
Manahel Thabet

Study shows easy-to-use, noninvasive stimulation device can help prevent migraine attacks

A migraine is much more than just a bad headache. Migraine symptoms, which can be debilitating for many people, are the sixth leading cause of disability, according to the World Health Organization. While there is no cure, a new study published in Cephalalgia in March shows single-pulse transcranial magnetic stimulation is a new way to prevent migraine attacks. It’s safe, easy to use and noninvasive.

Researchers at Mayo Clinic and other major academic headache centers across the U.S. recently conducted a study examining the effectiveness of a single-pulse transcranial magnetic stimulation (sTMS) device in preventing migraine attacks. The eNeura SpringTMS Post-Market Observational U.S. Study of Migraine, also known as ESPOUSE, instructed participants to self-administer four pulses with the device in the morning and four pulses at night over three months to prevent migraine attacks, with additional pulses to treat attacks as needed. SpringTMS is the name of eNeura’s sTMS device.

“The migraine brain is hyperexcitable, and basic science studies have demonstrated modulation of neuronal excitability with this treatment modality,” says Amaal Starling, M.D., a Mayo Clinic neurologist, who is first author of the study. “Our study demonstrated that the four pulses emitted from this device twice daily reduce the frequency of headache days by about three days per month, and 46 percent of patients had at least a 50 percent reduction in migraine attacks per month on the treatment protocol. This data is clinically significant. Based on the current study and prior studies in acute migraine attack treatment, sTMS not only helps to stop a migraine attack, but it also helps prevent them.”

“For certain patients, treatment options for migraines, such as oral medications, are not effective, well-tolerated or preferred,” Dr. Starling adds. “The sTMS may be a great option for these patients and allow doctors to better meet their unique needs.”

The U.S. Food and Drug Administration already had approved the sTMS device for the acute treatment of migraine with aura. The FDA now has approved it to prevent migraine, as well.

Source: https://medicalxpress.com/news/2018-03-easy-to-use-noninvasive-device-migraine.html#nRlv

25 Sep 2018
Manahel Thabet

AI Detects Depression in Conversation

Summary: Researchers at MIT have developed a new deep learning neural network that can identify speech patterns indicative of depression from audio data. The algorithm, researchers say, is 77% effective at detecting depression.

Source: MIT.

To diagnose depression, clinicians interview patients, asking specific questions — about, say, past mental illnesses, lifestyle, and mood — and identify the condition based on the patient’s responses.

In recent years, machine learning has been championed as a useful aid for diagnostics. Machine-learning models, for instance, have been developed that can detect words and intonations of speech that may indicate depression. But these models tend to predict that a person is depressed or not, based on the person’s specific answers to specific questions. These methods are accurate, but their reliance on the type of question being asked limits how and where they can be used.

In a paper being presented at the Interspeech conference, MIT researchers detail a neural-network model that can be unleashed on raw text and audio data from interviews to discover speech patterns indicative of depression. Given a new subject, it can accurately predict if the individual is depressed, without needing any other information about the questions and answers.

The researchers hope this method can be used to develop tools to detect signs of depression in natural conversation. In the future, the model could, for instance, power mobile apps that monitor a user’s text and voice for mental distress and send alerts. This could be especially useful for those who can’t get to a clinician for an initial diagnosis, due to distance, cost, or a lack of awareness that something may be wrong.

“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” says first author Tuka Alhanai, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you want to deploy [depression-detection] models in scalable way … you want to minimize the amount of constraints you have on the data you’re using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual.”

The technology could still, of course, be used for identifying mental distress in casual conversations in clinical offices, adds co-author James Glass, a senior research scientist in CSAIL. “Every patient will talk differently, and if the model sees changes maybe it will be a flag to the doctors,” he says. “This is a step forward in seeing if we can do something assistive to help clinicians.”

The other co-author on the paper is Mohammad Ghassemi, a member of the Institute for Medical Engineering and Science (IMES).

Context-free modeling

The key innovation of the model lies in its ability to detect patterns indicative of depression, and then map those patterns to new individuals, with no additional information. “We call it ‘context-free,’ because you’re not putting any constraints into the types of questions you’re looking for and the type of responses to those questions,” Alhanai says.

Other models are provided with a specific set of questions, and then given examples of how a person without depression responds and examples of how a person with depression responds — for example, the straightforward inquiry, “Do you have a history of depression?” It uses those exact responses to then determine if a new individual is depressed when asked the exact same question. “But that’s not how natural conversations work,” Alhanai says.

The researchers, on the other hand, used a technique called sequence modeling, often used for speech processing. With this technique, they fed the model sequences of text and audio data from questions and answers, from both depressed and non-depressed individuals, one by one. As the sequences accumulated, the model extracted speech patterns that emerged for people with or without depression. Words such as, say, “sad,” “low,” or “down,” may be paired with audio signals that are flatter and more monotone. Individuals with depression may also speak slower and use longer pauses between words. These text and audio identifiers for mental distress have been explored in previous research. It was ultimately up to the model to determine if any patterns were predictive of depression or not.

“The model sees sequences of words or speaking style, and determines that these patterns are more likely to be seen in people who are depressed or not depressed,” Alhanai says. “Then, if it sees the same sequences in new subjects, it can predict if they’re depressed too.”

This sequencing technique also helps the model look at the conversation as a whole and note differences between how people with and without depression speak over time.
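
As a rough illustration of the idea, here is a toy Python sketch (not the authors’ model) of how text and audio cues from each question-answer turn might be reduced to a feature sequence. The specific features — distress-related words, pause length, pitch variance — are assumptions drawn from the cues described above, not the paper’s actual feature set.

```python
# Toy sketch, not the authors' model: turn question-answer pairs into
# per-turn feature vectors of the kind a sequence model could consume.

DISTRESS_WORDS = {"sad", "low", "down"}  # example words from the article

def turn_features(text, pause_seconds, pitch_variance):
    """One feature vector per question-answer turn."""
    words = text.lower().split()
    return [
        sum(w in DISTRESS_WORDS for w in words) / max(len(words), 1),
        pause_seconds,    # longer pauses are associated with depressed speech
        pitch_variance,   # flatter, more monotone audio -> lower variance
    ]

def interview_sequence(turns):
    """Accumulate features turn by turn, as a sequence model would see them."""
    return [turn_features(text, pause, pitch) for text, pause, pitch in turns]

seq = interview_sequence([
    ("I feel sad and low lately", 1.8, 0.02),
    ("Things are fine I guess", 0.4, 0.15),
])
```

A real sequence model (an LSTM, say) would consume these vectors in order, which is what lets it weigh how speech changes over the whole conversation rather than scoring isolated answers.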

Detecting depression

The researchers trained and tested their model on a dataset of 142 interactions from the Distress Analysis Interview Corpus, which contains audio, text, and video interviews of patients with mental-health issues conducted by virtual agents controlled by humans. Each subject is rated for depression on a scale from 0 to 27 using the Personal Health Questionnaire. Scores above the cutoff between moderate (10 to 14) and moderately severe (15 to 19) are considered depressed; all subjects below that threshold are considered not depressed. Of all the subjects in the dataset, 28 (20 percent) are labeled as depressed.
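
The labeling rule described above fits in a few lines of Python. The exact cutoff value (15, the boundary between “moderate” and “moderately severe”) is our reading of the description rather than a figure the release states outright:

```python
def label_depressed(phq_score, cutoff=15):
    """Binary label from a 0-27 Personal Health Questionnaire score.
    cutoff=15 is an assumption: the boundary between 'moderate' (10-14)
    and 'moderately severe' (15-19) described in the text."""
    if not 0 <= phq_score <= 27:
        raise ValueError("PHQ score must be between 0 and 27")
    return phq_score >= cutoff

# Of the 142 subjects, 28 carry the depressed label:
share = 28 / 142  # about 20 percent, as reported
```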

In experiments, the model was evaluated using precision and recall. Precision measures the fraction of subjects the model flagged as depressed who had in fact been diagnosed as depressed. Recall measures the fraction of all subjects diagnosed as depressed in the dataset that the model detected. The model scored 71 percent on precision and 83 percent on recall; the averaged combined score for the two metrics was 77 percent. In the majority of tests, the researchers’ model outperformed nearly all other models.
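
The “averaged combined score” reads like an F1 score, the harmonic mean of precision and recall; under that assumption, the quoted numbers check out:

```python
def precision_recall(tp, fp, fn):
    """Precision: flagged-depressed subjects who were truly depressed.
    Recall: truly depressed subjects the model detected."""
    return tp / (tp + fp), tp / (tp + fn)

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# 71 percent precision and 83 percent recall combine to roughly 77 percent:
combined = f1_score(0.71, 0.83)
```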


One key insight from the research, Alhanai notes, is that, during experiments, the model needed much more data to predict depression from audio than from text. With text, the model can accurately detect depression using an average of seven question-answer sequences. With audio, the model needed around 30 sequences. “That implies that the patterns in words people use that are predictive of depression happen in a shorter time span in text than in audio,” Alhanai says. Such insights could help the MIT researchers, and others, further refine their models.

This work represents a “very encouraging” pilot, Glass says. But now the researchers seek to discover what specific patterns the model identifies across scores of raw data. “Right now it’s a bit of a black box,” Glass says. “These systems, however, are more believable when you have an explanation of what they’re picking up. … The next challenge is finding out what data it’s seized upon.”

Source: NeuroScienceNews

28 Aug 2018
Manahel Thabet

Brain cell discovery could help scientists understand consciousness

A team of scientists today unveiled the discovery of a new kind of brain neuron called the rosehip cell. What makes this find important? It may be unique to the human brain – and it’s found in the same area thought to be responsible for consciousness.

A team of international researchers consisting of dozens of scientists made the discovery after running complex RNA sequencing experiments on tissue samples from the cerebral cortices of two brain donors. The results were then confirmed with live tissue taken from patients who’d undergone brain surgery.

Upon discovering the rosehip cell, the researchers immediately tried to replicate the finding using samples gathered from laboratory mice – to no avail. It appears the cell is specific to humans, or potentially primates, but the researchers point out they’re only speculating these neurons are unique to humans at this time.

What matters is what the rosehip cell does. Unfortunately, the scientists aren’t sure. Neurons are tough nuts to crack, but what they do know is that this one belongs to the inhibitory class of brain neurons. It’s possible the rosehip cell is an integral inhibitor of our brain activity, and at least partially responsible for consciousness.

Some scientists believe that human consciousness has something to do with wrangling reality from the chaos inside our brains. It’s been shown that an infant’s brain functions much like that of someone on LSD – babies are basically tripping all the time. Perhaps these neural inhibitors develop as our brains grow and help us to separate reality from whatever babies are dealing with.

But, of course, the real science isn’t quite as speculative. For the most part, the rosehip cell research is exciting because it’s filling in some missing pages in our atlas of human neural activity.

The brain is one of the most complex constructs in the universe, and the cerebral cortex is its most complicated part. It’s going to take a long time to figure the whole thing out.

The team intends to look for the rosehip cell in the brains of people who suffer from neurological disorders next – work that could lead to a vastly increased understanding of how the brain functions, and what causes it to break down.

Source: https://thenextweb.com/insider/2018/08/27/brain-cell-discovery-could-help-scientists-understand-consciousness/

18 Aug 2018
Manahel Thabet

Developing bionics: How IBM is adapting mind-control for accessibility

What if there was a way to give everyone suffering from conditions like paralysis or Locked-in syndrome the means to operate prosthetic devices and tech gadgets using mind-control? Well, there is – or at least, there will be.

IBM Research recently developed an end-to-end proof-of-concept for a method of controlling an off-the-shelf robotic arm with a brain-computer interface built using a take-home EEG monitor. To accomplish this, the researchers developed AI to interpret the data from the EEG monitor as commands for the robotic arm.
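
IBM hasn’t published the pipeline’s internals, so the following is purely an illustrative Python sketch of the general shape of such a system: extract a feature from a window of EEG samples, classify it, and map the class to an arm command. The feature, the threshold, and the command names are all invented for illustration; a real system would use spectral features and a trained model rather than a fixed rule.

```python
def signal_power(window):
    """Variance of one window of EEG samples -- a crude stand-in feature."""
    mean = sum(window) / len(window)
    return sum((s - mean) ** 2 for s in window) / len(window)

# Hypothetical two-command vocabulary for the robotic arm.
COMMANDS = {0: "rest", 1: "grip"}

def to_command(window, threshold=1.0):
    """Map a raw EEG window to a robotic-arm command."""
    cls = 1 if signal_power(window) > threshold else 0
    return COMMANDS[cls]
```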

That may not sound like something that will change everything overnight – and IBM isn’t the only or first company to dabble in brain-computer interfaces. But it’s one of the few that appear interested in figuring out how to build a system using inexpensive hardware that’s already available.

We reached out to Stefan Harrer, a research scientist at IBM Research working on the project, who told us:

Our primary design goals were (i) low-cost and (ii) suitable for use in an unrestricted real-life environment. (i) allows the system to transition from an expensive research grade exploratory setup (the status-quo of BMIs) to a setup that is affordable for the broad public (the first of our main objectives) – (ii) allows the system to be taken out of highly specialized research lab environments and moved into everyday environments for use by the broad public (the second of our main objectives).

This early work indicates people can control machines with their minds alone, using commonly available technology and cutting-edge AI. That’s huge for those who don’t have that same control over their own bodies.

Harrer told us that, with further development, the same machine learning techniques could potentially be applied to control a prosthetic limb or even a robot assistant.

IBM’s system isn’t ready for prime time just yet, though. Harrer says the team is working on reducing latency and doesn’t have any current plans for human trials. But the proof-of-concept indicates it’s only a matter of time before devices built using this technology become a common accessibility solution.

Source: TNW

24 Jul 2018
Manahel Thabet

A New Connection Between Smell and Memory Identified

Summary: A new study reveals how smells we encounter throughout life are encoded in memory. The findings could help develop new smell tests for Alzheimer’s disease.

Source: University of Toronto.

Neurobiologists at the University of Toronto have identified a mechanism that allows the brain to recreate vivid sensory experiences from memory, shedding light on how sensory-rich memories are created and stored in our brains.

Using smell as a model, the findings offer a novel perspective on how the senses are represented in memory, and could explain why the loss of the ability to smell has become recognized as an early symptom of Alzheimer’s disease.

“Our findings demonstrate for the first time how smells we’ve encountered in our lives are recreated in memory,” said Afif Aqrabawi, a PhD candidate in the Department of Cell & Systems Biology in the Faculty of Arts & Science at U of T, and lead author of a study published this month in Nature Communications.

“In other words, we’ve discovered how you are able to remember the smell of your grandma’s apple pie when walking into her kitchen.”

There is a strong connection between memory and olfaction – the process of smelling and recognizing odours – owing to their common evolutionary history. Examining this connection in mice, Aqrabawi and graduate supervisor Professor Junchul Kim in the Department of Psychology at U of T found that information about space and time integrate within a region of the brain important for the sense of smell – yet poorly understood – known as the anterior olfactory nucleus (AON).

Continue Reading.

 


23 Jul 2018
Manahel Thabet

How the Brain Reacts to Food May Be Linked to Overeating

Summary: A new study reports when certain brain areas react more strongly to food rewards than financial rewards, children are more likely to overeat, even if they are not hungry or overweight.

Source: Penn State.

The reason why some people find it so hard to resist finishing an entire bag of chips or bowl of candy may lie with how their brain responds to food rewards, leaving them more vulnerable to overeating.

In a study with children, researchers found that when certain regions of the brain reacted more strongly to being rewarded with food than with money, those children were more likely to overeat, even when they weren’t hungry and regardless of whether they were overweight.

Shana Adise, a postdoctoral fellow at the University of Vermont who led the study while earning her doctorate at Penn State, said the results give insight into why some people may be more prone to overeating than others. The findings may also give clues on how to help prevent obesity at a younger age.

“If we can learn more about how the brain responds to food and how that relates to what you eat, maybe we can learn how to change those responses and behavior,” Adise said. “This also makes children an interesting population to work with, because if we can stop overeating and obesity at an earlier age, that could be really beneficial.”

Previous research on how the brain’s response to food can contribute to overeating has been mixed. Some studies have linked overeating with brains that are more sensitive to food rewards, while others have found that being less sensitive to receiving food rewards makes you more likely to overeat.

Additionally, other studies have shown that people who are willing to work harder for food than other types of rewards, like money, are more likely to overeat and gain weight over time. But the current study is the first to show that children who have greater brain responses to food compared to money rewards are more likely to overeat when appealing foods are available.

“We know very little about the mechanisms that contribute to overeating,” Adise said. “The scientific community has developed theories that may explain overeating, but whether or not they actually relate to food intake hadn’t yet been evaluated. So we wanted to go into the lab and test whether a greater brain response to anticipating and winning food, compared to money, was related to overeating.”

For the study, 59 children between the ages of 7 and 11 made four visits to Penn State’s Children’s Eating Behavior Laboratory.

During the first three visits, the children were given meals designed to measure how they eat in a variety of different situations, such as a typical meal when they’re hungry versus snacks when they’re not hungry. How much the children ate at each meal was determined by weighing the plates before and after the meals.

On their fourth visit, the children had fMRI scans as they played several rounds of a game in which they guessed if a computer-generated number would be higher or lower than five. They were then told that if they were right, they would win either money, candy or a book, before it was revealed if they were correct or not.
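
A toy simulation makes the trial structure concrete. Whether the number 5 itself could appear isn’t stated, so it is excluded here to keep every trial decidable; the reward categories come from the article:

```python
import random

REWARD_TYPES = ("money", "candy", "book")  # reward categories from the study

def run_trial(guess_higher, rng):
    """One round of the guessing task: draw a number 1-9 (5 excluded, an
    assumption), compare it with five, and grant a randomly chosen reward
    type if the child's guess was correct."""
    number = rng.choice([n for n in range(1, 10) if n != 5])
    correct = (number > 5) == guess_higher
    reward = rng.choice(REWARD_TYPES) if correct else None
    return number, correct, reward
```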

The researchers found that when various regions of the brain reacted more to anticipating or winning food compared to money, those children were more likely to overeat.

“We also found that the brain’s response to food compared to money was related to overeating regardless of how much the child weighed,” Adise said. “Specifically, we saw that increased brain responses in areas of the brain related to cognitive control and self control when the children received food compared to money were associated with overeating.”


Adise added that this is important because it suggests there may be a way to identify brain responses that can predict the development of obesity in the future.

Kathleen Keller, associate professor of nutritional sciences, Penn State, said the study — recently published in the journal Appetite — backs up the theory that an increased brain response in regions of the brain related to rewards is associated with eating more food in a variety of situations.

“We predicted that kids who had an increased response to food relative to money would be the ones to overeat, and that’s what we ended up seeing,” Keller said. “We specifically wanted to look at kids whose brains responded to one type of a reward over another. So it wasn’t that they’re overly sensitive to all rewards, but that they’re highly sensitive to food rewards.”

Keller said the findings give insight into how the brain influences eating, which is important because it could help identify children who are at risk for obesity or other poor eating habits before those habits actually develop.

“Until we know the root cause of overeating and other food-related behaviors, it’s hard to give good advice on fixing those behaviors,” Keller said. “Once patterns take over and you overeat for a long time, it becomes more difficult to break those habits. Ideally, we’d like to prevent them from becoming habits in the first place.”

Source: NeuroScience

10 Jul 2018
Manahel Thabet

Synthetic DNA Artificial Neural Network Recognizes Handwriting

Summary: Researchers have created an artificial neural network from synthetic DNA that is able to correctly identify handwritten numbers.

Source: CalTech.

Researchers at Caltech have developed an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. The work is a significant step in demonstrating the capacity to program artificial intelligence into synthetic biomolecular circuits.

The work was done in the laboratory of Lulu Qian, assistant professor of bioengineering. A paper describing the research appears online on July 4 and in the July 19 print issue of the journal Nature.

“Though scientists have only just begun to explore creating artificial intelligence in molecular machines, its potential is already undeniable,” says Qian. “Similar to how electronic computers and smart phones have made humans more capable than a hundred years ago, artificial molecular machines could make all things made of molecules, perhaps including even paint and bandages, more capable and more responsive to the environment in the hundred years to come.”

Artificial neural networks are mathematical models inspired by the human brain. Despite being much simplified compared to their biological counterparts, artificial neural networks function like networks of neurons and are capable of processing complex information. The Qian laboratory’s ultimate goal for this work is to program intelligent behaviors (the ability to compute, make choices, and more) with artificial neural networks made out of DNA.

“Humans each have over 80 billion neurons in the brain, with which they make highly sophisticated decisions. Smaller animals such as roundworms can make simpler decisions using just a few hundred neurons. In this work, we have designed and created biochemical circuits that function like a small network of neurons to classify molecular information substantially more complex than previously possible,” says Qian.

To illustrate the capability of DNA-based neural networks, Qian laboratory graduate student Kevin Cherry chose a task that is a classic challenge for electronic artificial neural networks: recognizing handwriting.

Human handwriting can vary widely, and so when a person scrutinizes a scribbled sequence of numbers, the brain performs complex computational tasks in order to identify them. Because it can be difficult even for humans to recognize others’ sloppy handwriting, identifying handwritten numbers is a common test for programming intelligence into artificial neural networks. These networks must be “taught” how to recognize numbers, account for variations in handwriting, then compare an unknown number to their so-called memories and decide the number’s identity.

In the work described in the Nature paper, Cherry, who is the first author on the paper, demonstrated that a neural network made out of carefully designed DNA sequences could carry out prescribed chemical reactions to accurately identify “molecular handwriting.” Unlike visual handwriting that varies in geometrical shape, each example of molecular handwriting does not actually take the shape of a number. Instead, each molecular number is made up of 20 unique DNA strands chosen from 100 molecules, each assigned to represent an individual pixel in any 10 by 10 pattern. These DNA strands are mixed together in a test tube.
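
The encoding just described can be sketched in Python. The strand identities are hypothetical; the point is the pixel-to-strand mapping, with one strand per pixel of a 10-by-10 grid and exactly 20 strands per digit:

```python
GRID = 10  # a 10-by-10 pattern, one DNA strand per pixel (100 strands total)

def pattern_to_strands(pattern):
    """Map a 10x10 nested list of 0/1 pixels to the indices (0-99) of the
    20 unique strands that would be mixed into the test tube."""
    strands = [row * GRID + col
               for row, bits in enumerate(pattern)
               for col, bit in enumerate(bits) if bit]
    if len(strands) != 20:
        raise ValueError("a molecular digit uses exactly 20 unique strands")
    return strands
```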

“The lack of geometry is not uncommon in natural molecular signatures yet still requires sophisticated biological neural networks to identify them: for example, a mixture of unique odor molecules comprises a smell,” says Qian.

Given a particular example of molecular handwriting, the DNA neural network can classify it into one of nine categories, each representing one of the possible handwritten digits from 1 to 9.

First, Cherry built a DNA neural network to distinguish between handwritten 6s and 7s. He tested 36 handwritten numbers, and the test-tube neural network correctly identified all of them. In principle, the system can classify more than 12,000 handwritten 6s and 7s (90 percent of the 6s and 7s in a database of handwritten numbers used widely for machine learning) into the two categories.

Crucial to this process was a “winner take all” competitive strategy that Qian and Cherry developed and encoded using DNA molecules. In this strategy, a particular type of DNA molecule, dubbed the annihilator, selects a winner when determining the identity of an unknown number.

“The annihilator forms a complex with one molecule from one competitor and one molecule from a different competitor and reacts to form inert, unreactive species,” says Cherry. “The annihilator quickly eats up all of the competitor molecules until only a single competitor species remains. The winning competitor is then restored to a high concentration and produces a fluorescent signal indicating the network’s decision.”
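The winner-take-all idea Cherry describes can be sketched numerically (illustrative bookkeeping, not the paper’s actual reaction kinetics): the annihilator removes one molecule from each surviving competitor at a time, so the species that starts out most abundant is the only one left when the others are exhausted:

```python
def winner_take_all(counts):
    """Pairwise annihilation: repeatedly remove one molecule from every
    surviving competitor until a single species remains, then report it.
    Assumes a unique most-abundant competitor (no exact tie)."""
    counts = dict(counts)
    while sum(1 for v in counts.values() if v > 0) > 1:
        # The annihilator binds one molecule from each surviving competitor
        # and converts the pair into inert, unreactive species.
        for species in counts:
            if counts[species] > 0:
                counts[species] -= 1
    # The lone survivor is then amplified ("restored to a high
    # concentration") to produce the fluorescent readout.
    return max(counts, key=counts.get)

# The most abundant species wins, regardless of how many competitors start out.
assert winner_take_all({"six": 120, "seven": 95}) == "six"
assert winner_take_all({"a": 3, "b": 7, "c": 5}) == "b"
```

The key property, as in the chemical system, is that the decision depends only on which competitor has the highest initial concentration, not on the absolute amounts.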

Conceptual illustration of a droplet containing an artificial neural network made of DNA that has been designed to recognize complex and noisy molecular information, represented as ‘molecular handwriting.’ NeuroscienceNews.com image is credited to Olivier Wyart.

Next, Cherry built upon the principles of his first DNA neural network to develop an even more complex one, able to classify the single-digit numbers 1 through 9. When given an unknown number, this “smart soup” undergoes a series of reactions and outputs two fluorescent signals: for example, green and yellow to represent a 5, or green and red to represent a 9.
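A two-signal readout can cover nine classes because unordered pairs of distinct colors multiply quickly: five fluorophores already give ten possible pairs. The palette and the digit-to-pair assignment below are hypothetical; the article only names the green/yellow and green/red examples:

```python
from itertools import combinations

# Hypothetical five-color palette; the paper's actual fluorophores may differ.
colors = ["green", "yellow", "red", "cyan", "orange"]

# Unordered pairs of distinct colors: C(5, 2) = 10 combinations,
# enough to assign one unique pair to each digit 1-9.
pairs = list(combinations(colors, 2))
code = {digit: pairs[digit - 1] for digit in range(1, 10)}

assert len(pairs) == 10
assert len(set(code.values())) == 9  # every digit gets a distinct color pair
```

The point is only combinatorial: two simultaneous fluorescent outputs suffice to distinguish nine outcomes, which a single color channel could not.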

Qian and Cherry plan to develop artificial neural networks that can learn, forming “memories” from examples added to the test tube. This way, Qian says, the same smart soup can be trained to perform different tasks.

“Common medical diagnostics detect the presence of a few biomolecules, for example cholesterol or blood glucose,” says Cherry. “Using more sophisticated biomolecular circuits like ours, diagnostic testing could one day include hundreds of biomolecules, with the analysis and response conducted directly in the molecular environment.”

Source: NeuroScienceNews

02 Jul 2018

New Device Could Help Bring Back Lost Brain Function

Scientists at Stanford University and the SLAC National Accelerator Laboratory in California are developing a device that could help bring back lost brain function. The device combines electrical brain stimulation with electroencephalogram (EEG) recordings, in which electrodes attached to the scalp measure the brain’s electrical activity.

Christopher Kenny, a senior scientist at SLAC, says, ‘The device works similar to radar, which sends out electromagnetic waves and passively listens for the weaker reflected waves. Here, we send electrical pulses into the head via the electrodes of an EEG monitoring system, and in the time between those strong pulses we use the same electrodes to pick up the much weaker electrical signals from inside the head.’

Although this sounds relatively simple, it is hard to do in practice: the reflected signals can be a million times weaker than the pulses sent into the head, making them difficult to detect. Previously, signals would be sent into the head, and the brain waves and the behavioral response would be tracked in separate sessions. With this new device, they can be monitored at the same time the stimulus is applied, providing a more accurate link between the behavioral response and the brain wave activity.
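To put that ratio in perspective (a back-of-the-envelope illustration, not a figure from the article): a millionfold difference in amplitude corresponds to 120 dB of dynamic range, which is why the weak reflections are recorded in the quiet gaps between pulses rather than while a pulse is being driven:

```python
import math

stimulus_amplitude = 1.0      # arbitrary units
reflected_amplitude = 1.0e-6  # "a million times weaker"

# Amplitude ratios are conventionally expressed as 20 * log10(ratio) decibels.
ratio_db = 20 * math.log10(stimulus_amplitude / reflected_amplitude)
assert ratio_db == 120.0
```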

Anthony Norcia, a professor of psychology at Stanford who initiated the project, strongly believes that this device could open new paths in the treatment of brain disorders and in selectively turning certain brain activities on or off. Norcia and his team developed models, for cases of visual impairment, of how electrical activity from the brain’s visual centers travels to the scalp, where it can be detected by the external electrodes. Norcia says, ‘Our models give us a pretty good idea for how to design an array of electrodes to reach specific volumes inside the head. But we also want to be able to ‘listen’ to the brain’s response at the same time to figure out whether an applied stimulus had the desired effect.’

Norcia teamed up with members of the SLAC laboratory who specialize in detector development in order to develop the device. In particular, Martin Breidenbach from SLAC, says, ‘At SLAC, we’re trying to answer some of the really big questions about our universe, and figuring out how the human mind works seems to be right up there. We certainly have the engineering skills and resources to help with some of the technical issues in neuroscience. With our background in high-energy physics, we’re also used to multidisciplinary collaborations and know how to make them work.’

With the collaboration of physicists at SLAC, the team paired a standard EEG monitor with a unit of their own construction, powered by a 9-volt battery, that delivers the signals into the head. They tested the prototype on themselves; fortunately, the test was a success.

In the future, the team hopes to put the device on a chip. Jeff Olsen, an electrical engineer from SLAC, says, ‘In the next generation, we’ll be able to program the device, which will let us choose different types of signal shapes and synchronize electrical signals with other external triggers, such as visual stimulation.’ This is an exciting device that could one day help people with brain disorders regain valuable brain function. Watch this space.

Source: Forbes