Author: Manahel Thabet

07 Apr 2020

Neuroscientists find memory cells that help us interpret new situations

Neurons that store abstract representations of past experiences are activated when a new, similar event takes place.

Imagine you are meeting a friend for dinner at a new restaurant. You may try dishes you haven’t had before, and your surroundings will be completely new to you. However, your brain knows that you have had similar experiences — perusing a menu, ordering appetizers, and splurging on dessert are all things that you have probably done when dining out.

MIT neuroscientists have now identified populations of cells that encode each of these distinctive segments of an overall experience. These chunks of memory, stored in the hippocampus, are activated whenever a similar type of experience takes place, and are distinct from the neural code that stores detailed memories of a specific location.

The researchers believe that this kind of “event code,” which they discovered in a study of mice, may help the brain interpret novel situations and learn new information by using the same cells to represent similar experiences.

“When you encounter something new, there are some really new and notable stimuli, but you already know quite a bit about that particular experience, because it’s a similar kind of experience to what you have already had before,” says Susumu Tonegawa, a professor of biology and neuroscience at the RIKEN-MIT Laboratory of Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory.

Tonegawa is the senior author of the study, which appears today in Nature Neuroscience. Chen Sun, an MIT graduate student, is the lead author of the paper. New York University graduate student Wannan Yang and Picower Institute technical associate Jared Martin are also authors of the paper.

Encoding abstraction

It is well-established that certain cells in the brain’s hippocampus are specialized to store memories of specific locations. Research in mice has shown that within the hippocampus, neurons called place cells fire when the animals are in a specific location, or even if they are dreaming about that location.

In the new study, the MIT team wanted to investigate whether the hippocampus also stores representations of more abstract elements of a memory. That is, instead of firing whenever you enter a particular restaurant, such cells might encode “dessert,” no matter where you’re eating it.

To test this hypothesis, the researchers measured activity in neurons of the CA1 region of the mouse hippocampus as the mice repeatedly ran a four-lap maze. At the end of every fourth lap, the mice were given a reward. As expected, the researchers found place cells that lit up when the mice reached certain points along the track. However, the researchers also found sets of cells that were active during one of the four laps, but not the others. About 30 percent of the neurons in CA1 appeared to be involved in creating this “event code.”
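The paper's actual analysis pipeline isn't described here, but a minimal sketch can make concrete what "lap-specific" firing means: for each neuron, compare its firing rate across the four laps and flag cells whose rate reliably depends on lap number. Everything below (the simulated firing rates, the ANOVA test, the threshold) is a hypothetical illustration, not the study's method.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical data: per-lap firing rates for one neuron over many 4-lap trials.
# rates[t, l] = mean firing rate (Hz) on lap l of trial t.
rng = np.random.default_rng(0)
n_trials = 40
rates = rng.poisson(lam=2.0, size=(n_trials, 4)).astype(float)
rates[:, 1] += 5.0  # this toy neuron fires more on lap 2

# One-way ANOVA across laps: does firing rate depend on which lap it is?
f_stat, p_val = f_oneway(rates[:, 0], rates[:, 1], rates[:, 2], rates[:, 3])
preferred_lap = int(rates.mean(axis=0).argmax()) + 1

print(f"F = {f_stat:.2f}, p = {p_val:.3g}, preferred lap = {preferred_lap}")
# A cell could be called lap-selective if p stays below a chosen threshold
# after correcting for multiple comparisons across all recorded neurons.
```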

“This gave us the initial inkling that besides a code for space, cells in the hippocampus also care about this discrete chunk of experience called lap 1, or this discrete chunk of experience called lap 2, or lap 3, or lap 4,” Sun says.

To further explore this idea, the researchers trained mice to run a square maze on day 1 and then a circular maze on day 2, in which they also received a reward after every fourth lap. They found that the place cells changed their activity, reflecting the new environment. However, the same sets of lap-specific cells were activated during each of the four laps, regardless of the shape of the track. The lap-encoding cells’ activity also remained consistent when laps were randomly shortened or lengthened.

“Even in the new spatial locations, cells still maintain their coding for the lap number, suggesting that cells that were coding for a square lap 1 have now been transferred to code for a circular lap 1,” Sun says.

The researchers also showed that if they used optogenetics to inhibit sensory input from a part of the brain called the medial entorhinal cortex (MEC), lap-encoding did not occur. They are now investigating what kind of input the MEC region provides to help the hippocampus create memories consisting of chunks of an experience.

Two distinct codes

These findings suggest that, indeed, every time you eat dinner, similar memory cells are activated, no matter where or what you’re eating. The researchers theorize that the hippocampus contains “two mutually and independently manipulatable codes,” Sun says. One encodes continuous changes in location, time, and sensory input, while the other organizes an overall experience into smaller chunks that fit into known categories such as appetizer and dessert.

“We believe that both types of hippocampal codes are useful, and both are important,” Tonegawa says. “If we want to remember all the details of what happened in a specific experience, moment-to-moment changes that occurred, then the continuous monitoring is effective. But on the other hand, when we have a longer experience, if you put it into chunks, and remember the abstract order of the abstract chunks, that’s more effective than monitoring this long process of continuous changes.”

The new MIT results “significantly advance our knowledge about the function of the hippocampus,” says Gyorgy Buzsaki, a professor of neuroscience at New York University School of Medicine, who was not part of the research team.

“These findings are significant because they are telling us that the hippocampus does a lot more than just ‘representing’ space or integrating paths into a continuous long journey,” Buzsaki says. “From these remarkable results Tonegawa and colleagues conclude that they discovered an ‘event code,’ dedicated to organizing experience by events, and that this code is independent of spatial and time representations, that is, jobs also attributed to the hippocampus.”

Tonegawa and Sun believe that networks of cells that encode chunks of experiences may also be useful for a type of learning called transfer learning, which allows you to apply knowledge you already have to help you interpret new experiences or learn new things. Tonegawa’s lab is now working on trying to find cell populations that might encode these specific pieces of knowledge.

The research was funded by the RIKEN Center for Brain Science, the Howard Hughes Medical Institute, and the JPB Foundation.

Source: http://news.mit.edu/2020/neuroscience-memory-cells-interpret-new-0406

02 Apr 2020

This Startup’s Computer Chips Are Powered by Human Neurons

Biological “hybrid computer chips” could drastically lower the amount of power required to run AI systems.

Australian startup Cortical Labs is building computer chips that use biological neurons extracted from mice and humans, Fortune reports.

The goal is to dramatically lower the amount of power current artificial intelligence systems need to operate by mimicking the way the human brain works.

According to Cortical Labs’ announcement, the company is planning to “build technology that harnesses the power of synthetic biology and the full potential of the human brain” in order to create a “new class” of AI that could solve “society’s greatest challenges.”

The mouse neurons are extracted from embryos, according to Fortune, but the human ones are created by turning skin cells back into stem cells and then into neurons.

The idea of using biological neurons to power computers isn’t new. Cortical Labs’ announcement comes one week after a group of European researchers managed to turn on a working neural network that allows biological and silicon-based brain cells to communicate with each other over the internet.

In 2016, researchers at MIT also attempted to use bacteria, rather than neurons, to build a computing system.

As of right now, Cortical’s mini-brains have less processing power than a dragonfly brain. The company is looking to get its mouse-neuron-powered chips to be capable of playing a game of “Pong,” as CEO Hon Weng Chong told Fortune, following in the footsteps of AI company DeepMind, which used the game to test the power of its AI algorithms back in 2013.

“What we are trying to do is show we can shape the behavior of these neurons,” Chong told Fortune.

Source: https://futurism.com/startup-computer-chips-powered-human-neurons

29 Mar 2020

The distorted idea of ‘cool’ brain research is stifling psychotherapy

There has never been a problem facing mankind more complex than understanding our own human nature. And no shortage of neat, plausible, and wrong answers purporting to plumb its depths.

Having treated many thousands of psychiatric patients in my career, and having worked on the American Psychiatric Association’s efforts to classify psychiatric symptoms (published as the Diagnostic and Statistical Manual of Mental Disorders, or DSM-IV and DSM-5), I can affirm confidently that there are no neat answers in psychiatry. The best we can do is embrace an ecumenical four-dimensional model that includes all possible contributors to human functioning: the biological, the psychological, the social, and the spiritual. Reducing people to just one element – their brain functioning, or their psychological tendencies, or their social context, or their struggle for meaning – results in a flat, distorted image that leaves out more than it can capture.

The National Institute of Mental Health (NIMH) was established in 1949 by the federal government in the United States with the practical goal of providing ‘an objective, thorough, nationwide analysis and reevaluation of the human and economic problems of mental health.’ Until 30 years ago, the NIMH appreciated the need for this well-rounded approach and maintained a balanced research budget that covered an extraordinarily wide range of topics and techniques.

But in 1990, the NIMH suddenly and radically switched course, embarking on what it tellingly named the ‘Decade of the Brain.’ Ever since, the NIMH has increasingly narrowed its focus almost exclusively to brain biology – leaving out everything else that makes us human, both in sickness and in health. Having largely lost interest in the plight of real people, the NIMH could now more accurately be renamed the ‘National Institute of Brain Research’.

This misplaced reductionism arose from the availability of spectacular research tools (eg, the Human Genome Project, functional magnetic resonance imaging, molecular biology, and machine learning) combined with the naive belief that brain biology could eventually explain all aspects of mental functioning. The results have been a grand intellectual adventure, but a colossal clinical flop. We have acquired a fantastic window into gene and brain functioning, but little to help clinical practice.

The more we learn about genetics and the brain, the more impossibly complicated both reveal themselves to be. We have picked no low-hanging fruit after three decades and $50 billion because there simply is no low-hanging fruit to pick. The human brain has around 86 billion neurons, each communicating with thousands of others via hundreds of chemical modulators, leading to trillions of potential connections. No wonder it reveals its secrets only very gradually and in a piecemeal fashion.
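As a quick back-of-envelope check of the scale being described, multiplying the article's own round figures gives on the order of a hundred trillion connections; the "thousands of connections" value below is an order-of-magnitude assumption, not a precise count.

```python
# Back-of-envelope check using the article's own round figures.
neurons = 86e9             # ~86 billion neurons in the human brain
synapses_per_neuron = 1e3  # "thousands" of connections each (order of magnitude)

total_connections = neurons * synapses_per_neuron
print(f"{total_connections:.1e} potential connections")  # ~8.6e13, i.e. tens of trillions
```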

Genetics offers the same baffling complexity. For instance, variation in more than 100 genes contributes to vulnerability to schizophrenia, with each gene contributing just the tiniest bit, and interacting in the most impossibly complicated ways with other genes, and also with the physical and social environment. Even more discouraging, the same genes are often implicated in vulnerability to multiple mental disorders – defeating any effort to establish specificity. The almost endless permutations will defeat any easy genetic answers, no matter how many decades and billions we invest.

The NIMH has boxed itself into a badly unbalanced research portfolio. Playing with ‘cool’ brain and gene research toys trumps the much harder and less intellectually rewarding task of helping real people.

Contrast this current NIMH failure with a great success story from NIMH’s distant past. One of the high points of my career was sitting on the NIMH granting committee that funded psychotherapy studies in the 1980s. We helped to support the US psychologist Marsha Linehan’s research that led her to develop dialectical behavior therapy; the US psychiatrist Aaron T Beck’s development of cognitive therapy; along with numerous other investigators and themes. Subsequent studies have established that psychotherapy is as effective as medications for mild-to-moderate depression, anxiety, and other psychiatric problems, and avoids the burden of medication side-effects and complications. Many millions of people around the world have already been helped by NIMH psychotherapy research.

In a rational world, the NIMH would continue to fund a robust psychotherapy research budget and promote its use as a public-health initiative to reduce the current massive overprescription of psychiatric medication in the US. Brief psychotherapy would be the first-line treatment of most psychiatric problems that require intervention. Drug treatments would be reserved for severe psychiatric problems and for those people who haven’t responded sufficiently to watchful waiting or psychotherapy.

Unfortunately, we don’t live in a rational world. Drug companies spend hundreds of millions of dollars every year influencing politicians, marketing misleadingly to doctors, and pushing pharmaceutical treatments on the public. They successfully sold the fake marketing jingle that all emotional symptoms are due to a ‘chemical imbalance’ in the brain and therefore all require a pill solution. The result: 20% of US citizens use psychotropic drugs, most of which are no more than expensive placebos, all of which can produce harmful side-effects.

Drug companies are commercial Goliaths with enormous political and economic power. Psychotherapy is a tiny David with no marketing budget; no salespeople mobbing doctors’ offices; no TV ads; no internet pop-ups; no influence with politicians or insurance companies. No surprise then that the NIMH’s neglect of psychotherapy research has been accompanied by its neglect in clinical practice. And the NIMH’s embrace of biological reductionism provides an unintended and unwarranted legitimization of the drug-company promotion that there is a pill for every problem.

A balanced NIMH budget would go a long way toward correcting the two biggest mental-health catastrophes of today. Studies comparing psychotherapy versus medication for a wide variety of mild to moderate mental disorders would help to level the playing field for the two, and eventually reduce our massive overdependence on drug treatments for nonexistent ‘chemical imbalances’. Health service research is desperately needed to determine best practices to help people with severe mental illness avoid incarceration and homelessness, and also escape from them.

The NIMH is entitled to keep an eye on the future, but not at the expense of the desperate needs of the present. Brain research should remain an important part of a balanced NIMH agenda, not its sole preoccupation. After 30 years of running down a bio-reductionistic blind alley, it is long past time for the NIMH to consider a biopsychosocial reset, and to rebalance its badly uneven research portfolio.

Source: https://thenextweb.com/syndication/2020/03/29/the-distorted-idea-of-cool-brain-research-is-stifling-psychotherapy/

23 Mar 2020

Researchers Find Captivating New Details In Image of Black Hole

Last April, the international coalition of scientists who run the Event Horizon Telescope (EHT), a network of eight telescopes from around the world, revealed the first-ever image of a black hole.

Now, a team of researchers at the Center for Astrophysics at Harvard has revealed calculations, detailed in a paper published today in the journal Science Advances, that predict an intricate internal structure within black hole images caused by extreme gravitational light bending.

The new research, they say, could lead to much sharper images when compared to the blurry ones we’ve seen so far.

“With the current EHT image, we’ve caught just a glimpse of the full complexity that should emerge in the image of any black hole,” said Michael Johnson, a lecturer at the Center for Astrophysics, in a statement.

The EHT image was able to capture the black hole’s “photon sphere,” or “photon ring,” a region around a black hole where gravity is so overpowering that it forces photons to travel in orbits.

But as it turns out, there’s even more to the image.

“The image of a black hole actually contains a nested series of rings,” Johnson said. “Each successive ring has about the same diameter but becomes increasingly sharper because its light orbited the black hole more times before reaching the observer.”

Until last year, that internal structure of black holes remained shrouded in mystery. “As a theorist, I am delighted to finally glean real data about these objects that we’ve been abstractly thinking about for so long,” Alex Lupsasca from the Harvard Society of Fellows said in the statement.

These newly discovered substructures could allow for even sharper images in the future. “What really surprised us was that while the nested subrings are almost imperceptible to the naked eye on images — even perfect images — they are strong and clear signals for arrays of telescopes called interferometers,” Johnson added.

“While capturing black hole images normally requires many distributed telescopes, the subrings are perfect to study using only two telescopes that are very far apart,” Johnson said. “Adding one space telescope to the EHT would be enough.”
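The reason widely separated telescopes help is the standard interferometry rule of thumb: angular resolution scales roughly as the observing wavelength divided by the distance between the dishes. The sketch below uses illustrative EHT-like numbers (a 1.3 mm observing wavelength and an Earth-diameter baseline, both assumptions rather than figures from the paper) to show the order of magnitude involved.

```python
import math

# Rough angular resolution of an interferometer: theta ~ wavelength / baseline.
# The numbers below are illustrative EHT-like values, not taken from the paper.
wavelength_m = 1.3e-3   # EHT observes at roughly 1.3 mm
baseline_m = 1.0e7      # roughly an Earth-diameter baseline (~10,000 km)

theta_rad = wavelength_m / baseline_m
theta_microarcsec = math.degrees(theta_rad) * 3600 * 1e6
print(f"~{theta_microarcsec:.0f} microarcseconds")  # a few tens of microarcseconds

# Adding a space telescope lengthens the longest baseline, shrinking theta further,
# which is why the subrings could in principle be probed with just two distant dishes.
```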

There might be other ways as well. In November, a team of Dutch astronomers suggested sending two to three satellites equipped with radio imaging technology to observe black holes at five times the sharpness of the last attempt.

Source: https://futurism.com/researchers-take-sharper-black-hole-images

15 Mar 2020

Scientists Discover “Peculiar” Teardrop-Shaped Star

“I’ve been looking for a star like this for nearly 40 years and now we have finally found one.”

A team of astronomers have discovered a strange star that oscillates in a rhythmic pattern — but only on one side, causing gravitational forces to distort it into a teardrop shape.

“We’ve known theoretically that stars like this should exist since the 1980s,” said professor Don Kurtz from the University of Central Lancashire and co-author of the paper published in Nature Astronomy on Monday, in a statement. “I’ve been looking for a star like this for nearly 40 years and now we have finally found one.”

The star, known as HD74423, is about 1.7 times the mass of the Sun and was spotted around 1,500 light years from Earth — still within the confines of the Milky Way — using public data from NASA’s planet-hunting TESS satellite.

“What first caught my attention was the fact it was a chemically peculiar star,” said co-author Simon Murphy from the Sydney Institute for Astronomy at the University of Sydney in the statement. “Stars like this are usually fairly rich with metals – but this is metal poor, making it a rare type of hot star.”

Stars have been found to oscillate at different rhythms and to different degrees — including our own Sun. Astronomers suspect these oscillations are caused by convection and magnetic fields inside the star.

While the exact causes of these pulsations vary, these oscillations have usually been observed on all sides of the star. HD74423, however, was found to pulsate on only one side because of its red dwarf companion with which it makes up a binary star system.

The two stars were found to do such a close dance — an orbital period of just two days — that the larger star is being distorted into a teardrop shape.

The astronomers suspect it won’t be the last of its kind to be discovered.

“We expect to find many more hidden in the TESS data,” said co-author Saul Rappaport, a professor at MIT.

Source: https://futurism.com/scientists-peculiar-teardrop-shaped-star

07 Mar 2020

How to Leverage AI to Upskill Employees

Artificial intelligence is the answer to polishing math skills and plugging our workforce pipeline.

 

One of the largest economic revolutions of our time is unfolding around us. Technology, innovation and automation are redrawing the career paths of millions of people. Most headlines focus on the negative, i.e. machines taking our jobs. But in reality, these developments are opening up a world of opportunity for people who can make the move to a STEM career or upskill in their current job. There’s also another part to this story: How AI can help boost the economy by improving how we learn.

In 2018, 2.4 million STEM jobs in the U.S. went unfilled. That’s almost equal to the entire population of Los Angeles or Chicago. It’s a gap causing problems for employers trying to recruit and retain workers, whether in startups, small businesses or major corporations. We just don’t have enough workers.

The Unspoken Barrier 

The barrier preventing new or existing employees from adding to their skill set and filling those unfilled jobs? Math. Calculus, to be specific. It has become a frustrating impediment to many people seeking a STEM career. For college students, the material is so difficult that one-third of them in the U.S. fail the related course or drop it out of frustration. For adults, learning calculus is not always compulsory for the day-to-day of every STEM job, but learning its principles can help sharpen logic and reasoning. Plus, simply understanding how calculus relates to real-world scenarios is helpful in many STEM jobs. Unfortunately, for many people, the thought of tackling any level of math is enough to scare them away from a new opportunity. We need to stop looking at math as a way to filter people out of the STEM pipeline. We need to start looking at it as a way to help more people, including professionals looking to pivot careers.

How AI Can Change How Employees Learn

How do we clear this hurdle and plug the pipeline? Artificial intelligence. We often discuss how AI can be used to help data efficiencies and process automation, but AI can also assist in personal tutoring to get people over the barriers of difficult math. The recently released Aida Calculus app uses AI to create a highly personalized learning experience and is the first of its kind to use a very complex combination of AI algorithms that provide step-by-step feedback on equations and then serve up custom content showing how calculus works in the real world.

While the product is important, the vision behind it is much bigger. This is a really impactful application of AI for good. It also shows that math skills can be developed in everyone and technology like AI can change the way people learn difficult subjects. The goal is to engage anyone, be it a student or working adult, who is curious about how to apply math in their daily lives. By making calculus relevant and relatable, we can begin to instill the confidence people need to take on STEM careers, even if those jobs don’t directly use calculus.

Leveraging AI Through Human Development

When people boost their complex math skills or even their general understanding of basic math concepts, there’s a world of opportunity waiting. STEM jobs outearn non-STEM jobs by up to 30 percent in some cases. A 2017 study commissioned by Qualcomm suggested that 5G will create 22 million jobs globally by 2035. The U.S. Labor Department says that IT fields will add half a million new jobs in the next eight years and that jobs in information security will grow by 30 percent. Job growth in STEM is outpacing overall U.S. job growth. At the same time, Pearson’s own Global Learners Survey said that 61 percent of Americans are likely to change careers entirely. It’s a good time for that 61 percent to consider STEM.

To equip themselves for this new economy, people will have to learn how to learn. Whether it’s math or any other subject, they’ll likely need to study again, and that is hard. But we can use innovation and technology to make the tough subjects a little easier and make the whole learning experience more personalized, helping a whole generation of people take advantage of the opportunity to become the engineers, data analysts and scientists we need.

Source: https://www.entrepreneur.com/article/345502

29 Feb 2020

Human Intelligence and AI for Oncology advancement

As the Vatican workshop on ethics in Artificial Intelligence ends, Dr. Alexandru Floares speaks on the possibilities of medical innovation through collaboration between AI and human intellect.

The increase in the number of cancer cases worldwide is a major cause for concern for the medical community.

Doctor Alexandru Floares, a speaker at a 3-day workshop organized by the Pontifical Academy for Life on Ethics and Artificial Intelligence (AI), spoke to Vatican Radio on the potential for larger strides in the field of oncology and medical research through the efficiency that AI provides.

Dr. Floares, a neurologist, a specialist in AI applications in oncology, and president of Solutions of Artificial Intelligence Applications (SAIA), gave a presentation titled “AI in Oncology.”

In his interview with Vatican Radio, Dr. Floares spoke on issues bordering on access to data for medical research, solutions to the emerging issues surrounding the use of AI in healthcare, and the revolutionary role of AI in the field of medicine.

“The problems related to applying AI to medicine and oncology can be solved relatively easily,” he said. “This means that when a problem is clearly and pragmatically formulated, it can be solved in a matter of months or at most a year. The benefits of applying AI in medicine, when we put them in a balance, are very important.”

Poor man’s Approach

Speaking on steps towards eliminating bias in data collection, Dr. Floares noted that bias is predominantly the fault of human data input into the algorithm and not an inbuilt AI defect.

“We should not blame the AI for poor results if we do not put in the proper data to assist the AI’s predictive capabilities,” he said.

Giving the example of his experience collecting data for his molecular diagnostic test for cancer, he expressed his suspicion of already available data and instead opted for what he calls the “poor man’s approach.”

“I found mine to be better because the data is less biased. It is better to have 1,000 patients from various studies and to integrate them instead of having one big study with 1,000 patients, because the data is less biased and so the predictive model behind the test is more robust, generalizing better to (represent) different kinds of populations that were not involved when the system was developed. So the poor man’s strategy became a good strategy for fighting against bias in data.”
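Dr. Floares does not spell out his modeling pipeline, but the spirit of the "poor man's approach" can be illustrated with a leave-one-study-out evaluation: train on data pooled from several studies and test on a study the model has never seen, so site-specific biases show up as a drop in held-out performance. All data and numbers below are simulated, hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_study(n, shift):
    """Hypothetical study: 20 molecular features, a label, and a site-specific batch shift."""
    X = rng.normal(size=(n, 20)) + shift
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
    return X, y

studies = [make_study(250, shift) for shift in (0.0, 0.5, -0.5, 1.0)]

# Leave-one-study-out: train on three studies pooled together, test on the held-out one.
for held_out in range(len(studies)):
    X_tr = np.vstack([X for i, (X, _) in enumerate(studies) if i != held_out])
    y_tr = np.concatenate([y for i, (_, y) in enumerate(studies) if i != held_out])
    X_te, y_te = studies[held_out]
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"held-out study {held_out}: AUC = {auc:.2f}")
```

A large gap between pooled training performance and held-out-study performance is one symptom of the single-cohort bias the interview describes.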

Checking misuse of AI and data

On the issue of the possible misuse of AI, Dr. Floares is of the opinion that the different actors in the field of AI will help curb excesses. 

“Collecting data is a good idea. I am on the optimistic side. I am sure there will be opposition too and all these forces working in different directions will create equilibrium for humanity. Hopefully the best.”

He furthermore insisted on active involvement in reining in excesses.

“Test AI systems. That is the most pragmatic way to do things to see if it is good or not. Instead of debating and having few actions,” he said.

AI revolution at hand

“A revolution that started in 2012 and is just showing its first impressive results… I did not believe that AI will ever beat the human in dealing with images because our brain – the result of evolution is very well developed. I realize that this is possible and the strategy is for human intelligence to collaborate with artificial intelligence. This will be the greatest team we have ever seen,” he said.

Source: https://www.vaticannews.va/en/vatican-city/news/2020-02/human-intelligence-and-ai-for-oncology-advancement.html

02 Feb 2020

How you can get your business ready for AI

  • 90% of executives see promise in the use of artificial intelligence.
  • AI set to add $15.7 trillion to global economy.
  • Only 4% planning major deployment of technology in 2020.

They say you have to learn to walk before you can run. It turns out the same rule applies when it comes to the rollout of artificial intelligence.

A new report on AI suggests that companies need to get the basics of the technology right before scaling up its use. In a PwC survey, 90% of executives said AI offers more opportunities than risks, but only 4% plan to deploy it enterprise-wide in 2020, compared with 20% who said they intended to do so in 2019.

Slow and steady wins the race

By 2030, AI could add $15.7 trillion to the global economy. But its manageable implementation is a global challenge. The World Economic Forum is working with industry experts and business leaders to develop an AI toolkit that will help companies understand the power of AI to advance their business and to introduce the technology in a sustainable way.

Focusing on the fundamentals first will allow organizations to lay the groundwork for a future that brings them all the rewards of AI.

Here are five things PwC’s report suggests companies can do in 2020 to prepare.

1. Embrace the humdrum to get things done

One of the key benefits that company leaders expect from investment in AI is the streamlining of in-house processes. The automation of routine tasks, such as the extraction of information from tax forms and invoices, can help companies operate more efficiently and make significant savings.

AI can already be used to manage fraud and cybersecurity threats – something that 38% of executives see as a key capability of the technology. For example, AI can recognize unauthorized network entry and identify malicious behaviour in software.
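The report does not say which techniques such systems use; one common, generic pattern for this kind of task is unsupervised anomaly detection. The sketch below runs an isolation forest on made-up network-connection features (the feature names, values, and threshold are illustrative assumptions, not any vendor's implementation).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Hypothetical connection features: [bytes sent, bytes received, duration in seconds].
normal_traffic = rng.normal(loc=[500, 800, 2.0], scale=[100, 150, 0.5], size=(1000, 3))
suspicious = np.array([[50_000, 20, 30.0],   # large upload on a long-lived connection
                       [5, 5, 0.01]])        # tiny, probe-like connection

# Fit an unsupervised anomaly detector on what "normal" traffic looks like.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

print(detector.predict(suspicious))  # -1 flags an anomaly, 1 means it looks normal
```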

2. Turn training into real-world opportunity

For companies to be ready for AI at scale, they need to do more than just offer training opportunities. Employees have to be able to use the new skills they have learned, in a way that continuously improves performance.

It’s also important to make teams ‘multilingual’, with both tech and non-tech skills integrated across the business, so that colleagues can not only collaborate on AI-related challenges, but also decide which problems AI can solve.

3. Tackle risks and act responsibly

Along with helping employees to see AI not as a threat to their jobs but as an opportunity to undertake higher-value work, companies must ensure they have the processes, tools and controls to maintain strong ethics and make AI easy to understand. In some cases, this might entail collaboration with customers, regulators, and industry peers.

As AI usage continues to grow, so do public fears about the technology in applications such as facial recognition. That means risk management is becoming more critical. Yet not all companies have centralized governance around AI, and that could increase cybersecurity threats, by making the technology harder to manage and secure.

4. AI, all the time

Developing AI models requires a ‘test and learn’ approach, in which the algorithms are continually learning and the data is being refined. That is very different from the way conventional software is developed, and a different set of tools is needed. Machines learn through the input of data, and more – and better quality – data is key to the rollout of AI.

Some of AI’s most valuable uses come when it works 24/7 as part of broader operational systems, such as marketing or finance. That’s why leaders in the field are employing it across multiple functions and business units, and fully integrating it with broader automation initiatives and data analytics.

5. A business model for the future

It’s worth remembering that despite AI’s growing importance, it is still just one weapon in the business armoury. Its benefit could come through its use as part of a broader automation or business strategy.

Weaving it successfully into a new business model includes a commitment to employee training and understanding return on investment. For now, that investment could be as simple as using robotic process automation to handle customer requests.

AI’s impact may be incremental at first, but its gradual integration into business operations means that game-changing disruption and innovation are not far away.

Source: https://www.weforum.org/agenda/2020/01/artificial-intelligence-predictions-2020-pwc/

28 Jan 2020

Here’s what AI experts think will happen in 2020

But it’s time to let the past go and point our bows toward the future. It’s no longer possible to estimate how much the machine learning and AI markets are worth, because the line between what’s an AI-based technology and what isn’t has become so blurred that Apple, Microsoft, and Google are all “AI companies” that also do other stuff.

Your local electricity provider uses AI and so does the person who takes those goofy real-estate agent pictures you see on park benches. Everything is AI — an axiom that’ll become even truer in 2020.

We solicited predictions for the AI industry over the next year from a panel of experts; here’s what they had to say:

Marianna Tessel, CTO at Intuit

AI and humans will collaborate. AI will not “replace humans”; it will collaborate with humans and enhance how we do things. People will be able to provide higher-level work and service, powered by AI. At Intuit, our platform allows experts to connect with customers to provide tax advice and help small businesses with their books in a more accurate and efficient way, using AI. It helps work get done faster and helps customers make smarter financial decisions. As experts use the product, the product gets smarter, in turn making the experts more productive. This is the decade where, through this collaboration, AI will enhance human abilities and allow us to take our skills and work to a new level.

AI will eat the world in ways we can’t imagine today: AI is often talked about as though it is a Sci-Fi concept, but it is and will continue to be all around us. We can already see how software and devices have become smarter in the past few years and AI has already been incorporated into many apps. AI enriched technology will continue to change our lives, every day, in what and how we operate. Personally, I am busy thinking about how AI will transform finances – I think it will be ubiquitous. Just the same way that we can’t imagine the world before the internet or mobile devices, our day-to-day will soon become different and unimaginable without AI all around us, making our lives today seem so “obsolete” and full of “unneeded tasks.”

We will see a surge of AI-first apps: As AI becomes part of every app, how we design and write apps will fundamentally change. Instead of writing apps the way we have during this decade and adding AI, apps will be designed from the ground up around AI and will be written differently. Just think of CUI and how it creates a new navigation paradigm in your app. Soon, a user will be able to ask any question from any place in the app, moving it outside of a regular flow. New tools, languages, practices and methods will also continue to emerge over the next decade.

Jesse Mouallek, Head of Operations for North America at Deepomatic

We believe 2020 to be the year that industries that aren’t traditionally known to be adopters of sophisticated technologies like AI reverse course. We expect industries like waste management, oil and gas, insurance, telecommunications and other SMBs to take on projects similar to the ones usually developed by tech giants like Amazon, Microsoft and IBM. As the enterprise benefits of AI become more well-known, industries outside of Silicon Valley will look to integrate these technologies.

If companies don’t adapt to the current trends in AI, they could see tough times in the future. Increased productivity, operational efficiency gains, market share and revenue are some of the top-line benefits that companies could either capitalize on or miss out on in 2020, depending on their implementation. We expect to see a large uptick in technology adoption and implementation from companies big and small as real-world AI applications, particularly within computer vision, become more widely available.

We don’t see 2020 as another year of shiny new technology developments. We believe it will be more about the general availability of established technologies, and that’s ok. We’d argue that, at times, true progress can be gauged by how widespread the availability of innovative technologies is, rather than the technologies themselves. With this in mind, we see technologies like neural networks, computer vision and 5G becoming more accessible as hardware continues to get smaller and more powerful, allowing edge deployment and unlocking new use cases for companies within these areas.

Hannah Barnhardt, VP of Product Strategy Marketing at Deluxe Entertainment

2020 is the year AI/ML capabilities will be truly operationalized, rather than companies pontificating about their abilities and potential ROI. We’ll see companies in the media and entertainment space deploy AI/ML to more effectively drive investment and priorities within the content supply chain and harness cloud technologies to expedite and streamline traditional services required for going to market with new offerings, whether that be original content or Direct to Consumer streaming experiences.

Leveraging AI toolsets to automate garnering insights into deep catalogs of content will increase efficiency for clients and partners, and help uphold the high-quality content that viewers demand. A greater number of studios and content creators will invest in and leverage AI/ML to conform and localize premium and niche content, therefore reaching more diverse audiences in their native languages.

Tristan Greene, reporter for The Next Web

I’m not an industry insider or a machine learning developer, but I covered more artificial intelligence stories this year than I can count. And I think 2019 showed us some disturbing trends that will continue in 2020. Amazon and Palantir are poised to sink their claws into the government surveillance business during what could potentially turn out to be President Donald Trump’s final year in office. This will have significant ramifications for the AI industry.

The prospect of an Elizabeth Warren or Bernie Sanders taking office shakes the Facebooks and Microsofts of the world to their core, but companies who are already deeply invested in providing law enforcement agencies with AI systems that circumvent citizen privacy stand to lose even more. These AI companies could be inflated bubbles that pop in 2021; in the meantime they’ll look to entrench with law enforcement over the next 12 months in hopes of surviving a Democrat-led government.

Look for marketing teams to get slicker as AI-washing stops being such a big deal and AI rinsing — disguising AI as something else — becomes more common (i.e., Ring is just a doorbell that keeps your packages safe, not an AI-powered portal for police surveillance, wink-wink).

Here’s hoping your 2020 is fantastic. And, if we can venture a final prediction: stay tuned to TNW because we’re going to dive deeper into the world of artificial intelligence in 2020 than ever before. It’s going to be a great year for humans and machines.

Source: https://thenextweb.com/artificial-intelligence/2020/01/03/heres-what-ai-experts-think-will-happen-in-2020/

11 Jan 2020

Mind-reading technology lets you control tech with your brain — and it actually works

  • CES featured several products that let you control apps, games and devices with your mind.
  • The technology holds a lot of promise for gaming, entertainment and even medicine.
  • NextMind and FocusOne were two of the companies that showed off mind-control technology at CES this year.

 

LAS VEGAS — It’s not the self-driving cars, flying cars or even the dish-washing robots that stick out as the most transformative innovation at this year’s Consumer Electronics Show: It’s the wearable gadgets that can read your mind.

There’s a growing category of companies focused on the “Brain-Computer Interface.” These devices can record brain signals from sensors on the scalp (or even devices implanted within the brain) and translate them into digital signals. This industry is expected to reach $1.5 billion this year, with the technology used for everything from education and prosthetics, to gaming and smart home control.

 

This isn’t science fiction. I tried a couple of wearables that track brain activity at CES this week, and was surprised to find they really work. NextMind has a headset that measures activity in your visual cortex with a sensor on the back of your head. It translates the user’s decision of where to focus his or her eyes into digital commands.

“You don’t see with your eyes, your eyes are just a medium,” NextMind CEO Sid Kouider said. “Your vision is in your brain, and we analyze your vision in your brain and we can know what you want to act upon and then we can modify that to basically create a command.”

Kouider said that this is the first time there’s been a brain-computer interface outside the lab, and the first time you can theoretically control any device by focusing your thoughts on them.

Wearing a NextMind headset, I could change the color of a lamp — red, blue and green — by focusing on boxes lit up with those colors. The headset also replaced a remote control. Staring at a TV screen, I could activate a menu by focusing on a triangle in a corner of the screen. From there, I could change the channel, mute or pause the video, just by focusing on a triangle next to each command.
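NextMind has not published exactly how it decodes this kind of focus-based selection, but one standard approach to visual brain-computer interfaces is frequency tagging: each selectable target flickers at its own rate, and attending to a target boosts that frequency in the visual-cortex signal. The sketch below simulates that general idea with made-up numbers; it is an illustration of the technique, not NextMind's algorithm.

```python
import numpy as np

fs = 250                        # sampling rate (Hz) of the simulated signal
t = np.arange(0, 2.0, 1 / fs)   # two seconds of (simulated) occipital activity
target_freqs = [8.0, 10.0, 12.0]  # hypothetical flicker rates of three on-screen targets

rng = np.random.default_rng(1)
attended = 10.0                 # the user is "looking at" the 10 Hz target
signal = np.sin(2 * np.pi * attended * t) + rng.normal(scale=2.0, size=t.size)

# Score each candidate frequency by its power in the signal's spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
scores = [spectrum[np.argmin(np.abs(freqs - f))] for f in target_freqs]

decoded = target_freqs[int(np.argmax(scores))]
print(f"decoded target flickers at {decoded} Hz")
```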

“We have several use cases, but we are also targeting entertainment and gaming because that’s where this technology is going to have its best use,” Kouider said. “The experience of playing or applying it on VR for instance or augmented reality is going to create some new experiences of acting on a virtual world.”

 

NextMind’s technology isn’t available to consumers yet, but the company is selling a $399 developer kit in the hope that other companies will create new applications.

“I think it’s going to still take some time until we nail … the right use case,” Kouider said. “That’s the reason we are developing this technology, to have people use the platform and develop their own use cases.”

Another company focused on the brain-computer interface, BrainCo, has the FocusOne headband, with sensors on the forehead measuring the activity in your frontal cortex. The “wearable brainwave visualizer” is designed to measure focus, and its creators want it to be used in schools.

“FocusOne is detecting the subtle electrical signals that your brain is producing,” BrainCo President Max Newlon said. “When those electrical signals make their way to your scalp, our sensor picks them up, takes a look at them and determines, ‘Does it look like your brain is in a state of engagement? Or does it look like your brain is in a state of relaxation?’”

Wearing the headband, I tried a video game with a rocket ship. The harder I focused, the faster the rocket ship moved, increasing my score. I then tried to get the rocket ship to slow down by relaxing my mind. A light on the front of the headband turns red when your brain is intensely focused, yellow if you’re in a relaxed state and blue if you’re in a meditative state. The headbands are designed to help kids learn to focus their minds, and to enable teachers to understand when kids are zoning out. The headband costs $350 for schools and $500 for consumers. The headset comes with software and games to help users understand how to focus and meditate.
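BrainCo does not disclose how FocusOne scores focus, but a widely used heuristic for EEG "engagement" is a band-power ratio: more beta power relative to alpha plus theta is read as a more engaged state. The sketch below computes that ratio on simulated frontal EEG; the sampling rate, frequency bands, and data are assumptions for illustration only, not the product's actual method.

```python
import numpy as np
from scipy.signal import welch

fs = 256                          # sampling rate (Hz)
rng = np.random.default_rng(3)
eeg = rng.normal(size=fs * 10)    # 10 seconds of hypothetical frontal EEG

def band_power(x, fs, low, high):
    """Average power in a frequency band, estimated with Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].mean()

theta = band_power(eeg, fs, 4, 8)
alpha = band_power(eeg, fs, 8, 13)
beta = band_power(eeg, fs, 13, 30)

# A classic "engagement index": more beta relative to alpha + theta
# is read as a more focused state (a Pope et al.-style heuristic).
engagement = beta / (alpha + theta)
print(f"engagement index: {engagement:.2f}")
```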

BrainCo also has a prosthetic arm coming to market later this year, which will cost $10,000 to $15,000, less than half the cost of an average prosthetic. BrainCo’s prosthetic detects muscle signals and feeds them through an algorithm that can help it operate better over time, Newlon said.

“The thing that sets this prosthetic apart, is after enough training, [a user] can control individual fingers and it doesn’t only rely on predetermined gestures. It’s actually like a free-play mode where the algorithm can learn from him, and he can control his hands just like we do,” Newlon said.
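The article only says the arm "detects muscle signals and feeds them through an algorithm"; a common way such systems work is to extract simple features from windows of surface EMG and train a classifier per user, which improves as more of that user's data accumulates. The hypothetical sketch below shows that general pattern; the channel count, features, and classifier choice are all assumptions, not BrainCo's design.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)

def emg_features(window):
    """Simple per-channel features from a raw EMG window: RMS and mean absolute value."""
    return np.concatenate([np.sqrt((window ** 2).mean(axis=0)),
                           np.abs(window).mean(axis=0)])

def simulate_window(gesture, channels=8, samples=200):
    """Hypothetical 8-channel EMG window; each gesture drives one channel hardest."""
    gains = np.ones(channels)
    gains[gesture] = 4.0
    return rng.normal(size=(samples, channels)) * gains

gestures = [0, 1, 2, 3]  # e.g. four finger movements
X = np.array([emg_features(simulate_window(g)) for g in gestures for _ in range(30)])
y = np.array([g for g in gestures for _ in range(30)])

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
test = emg_features(simulate_window(2))
print(f"predicted gesture: {clf.predict([test])[0]}")  # more user data -> a better fit
```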

Source: CNBC