Month: May 2017

28 May 2017

Elon Musk’s Neural Lace Could Use Brain-Stimulating Tech to Make Us Smarter

IN BRIEF

Transcranial direct current stimulation (tDCS) has great potential to help those with neurological diseases. It may also enhance human intelligence, and serve as a checkpoint on the road toward Elon Musk’s neural lace.

STIMULATING THE BRAIN

We are now poised at a time in history when brain-computer interfaces (BCI) are the obvious next step. On the brink of the automation age, we face the daunting prospect of artificial intelligence (AI) becoming more capable than ourselves. Now that touchscreens and voice recognition are part of our everyday devices, it’s time for us to be able to control our electronics with our minds, and tech companies like Facebook are itching to make it happen. And in this modern age, we are also edging closer to the elimination of various diseases, and we long for life without dementia, brain damage, and neurological diseases. The BCI provides us with a way to maintain control of our world and our electronics, to heal ourselves—and maybe even allow humanity itself to level up.

Elon Musk’s ambitious solution is an easily injectable neural electrode, a neural lace, which would be able to both stimulate and interpret electrical activity in the brain. While this hasn’t yet been developed, researchers are already stimulating electrical activity in the human brain using transcranial direct current stimulation, or tDCS. This is being used to treat consciousness disorders, and to help patients with minimal consciousness communicate. Researchers also have a prototype BCI in use, which people with locked-in syndrome can use to communicate.

These are simply the first steps on a more ambitious journey towards the high-level enhancement of the human brain, and by extension, of humanity itself. As envisioned, Musk’s neural lace is a far more transformative solution than tDCS could ever be; tDCS simply works with the neurons that are there. The neural lace would form an entirely new layer of the brain.

Additionally, tDCS works incrementally, slowly training more and more neurons in the brain to fire more readily and more often. The net benefit comes from the sheer numbers; if enough brain cells fire, you will see a result. In a person with minimal consciousness, you might see responsiveness; in a person with normal capacity, you may see improved intelligence, sharper problem-solving abilities, enhanced creativity, or other benefits. There is certainly a demand for this kind of enhancement, as the current nootropics craze proves — and nootropics fans are often also biohackers, willing to try physical solutions like BCIs.

Image Credit: Activedia/Pixabay

A NEW HUMANITY?

So, could tDCS bring forth a new humanity? Or will neural lace or something similar be required for that to be possible? If tDCS is a step toward neural lace, does the difference really matter?

While tDCS has notable potential for treating neurological diseases, its potential for enhancing human intelligence is somewhat murkier. This is especially true in the context of AI, which, alongside the need to retain human rights, is the defining condition pushing for neural lace. The kinds of problems that tDCS might help tackle, beyond neurological disease or paralysis, include the need to learn faster, upload skills, or retain plasticity.

To match wits with AI in any meaningful way, a more radical amplification of the brain will be necessary. In other words, the basic idea behind the neural lace is that we can’t beat AI: we’re going to need to join it instead, becoming part of a human/machine merger. This is beyond the realm of tDCS, delving into a new realm of cyborg humanity.

21 May 2017

How Will We Keep Our Thoughts Private in the Age of Mind-Reading Tech?

IN BRIEF

Long used in the medical sector, BCIs are now commercially available, and recent research has found that they can be used to hack our brains for PINs or mine our minds for data.

WHAT IS AN EEG, AND WHAT HAVE STUDIES CONCERNING ITS SECURITY FOUND?

Two new studies, from the University of Alabama and the University of Washington, have revealed the malicious possibilities lurking behind the impressive promises of brain-computer interface (BCI) developers: the ability to access PINs and other private information.

An electroencephalogram (EEG) is a test that detects electrical activity in your brain using a skullcap studded with electrodes. This technology has been used in the medical sector for years — for example, to diagnose schizophrenia as far back as 1998. However, it is now being put to far more commercial uses. Rudimentary versions, such as Emotiv’s Epoc+, have been released, with the promise of far more sophisticated versions just around the corner, including examples being developed by Elon Musk and Facebook.

The University of Alabama’s study discovered that hacking into a BCI could increase the chances of guessing a PIN from 1 in 10,000 to 1 in 20; it could shorten the odds of guessing a six-letter password by roughly 500,000 times to around 1 in 500. Emotiv has dismissed the criticisms, stating that all software using its headsets is vetted and that users would find the activity of inputting codes suspicious; but Alejandro Hernández, a security researcher with IOActive, claimed that the Alabama case is “100 percent feasible.”

The test involved people entering random PINs and passwords while wearing the headset, allowing software to establish a link between what was typed and brain activity. After data from 200 typed characters was gathered, algorithms could make educated guesses about which characters a user would enter next. Nitesh Saxena, research director of the Department of Computer and Information Sciences at the University of Alabama, detailed a situation in which someone still logged in to a gaming session while checking their bank details could be at risk.
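
The attack described here is, at heart, a supervised-learning problem: pair each typed character with the EEG epoch recorded as it was typed, then use the learned signatures to rank guesses for new keystrokes. The sketch below illustrates that idea on synthetic data with a simple nearest-centroid classifier; the digit-only alphabet, feature count, and noise model are all invented for illustration and are not details of the Alabama study.

```python
# A minimal sketch of the keystroke-inference idea (NOT the Alabama team's
# actual code): learn a per-character "EEG signature" from labeled typing
# data, then rank candidate characters for a new recording.
import numpy as np

rng = np.random.default_rng(0)
CHARS = "0123456789"            # a PIN pad, for simplicity
N_FEATURES = 16                 # e.g. band-power features across electrodes

# Hypothetical "true" EEG feature vector evoked by typing each digit.
true_signatures = {c: rng.normal(size=N_FEATURES) for c in CHARS}

def record_epoch(char):
    """Fake one noisy EEG epoch captured while `char` is typed."""
    return true_signatures[char] + rng.normal(scale=0.2, size=N_FEATURES)

# Calibration phase (the study's ~200 observed characters): average the
# epochs seen for each character to get a nearest-centroid classifier.
centroids = {c: np.mean([record_epoch(c) for _ in range(20)], axis=0)
             for c in CHARS}

def rank_guesses(epoch):
    """Return characters ordered from most to least likely."""
    dists = {c: float(np.linalg.norm(epoch - mu)) for c, mu in centroids.items()}
    return sorted(dists, key=dists.get)

guesses = rank_guesses(record_epoch("7"))
print(guesses)   # the attacker's ranked guesses for the typed digit
```

Even this crude ranking shows why the odds shift so dramatically: an attacker who can reliably place the true character near the top of the list needs only a handful of attempts rather than thousands.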

The University of Washington test focused on gathering data. In their study, subliminal messages flashed up in the corner of a gaming screen while an EEG gauged the participant’s response. Tamara Bonaci, a University of Washington electrical engineer, said that “300 milliseconds after they saw a stimulus there is going to be a positive peak hidden within their EEG signal” if they have a strong emotional reaction to it. Howard Chizeck, Bonaci’s fellow electrical engineer who was also involved with the project, said, “This is kind of like a remote lie detector; a thought detector.” The potential uses of such data could stretch from more targeted advertising than ever before to determining sexual orientation or other personal information that could be used to coerce users.
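
Bonaci’s “300 milliseconds” refers to the P300, a well-documented positive deflection in the EEG that follows a stimulus the brain finds meaningful. The toy sketch below shows the detection idea on synthetic data; the sample rate, amplitudes, window, and threshold are assumptions for illustration, not details of the Washington team’s pipeline.

```python
# Toy P300 detection: flag a positive bump ~300 ms after a stimulus.
# All signal parameters here are invented, not real EEG values.
import numpy as np

FS = 250                                   # sample rate in Hz (assumed)
t = np.arange(0, 0.8, 1 / FS)              # one 800 ms post-stimulus epoch

rng = np.random.default_rng(1)
background = rng.normal(scale=1.0, size=t.size)             # ongoing EEG noise
p300 = 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))    # bump peaking at 300 ms
reactive_epoch = background + p300          # epoch with an emotional reaction

def strong_reaction(epoch, t, window=(0.25, 0.45), threshold=1.5):
    """Crude detector: is the mean signal in the 250-450 ms
    post-stimulus window unusually positive?"""
    mask = (t >= window[0]) & (t <= window[1])
    return bool(epoch[mask].mean() > threshold)

print(strong_reaction(reactive_epoch, t))   # stimulus that provoked a reaction
print(strong_reaction(background, t))       # stimulus that did not
```

An attacker running many subliminal stimuli through a detector like this could, in principle, map out which images a user reacts to, which is exactly the “thought detector” scenario Chizeck warns about.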

HOW SERIOUS IS THE THREAT?

While some BCIs are being used in extremely positive ways, like diagnosing concussions or allowing people with severe motor disabilities control over robotic aids, the threat to security and privacy posed by commercially available and mainstream BCIs is huge.

Experts have advised that we begin to think of means of protection now, rather than after the technology has become more widespread. Howard Chizeck told Motherboard over Skype that “There’s actually very little time. If we don’t address this quickly, it’ll be too late,” while scientists from the University of Basel and the University of Zurich have called for a “right to mental privacy” in response to the developments.
As with many technologies, the novelty and potential of BCIs are headily seductive, but we must beware of the practical consequences their use may give rise to. Worryingly, there has been very little work toward providing protection against such attacks. The BCI Anonymizer, still in its propositional stages, aims to “extract information corresponding to a user’s intended BCI commands, while filtering out any potentially private information.” Aside from this, there is very little else.

14 May 2017

AI Won’t Just Replace Workers. It’ll Also Help Them

IN BRIEF

We are living in the age of algorithms, and AI is the natural next step in this age’s evolution. We can’t excise the tech from our lives, but we can benefit from it more and protect even the most vulnerable from abuses by shaping how we use it.

AI: THE TOOL

Many people worry about artificial intelligence (AI) eliminating jobs and displacing workers, or even taking over human society. A February 2016 report from Citibank and the University of Oxford predicted that automation threatens 47 percent of U.S. jobs, 35 percent of U.K. jobs, and 77 percent of jobs in China. An August report from Forrester stated that customer service and transportation jobs will be gone by 2025, and that we’ll feel the impact of this change within five years.

These fears aren’t unfounded, but they may need refocusing. Few of us understand what algorithms are or how they work; to most of us, they are invisible. Like the electricity that flows unseen and taken for granted throughout our homes, offices, and cities, algorithms already shape our experiences, large and small, in ways we don’t notice.

This is a problem, because our understanding of what algorithms do, how they work, and how we should be shepherding their use has become artificially and unreasonably detached from our ideas about AI. Yes, algorithms control how AI works. However, they also control how we work to a large extent — and we made them that way because it saves us time and effort.

Algorithms run the internet and make all online searching possible. They route our email and guide us when we use our GPS systems. Smartphone apps, social media, software: none of these would function without algorithms. AI is also dependent on algorithms; in fact, it is the next-level extension of our life in the age of algorithms. What we’ve done is teach algorithms to write new algorithms, and to learn and teach themselves.

Just as we once feared that computers would put us all out of work, we now fear that AI will take all of our jobs away. We have seen the next level of our algorithmic age, and we’re not sure what to make of it. Evolution is never totally predictable and is often messy.

However, part of the way we navigate this transition successfully is by learning to see what it is that we’re concerned about, and what’s actually present around us right now. Pew Research Center and the Imagining the Internet Center of Elon University recently polled 1,302 scholars, technology experts, government leaders, and corporate practitioners about what will happen in the next decade. The respondents were asked just one question: will the net overall effect of algorithms be positive or negative for individuals and society?

NET BENEFITS

The canvassing of these respondents, which was non-scientific from a statistical perspective, found that 38 percent predicted that the benefits of algorithms will outweigh the detriments for both individuals and society in general, while 37 percent felt the opposite way, and 25 percent thought it would be a draw. These results are interesting, but the real significance lies in the respondents’ written comments elaborating on their positions. Seven general themes emerged from the answers as a whole.

Almost all respondents agreed that algorithms are essentially invisible to the public, and that their influence will increase exponentially over the next decade. Barry Chudakov of Sertain Research and StreamFuzion Corp. breaks down the significance for Pew:

“Algorithms are the new arbiters of human decision-making in almost any area we can imagine. […] They are also a goad to consider [human] cognition: How are we thinking and what does it mean to think through algorithms to mediate our world? The main positive result of this is better understanding of how to make rational decisions, and in this measure a better understanding of ourselves. […] The main negative changes come down to a simple but now quite difficult question: How can we see, and fully understand the implications of, the algorithms programmed into everyday actions and decisions?”

We need to learn to see the ways we are thinking through algorithms so we can ensure we maintain oversight over our decisions and actions — and so we know their limitations and our own.

Another theme is that great benefits will keep coming, thanks to algorithms and AI: we will be processing and understanding far more data, and achieving more breakthroughs in science, technological conveniences, and access to information. This will mean healthcare decisions made with more of the whole picture in mind and decisions on bank loans considered with more context and detail. It might even mean an end to unfair practices like gerrymandering, which depends entirely on old-school ways of drawing up voting areas and could disappear if algorithms drew them up instead.

Theme three is less rosy: advances in algorithms and big data sets will mean corporations and governments hold all of the cards and set all of the parameters. If algorithms are created to optimize and achieve profitability for a particular set of people without regard to the rest, AI and algorithms won’t correct this imbalance, but will make it worse. Clemson University assistant professor in human-centered computing Bart Knijnenburg told Pew: “Algorithms will capitalize on convenience and profit, thereby discriminating [against] certain populations, but also eroding the experience of everyone else. […] My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies and users into zombies who exclusively consume easy-to-consume items.”

The fourth theme has to do with biases that persist even in systems organized by algorithms. Even the most well-intentioned, inclusive, neutral algorithm creators build their own perspectives into their code, and the datasets to which algorithms are applied have their own deficiencies and limitations.

Theme five centers upon the potential of access to algorithmically-aided living to deepen already existing cultural and political divides. Consider the differences that exist even now between groups of people consuming algorithmically-driven political news — more and more distinct ideological classes with less and less in common, and less empathy for each other. Algorithmic living makes it more possible for us to avoid and exclude each other; what will the end result of this separation be?

Or, as another example, consider the potential divide between the many highly educated people who are learning to “biohack” or use nootropics to enhance their lives, and the numerous people of lower socioeconomic classes who lack the education and the means or desire to engage in these activities — and lack access as well, even if they hoped to remain upwardly mobile in the algorithm age. Could this kind of progressively deepening division be reinforced by algorithmic living, and will it result in a kind of socio-biounderclass?

The sixth theme concerns unemployment, and many respondents do see the age of the algorithm as the age of mass unemployment. This unattributed response from one person surveyed reflects this overall theme: “I foresee algorithms replacing almost all workers with no real options for the replaced humans.” Other respondents emphasized the need for a universal basic income (UBI) to ensure that even those who have less access and ability to adapt to the changing economy have a basic means for survival.

The final theme from the report: the growing need for algorithmic oversight, transparency, and literacy.

Many respondents advocated for public algorithmic literacy education — the computer literacy of the 21st century — and for a system of accountability for those who create and evolve algorithms. Altimeter Group industry analyst Susan Etlinger told Pew, “Much like the way we increasingly wish to know the place and under what conditions our food and clothing are made, we should question how our data and decisions are made as well. What is the supply chain for that information? Is there clear stewardship and an audit trail? Were the assumptions based on partial information, flawed sources or irrelevant benchmarks? […] If there were ever a time to bring the smartest minds in industry together with the smartest minds in academia to solve this problem, this is the time.”

PUTTING ALGORITHMS TO WORK

One of the most important takeaways to glean from this report — and indeed, all reporting on AI right now — is that there is no way to excise algorithms and the advances that are coming with them, such as AI, from our lives. Even if we wanted to, for example, live without all computer technology, it’s too late. That means that strategic planning for the future isn’t about pointlessly trying to ban things that are already coming. The smarter course is to find ways to make algorithms and AI technology work for us.

If we can collaborate with it, AI has the potential to make our working lives better, giving us higher levels of job satisfaction, relieving us of more dangerous and less interesting work. It can also ensure that the best candidates get jobs, and otherwise work to equalize the playing field — if we can ensure that’s how it learns to operate. We are deeply flawed teachers, considering that workplace discrimination, for example, persists. However, with self-awareness and algorithmic literacy, we can also teach ourselves.

07 May 2017

This Robot Completes a 2-Hour Brain Surgery Procedure in Just 2.5 Minutes

IN BRIEF

Researchers believe their surgery-assisting robot is capable of performing complex brain surgeries. The machine can reduce surgery times by cutting the time it takes to open the skull from two hours to two and a half minutes.

DOC BOT

Brain surgery is precision business, and one slip can spell doom for affected patients. Even in one of the most skilled jobs in the world, human error can still be a factor. Researchers from the University of Utah are looking to provide less opportunity for those errors to occur. A robot that the team is developing is able to reduce the time it takes to complete a complicated procedure by 50 times. According to CNN, the robot can reduce the time it takes to drill into the skull from two hours to two-and-a-half minutes.

The team’s lead neurosurgeon, William Couldwell, told CNN, “We can program [it] to drill the bone out safely just by using the patient’s CT criteria. It basically machines out the bone.”

The research was published in the journal Neurosurgical Focus, and the team says it is a “proof of principle” that the robot is capable of performing complex surgeries. The robot is guided around vulnerable areas of the skull by data gleaned from CT scans and entered into the robot’s programming. The CT scans show the programmer the location of nerves or veins that the bot will have to avoid.

A SAVINGS MACHINE

Aside from the obvious life-saving capabilities that such a machine would have, it also could potentially save money in the long run. Shorter surgery times will allow for lower costs per surgery as well. There’s also the added benefit of lowering the time a patient is under anesthesia, which can cause its own complications.

Robotics and automation are slowly transforming the way doctors are performing surgery. Some patients may initially balk at the thought of some machine cutting into them and messing with their insides, but these robots can perform with a precision that may be impossible for humans to achieve.