What if there were a way to give everyone suffering from conditions like paralysis or locked-in syndrome the means to operate prosthetic devices and tech gadgets using mind control? Well, there is – or at least, there will be.
IBM Research recently developed an end-to-end proof-of-concept for a method of controlling an off-the-shelf robotic arm with a brain-computer interface built using a take-home EEG monitor. To accomplish this, the researchers developed AI to interpret the data from the EEG monitor as commands for the robotic arm.
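IBM hasn't published the details of its decoding pipeline, so as a rough illustration only, here is a minimal sketch of the general idea: chop the EEG stream into short windows, extract band-power features, and map each window to a robot-arm command. Every constant, frequency band, and the nearest-centroid classifier below are assumptions made for this sketch, not IBM's actual method.

```python
import numpy as np

# Illustrative sketch only -- not IBM's pipeline. We simulate EEG windows,
# extract simple band-power features, and map them to arm commands.

rng = np.random.default_rng(0)
FS = 128            # sampling rate (Hz), typical for consumer EEG headsets
COMMANDS = ["left", "right", "grip"]

def band_power(window, fs=FS, band=(8, 13)):
    """Mean spectral power of the window inside a frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return spectrum[mask].mean()

def features(window):
    # One feature per canonical band: theta, alpha, beta.
    return np.array([band_power(window, band=b)
                     for b in [(4, 8), (8, 13), (13, 30)]])

def make_window(cmd_idx):
    """Fake one-second EEG window whose dominant frequency encodes a command."""
    t = np.arange(FS) / FS
    signal = np.sin(2 * np.pi * (6 + 4 * cmd_idx) * t)
    return signal + 0.1 * rng.standard_normal(FS)

# "Train": store the mean feature vector (centroid) per command.
centroids = {c: np.mean([features(make_window(i)) for _ in range(20)], axis=0)
             for i, c in enumerate(COMMANDS)}

def decode(window):
    """Nearest-centroid classification of one EEG window."""
    f = features(window)
    return min(COMMANDS, key=lambda c: np.linalg.norm(f - centroids[c]))

predicted = decode(make_window(1))   # decode a fresh "right" window
```

In a real system, the decoded label would then be translated into a motion primitive for the robotic arm; the hard part, and the reason latency matters, is doing this reliably on noisy single-trial EEG.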
That may not sound like something that will change everything overnight – and IBM isn’t the only or even the first company to dabble in brain-computer interfaces. But it’s one of the few that appear interested in figuring out how to build a system from inexpensive hardware that’s already available.
We reached out to Stefan Harrer, a research scientist at IBM Research working on the project, who described the team’s design goals:
Our primary design goals were (i) low-cost and (ii) suitable for use in an unrestricted real-life environment. (i) allows the system to transition from an expensive research grade exploratory setup (the status-quo of BMIs) to a setup that is affordable for the broad public (the first of our main objectives) – (ii) allows the system to be taken out of highly specialized research lab environments and moved into everyday environments for use by the broad public (the second of our main objectives).
This early work indicates people can control machines with their minds alone, using commonly available technology and cutting-edge AI. That’s huge for those who don’t have that same control over their own bodies.
Harrer told us that, with further development, the same machine learning techniques could potentially be applied to control a prosthetic limb or even a robot assistant.
IBM’s system isn’t ready for prime time just yet, though. Harrer says the team is working on reducing latency and doesn’t have any current plans for human trials. But the proof-of-concept indicates it’s only a matter of time before devices built using this technology become a common accessibility solution.
Artificial intelligence will reshape the world of finance over the next decade or so by automating investing and other services—but it could also introduce troubling systematic weaknesses and risks, according to a new report from the World Economic Forum (WEF).
Compiled through interviews with dozens of leading financial experts and industry leaders, the report concludes that artificial intelligence will disrupt the industry by allowing early adopters to outmaneuver competitors. It also suggests that the technology will create more convenient products for consumers, such as sophisticated tools for managing personal finances and investments.
But most notably, the report points to the potential for big financial institutions to build machine-learning-based services that live in the cloud and are accessed by other institutions.
“The dynamics of machine learning create a strong incentive to network the back office,” says the report’s main author, Jesse McWaters, who leads the AI in Financial Services Project at the World Economic Forum. “A more networked world is more vulnerable to cybersecurity risks, and it also creates concentration risks.”
In other words, financial systems that incorporate machine learning and are accessed through the cloud by many different institutions could present a juicy target for hackers and a single point of systemic failure.
Wall Street is already rapidly adopting machine learning, the technology at the center of the artificial-intelligence boom. Finance firms generally have lots of data and plenty of incentive to innovate. Hedge funds and banks are hiring AI researchers as quickly as they can, and the financial industry is experimenting with back-office automation in a big way. The automation of high-frequency trading has already created systemic risks, as highlighted by several runaway trading events, or “flash crashes,” in recent years.
Andrew Lo, a professor at MIT’s Sloan School of Management, researches the issue of systemic risk in the financial system, and he has previously warned that the system as a whole may be vulnerable because of its sheer complexity.
The WEF report raises other issues as well. It says that big tech companies will have an opportunity to get into finance, often through tie-ins with financial firms, because of their expertise in AI as well as their access to consumer data.
And McWaters says that as AI becomes more widely used in finance, it will be important to consider issues like biased algorithms, which can discriminate against certain groups of people. Financial companies should not be too eager to simply replace staff either, he says. As the study suggests, human skills will remain important even as automation becomes more widespread.
QUALITY OF LIFE. Patients with glioblastoma, a malignant tumor in the brain or spinal cord, typically live no more than five years after receiving their diagnosis. And those five years can be painful — in an effort to minimize the tumor, doctors often prescribe a combination of radiation therapy and drugs that can cause debilitating side effects for patients.
Now, researchers from MIT Media Lab have developed artificial intelligence (AI) that can determine the minimum drug doses needed to effectively shrink glioblastoma patients’ tumors. They plan to present their research at Stanford University’s 2018 Machine Learning for Healthcare conference.
CARROT AND STICK. To create an AI that could determine the best dosing regimen for glioblastoma patients, the MIT researchers turned to a training technique known as reinforcement learning (RL).
First, they created a testing group of 50 simulated glioblastoma patients based on a large dataset of people who had previously undergone treatment for the disease. Then they asked their AI to recommend doses of the drugs typically used to treat glioblastoma [temozolomide (TMZ) and a combination of procarbazine, lomustine, and vincristine (PVC)] for each patient at regular intervals (either weeks or months).
After the AI prescribed a dose, it would check a computer model capable of predicting how likely a dose is to shrink a tumor. When the AI prescribed a tumor-shrinking dosage, it received a reward. However, if the AI simply prescribed the maximum dose all the time, it received a penalty.
According to the researchers, this need to strike a balance between a goal and the consequences of an action — in this case, tumor reduction and patient quality of life respectively — is unique in the field of RL. Other RL models simply work toward a goal; for example, DeepMind’s AlphaZero simply has to focus on winning a game.
“If all we want to do is reduce the mean tumor diameter, and let it take whatever actions it wants, it will administer drugs irresponsibly,” principal investigator Pratik Shah told MIT News. “Instead, we said, ‘We need to reduce the harmful actions it takes to get to that outcome.’”
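The reward-versus-penalty balance described above can be caricatured in a few lines of code. This is a toy sketch of the idea, not the MIT model: a tabular Q-learning agent picks a dose level at each step, and the reward trades tumor shrinkage against a penalty proportional to the dose, so "always prescribe the maximum" is no longer automatically the best policy. All constants here are made up.

```python
import numpy as np

# Toy reward-shaping sketch (not the MIT model). Reward = shrinkage
# achieved minus a toxicity penalty proportional to the dose given.

rng = np.random.default_rng(42)
DOSES = [0.0, 0.5, 1.0]   # fraction of the maximum dose (hypothetical levels)
LAMBDA = 0.2              # toxicity penalty weight (assumed)

def step(tumor, dose):
    """Tumor shrinks with dose but regrows a little each step."""
    new_tumor = max(0.0, tumor * (1.05 - 0.5 * dose))
    reward = (tumor - new_tumor) - LAMBDA * dose   # shrinkage minus toxicity
    return new_tumor, reward

# Discretize tumor size into 10 states and learn Q(state, dose).
n_states = 10
q = np.zeros((n_states, len(DOSES)))
to_state = lambda t: min(n_states - 1, int(t * n_states))

for episode in range(2000):
    tumor = rng.uniform(0.3, 1.0)
    for _ in range(20):
        s = to_state(tumor)
        # epsilon-greedy action selection
        a = rng.integers(len(DOSES)) if rng.random() < 0.1 else int(np.argmax(q[s]))
        tumor, r = step(tumor, DOSES[a])
        q[s, a] += 0.1 * (r + 0.95 * q[to_state(tumor)].max() - q[s, a])

best_dose_large_tumor = DOSES[int(np.argmax(q[n_states - 1]))]
best_dose_small_tumor = DOSES[int(np.argmax(q[0]))]
```

With the penalty term in place, the learned policy tends to dose aggressively only when the tumor is large and back off when it is small, which is the qualitative behavior the researchers describe.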
GETTING PERSONAL. The AI conducted about 20,000 test runs for each simulated patient to complete its training. Next, the researchers tested the AI on a group of 50 new simulated patients and found it could decrease both the doses and their frequency while still reducing tumor size. It could also take into account information specific to each patient, such as their tumor size, medical history, and biomarkers.
“We said [to the model], ‘Do you have to administer the same dose for all the patients?’ And it said, ‘No. I can give a quarter dose to this person, half to this person, and maybe we skip a dose for this person,’” said Shah. “That was the most exciting part of this work, where we are able to generate precision medicine-based treatments by conducting one-person trials using unorthodox machine-learning architectures.”
The AI will still need to undergo further testing and vetting by the Food and Drug Administration (FDA) before doctors could put it into practice. But if it passes those tests, it could eventually help people with glioblastoma attack their brain tumors without causing them more pain in the process.
For some children with autism, interacting with other people can be an uncomfortable, mystifying experience. Feeling overwhelmed with face-to-face interaction, such children may find it difficult to focus their attention and learn social skills from their teachers and therapists—the very people charged with helping them learn to socially adapt.
What these children need, say some researchers, is a robot: a cute, tech-based intermediary, with a body, that can teach them how to more comfortably interact with their fellow humans.
On the face of it, learning human interaction from a robot might sound counter-intuitive. Or just backward. But a handful of groups are studying the technology in an effort to find out just how effective these robots are at helping children with autism spectrum disorder (ASD).
One of those groups is LuxAI, a young company spun out of the University of Luxembourg. The company says its QTrobot can actually increase these children’s willingness to interact with human therapists, and decrease discomfort during therapy sessions. University of Luxembourg researchers working with QTrobot plan to present their results on 28 August at RO-MAN 2018, IEEE’s international symposium on robot and human interactive communication, held in Nanjing, China.
“When you are interacting with a person, there are a lot of social cues such as facial expressions, tonality of the voice, and movement of the body which are overwhelming and distracting for children with autism,” says Aida Nazarikhorram, co-founder of LuxAI. “But robots have this ability to make everything simplified,” she says. “For example, every time the robot says something or performs a task, it’s exactly the same as the previous time, and that gives comfort to children with autism.”
Microsoft recently taught its XiaoIce chatbot, a Chinese language conversational AI, how to interpret pictures as poems. We’re not sure if that counts as inspired writing, but it’s an interesting step toward better mimicking humans.
It’s not the first AI to write poetry, but it’s the first we’ve seen that can generate Chinese language poems inspired by images. There may be a little bit lost in translation, but some of the bot’s bars aren’t too shabby:
Wings hold rocks and water lightly
in the loneliness
Stroll the empty
The land becomes soft
The problem of teaching AI to generate text descriptions or captions for images is a popular one among machine learning researchers. If an AI can be taught to see the world the same way we do, starting with one image at a time, eventually we can teach it to see things our way all the time.
As humans, we can look at a picture and make inferences, point out objects, describe people’s facial features, and more – this is all based on our ability to draw upon context. Computer vision and deep learning artificial neural networks make it possible for machines to do this as well, but so far they’re nowhere near as good at it as an average human child.
One of the paths to creating a better autonomous image description AI is to keep tweaking a machine learning model until its output is indistinguishable from the work done by humans. But, at some point, simple descriptions like “I see a red apple on a brown table” are good enough to fool everyone. It’s pointless to try and figure out if a machine or a human wrote that, because it could obviously be either.
To further their work researchers have to find ways to make machines better at mimicking us, which means trying more difficult natural language processing tasks. And poetry is obviously more complex than simple captions.
So how does it work? The scientists input some rules and then split a neural network into two sides: one that generates a poem, and one that judges it. If the judging side thinks the rules have been met and a particular poem is good enough for human eyes, it lets the poem through, and a human then checks the results. If the output isn’t good enough, the researchers go back to tweaking the parameters until it’s spitting out good stuff.
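The generate-and-judge loop can be caricatured in a few lines. This sketch replaces both neural networks with trivial stand-ins – a weighted word sampler as the "generator" and a hard-coded rule check as the "judge" – so it only illustrates the control flow, not Microsoft's actual model; the vocabulary and rules are invented.

```python
import random

# Toy generate-and-judge loop. The "generator" samples weighted words;
# the "judge" enforces two made-up rules; accepted poems reinforce
# the generator's word weights (a crude stand-in for training).

random.seed(7)
VOCAB = ["wings", "rocks", "water", "loneliness", "stroll", "empty",
         "land", "soft", "the", "and", "of"]
IMAGERY = {"wings", "rocks", "water", "land"}   # words the judge rewards

weights = {w: 1.0 for w in VOCAB}

def generate(n_words=5):
    """Sample a 'poem' of n_words according to the current weights."""
    words = list(VOCAB)
    return random.choices(words, weights=[weights[w] for w in words], k=n_words)

def judge(poem):
    """Rule check: correct length and at least one imagery word."""
    return len(poem) == 5 and any(w in IMAGERY for w in poem)

accepted = []
for _ in range(500):
    poem = generate()
    if judge(poem):
        accepted.append(poem)
        for w in poem:              # reinforce words in accepted poems
            weights[w] += 0.1
    # rejected poems leave the weights untouched; sample again

acceptance_rate = len(accepted) / 500
```

In the real system both sides are learned networks trained jointly, and a human review sits at the end of the loop rather than inside it.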
Microsoft has actually been working on this problem for a while, having recently created an AI that generates English language poetry from images. But writing Chinese poetry involves a different set of rules and parameters, even for humans. Chutian Xiao, a researcher studying English poetry at Durham University, says “it is a tricky thing to learn poetry and write poems in different languages” – and he’s just talking about humans.
When it comes to Chinese poetry, the rules have actually changed over the years. It’s harder, according to the Microsoft researchers working on the Chinese language bot, to write modern poetry:
While traditional Chinese poetry is constructed with strict rules and patterns (e.g., five-word quatrains are required to contain four sentences and each sentence has five Chinese characters, also words need rhymes in specific positions), modern Chinese poetry is unstructured in vernacular Chinese. Compared to traditional Chinese poetry, although the readability of vernacular Chinese makes modern Chinese poetry easier to strike a chord, errors in words or grammar can more easily be criticized by users. Good modern poetry also requires more imagination and creative uses of language. From these perspectives, it may be more difficult to generate a good modern poem than a classic poem.
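To see how mechanical the traditional constraints are, the five-word quatrain rule quoted above (four lines of exactly five Chinese characters each; the rhyme-position rules are omitted here) can be checked in a couple of lines, using Li Bai's famous "Quiet Night Thoughts" as an example:

```python
def is_five_word_quatrain(poem_lines):
    """True if the poem has four lines of exactly five Chinese characters."""
    return len(poem_lines) == 4 and all(len(line) == 5 for line in poem_lines)

# Li Bai, "Quiet Night Thoughts" -- a classic five-word quatrain.
classic = ["床前明月光", "疑是地上霜", "举头望明月", "低头思故乡"]
ok = is_five_word_quatrain(classic)
```

Modern vernacular poetry has no such checkable skeleton, which is part of why the researchers consider it the harder generation target.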
So how good is Microsoft’s AI at writing poetry? Well, that’s entirely subjective. Futurism writer Dan Robitzki said its English language poet bot was as good as an angsty teenager. I, by contrast, rather liked some of its stuff. But I’m more interested in LeCun than Longfellow, so I tend to lean in support of the machines.
To scientifically determine the efficacy of their poetry bot, the researchers conducted experiments where, according to them, people overwhelmingly chose the results from their bot over results from similar neural networks:
A large scale user study indicates that our generated modern poems earn much more favor than the generated captions. The reasons are that our generated poems are more imaginative, touching and impressive.
While a rose by any other name would smell as sweet – to paraphrase the Bard – whether a machine can actually create art, much less poetry, is still up for debate.
Whether you’re neurotic, extroverted, conscientious, or agreeable, this AI can recognize your personality type.
Machine learning has improved steadily over the years, and researchers from the University of South Australia and the University of Stuttgart are now giving new meaning to the phrase “in the blink of an eye.” Their work found close links between eye movements and personality, and it may help improve how robots interact with humans.
By applying a machine learning approach, the researchers could automatically analyze a large set of eye movement characteristics and use them to determine personality traits. The analysis also revealed new links between personality and previously under-investigated eye movements.
Using a SensoMotoric Instruments eye tracker, the team monitored participants’ individual eye movements and cross-checked the recordings against well-established questionnaires that define personality traits,
scoring the five key traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism.
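The study's actual models aren't described in this article, so here is only a schematic of the general recipe, run on simulated data: per-participant eye-movement statistics are regressed against questionnaire trait scores. The feature names, all the numbers, and the plain least-squares fit below are invented for illustration.

```python
import numpy as np

# Illustrative sketch (not the study's model): simulate gaze statistics
# for 42 participants and fit a linear model mapping eye-movement
# features to a single trait score (say, neuroticism).

rng = np.random.default_rng(1)
n = 42
# Hypothetical features per participant:
# mean fixation duration (s), blink rate (/min), mean saccade amplitude (deg)
X = np.column_stack([
    rng.normal(0.25, 0.05, n),
    rng.normal(15, 4, n),
    rng.normal(5, 1, n),
])
true_w = np.array([2.0, 0.08, -0.1])       # assumed ground-truth weights
y = X @ true_w + rng.normal(0, 0.05, n)    # questionnaire score + noise

# Fit: ordinary least squares with an intercept column.
A = np.column_stack([X, np.ones(n)])
w_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ w_hat
r = np.corrcoef(pred, y)[0, 1]             # predicted vs reported trait score
```

The real study's contribution is precisely the part this sketch fakes: showing that informative gaze features exist at all, and that they generalize across people and activities.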
Observing Eye Movements
In the test, 42 participants wore the eye tracker while they were each given five Australian dollars and about 10 minutes to make a purchase in the university campus shop. When they returned, the participants filled out personality and curiosity questionnaires, giving the researchers ground truth to match against the eye-tracking data.
Their findings show how trait-specific eye movements vary across activities.
HALF MAN, HALF MACHINE. Full-blown automation may be the future of manufacturing, but we’re not there yet. While some machines have taken over the more painstaking tasks on the factory floor, humans still play a vital role in the production line. But often, it isn’t easy work. Tasks typically require being on one’s feet, and some even involve making repetitive arm motions up to 4,600 times a day or one million times a year. Ouch.
At Ford though, this might all be changing. Exoskeleton use on Ford’s factory floors could soon shift into overdrive, according to Engadget.
ENTER THE EXOSKELETONS. In November 2017, EksoVest exoskeletons, built by Ekso Bionics, were given to workers in two Ford factories. Now, up to 75 exoskeletons will be distributed to employees at 15 factories across the world. The exoskeletons don’t have motors, or even batteries, but provide “passive assistance” in the form of five to 15 pounds of arm support. By giving more arm support the higher a person reaches, the device takes strain off of the arm muscles. If you’re not convinced it would make a difference, hold your hand above your head for a few minutes.
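The "more support the higher you reach" behavior can be pictured with a simple model. This is not Ekso's actual mechanism (the vest uses passive spring linkages); it just interpolates the quoted five-to-15-pound range linearly with arm elevation:

```python
def support_lbs(elevation_deg, lo=5.0, hi=15.0):
    """Illustrative passive lift support (lbs) for arm elevation 0-180 deg."""
    frac = max(0.0, min(1.0, elevation_deg / 180.0))
    return lo + (hi - lo) * frac

shoulder_height = support_lbs(90)    # arm horizontal
overhead = support_lbs(180)          # arm fully overhead
```

Even ten pounds of continuous assistance adds up when a motion is repeated 4,600 times a day.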
WE HAVE THE TECHNOLOGY. This is only the beginning for exoskeletons at Ford. “Today, it’s only the passive upper-arm support skeleton that helps with overhead work,” Marty Smets, Ford’s technical expert of human systems and virtual manufacturing, told Engadget.
Taking one step at a time could lead Ford to other avenues of exoskeleton use within its factories. By establishing systems for their use now, Ford is well positioned to adopt new devices as they become available. “We wanted to focus on one exoskeleton initially, then expand from there as the space grows,” Smets said.
Time will tell, but perhaps man and machine can co-exist peacefully after all.
Writing smart contracts for Ethereum is no longer the preserve of coders and programmers: there is now software that can automatically do that for you – or so the claims go.
Enter Fondu. The app provides a tool that helps non-coders get into the Ethereum smart contract game and launch their ICO without needing to learn how to code, or even to pay a coder to write it for them.
The process is simple. All you have to do is answer some questions regarding the characteristics of your ICO, and the required code is automatically generated. The files required to launch your ICO are also made available for download. What’s more, the deployment program is also included in the downloaded files, so it will walk you through the launch process too.
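Fondu's templates aren't public in this article, so here is only a sketch of the general questionnaire-to-contract idea: answers fill placeholders in a Solidity-style ERC20 token skeleton. The template, field names, and example answers below are all invented for illustration.

```python
# Hypothetical questionnaire-to-contract renderer (not Fondu's templates).

TEMPLATE = """\
pragma solidity ^0.4.24;

contract {name}Token {{
    string public name = "{name}";
    string public symbol = "{symbol}";
    uint8 public decimals = {decimals};
    uint256 public totalSupply = {supply} * 10 ** uint256(decimals);
    mapping(address => uint256) public balanceOf;

    constructor() public {{
        balanceOf[msg.sender] = totalSupply;
    }}
}}
"""

def generate_contract(answers):
    """Render the token contract source from questionnaire answers."""
    return TEMPLATE.format(**answers)

contract = generate_contract({
    "name": "BookClub", "symbol": "BOOK", "decimals": 18, "supply": 1_000_000,
})
```

The generated source would still need to be compiled and deployed to the Ethereum network, and, as Fondu itself urges, reviewed by someone who can read Solidity before any real money touches it.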
I was concerned that if more tools like this become prevalent, the market would begin to see a flood of weak, buggy, or illegitimate ICOs, compiled by people with no intention of delivering a product or service attached to the ICO’s token. However, Fondu’s creator, Nikita Kolmogorov, believes this won’t be the case, as an ICO smart contract is such a small part of the overall launch of a cryptocurrency or token.
Kolmogorov assures us, “I’m pretty sure that my tool — making ICO’s slightly more accessible — will not cause any new scam-projects to appear. They would’ve appeared even if I didn’t launch the product — and fortunately, smart-contract is like at most 10 [percent] of the ICO costs, so a scam company wouldn’t benefit from it that much. Even more — just launching website, smart-contract and media-campaign is no longer enough to hold a successful token sale, so we should be safe here.”
That said, this isn’t a guarantee. Kolmogorov clearly means well and has honest intentions, but there is nothing to stop Fondu from becoming a factory for future “shitcoins.” The market was already saturated with mindless cryptocurrency projects – many of which are now defunct.
If a project like Fondu makes it easier to start a cryptocurrency project, it’s reducing a barrier to entry which might be good, but it also opens the flood gates for anyone to get involved. And as history has taught us, not everyone’s intentions are as pure as Kolmogorov’s.
Kolmogorov continued, “Way more important is that now anybody can launch ICO with ERC20 [a popular Ethereum token protocol] tokens in [about] 15 minutes. My real cause here is to democratize the ICO and crowdfunding concepts. So that anybody can launch crowdfunding or token sale campaign in no time and for free. Like a house-wife or a house-husband launching a book club economy with ERC20 tokens. Could you imagine this before? I couldn’t — that’s why I tried to reimagine things.”
If we end up with whole swathes of inexperienced coders developing and deploying smart contracts with little motivation to build quality products – or worse, huge ambitions for products that aren’t all that well-engineered – the risks multiply. Consider that one single bug in a smart contract once wrought havoc across the entire Ethereum blockchain.
Currently, Fondu is just a proof of concept to see whether functional smart contracts can be written and deployed with just a few clicks of a mouse. Fondu urges users to check the code before it is deployed, so some knowledge – or rather, access to someone with knowledge – is still required. It’s not a full democratization of smart contracts just yet.
It’s clear that Fondu is trying to do a good thing, and bring the world of the Ethereum blockchain and smart contracts to the masses and the average Joe, but as is the case with most do-good software, there is nothing stopping bad guys from using it to their advantage.
Nearly a decade ago scientists got pretty excited over a glow coming out of the center of our galaxy. They believed it to be gamma ray emissions resulting from self-destructing dark matter. Unfortunately, it turns out, the Milky Way’s glowing “bulge” wasn’t related to suicidal dark matter. It was probably just gas.
A team of researchers from the University of Amsterdam and the University of Grenoble Alpes today published work indicating the glow is actually just a profile of the stars the bulge surrounds.
We find that an emission profile that traces stellar mass in the boxy and nuclear bulge provides the best description of the excess emission, providing strong circumstantial evidence that the excess is due to a stellar source population in the Galactic bulge. We find a luminosity to stellar mass ratio of (2.1 ± 0.2) × 10²⁷ erg s⁻¹ M⊙⁻¹ for the boxy bulge, and of (1.4 ± 0.6) × 10²⁷ erg s⁻¹ M⊙⁻¹ for the nuclear bulge. Stellar mass related templates are preferred over conventional DM profiles with high statistical significance.
The other explanation, the one we all wished were true, was that suicidal dark matter was sending gamma rays exploding out into the galaxy. Were this the case, we could finally zero in on dark matter and prove its existence.
Alas, while we won’t be unraveling dark matter’s mysteries in our own backyard – at least not by examining that particular galactic bulge – we’ve gained an amazing companion on our quest: AI.
The scientists didn’t just guess that the profile of the bulge’s glow better matched the gravitational profile of the stars near it than that of hypothetical dark matter. They used machine learning to do the heavy lifting and demonstrated through scientific rigor that it was the most likely explanation. To do so, they created a set of proprietary algorithms called Sky Factorization with Adaptive Constrained Templates (SkyFACT) to handle the huge problem of figuring out the exact specifications of a glowing field of gamma rays in space.
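The core of template fitting can be shown in miniature. This is a schematic of the idea behind SkyFACT, heavily simplified (the real tool fits adaptive, regularized templates to Fermi-LAT data with many more components): build a mock gamma-ray map from a stellar-mass-shaped template, then ask which candidate template explains it better. The profiles and noise model below are invented.

```python
import numpy as np

# Schematic template fit (not SkyFACT itself): compare how well a
# stellar-mass template vs. a dark-matter-like template explains a
# mock gamma-ray emission profile.

rng = np.random.default_rng(3)
npix = 500

stellar = np.exp(-np.linspace(0, 5, npix))               # bulge-like profile
dark_matter = 1.0 / (1.0 + np.linspace(0, 5, npix)) ** 2  # NFW-like profile

# Mock observed map: stellar-shaped emission plus noise.
observed = 100 * stellar + rng.normal(0, 1.0, npix)

def residual(template):
    """Best-fit amplitude for one template, then sum of squared residuals."""
    amp = (template @ observed) / (template @ template)
    return np.sum((observed - amp * template) ** 2)

stellar_fit = residual(stellar)       # small residual: shapes match
dm_fit = residual(dark_matter)        # large residual: shapes don't
```

The paper's conclusion is the real-data analogue of this comparison: the stellar-mass templates leave far smaller residuals than the dark matter profiles, with high statistical significance.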
Another recent experiment involving gamma rays and AI could help scientists approach the issue from a different angle. Scientists working beneath the France/Switzerland border at the Large Hadron Collider (LHC) accelerated actual atoms for the first time last month.
The LHC team sped up ionized lead atoms to near the speed of light, and observed the zany particle physics that ensued. This was the first time this had been done with an atom — previous attempts used protons and atomic nuclei. In successfully accelerating an actual atom (though stripped of all but one electron) the team believes they’ve laid the groundwork for what could eventually become an incredibly high-intensity gamma ray factory.
This, of course, means that researchers will be able to create Incredible Hulks on demand. Actually, that’s a lie. What it really does is much cooler: It gives scientists a way to create gamma rays the likes of which are currently impossible, something which they hope will provide new insights into physics problems such as — you guessed it — detecting dark matter.
And none of this would be possible without today’s modern AI techniques. The LHC produces an unfathomable one million gigabytes of data per second. Without deep learning networks to sort and sift through this data, the scientists may as well be searching for a needle in the universe’s biggest haystack.
Kazuhiro Terao, one of the physicists working on the gamma ray factory experiment, told a reporter from Stanford’s SLAC National Accelerator Laboratory:
Today we’re using machine learning mostly to find features in our data that can help us answer some of our questions. Ten years from now, machine learning algorithms may be able to ask their own questions independently and recognize when they find new physics.
Scientists hypothesized the existence of dark matter more than a century ago. And, even though it turned out the Milky Way’s bulge wasn’t the answer we were looking for today, we’re closer than ever to figuring out how to detect it.
And by “we,” I mean humans and machines working together.
By today’s standards, much of what we see around us is already run by artificial intelligence: Tesla has begun shipping self-driving vehicles, and in medicine some tasks once handled only by doctors are being taken over by AI-powered systems. The next wave of innovation will come from AI and deep learning.
The “deep” in deep learning refers to the number of layers of artificial neurons in a network. Like an artificial nervous system, a network with more layers of neurons is capable of more sophisticated kinds of learning.
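As a concrete picture of what "layers" means, here is a minimal stacked-layer forward pass in NumPy. The layer sizes and random weights are arbitrary; an untrained network like this computes nothing useful, but it shows the structure that "deep" refers to.

```python
import numpy as np

# "Deep" just means many stacked layers: each layer is a linear map
# followed by a nonlinearity. Minimal forward pass with made-up sizes.

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 3]   # input -> two hidden layers -> output

weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Push an input vector through every layer in turn."""
    for i, w in enumerate(weights):
        x = x @ w
        if i < len(weights) - 1:      # nonlinearity between layers
            x = np.maximum(x, 0.0)    # ReLU
    return x

out = forward(np.ones(4))             # a 3-dimensional output vector
```

Adding more entries to `layer_sizes` is literally what makes the network "deeper"; training then adjusts the weight matrices rather than leaving them random.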
Comparing the human brain with artificial intelligence in the 2000s, the brain was still more powerful than the fastest computers; its complexity has been likened to that of the entire Internet. But the contest is not over: some computer experts estimate that AI will surpass the capabilities of the human brain around 2040. As Stephen Hawking warned, “The development of full artificial intelligence could spell the end of the human race.”
In terms of storing information, AI still has a long way to go before surpassing the human brain, especially considering that DNA can act as a memory medium and holds data far more densely than any computer technology. For now, machine learning is already a practical reality, applied in Siri, Google, Netflix, and Amazon, where it helps companies interpret data, find correlations, and learn from missed forecasts.
Interactions with assistants like Siri are getting close to human, though they still depend on access to vast amounts of information. Technology keeps moving at high speed, and all of us should get ready for the disruptors coming to our world. Improving AI can seem scary, but it is also a great opportunity that humans can use to improve our world.