Month: September 2018

30 Sep 2018

Innovation Is on the Rise Worldwide. How Do You Measure It?

How do you measure innovation?

Thanks to dropping costs, technology has become far more accessible than it used to be, and its proliferation has unleashed the creativity and resourcefulness of people around the world. We know great ideas are coming to life from China to Brazil and everywhere in between. But how do you get a read on the pulse of innovation in a given country across its economy?

A new report from the Singularity University chapter in Kiev, Ukraine, aims to measure the country’s innovation with a broad look at several indicators across multiple sectors of the economy. The authors hope the Ukraine in the Global Innovation Dimension report can serve as a useful guide and an inspiration for those interested in similarly examining progress in their own countries or cities.

Over the 10-year period between 2007 and 2017, the authors looked at overall patenting activity, research in information technologies, international scientific publications by Ukrainian authors, mechanical engineering research, and patenting activity in agriculture, renewable energy, and pharmaceuticals.

Report co-author Igor Novikov said, “We chose agrotech, renewables, and pharma because there’s plenty of hype and media coverage surrounding these spheres, with a common understanding that Ukraine is quite strong in these fields. We wanted our first report to explore whether that in fact is the case.”

The authors used neighboring Poland as a basis of comparison for patenting activity. For perspective, Ukraine has a population of almost 44 million people, while Poland’s population is just over 38 million.

“Poland has strong historic and business connections with Ukraine, and is traditionally viewed as the closest ally and friend,” Novikov said. “Poland went through what Ukraine is going through right now over 27 years ago, and we wanted to see how a similar country, but within the EU market, is doing.”

He added that comparing Ukraine to the US, China, or even Russia wouldn’t be practical, as the countries’ circumstances are drastically different. However, it’s becoming increasingly relevant to track activity in places that aren’t known as innovation hubs.

Silicon Valley’s heyday as the center of all things tech shows signs of coming to an end. But beyond the issues the Valley and its best-known companies have faced, the decentralized, accessible nature of technology itself is also helping democratize innovation. If you have a mobile phone, internet connectivity, and the time and dedication to bring an idea to life, you can do it almost anywhere in the world. Those who stand to benefit most from this wave of nascent innovation are the people farthest removed from traditional tech hubs, in places with local problems that require local solutions.

The authors of the Ukraine report noted that innovation isn’t worth much if it doesn’t catalyze economic growth; the opportunity to commercialize intellectual property is crucial.

Novikov and his coauthors see their report as just the beginning. They plan to delve into additional industries and further examine the factors influencing creativity and inventiveness in Ukraine.

“This report is actually the first part of a series of such studies, our mission being to fully understand the innovation landscape of our country,” Novikov said.

Source: SingularityHub

29 Sep 2018

The Spooky Genius of Artificial Intelligence

Can artificial intelligence be smarter than a person? Answering that question often hinges on the definition of artificial intelligence. But it might make more sense, instead, to focus on defining what we mean by “smart.”

In the 1950s, the psychologist J. P. Guilford divided creative thought into two categories: convergent thinking and divergent thinking. Convergent thinking, which Guilford defined as the ability to answer questions correctly, is predominantly a display of memory and logic. Divergent thinking, the ability to generate many potential answers from a single problem or question, shows a flair for curiosity, an ability to think “outside the box.” It’s the difference between remembering the capital of Austria and figuring out how to start a thriving business in Vienna without knowing a lick of German.

When most people think of AI’s relative strengths over humans, they think of its convergent intelligence. With superior memory capacity and processing power, computers outperform people at rules-based games, complex calculations, and data storage: chess, advanced math, and Jeopardy. What computers lack, some might say, is any form of imagination, or rule-breaking curiosity—that is, divergence.

But what if that common view is wrong? What if AI’s real comparative advantage over humans is precisely its divergent intelligence—its creative potential? That’s the subject of the latest episode of the podcast Crazy/Genius, produced by Kasia Mychajlowycz and Patricia Yacob.

One of the more interesting applications of AI today is a field called generative design, where a machine is fed oodles of data and asked to come up with hundreds or thousands of designs that meet specific criteria. It is, essentially, an exercise in divergent potential.

For example, when the architecture-software firm Autodesk wanted to design a new office, it asked its employees what they wanted from the ideal workplace: How much light? Or privacy? Or open space? Programmers entered these survey responses into the AI, and the generative-design technology produced more than 10,000 different blueprints. Then human architects took their favorite details from these computer-generated designs to build the world’s first large-scale office created using AI.
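To make the generate-then-filter idea concrete, here is a minimal, purely hypothetical sketch in Python: random candidate layouts are produced in bulk and scored against weighted criteria drawn from a survey, and only the best survive for human review. The criteria names, weights, and scoring function are invented for illustration and are not Autodesk’s actual pipeline.

```python
import random

# Hypothetical survey-derived weights (invented for illustration).
CRITERIA_WEIGHTS = {"daylight": 0.5, "privacy": 0.3, "open_space": 0.2}

def random_layout():
    """Generate one candidate design as a random score per criterion."""
    return {name: random.random() for name in CRITERIA_WEIGHTS}

def score(layout):
    """Weighted sum of how well a layout satisfies each criterion."""
    return sum(CRITERIA_WEIGHTS[name] * value for name, value in layout.items())

# Divergent step: generate thousands of candidates.
candidates = [random_layout() for _ in range(10_000)]

# Convergent step: keep only a shortlist for human architects to review.
shortlist = sorted(candidates, key=score, reverse=True)[:10]
print(f"best score: {score(shortlist[0]):.3f}")
```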

“Generative design is like working with an all-powerful, really painfully stupid genie,” said Astro Teller, the head of X, the secret research lab at Google’s parent company Alphabet. That is, it can be both magical and mind-numbingly over-literal. So I asked Teller where companies could use this painfully dense genie. “Everywhere!” he said. Most importantly, generative design could help biologists simulate the effect of new drugs without putting sick humans at risk. By testing thousands of variations of a new medicine in a biological simulator, we could one day design drugs the way we design commercial airplanes—by exhaustively testing their specifications before we put them in the air with several hundred passengers.

AI’s divergent potential is one of the hottest subjects in the field. This spring, several dozen computer scientists published an unusual paper on the history of AI. This paper was not a work of research. It was a collection of stories—some ominous, some hilarious—that showed AI shocking its own designers with its ingenuity. Most of the stories involved a kind of AI called machine learning, where programmers give the computer data and a problem to solve without explicit instructions, in the hopes that the algorithm will figure out how to answer it.

First, an ominous example. One algorithm was supposed to figure out how to land a virtual airplane with minimal force. But the AI soon discovered that if it crashed the plane, the program would register a force so large that it would overwhelm its own memory and count it as a perfect score. So the AI crashed the plane, over and over again, presumably killing all the virtual people on board. This is the sort of nefarious rules-hacking that makes AI alarmists fear that a sentient AI could ultimately destroy mankind. (To be clear, there is a cavernous gap between a simulator snafu and SkyNet.)
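The paper doesn’t publish the offending code, but the failure mode is easy to reproduce in miniature. The sketch below is a hypothetical reconstruction, not the actual simulator: a landing reward that penalizes squared impact force, accumulated in an emulated 32-bit register, wraps around for a sufficiently violent crash and comes out looking like a high score.

```python
def to_int32(value):
    """Emulate a signed 32-bit accumulator that silently wraps on overflow."""
    value &= 0xFFFFFFFF
    return value - 0x100000000 if value >= 0x80000000 else value

def landing_reward(impact_force):
    """Hypothetical reward: the gentler the landing, the closer to zero."""
    penalty = to_int32(impact_force * impact_force)  # wraps for huge impacts
    return -penalty

print(landing_reward(200))     # soft landing: small negative reward (-40000)
print(landing_reward(50_000))  # violent crash: overflow flips the sign (+1794967296)
```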

But the benign examples were just as interesting. In one test of locomotion, a simulated robot was programmed to travel forward as quickly as possible. But instead of building legs and walking, it built itself into a tall tower and fell forward. How is growing tall and falling on your face anything like walking? Well, both cover a horizontal distance pretty quickly. And the AI took its task very, very literally.

According to Janelle Shane, a research scientist who publishes a website about artificial intelligence, there is an eerie genius to this forward-falling strategy. “After I had posted [this paper] online, I heard from some biologists who said, ‘Oh yeah, wheat uses this strategy to propagate!’” she told me. “At the end of each season, these tall stalks of wheat fall over, and their seeds land just a little bit farther from where the wheat stalk heads started.”

From the perspective of the computer programmer, the AI failed to walk. But from the perspective of the AI, it rapidly mutated in a simulated environment to discover something that took wheat stalks millions of years to learn: Why walk, when you can just fall? A relatable sentiment.

The stories in this paper are not just evidence of the dim-wittedness of artificial intelligence. In fact, they are evidence of the opposite: A divergent intelligence that mimics biology. “These anecdotes thus serve as evidence that evolution, whether biological or computational, is inherently creative and should routinely be expected to surprise, delight, and even outwit us,” the lead authors write in the conclusion. Sometimes, a machine is more clever than its makers.

This is not to say that AI displays what psychologists would call human creativity. These machines cannot turn themselves on, or become self-motivated, or ask alternate questions, or even explain their discoveries. Without consciousness or comprehension, a creature cannot be truly creative.

But if AI, and machine learning in particular, does not think as a person does, perhaps it’s more accurate to say it evolves, as an organism can. Consider the familiar two-step of evolution. With mutation, genes diverge from their preexisting structure. With natural selection, organisms converge on the mutation best adapted to their environment. Thus, evolutionary biology displays a divergent and convergent intelligence that is a far better metaphor for the process of machine learning, like generative design, than the tangle of human thought.
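As a rough illustration of that two-step (and only an illustration, not the systems described above), a toy evolutionary loop in Python might look like the following: mutation diverges from the current population, and selection converges on whatever the fitness function rewards. All names and parameters here are invented for the sketch.

```python
import random

def evolve(fitness, genome_len=8, pop_size=50, generations=200):
    """Toy evolutionary loop: mutate (diverge), then select (converge)."""
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Divergent step: every individual produces a randomly mutated offspring.
        offspring = [[gene + random.gauss(0, 0.1) for gene in ind] for ind in population]
        # Convergent step: keep only the fittest individuals from parents + offspring.
        population = sorted(population + offspring, key=fitness, reverse=True)[:pop_size]
    return population[0]

# Toy fitness: "grow as tall as possible" -- maximize the sum of the genome.
best = evolve(lambda ind: sum(ind))
print(round(sum(best), 2))
```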

AI might not be “smart” in a human sense of the word. But it has already shown that it can perform an eerie simulation of evolution. And that is a spooky kind of genius.

Source: The Atlantic

26 Sep 2018

Disruptive technology switches sides

Disruptive technology has become a popular catchphrase since the term was coined just over 20 years ago. And, to be fair, it has done a lot in those 20 years, shaking up and dramatically reshaping the world around us. The world in which millennials have grown up looks very different to that inhabited by their parents and grandparents.

Semiconductors and computing are predominantly responsible for the changed landscape. The picture keeps changing, however. The Internet of Things (IoT) is the latest kid on the block. While many column inches are devoted to that technology in the consumer press, it has also made heavy inroads into the world of business and, increasingly, into manufacturing.

That’s somewhat ironic: normally it works the other way round, with business as the early developer and adopter of technology and the consumer market following behind, adding volume sales and bringing down costs through sheer economies of scale.

IoT moves to center stage

Earlier this year, the respected market research company Forrester predicted: “IoT is likely to become more specialized in the coming year, moving away from generic hardware and software into platforms designed for specific industries.” The report added: “As the IoT industry continues to grow, you won’t need to be generic to achieve economies of scale” and that “more and more of IoT connectivity and integrations will happen in the cloud…. At the same time, however, in an effort to cut costs and trim latency, IoT data processing and analysis will also move from the core to the edge of the network.”

Of course, that’s precisely what’s already happening with the Industrial Internet of Things (IIoT), where low latency is vital and failsafe data transfer is key. LNS Research predicted late last year that 2018 would be the year in which “inter-cloud connectivity will become both a requirement and reality,” with each end user having “multiple platforms for multiple use cases.”

There’s no doubt that IoT has genuinely disrupted both consumer and industrial markets. Traditionally a marketplace in which technology was developed slowly and carefully, with plenty of time for testing new developments, the industrial sector is rapidly learning to become more consumer-like. This means there’s increased pressure on manufacturers to develop lower-cost devices, and to develop them faster. That puts demands on original equipment manufacturers (OEMs) too.

Disruptive technologies bring many benefits to those early adopters who are either smart or lucky enough to see what’s on the horizon. Increasingly, the marketplace favors those who are quick on their feet. To derive maximum benefit, suppliers need to be flexible in their engineering, responsive to the needs of their customers, and able to keep development costs down.

Standard products are no longer necessarily the answer. There are now so many different product variants required, depending on customer needs, that custom mixed-signal integrated circuits (ICs) will often fit the bill much better than trying to customize a standard product, and often for little if any additional cost. The picture is further complicated by the (frankly bewildering) range of wireless technologies and quasi-standards available, with the proprietary Sigfox and LoRa competing with LTE and other cellular offerings.

By choosing the right supplier, with a wealth of application expertise, customers can often reduce their bill of materials (BoM) significantly. S3 Semiconductors, for instance, offers the SmartEdge platform that allows customers essentially to mix and match their requirements.

It’s not necessarily easy to find an off-the-shelf component that provides the right networking technology coupled with the calibration, control, and security functions your application demands. It can be much easier to have your own custom application-specific IC (ASIC) made for you. Overall performance will be better, power consumption lower, and the device will occupy less space on the printed circuit board (PCB).

What’s not to like about that? You can be secure in the knowledge that you’re keeping ahead of the curve, as well as being fast to market.

Tommy Mullane is a Senior Systems Architect at S3 Semiconductors, a division of Adesto. He received a B.E. in Electronic Engineering from the National University of Ireland, Dublin (UCD) in 1997. He has worked in research on optoelectronic devices and received a master’s in technology management from UCD in 2006. From 2000 to 2014, he worked for a Dublin-based start-up called Intune Networks on next-generation optical telecommunication systems, working in a variety of technical disciplines, from optics to chip design, software, and systems. He holds five patents and has published a number of papers.

Source: http://www.embedded-computing.com/iot/disruptive-technology-switches-sides

25 Sep 2018

AI Detects Depression in Conversation

Summary: Researchers at MIT have developed a new deep learning neural network that can identify speech patterns indicative of depression from audio data. The algorithm, researchers say, is 77% effective at detecting depression.

Source: MIT.

To diagnose depression, clinicians interview patients, asking specific questions (about, say, past mental illnesses, lifestyle, and mood) and identify the condition based on the patient’s responses.

In recent years, machine learning has been championed as a useful aid for diagnostics. Machine-learning models, for instance, have been developed that can detect words and intonations of speech that may indicate depression. But these models tend to predict whether a person is depressed based on that person’s specific answers to specific questions. These methods are accurate, but their reliance on the type of question being asked limits how and where they can be used.

In a paper being presented at the Interspeech conference, MIT researchers detail a neural-network model that can be unleashed on raw text and audio data from interviews to discover speech patterns indicative of depression. Given a new subject, it can accurately predict if the individual is depressed, without needing any other information about the questions and answers.

The researchers hope this method can be used to develop tools to detect signs of depression in natural conversation. In the future, the model could, for instance, power mobile apps that monitor a user’s text and voice for mental distress and send alerts. This could be especially useful for those who can’t get to a clinician for an initial diagnosis, due to distance, cost, or a lack of awareness that something may be wrong.

“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” says first author Tuka Alhanai, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you want to deploy [depression-detection] models in scalable way … you want to minimize the amount of constraints you have on the data you’re using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual.”

The technology could still, of course, be used for identifying mental distress in casual conversations in clinical offices, adds co-author James Glass, a senior research scientist in CSAIL. “Every patient will talk differently, and if the model sees changes maybe it will be a flag to the doctors,” he says. “This is a step forward in seeing if we can do something assistive to help clinicians.”

The other co-author on the paper is Mohammad Ghassemi, a member of the Institute for Medical Engineering and Science (IMES).

Context-free modeling

The key innovation of the model lies in its ability to detect patterns indicative of depression, and then map those patterns to new individuals, with no additional information. “We call it ‘context-free,’ because you’re not putting any constraints into the types of questions you’re looking for and the type of responses to those questions,” Alhanai says.

Other models are provided with a specific set of questions, and then given examples of how a person without depression responds and examples of how a person with depression responds (for example, to the straightforward inquiry, “Do you have a history of depression?”). Such a model uses those exact responses to determine whether a new individual is depressed when asked the exact same question. “But that’s not how natural conversations work,” Alhanai says.

The researchers, on the other hand, used a technique called sequence modeling, often used for speech processing. With this technique, they fed the model sequences of text and audio data from questions and answers, from both depressed and non-depressed individuals, one by one. As the sequences accumulated, the model extracted speech patterns that emerged for people with or without depression. Words such as, say, “sad,” “low,” or “down,” may be paired with audio signals that are flatter and more monotone. Individuals with depression may also speak slower and use longer pauses between words. These text and audio identifiers for mental distress have been explored in previous research. It was ultimately up to the model to determine if any patterns were predictive of depression or not.
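The paper’s exact architecture isn’t reproduced here, but the general shape of such a sequence model is easy to sketch. The PyTorch snippet below is an assumption-laden illustration, not the authors’ model: an LSTM reads a sequence of per-answer feature vectors (text or audio embeddings) and emits a single depressed/not-depressed probability for the whole interview. The class name, feature dimension, and hidden size are invented for the example.

```python
import torch
import torch.nn as nn

class InterviewClassifier(nn.Module):
    """Illustrative sequence model (not the authors' exact architecture)."""
    def __init__(self, feature_dim=128, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, features):  # features: (batch, time, feature_dim)
        _, (final_hidden, _) = self.lstm(features)
        # The last hidden state summarizes the whole question-answer sequence.
        return torch.sigmoid(self.head(final_hidden[-1]))  # P(depressed)

# Toy usage: a batch of 4 interviews, each represented by 30 feature vectors.
model = InterviewClassifier()
print(model(torch.randn(4, 30, 128)).shape)  # torch.Size([4, 1])
```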

“The model sees sequences of words or speaking style, and determines that these patterns are more likely to be seen in people who are depressed or not depressed,” Alhanai says. “Then, if it sees the same sequences in new subjects, it can predict if they’re depressed too.”

This sequencing technique also helps the model look at the conversation as a whole and note differences between how people with and without depression speak over time.

Detecting depression

The researchers trained and tested their model on a dataset of 142 interactions from the Distress Analysis Interview Corpus, which contains audio, text, and video interviews of patients with mental-health issues and virtual agents controlled by humans. Each subject is rated for depression on a scale of 0 to 27, using the Personal Health Questionnaire. Subjects scoring above a cutoff that falls between the moderate (10 to 14) and moderately severe (15 to 19) ranges are labeled depressed; all others are labeled not depressed. Out of all the subjects in the dataset, 28 (20 percent) are labeled as depressed.
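Expressed as code, the labeling rule amounts to a simple threshold on the questionnaire score. The cutoff of 15 used below is an assumption inferred from the ranges quoted above, not a figure confirmed by the paper.

```python
DEPRESSION_CUTOFF = 15  # assumed boundary between "moderate" (10-14) and "moderately severe" (15-19)

def label_subject(phq_score: int) -> str:
    """Binary label derived from the 0-27 questionnaire score."""
    return "depressed" if phq_score >= DEPRESSION_CUTOFF else "not depressed"

print(label_subject(8), "|", label_subject(17))  # not depressed | depressed
```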

In experiments, the model was evaluated using the metrics of precision and recall. Precision measures what fraction of the subjects the model identified as depressed were actually diagnosed as depressed. Recall measures what fraction of all subjects diagnosed as depressed in the entire dataset the model detected. The model scored 71 percent on precision and 83 percent on recall. The averaged combined score for those metrics, accounting for any errors, was 77 percent. In the majority of tests, the researchers’ model outperformed nearly all other models.
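As a check on the arithmetic, both the simple average and the harmonic mean (the F1 score) of the reported precision and recall land at roughly 77 percent, so either reading of “averaged combined score” is consistent with the figures above.

```python
precision, recall = 0.71, 0.83

simple_average = (precision + recall) / 2
f1_score = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"simple average: {simple_average:.2f}")  # 0.77
print(f"F1 score:       {f1_score:.2f}")        # 0.77
```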


One key insight from the research, Alhanai notes, is that, during experiments, the model needed much more data to predict depression from audio than text. With text, the model can accurately detect depression using an average of seven question-answer sequences. With audio, the model needed around 30 sequences. “That implies that the patterns in words people use that are predictive of depression happen in shorter time span in text than in audio,” Alhanai says. Such insights could help the MIT researchers, and others, further refine their models.

This work represents a “very encouraging” pilot, Glass says. But now the researchers seek to discover what specific patterns the model identifies across scores of raw data. “Right now it’s a bit of a black box,” Glass says. “These systems, however, are more believable when you have an explanation of what they’re picking up. … The next challenge is finding out what data it’s seized upon.”

Source: NeuroScienceNews