Month: September 2018

30 Sep 2018

Innovation Is on the Rise Worldwide. How Do You Measure It?

How do you measure innovation?

Thanks to dropping costs, technology has become far more accessible than it used to be, and its proliferation has unleashed the creativity and resourcefulness of people around the world. We know great ideas are coming to life from China to Brazil and everywhere in between. But how do you get a read on the pulse of innovation in a given country across its economy?

A new report from the Singularity University chapter in Kiev, Ukraine, aims to measure the country’s innovation with a broad look at several indicators across multiple sectors of the economy. The authors hope the Ukraine in the Global Innovation Dimension report can serve as a useful guide and an inspiration for those interested in similarly examining progress in their own countries or cities.

Over the 10-year period between 2007 and 2017, the authors looked at overall patenting activity, research in information technologies, international scientific publications by Ukrainian authors, mechanical engineering research, and patenting activity in agriculture, renewable energy, and pharmaceuticals.

Report co-author Igor Novikov said, “We chose agrotech, renewables, and pharma because there’s plenty of hype and media coverage surrounding these spheres, with a common understanding that Ukraine is quite strong in these fields. We wanted our first report to explore whether that in fact is the case.”

The authors used neighboring Poland as a basis of comparison for patenting activity. For perspective, Ukraine has a population of almost 44 million people, while Poland’s population is just over 38 million.

“Poland has strong historic and business connections with Ukraine, and is traditionally viewed as the closest ally and friend,” Novikov said. “Poland went through what Ukraine is going through right now over 27 years ago, and we wanted to see how a similar country, but within the EU market, is doing.”

He added that comparing Ukraine to the US, China, or even Russia wouldn’t be practical, as the countries’ circumstances are drastically different. However, it’s becoming more and more relevant to account for and be aware of activity in places that aren’t known as innovation hubs.

Silicon Valley’s heyday as the center of all things tech shows signs of coming to an end. But beyond the issues the Valley and its best-known companies have faced, the decentralized, accessible nature of technology itself is also helping democratize innovation. If you have a mobile phone, internet connectivity, and the time and dedication to bring an idea to life, you can do it almost anywhere in the world. Those who stand to benefit most from this wave of nascent innovation are the people farthest removed from traditional tech hubs, in places with local problems that require local solutions.

The authors of the Ukraine report noted that innovation isn’t worth much if it doesn’t catalyze economic growth; the opportunity to commercialize intellectual property is crucial.

Novikov and his coauthors see their report as just the beginning. They plan to delve into additional industries and further examine the factors influencing creativity and inventiveness in Ukraine.

“This report is actually the first part of a series of such studies, our mission being to fully understand the innovation landscape of our country,” Novikov said.

Source: SingularityHub

29 Sep 2018

The Spooky Genius of Artificial Intelligence

Can artificial intelligence be smarter than a person? Answering that question often hinges on the definition of artificial intelligence. But it might make more sense, instead, to focus on defining what we mean by “smart.”

In the 1950s, the psychologist J. P. Guilford divided creative thought into two categories: convergent thinking and divergent thinking. Convergent thinking, which Guilford defined as the ability to answer questions correctly, is predominantly a display of memory and logic. Divergent thinking, the ability to generate many potential answers from a single problem or question, shows a flair for curiosity, an ability to think “outside the box.” It’s the difference between remembering the capital of Austria and figuring out how to start a thriving business in Vienna without knowing a lick of German.

When most people think of AI’s relative strengths over humans, they think of its convergent intelligence. With superior memory capacity and processing power, computers outperform people at rules-based games, complex calculations, and data storage: chess, advanced math, and Jeopardy. What computers lack, some might say, is any form of imagination, or rule-breaking curiosity—that is, divergence.

But what if that common view is wrong? What if AI’s real comparative advantage over humans is precisely its divergent intelligence—its creative potential? That’s the subject of the latest episode of the podcast Crazy/Genius, produced by Kasia Mychajlowycz and Patricia Yacob.

One of the more interesting applications of AI today is a field called generative design, where a machine is fed oodles of data and asked to come up with hundreds or thousands of designs that meet specific criteria. It is, essentially, an exercise in divergent potential.

For example, when the architecture-software firm Autodesk wanted to design a new office, it asked its employees what they wanted from the ideal workplace: How much light? Or privacy? Or open space? Programmers entered these survey responses into the AI, and the generative-design technology produced more than 10,000 different blueprints. Then human architects took their favorite details from these computer-generated designs to build the world’s first large-scale office created using AI.
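The generative-design loop described above can be sketched in a few lines. This is a toy illustration only: the criteria, the preference weights, and the brute-force random search below are invented stand-ins for Autodesk's survey data and its far more sophisticated optimizer.

```python
import random

# Toy sketch of generative design: generate thousands of candidate
# layouts, score each against the stated criteria, and keep the best.
# Criteria and weights are hypothetical stand-ins for survey data.

random.seed(0)  # reproducible example

def random_layout() -> dict:
    """One candidate office layout, each criterion scored 0..1."""
    return {
        "light": random.uniform(0, 1),
        "privacy": random.uniform(0, 1),
        "open_space": random.uniform(0, 1),
    }

# Weights standing in for aggregated employee survey preferences.
PREFERENCES = {"light": 0.5, "privacy": 0.2, "open_space": 0.3}

def score(layout: dict) -> float:
    """Weighted sum of how well a layout satisfies each preference."""
    return sum(PREFERENCES[k] * layout[k] for k in PREFERENCES)

candidates = [random_layout() for _ in range(10_000)]
best = max(candidates, key=score)
print(f"best of 10,000 candidates scores {score(best):.3f}")
```

In the real workflow, human architects then cherry-pick details from the top-ranked designs rather than accepting any single machine output wholesale.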

“Generative design is like working with an all-powerful, really painfully stupid genie,” said Astro Teller, the head of X, the secret research lab at Google’s parent company Alphabet. That is, it can be both magical and mind-numbingly over-literal. So I asked Teller where companies could use this painfully dense genie. “Everywhere!” he said. Most importantly, generative design could help biologists simulate the effect of new drugs without putting sick humans at risk. By testing thousands of variations of a new medicine in a biological simulator, we could one day design drugs the way we design commercial airplanes—by exhaustively testing their specifications before we put them in the air with several hundred passengers.

AI’s divergent potential is one of the hottest subjects in the field. This spring, several dozen computer scientists published an unusual paper on the history of AI. This paper was not a work of research. It was a collection of stories—some ominous, some hilarious—that showed AI shocking its own designers with its ingenuity. Most of the stories involved a kind of AI called machine learning, where programmers give the computer data and a problem to solve without explicit instructions, in the hopes that the algorithm will figure out how to answer it.

First, an ominous example. One algorithm was supposed to figure out how to land a virtual airplane with minimal force. But the AI soon discovered that if it crashed the plane, the program would register a force so large that it would overwhelm its own memory and count it as a perfect score. So the AI crashed the plane, over and over again, presumably killing all the virtual people on board. This is the sort of nefarious rules-hacking that makes AI alarmists fear that a sentient AI could ultimately destroy mankind. (To be clear, there is a cavernous gap between a simulator snafu and SkyNet.)
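The crash exploit is easy to recreate in miniature. The snippet below is a hypothetical reconstruction of the bug's logic, not the original simulator: a landing force stored in a 32-bit signed integer wraps around to a large negative value, so an objective that minimizes force prefers the crash.

```python
# Toy recreation (hypothetical, not the actual simulator code) of the
# reward hack described above: the objective is "minimize landing
# force," but the force register is a 32-bit signed integer. A
# catastrophic crash produces a force so large it wraps around to a
# negative number, which the scorer reads as better than any gentle
# landing.

def to_int32(value: int) -> int:
    """Emulate 32-bit signed integer wraparound (two's complement)."""
    value &= 0xFFFFFFFF
    return value - 0x1_0000_0000 if value >= 0x8000_0000 else value

def landing_score(force_newtons: int) -> int:
    # Lower is better: the optimizer is told to minimize landing force.
    return to_int32(force_newtons)

gentle_landing = landing_score(500)    # a soft touchdown
crash = landing_score(3_000_000_000)   # overflows to a negative value

# The buggy objective now prefers the crash over the gentle landing.
assert crash < gentle_landing
print(f"gentle: {gentle_landing}, crash: {crash}")
```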

But the benign examples were just as interesting. In one test of locomotion, a simulated robot was programmed to travel forward as quickly as possible. But instead of building legs and walking, it built itself into a tall tower and fell forward. How is growing tall and falling on your face anything like walking? Well, both cover a horizontal distance pretty quickly. And the AI took its task very, very literally.

According to Janelle Shane, a research scientist who publishes a website about artificial intelligence, there is an eerie genius to this forward-falling strategy. “After I had posted [this paper] online, I heard from some biologists who said, ‘Oh yeah, wheat uses this strategy to propagate!’” she told me. “At the end of each season, these tall stalks of wheat fall over, and their seeds land just a little bit farther from where the wheat stalk heads started.”

From the perspective of the computer programmer, the AI failed to walk. But from the perspective of the AI, it rapidly mutated in a simulated environment to discover something which had taken wheat stalks millions of years to learn: Why walk, when you can just fall? A relatable sentiment.

The stories in this paper are not just evidence of the dim-wittedness of artificial intelligence. In fact, they are evidence of the opposite: A divergent intelligence that mimics biology. “These anecdotes thus serve as evidence that evolution, whether biological or computational, is inherently creative and should routinely be expected to surprise, delight, and even outwit us,” the lead authors write in the conclusion. Sometimes, a machine is more clever than its makers.

This is not to say that AI displays what psychologists would call human creativity. These machines cannot turn themselves on, or become self-motivated, or ask alternate questions, or even explain their discoveries. Without consciousness or comprehension, a creature cannot be truly creative.

But if AI, and machine learning in particular, does not think as a person does, perhaps it’s more accurate to say it evolves, as an organism can. Consider the familiar two-step of evolution. With mutation, genes diverge from their preexisting structure. With natural selection, organisms converge on the mutation best adapted to their environment. Thus, evolutionary biology displays a divergent and convergent intelligence that is a far better metaphor for the process of machine learning, like generative design, than the tangle of human thought.

AI might not be “smart” in a human sense of the word. But it has already shown that it can perform an eerie simulation of evolution. And that is a spooky kind of genius.

Source: Theatlantic

26 Sep 2018

Disruptive technology switches sides

Disruptive technology has become a popular catchphrase since the term was coined just over 20 years ago. And, to be fair, the technology has lived up to the name in those 20 years, shaking up and dramatically reshaping the world around us. The world in which millennials have grown up looks very different to that inhabited by their parents and grandparents.

Semiconductors and computing are predominantly responsible for the changed landscape. The picture keeps changing, however. The Internet of Things (IoT) is the latest kid on the block. While many column inches are devoted to that technology in the consumer press, it has also made heavy inroads into the world of business and, increasingly, into manufacturing.

That’s somewhat ironic – normally it works the other way round: business is the early developer and adopter of technology. The consumer market follows behind, adding volume sales and bringing down the cost through sheer economy of scale.

IoT moves to center stage

Earlier this year, respected market research company, Forrester, predicted: “IoT is likely to become more specialized in the coming year, moving away from generic hardware and software into platforms designed for specific industries.” The report added: “As the IoT industry continues to grow, you won’t need to be generic to achieve economies of scale” and that “more and more of IoT connectivity and integrations will happen in the cloud…. At the same time, however, in an effort to cut costs and trim latency, IoT data processing and analysis will also move from the core to the edge of the network.”

Of course, that’s precisely what’s already happening with the Industrial Internet of Things (IIoT), where low latency is vital and failsafe data transfer is key. LNS Research predicted late last year that 2018 would be the year in which “inter-cloud connectivity will become both a requirement and reality,” with each end user having “multiple platforms for multiple use cases.”

There’s no doubt that IoT has genuinely disrupted both consumer and industrial markets. From having traditionally been a marketplace in which technology was developed slowly and carefully, with plenty of time for testing new developments, the industrial marketplace is rapidly learning to become more consumer-like and consumer-based. This means that there’s increased pressure on manufacturers to develop lower-cost devices – and to develop them faster. That puts demands on original equipment manufacturers (OEMs) too.

Disruptive technologies bring many benefits to those early adopters who are either smart or lucky enough to see what’s on the horizon. Increasingly the marketplace is favoring those who are quick on their feet. To derive maximum benefit, suppliers need to be flexible in terms of their engineering, responsive to the needs of their customers and to be able to keep development costs down.

Standard products are no longer necessarily the answer. There are now so many different variants of products required, depending on customer needs, that custom mixed-signal integrated circuits (ICs) will often fit the bill much better than trying to customize a standard product and for little if any additional cost. The picture is further complicated by the (frankly bewildering) range of wireless technologies and quasi-standards available, with the proprietary Sigfox and LoRa competing with LTE and other cellular offerings.

By choosing the right supplier, with a wealth of application expertise, customers can often reduce their bill of materials (BoM) significantly. S3 Semiconductors, for instance, offers the SmartEdge platform that allows customers essentially to mix and match their requirements.

It’s not necessarily easy to find a component off-the-shelf that will provide the right networking technology coupled with the calibration, control and security functions that your application demands. It can be much easier to have your own custom application specific IC (ASIC) made up for you. Overall performance will be better, power consumption lower and the device will occupy less space on the printed circuit board (PCB).

What’s not to like about that? You can be secure in the knowledge that you’re keeping ahead of the curve, as well as being fast to market.

Tommy Mullane is a Senior Systems Architect at S3 Semiconductors, a division of Adesto. He received the B.E. degree in Electronic Engineering from the National University of Ireland, Dublin (UCD) in 1997. He has worked in research in optoelectronic devices and received a master’s in technology management from UCD in 2006. From 2000 to 2014, he worked for a Dublin-based start-up called Intune Networks on next-generation optical telecommunication systems, working in a variety of technical disciplines, from optics to chip design, software and systems. He holds five patents and has published a number of papers.


25 Sep 2018
Manahel Thabet

AI Detects Depression in Conversation

Summary: Researchers at MIT have developed a new deep learning neural network that can identify speech patterns indicative of depression from audio data. The algorithm, researchers say, is 77% effective at detecting depression.

Source: MIT.

To diagnose depression, clinicians interview patients, asking specific questions — about, say, past mental illnesses, lifestyle, and mood — and identify the condition based on the patient’s responses.

In recent years, machine learning has been championed as a useful aid for diagnostics. Machine-learning models, for instance, have been developed that can detect words and intonations of speech that may indicate depression. But these models tend to predict that a person is depressed or not, based on the person’s specific answers to specific questions. These methods are accurate, but their reliance on the type of question being asked limits how and where they can be used.

In a paper being presented at the Interspeech conference, MIT researchers detail a neural-network model that can be unleashed on raw text and audio data from interviews to discover speech patterns indicative of depression. Given a new subject, it can accurately predict if the individual is depressed, without needing any other information about the questions and answers.

The researchers hope this method can be used to develop tools to detect signs of depression in natural conversation. In the future, the model could, for instance, power mobile apps that monitor a user’s text and voice for mental distress and send alerts. This could be especially useful for those who can’t get to a clinician for an initial diagnosis, due to distance, cost, or a lack of awareness that something may be wrong.

“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” says first author Tuka Alhanai, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you want to deploy [depression-detection] models in scalable way … you want to minimize the amount of constraints you have on the data you’re using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual.”

The technology could still, of course, be used for identifying mental distress in casual conversations in clinical offices, adds co-author James Glass, a senior research scientist in CSAIL. “Every patient will talk differently, and if the model sees changes maybe it will be a flag to the doctors,” he says. “This is a step forward in seeing if we can do something assistive to help clinicians.”

The other co-author on the paper is Mohammad Ghassemi, a member of the Institute for Medical Engineering and Science (IMES).

Context-free modeling

The key innovation of the model lies in its ability to detect patterns indicative of depression, and then map those patterns to new individuals, with no additional information. “We call it ‘context-free,’ because you’re not putting any constraints into the types of questions you’re looking for and the type of responses to those questions,” Alhanai says.

Other models are provided with a specific set of questions, and then given examples of how a person without depression responds and examples of how a person with depression responds — for example, the straightforward inquiry, “Do you have a history of depression?” They use those exact responses to determine if a new individual is depressed when asked the exact same question. “But that’s not how natural conversations work,” Alhanai says.

The researchers, on the other hand, used a technique called sequence modeling, often used for speech processing. With this technique, they fed the model sequences of text and audio data from questions and answers, from both depressed and non-depressed individuals, one by one. As the sequences accumulated, the model extracted speech patterns that emerged for people with or without depression. Words such as, say, “sad,” “low,” or “down,” may be paired with audio signals that are flatter and more monotone. Individuals with depression may also speak slower and use longer pauses between words. These text and audio identifiers for mental distress have been explored in previous research. It was ultimately up to the model to determine if any patterns were predictive of depression or not.

“The model sees sequences of words or speaking style, and determines that these patterns are more likely to be seen in people who are depressed or not depressed,” Alhanai says. “Then, if it sees the same sequences in new subjects, it can predict if they’re depressed too.”

This sequencing technique also helps the model look at the conversation as a whole and note differences between how people with and without depression speak over time.
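The idea of folding evidence from one turn after another into a single judgment can be illustrated with a hand-rolled toy. This is emphatically not the MIT neural network: the cue words, thresholds, and weights below are invented for the example, and a real sequence model learns its own features rather than using hard-coded ones.

```python
# Toy stand-in for sequence modeling over an interview: fold each
# question-answer turn's text cues (word choice) and audio cues
# (flat pitch, long pauses) into a running score. All cue words,
# thresholds, and weights here are illustrative assumptions only.

CUE_WORDS = {"sad", "low", "down", "tired", "empty"}

def turn_features(text: str, pitch_variance: float, pause_sec: float):
    """Extract crude text and audio features from one answer."""
    words = text.lower().split()
    cue_rate = sum(w.strip(".,") in CUE_WORDS for w in words) / max(len(words), 1)
    monotone = 1.0 if pitch_variance < 20.0 else 0.0  # flat, monotone speech
    long_pauses = 1.0 if pause_sec > 1.5 else 0.0     # slow, halting delivery
    return cue_rate, monotone, long_pauses

def running_score(turns) -> float:
    """Average evidence over (text, pitch_variance, pause_sec) turns."""
    score = 0.0
    for text, pitch_var, pause in turns:
        cue, mono, pauses = turn_features(text, pitch_var, pause)
        score += 2.0 * cue + 0.5 * mono + 0.5 * pauses
    return score / max(len(turns), 1)

interview = [
    ("I have been feeling pretty low and tired lately", 12.0, 2.1),
    ("Mostly I just stay home", 15.0, 1.8),
]
print(f"risk score: {running_score(interview):.2f}")
```

The actual model learns which patterns matter from the data itself, but the shape of the computation (turn-by-turn features accumulated over a whole conversation) is the same.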

Detecting depression

The researchers trained and tested their model on a dataset of 142 interactions from the Distress Analysis Interview Corpus, which contains audio, text, and video interviews of patients with mental-health issues and virtual agents controlled by humans. Each subject is rated for depression on a scale of 0 to 27 using the Personal Health Questionnaire. Scores above a cutoff that falls between the “moderate” (10 to 14) and “moderately severe” (15 to 19) ranges are labeled depressed, while all scores below that threshold are labeled not depressed. Of all the subjects in the dataset, 28 (20 percent) are labeled as depressed.

In experiments, the model was evaluated using the metrics of precision and recall. Precision measures the fraction of subjects the model identified as depressed who had actually been diagnosed as depressed. Recall measures the fraction of all subjects diagnosed as depressed in the entire dataset that the model managed to detect. The model scored 71 percent on precision and 83 percent on recall; the averaged combined score for those metrics, which accounts for both kinds of error, was 77 percent. In the majority of tests, the researchers’ model outperformed nearly all other models.
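The 77 percent figure is consistent with the F1 score, the harmonic mean of precision and recall (an assumption on our part; the paper's exact averaging may differ):

```python
# Check that precision 0.71 and recall 0.83 combine to roughly 0.77
# under the F1 score (harmonic mean), the standard combined metric.

def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f"F1 = {f1_score(0.71, 0.83):.2f}")
```

The harmonic mean penalizes an imbalance between the two metrics, so a model cannot score well by maximizing one at the expense of the other.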

MIT researchers have developed a neural-network model that can analyze raw text and audio data from interviews to discover speech patterns indicative of depression. This method could be used to develop diagnostic aids for clinicians that can detect signs of depression in natural conversation. (Image adapted from the MIT news release.)

One key insight from the research, Alhanai notes, is that, during experiments, the model needed much more data to predict depression from audio than text. With text, the model can accurately detect depression using an average of seven question-answer sequences. With audio, the model needed around 30 sequences. “That implies that the patterns in words people use that are predictive of depression happen in shorter time span in text than in audio,” Alhanai says. Such insights could help the MIT researchers, and others, further refine their models.

This work represents a “very encouraging” pilot, Glass says. But now the researchers seek to discover what specific patterns the model identifies across scores of raw data. “Right now it’s a bit of a black box,” Glass says. “These systems, however, are more believable when you have an explanation of what they’re picking up. … The next challenge is finding out what data it’s seized upon.”

Source: NeuroScienceNews

24 Sep 2018

“Synthetic Skin” Could Give Prosthesis Users a Superhuman Sense of Touch


Today’s prosthetics can give people with missing limbs the ability to do almost anything — run marathons, climb mountains, you name it. But when it comes to letting those people feel what they could with a natural limb, the devices, however mechanically sophisticated, invariably fall short.

Now researchers have created a “synthetic skin” with a sense of touch that not only matches the sensitivity of natural skin, but in some cases even exceeds it. The only remaining challenge is getting that information back into the wearer’s nervous system.


When something presses against your skin, your nerves receive and transmit that pressure to the brain in the form of electrical signals.

To mimic that biological process, the researchers suspended a flexible polymer, dusted with magnetic particles, over a magnetic sensor. The effect is like a drum: Applying even the tiniest amount of pressure to the membrane causes the magnetic particles to move closer to the sensors, and they transmit this movement electronically.

The research, which could open the door to super-sensitive prosthetics, was published Wednesday in the journal Science Robotics.


Tests show that the skin can sense extremely subtle pressure, such as a blowing breeze, dripping water, or crawling ants. In some cases, the synthetic skin responded to pressures so gentle that natural human skin wouldn’t be able to detect them.

While the sensing ability of this synthetic skin is remarkable, the team’s research doesn’t address how to transmit the signals to the human brain. Other scientists are working on that, though, so eventually this synthetic skin could give prosthetic wearers the ability to feel forces even their biological-limbed friends can’t detect.

Source: Futurism

23 Sep 2018

Spray-on antennas could unlock potential of smart, connected technology

Drexel researchers develop antennas made from MXene ‘spray paint’

September 21, 2018
Drexel University
Engineering researchers report a method for spraying invisibly thin antennas, made from a type of two-dimensional, metallic material called MXene, that perform as well as those being used in mobile devices, wireless routers and portable transducers.

The promise of wearables, functional fabrics, the Internet of Things, and their “next-generation” technological cohort seems tantalizingly within reach. But researchers in the field will tell you a prime reason for their delayed “arrival” is the problem of seamlessly integrating connection technology — namely, antennas — with shape-shifting and flexible “things.”

But a breakthrough by researchers in Drexel’s College of Engineering could now make installing an antenna as easy as applying some bug spray.

In research recently published in Science Advances, the group reports on a method for spraying invisibly thin antennas, made from a type of two-dimensional, metallic material called MXene, that perform as well as those being used in mobile devices, wireless routers and portable transducers.

“This is a very exciting finding because there is a lot of potential for this type of technology,” said Kapil Dandekar, PhD, a professor of Electrical and Computer Engineering in the College of Engineering, who directs the Drexel Wireless Systems Lab, and was a co-author of the research. “The ability to spray an antenna on a flexible substrate or make it optically transparent means that we could have a lot of new places to set up networks — there are new applications and new ways of collecting data that we can’t even imagine at the moment.”

The researchers, from the College’s Department of Materials Science and Engineering, report that the MXene titanium carbide can be dissolved in water to create an ink or paint. The exceptional conductivity of the material enables it to transmit and direct radio waves, even when it’s applied in a very thin coating.

“We found that even transparent antennas with thicknesses of tens of nanometers were able to communicate efficiently,” said Asia Sarycheva, a doctoral candidate in the A.J. Drexel Nanomaterials Institute and Materials Science and Engineering Department. “By increasing the thickness up to 8 microns, the performance of MXene antenna achieved 98 percent of its predicted maximum value.”

Preserving transmission quality in a form this thin is significant because it would allow antennas to easily be embedded — literally, sprayed on — in a wide variety of objects and surfaces without adding additional weight or circuitry or requiring a certain level of rigidity.

“This technology could enable the truly seamless integration of antennas with everyday objects which will be critical for the emerging Internet of Things,” Dandekar said. “Researchers have done a lot of work with non-traditional materials trying to figure out where manufacturing technology meets system needs, but this technology could make it a lot easier to answer some of the difficult questions we’ve been working on for years.”

Initial testing of the sprayed antennas suggests that they can perform with the same range of quality as current antennas, which are made from familiar metals like gold, silver, copper and aluminum, but are much thicker than MXene antennas. Making antennas smaller and lighter has long been a goal of materials scientists and electrical engineers, so this discovery is a sizeable step forward, both in terms of reducing their footprint and broadening their applications.

“Current fabrication methods of metals cannot make antennas thin enough and applicable to any surface, in spite of decades of research and development to improve the performance of metal antennas,” said Yury Gogotsi, PhD, Distinguished University and Bach professor of Materials Science and Engineering in the College of Engineering, and Director of the A.J. Drexel Nanomaterials Institute, who initiated and led the project. “We were looking for two-dimensional nanomaterials, which have sheet thickness about hundred thousand times thinner than a human hair; just a few atoms across, and can self-assemble into conductive films upon deposition on any surface. Therefore, we selected MXene, which is a two-dimensional titanium carbide material, that is stronger than metals and is metallically conductive, as a candidate for ultra-thin antennas.”

Drexel researchers discovered the family of MXene materials in 2011 and have been gaining an understanding of their properties, and considering their possible applications, ever since. The layered two-dimensional material, which is made by wet chemical processing, has already shown potential in energy storage devices, electromagnetic shielding, water filtration, chemical sensing, structural reinforcement and gas separation.

Naturally, MXene materials have drawn comparisons to promising two-dimensional materials like graphene, whose discoverers won the Nobel Prize in 2010 and which has been explored as a material for printable antennas. In the paper, the Drexel researchers put the spray-on antennas up against a variety of antennas made from these new materials, including graphene, silver ink and carbon nanotubes. The MXene antennas were 50 times better than graphene antennas and 300 times better than silver ink antennas in terms of preserving the quality of radio wave transmission.

“The MXene antenna not only outperformed the macro and micro world of metal antennas, we went beyond the performance of available nanomaterial antennas, while keeping the antenna thickness very low,” said Babak Anasori, PhD, a research assistant professor in the A.J. Drexel Nanomaterials Institute. “The thinnest antenna was as thin as 62 nanometers — about a thousand times thinner than a sheet of paper — and it was almost transparent. Unlike other nanomaterial fabrication methods, which require additives, called binders, and extra steps of heating to sinter the nanoparticles together, we made antennas in a single step by airbrush spraying our water-based MXene ink.”

The group initially tested the spray-on application of the antenna ink on a rough substrate (cellulose paper) and a smooth one (polyethylene terephthalate sheets). The next step for their work will be looking at the best ways to apply it to a wide variety of surfaces, from glass to yarn and skin.

“Further research on using materials from the MXene family in wireless communication may enable fully transparent electronics and greatly improved wearable devices that will support the active lifestyles we are living,” Anasori said.

Source: ScienceDaily 

22 Sep 2018

How Investing in AI is About Investing in People, Not Just Technology

How is your organization preparing for artificial intelligence (AI)? Ask this question of businesses investing in this field today, and the answer almost always comes down to “data,” with leaders talking about “data preparations” or “data science talent acquisition.”

While there would be no AI without data, enterprises that fail to ready the other side of the equation — people — don’t just stunt their capacity for good AI; they risk sunk investment, jeopardized employee trust, brand backlash or worse.

After all, people are the ones building, measuring, consuming and determining the success of AI in enterprise and consumer settings. They’re the ones whose jobs will change; whose tedium will be eased by automation; whose consumption or rejection of AI’s outcomes will be the focus.

People, in short, are those who’ll feel AI’s myriad impacts. That’s why investing in AI is as much about investing in people as it is data. 

1. Investment in factors beyond technical talent

Hiring a team of data scientists will not cause business processes to magically become automated overnight. Some liken this mistaken assumption to hiring electrical engineers to run a bakery: While the mechanics of ovens are important, it is the experienced baker who best knows how to innovate recipes and inspire customer delight!

Across industries, the successful AI deployments we examined involved at least eight distinct personae:

  • Product leaders
  • Front-line associates (e.g., customer support agents, field technicians)
  • Subject matter experts (e.g., doctors, security admins, legal, etc.)
  • Designers
  • Sales
  • Leadership
  • End users
  • Data scientists & technical builders

In addition to identifying these stakeholders, businesses have to make AI accessible and build trust by educating people and quelling fears. The top recommendation here is to prepare stakeholders by using tactics that put AI into context for each role.

Leadership requires a demonstration of ROI and visualization. AI leaders at FedEx, for example, built simulated dashboards and reports to illustrate the difference between traditional analytics and machine-learning-driven recommendations.

Meanwhile, readying the sales team requires both equipping agents with the knowledge, tools and confidence to sell the benefits of AI, and re-evaluating their metrics and incentive models to preserve quality and integrity. For effective roll-out, the unique needs and pain points for each of the above staff members have to be addressed.

2. Investment in addressing AI’s cultural stigma

AI is distinct from other technologies in that it can challenge people’s sense of importance and relevance. Some 58 percent of organizations in international settings have not discussed AI’s impact on the workforce with employees, according to a recent survey by the Workforce Institute. Yet AI’s success is driven by people’s willingness to adopt it.

Thus, enterprises deploying AI are well advised to assess how people’s sentiments, fears, questions and insecurities impact their proclivity to adopt. Instead of ignoring concerns, companies interviewed suggested discussing and developing positions and initiatives to address:

  • Job displacement
  • Algorithmic bias
  • Privacy, surveillance
  • Security threats
  • Autonomous machines
  • Societal manipulation
  • Environmental impacts
  • The notion of “killer robots”

These “elephants in the room” don’t just threaten employee morale, they highlight opportunities for companies to improve engagement and reinforce a healthy and trustworthy company culture. Address concerns of job displacement at your own company by evangelizing the limitations of AI. Articulate where AI will augment or accelerate human workflows. Provide clarity on governance models. And support employee upskilling and continued education programs.

Microsoft’s Professional Program for AI is an example: a massive open online course (MOOC) designed to guide aspiring AI builders through a range of topics, from statistics to ethics to research design. Other companies, like Starbucks and Kaiser Permanente, have partnered with e-learning platforms like Coursera to facilitate professional development.

3. Investment in building an AI mindset

While investing in a mindset might sound squishy or disconnected from the bottom line, preparing employees with the education, ownership, tools and processes they need to engage with AI has tangible business benefits. According to a recent survey of 1,075 companies in 12 industries, the more companies embraced active employee involvement in AI design and deployment, the better their AI initiatives performed in terms of speed, cost savings, revenues and other operational measures.

The following “3 D’s” of what I call the AI mindset reflect three universal truths about AI and serve as starting points for building people’s engagement in an organization’s AI journey:

Think “diversified”: AI must be designed and managed by multiple skill sets. Those responsible for the day-to-day administration of the workflow are the ones who best understand where the breakdowns occur, where products fall short, where they, the staffers, spend most of their time and where customer sensitivities lie.

The business benefits: Diversifying AI design and development helps companies identify important features, UX/UI needs and use cases that might otherwise go unseen, or take more resources to surface. Companies like Wells Fargo have cross-functional centers of excellence to accelerate this process, emphasizing the value of using trusted internal influencers to facilitate onboarding.

Think “directional”: AI implementation is not a linear, “completed” destination, but rather one that calls for continual learning and iterations based on feedback loops.

The business benefits: Instilling a “directional” mindset reduces time to at-scale deployment. Even though people want to see results quickly, the extent of experimentation determines how strong any AI model is, and how many problems it can solve. Deployment time often hinges on user adoption: the more people who help train and optimize the system, the more problems the deployment can solve. This is also why SEB, a Swedish bank, deployed its virtual agent, Aida, first to 600 employees, then to 15,000, before rolling it out across its million-plus customers.

Think “democratized”: AI is more sustainable when organizations enable accessible tools, training and multi-functional contribution and collaboration.

The business benefits: Democratizing access via easy-to-use tools means employees don’t need a data science degree to contribute value to AI systems. The simpler, more reliable and “self-service” enterprise data portals become, the more employees of all stripes can activate enterprise data — an invaluable capability for any business.

In sum, the culture of an organization is inextricably linked to the willingness of its people to adapt, adopt, engage and innovate. Technology is only half the battle. Hierarchies, silos, complexity, distrust and complacency can choke innovation. Given that the most powerful AI involves both humans and machines, true AI readiness must go far beyond the data, and empower the people responsible for its success.

Source: Entrepreneur

20 Sep 2018

Thinking Like a Human: What It Means to Give AI a Theory of Mind

Last month, a team of self-taught AI gamers lost spectacularly against human professionals in a highly anticipated galactic melee. Taking place as part of the International Dota 2 Championships in Vancouver, Canada, the match showed that in broader strategic thinking and collaboration, humans still remain on top.

The AI was a series of algorithms developed by the Elon Musk-backed non-profit OpenAI. Collectively dubbed the OpenAI Five, the algorithms use reinforcement learning to teach themselves how to play the game—and collaborate with each other—from scratch.

Unlike chess or Go, the fast-paced multi-player Dota 2 video game is considered much harder for computers. Complexity is only part of it — the key is for a group of AI algorithms to develop a type of “common sense,” a kind of intuition about what others are planning to do, and to respond in kind toward a common goal.

“The next big thing for AI is collaboration,” said Dr. Jun Wang at University College London. Yet today, even state-of-the-art deep learning algorithms flail in the type of strategic reasoning needed to understand someone else’s incentives and goals—be it another AI or human.

What AI needs, said Wang, is a type of deep communication skill that stems from a critical human cognitive ability: theory of mind.

Theory of Mind as a Simulation

By the age of four, children usually begin to grasp one of the fundamental principles in society: that their minds are not like other minds. They may have different beliefs, desires, emotions, and intentions.

And the critical part: by picturing themselves in other peoples’ shoes, they may begin to predict other peoples’ actions. In a way, their brains begin running vast simulations of themselves, other people, and their environment.

By allowing us to roughly grasp other peoples’ minds, theory of mind is essential for human cognition and social interactions. It’s behind our ability to communicate effectively and collaborate towards common goals. It’s even the driving force behind false beliefs—ideas that people form even though they deviate from the objective truth.

When theory of mind breaks down — as is sometimes the case in autism — essential “human” skills such as story-telling and imagination also deteriorate.

To Dr. Alan Winfield, a professor of robot ethics at the University of the West of England, theory of mind is the secret sauce that will eventually let AI “understand” the needs of people, things, and other robots.

“The idea of putting a simulation inside a robot… is a really neat way of allowing it to actually predict the future,” he said.

Unlike machine learning, in which multiple layers of neural nets extract patterns and “learn” from large datasets, Winfield is promoting something entirely different. Rather than relying on learning, AI would be pre-programmed with an internal model of itself and the world that allows it to answer simple “what-if” questions.

For example, when navigating down a narrow corridor with an oncoming robot, the AI could simulate turning left, turning right, or continuing on its path, and determine which action will most likely avoid a collision. This internal model essentially acts like a “consequence engine,” said Winfield — a sort of “common sense” that helps instruct its actions by predicting those of others around it.
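A consequence engine of this kind can be sketched in a few lines of code. The grid motion model, corridor setup and action set below are illustrative assumptions made for the sketch, not the actual control system described in the study:

```python
# Toy "consequence engine": internally simulate each candidate action a
# few steps ahead against a predicted path for the oncoming robot, then
# pick the first action whose imagined future stays collision-free.

def simulate(pos, action, steps=3):
    """Project a position forward under a constant (dx, dy) action."""
    x, y = pos
    dx, dy = action
    return [(x + dx * t, y + dy * t) for t in range(1, steps + 1)]

def collides(path_a, path_b, radius=1.0):
    """True if the two predicted paths ever come within `radius` of each other."""
    return any(abs(a[0] - b[0]) + abs(a[1] - b[1]) < radius
               for a, b in zip(path_a, path_b))

def choose_action(own_pos, other_pos, other_action, actions):
    """Run the what-if simulations and return the first safe action."""
    other_path = simulate(other_pos, other_action)  # prediction of the other robot
    for action in actions:
        if not collides(simulate(own_pos, action), other_path):
            return action
    return (0, 0)  # no safe move found: stop and wait

# An oncoming robot heads straight down the corridor toward us, so
# continuing straight ahead would collide; the engine sidesteps instead.
actions = [(0, 1), (-1, 1), (1, 1)]  # straight ahead, veer left, veer right
best = choose_action((0, 0), (0, 6), (0, -1), actions)
```

Here the straight-ahead what-if ends in a predicted collision, so `best` comes out as the left sidestep.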

In a paper published early this year, Winfield showed a prototype robot that could in fact achieve this goal. By anticipating the behavior of others around it, the robot successfully navigated a corridor without collisions. The caution wasn’t free, however — the “mindful” robot took over 50 percent longer to complete its journey than it did without the simulation.

But to Winfield, the study is a proof of concept that his internal simulation works: it’s “a powerful and interesting starting point in the development of artificial theory of mind,” he concluded.

Eventually, Winfield hopes to endow AI with a sort of story-telling ability. The internal model that the AI has of itself and others lets it simulate different scenarios, and — crucially — tell a story of what its intentions and goals were at that time.

This is drastically different from deep learning algorithms, which normally cannot explain how they reached their conclusions. The “black box” nature of deep learning is a major stumbling block to building trust in these systems; the problem is especially notable for care-giving robots in hospitals or for the elderly.

An AI armed with theory of mind could simulate the mind of its human companions to tease out their needs. It could then determine appropriate responses—and justify those actions to the human—before acting on them. Less uncertainty results in more trust.

Theory of Mind In a Neural Network

DeepMind takes a different approach: rather than a pre-programmed consequence engine, they developed a series of neural networks that display a sort of theory of mind.

The AI, “ToMnet,” can observe and learn from the actions of other neural networks. ToMnet is a collective of three neural nets: the first learns the tendencies of other AIs based on a “rap sheet” of their past actions. The second forms a general concept of their current state of mind — their beliefs and intentions at a particular moment. The output of both networks then feeds into the third, which predicts the AI’s actions based on the situation. Similar to other deep learning systems, ToMnet gets better with experience.
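This three-network split can be sketched with toy stand-ins. The tiny linear “nets,” feature sizes and random data below are illustrative assumptions, not DeepMind’s actual architecture (the real networks are deep and trained end to end):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for ToMnet's three networks.
W_char = rng.standard_normal((4, 8))     # character net: past behavior -> trait embedding
W_mental = rng.standard_normal((4, 8))   # mental-state net: current episode -> belief embedding
W_pred = rng.standard_normal((24, 5))    # prediction net: traits + beliefs + state -> action scores

def character(past_episodes):
    """Summarize the agent's "rap sheet" of past actions into traits."""
    return past_episodes.mean(axis=0) @ W_char

def mental_state(current_episode):
    """Infer the agent's beliefs and intentions at this moment."""
    return current_episode @ W_mental

def predict_action(past_episodes, current_episode, world_state):
    """Feed both embeddings, plus the situation, into the third net."""
    features = np.concatenate([character(past_episodes),
                               mental_state(current_episode),
                               world_state])
    return int(np.argmax(features @ W_pred))  # index of the most likely action

past = rng.standard_normal((10, 4))    # ten observed past episodes, 4 features each
current = rng.standard_normal(4)       # what the agent is doing right now
state = rng.standard_normal(8)         # the current situation
action = predict_action(past, current, state)  # one of 5 possible actions
```

In a trained system the three weight matrices would be learned jointly, so the character and mental-state embeddings come to encode exactly what the prediction net needs.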

In one experiment, ToMnet “watched” three AI agents maneuver around a room collecting colored boxes. The AIs came in three flavors: one was blind, in that it couldn’t compute the shape and layout of the room. Another was amnesiac; these guys had trouble remembering their last steps. The third could both see and remember.

After training, ToMnet began to predict the flavor of an AI by watching its actions—the blind tend to move along walls, for example. It could also correctly predict the AI’s future behavior, and—most importantly—understand when an AI held a false belief.

For example, in another test the team programmed one AI to be near-sighted and changed the layout of the room. Better-sighted agents rapidly adapted to the new layout, but the near-sighted guys stuck to their original paths, falsely believing that they were still navigating the old environment. ToMnet teased out this quirk, accurately predicting the outcome by (in essence) putting itself in the near-sighted AI’s digital shoes.

To Dr. Alison Gopnik, a developmental psychologist at UC Berkeley who was not involved in the study, the results show that neural nets have a striking ability to learn skills on their own by observing others. But it’s still far too early to say that these AI had developed an artificial theory of mind.

ToMnet’s “understanding” is deeply entwined with its training context — the room, the box-collecting AIs and so on — explained Dr. Josh Tenenbaum at MIT, who did not participate in the study. This constraint makes ToMnet far less capable than children at predicting behaviors in radically new environments. It would also struggle to model the actions of a vastly different AI or a human.

But both Winfield’s and DeepMind’s efforts show that computers are beginning to “understand” each other, even if that understanding is still rudimentary.

And as they continue to better grasp each others’ minds, they are moving closer to dissecting ours—messy and complicated as we may be.

Source: SingularityHub

19 Sep 2018

What is disruptive innovation?

Disruptive innovation is an idea that goes against the norms: it significantly changes the status quo of regular business growth and creates what is known as a new market, one that accelerates beyond the scope of existing markets.

At the beginning, it may not be as good as the technology it’s replacing, but it’s usually cheaper, meaning more people can snap it up and it can become a commodity fast.

The key differentiator between disruptive innovation and standard innovation is reach. A revolutionary innovation may make a difference to some people, but if it’s not accessible to all or doesn’t shift the entire market, it’s not considered disruptive.

Although the introduction of cars in the 19th century was a revolutionary innovation, not everyone was able to afford them and so they didn’t become a commodity until much later.

Horse-drawn carriages remained the primary mode of transport for a long time after the introduction of motorised vehicles – in fact, until the Ford Model T was launched. It was a mass-produced car that made vehicles affordable for far more people, and it is therefore considered a disruptive innovation.

The history of disruptive innovation

The idea of a disruptive innovation was first introduced by Harvard Business School scholar Clayton M. Christensen in 1995. He wrote a paper called Disruptive Technologies: Catching the Wave and then discussed the theory further in his book The Innovator’s Dilemma.

The latter examined the evolution of disk drives, arguing that it’s the business model, rather than the technology itself, that allows a product to become a disruptive innovation.

The term is now considered ‘modern jargon’ by some thinkers and publications, although it’s still widely used.

Examples of disruptive innovation in technology

The iPhone

Although products aren’t necessarily always the definition of disruptive innovation, the iPhone is one that had a huge impact on the industry. It redefined communication and, although it had a pretty narrow focus at launch, quickly became mainstream, mobilising functions of the laptop and developing a new business model in the process — one that relied upon app developers (a new, revolutionary concept in itself) to make it a success.

The iPhone completely disrupted the technological status quo — smartphones became a way to replace laptops and evolved into the iPad, eradicating the need for a laptop even further.

Video on demand

The concept of streaming TV programmes at any time to your TV would have seemed completely insane a decade ago.

The ability to stream content over the internet direct to the TV has completely disrupted the market. According to a recent report by the Office for National Statistics, half of adults now consume content using streaming services.

But it’s also disrupted the advertising market, with an entirely new revenue stream for sponsors and advertisers, shifted the balance in rights and caused startups to overtake traditional TV networks in popularity and income.

Digital photography

Digital photography is a solid example of disruptive innovation: not only did it introduce a new technology — and a whole ecosystem of accessories that grew in popularity because of it (such as memory cards, photo printers etc.) — but it also completely shifted the manufacturer market, knocking Kodak, one of the previous market leaders, into oblivion.

The concept of digital photography was actually developed by Kodak engineer Steve Sasson, but it wasn’t popularised by the firm because Kodak wanted to concentrate on film – its bread and butter. Unfortunately, other firms jumped onto the idea and it completely revolutionised photography, leaving film a hobbyist technology in many cases, rather than a commercial opportunity.

Source: ITPRO

18 Sep 2018
Manahel Thabet

Preparing for the Future of AI

Artificial intelligence is taking the world by storm, and many experts posit that the technology has brought us to the cusp of a fourth industrial revolution that will fundamentally alter the business landscape. AI and machine learning are responsible for a constant stream of innovation and disruption in the way organizations operate. To avoid being left behind, business leaders need to prepare for this future now.

While the earliest iterations of AI emerged in the 1950s, hardware limitations prevented the technology from reaching its true potential. Of course, the amount of processing power in our pockets today would have astounded scientists in that era, and advanced algorithms are allowing us to put it to work, combing through reams of data in seconds at the mere touch of a button.

AI isn’t exactly real intelligence, but it is capable of spotting patterns buried deep within data sets that human eyes might miss, and in a fraction of the time. Additionally, thanks to deep learning techniques, it’s capable of learning and improving over time, meaning it becomes more and more effective at its job. Because of this functional facet, AI is powering an exciting array of applications, from investment strategies to autonomous vehicles.
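The “learning and improving over time” point can be illustrated with a toy model that fits a hidden pattern by gradient descent; the data, learning rate and one-parameter model below are made up purely for illustration, not drawn from any particular product:

```python
# A one-parameter model y ≈ w * x, trained by gradient descent on mean
# squared error. Its error shrinks with every pass over the data — it
# literally "becomes more and more effective at its job."

def fit(xs, ys, lr=0.01, epochs=200):
    w = 0.0
    losses = []
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
        losses.append(sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))
    return w, losses

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]   # hidden pattern: y is roughly 2x
w, losses = fit(xs, ys)           # w converges near 2; losses steadily fall
```

The same mechanics — a loss measuring how wrong the model is, and updates that reduce it — are what let far larger deep learning systems improve with experience.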

AI is sweeping through one industry after another, and while the technology is still in its infancy, it’s important to start implementing now so you don’t fall behind in the future. Rome wasn’t built in a day, so it’s safe to assume that implementing AI won’t happen overnight either. To position your company in the sweet spot to grasp this incredible opportunity, here are three steps to follow.

1. Cultivate an open culture.

According to McKinsey, a profit gap is already emerging between early AI adopters and those who have yet to implement the technology. Unfortunately for the firms being left behind, catching up is more than a matter of purchasing new software.

While the tempo of technological change is difficult enough to keep up with, the pace of cultural change is glacial. Taking advantage of AI requires a team effort, which necessitates that organizations build a culture of trust and openness to encourage collaboration. Encourage this kind of open culture now — for example, promote cross-team collaboration, invite process experimentation and redefine key performance indicators — to foster positive attitudes toward technological change and AI adoption.

2. Partner with the pioneers.

AI is on the bleeding edge of technological innovation, and the pioneers pushing it to the next innovative level are startups. These small companies aren’t going it alone, however. From financial institutions to automotive companies, large organizations are funding incubators and accelerators to nurture the next generation of startups whose technology will change industries.

Working with startups has a couple of advantages for bigger companies. According to Hossein Rahnama, founder and CEO of Flybits, “Joining up with a young company gives each entity a partner to lean on and grow with over the years.” He adds, “Through these partnerships, you’re not just procuring technology; you’re gaining access to talent, consulting services, new ideas, and more.” Working with small, agile startups provides an excellent launchpad for technology strategy and adoption.

3. Capitalize on creativity.

AI can crunch numbers better than anyone you could ever hope to hire, but it can’t do everything. It certainly can’t exercise creative problem-solving capabilities, and it’s still up to your employees to turn the insights AI unlocks into high-level strategies that drive business value.

The issue is that many companies see AI as a remover of jobs, when really it is a job creator and an efficiency booster. Instead of requiring a creative team to constantly multitask between crunching the numbers and strategy, use AI to perform some of the grunt work and free up your creative teams to do what they do best. The initial implementation might make operations less efficient, but over time, your marketing and other creative teams will become much better at wielding the technology. With a bit of experience, they’ll aim it at the right data to determine your company’s next step in reaching its goals.

As AI becomes more widespread, it will morph from a competitive advantage to the price of admission. To keep pace with the rest of the pack, start creating an implementation strategy today.

Source: Entrepreneur