Category: Innovation

13 Nov 2018
Manahel Thabet

AI should be a global public good

Efforts to develop artificial intelligence (AI) are increasingly being seen as a global race, even a new Great Game. Apart from the race between countries to become more competent and establish a competitive advantage in AI, enterprises are also in a contest to acquire AI talent, leverage data advantages, and offer unique services. In both cases, success would depend on whether AI solutions can be democratized and distributed across sectors.

The global AI race is unlike any other global competition, as the extent to which innovation is being driven by governments, the corporate sector or academia differs substantially from country to country. On average, though, the majority of innovations so far have emerged from academia, with governments contributing through procurement, rather than internal research and development.

While the share of commodities in global trade has fallen, the share of digital services has risen, such that digitalization now underwrites more than 60 percent of all trade. By 2025, half of all economic value is expected to be created in the digital sector. And as governments have searched for ways to claim a position in the value chain of the future, they have homed in on AI.

Accordingly, countries ranging from the United States, France, Finland and New Zealand to China and the United Arab Emirates all now have national AI strategies to boost domestic talent and prepare for the future effects of automation on labor markets and social programs.

Still, the true nature of the AI race remains to be seen. It most likely will not be restricted to any single area, and the most important factor determining outcomes will be how governments choose to regulate and monitor AI applications, both domestically and in an international context. China, the US and other participants not only have competing ideas about data, privacy and national sovereignty, but also divergent visions of what the 21st century world order should look like.

Thus, nationalized AI programs are a hedged bet. Until now, governments have assumed that the country that is first to the finish line will be the one that captures the bulk of AI’s potential value. That assumption may well be accurate. And yet the issue is not whether it is true, but whether a nationalized approach is necessary, or even wise.

After all, to frame the matter in strictly national terms is to ignore how AI is developed. Whether data sets are shared internationally could determine whether machine-learning algorithms develop country-specific biases. And whether certain kinds of chips are treated as proprietary technology could determine the extent to which innovation can proceed at the global level. In light of these realities, there is reason to worry that a fragmentation of national strategies could hamper growth in the digital economy.

Moreover, in the current environment, national AI programs are competing for a limited talent pool. And though that pool will expand over time, the competencies needed for increasingly AI-driven economies will change. For example, there will be a greater demand for expertise in cybersecurity.

So far, AI developers working out of key research centers and universities have found a reliable exit strategy, and a large market of eager buyers. With corporations driving up the price for researchers, there is now a widening global talent gap between the top companies and everyone else. And because the major technology companies have access to massive, rich data stores that are unavailable to newcomers and smaller players, the market is already heavily concentrated.

Against this backdrop, it should be obvious that isolationist measures, not least trade and immigration restrictions, will be economically disadvantageous in the long run. As the changing composition of global trade suggests, most of the economic value in the future will come not from goods and services, but from the data attached to them. Thus, the companies and countries with access to global data flows will reap the largest gains.

At a fundamental level, the new global competition is for applications that can compile alternate choices and make optimal decisions. Eventually, the burden of adjusting to such technologies will fall on citizens. But before that moment arrives, it is crucial that key AI developers and governments coordinate to ensure that this technology is used safely and responsibly.

Back in the days when the countries with the best sailing and navigation technologies ruled the world, the mechanical clock was a technology available only to the few. This time is different. If we are to have superintelligence, then it should be a global public good.


10 Nov 2018
Manahel Thabet

Scientists hunt mysterious ‘dark force’ to explain hidden realm of the cosmos

Scientists are about to launch an ambitious search for a “dark force” of nature which, if found, would open the door to a realm of the universe that lies hidden from view.

The hunt will seek evidence for a new fundamental force that forms a bridge between the ordinary matter of the world around us and the invisible “dark sector” that is said to make up the vast majority of the cosmos.

The chances of success may be slim, but should such a force be found it would rank among the most dramatic discoveries in the history of physics. The best theory of reality that physicists have explains only 4% of the observable universe. The rest is a mystery made up of dark matter, the strange material that lurks around galaxies, and the even more baffling dark energy, a substance called upon to explain the ever-accelerating expansion of the universe.

“At the moment, we don’t know what more than 90% of the universe is made of,” said Mauro Raggi, a researcher at the Sapienza University of Rome. “If we find this force it will completely change the paradigm we have now. It would open up a new world and help us to understand the particles and forces that compose the dark sector.”

Physicists, to date, know of only four basic forces of nature. The electromagnetic force allows for vision and mobile phone calls, but also stops us falling through our chairs. Without the so-called strong force, the innards of atoms would fall apart. The weak force operates in radiation, and gravity – the most pervasive of nature’s forces – keeps our feet rooted to the ground.

But there may be other forces that have gone unnoticed. These would shape the behaviour of the so far unknown particles that constitute dark matter, and could potentially exert the most subtle effects on the forces we are more familiar with.

This month, Raggi and his colleagues will turn on an instrument at the National Institute of Nuclear Physics near Rome which is designed to hunt down a possible fifth force of nature. Known as Padme, for Positron Annihilation into Dark Matter Experiment, the machine will record what happens when a diamond wafer a tenth of a millimetre thick is blasted with a stream of antimatter particles called positrons.

When positrons slam into the diamond wafer, they immediately merge with electrons and vanish in a faint burst of energy. Normally, the energy released is in the form of two particles of light called photons. But if a fifth force exists in nature, something different will happen. Instead of producing two visible photons, the collisions will occasionally release only one, alongside a so-called “dark photon”. This curious, hypothetical particle is the dark sector’s equivalent of a particle of light. It carries the equivalent of a dark electromagnetic force.

Unlike normal particles of light, any dark photons produced in Padme will be invisible to the instrument’s detector. But by comparing the energy and direction of the positrons fired in, with whatever comes out, scientists can tell if an invisible particle has been created and work out its mass. Though normal photons are massless, dark photons are not, and Padme will search for those up to 50 times heavier than an electron.
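The kinematics behind that inference can be sketched in a few lines. The function below is purely illustrative, not Padme's actual analysis code: the function name, the planar geometry, and the example 550 MeV beam energy are all assumptions made for the sketch. It reconstructs the invariant mass of the unseen particle from four-momentum conservation in positron-electron annihilation on a target electron at rest.

```python
import numpy as np

ME = 0.000511  # electron mass in GeV

def missing_mass_gev(e_beam, photon_e, photon_theta):
    """Invariant mass of the unseen particle X in e+ e- -> gamma + X,
    assuming the target electron is at rest and all motion lies in the x-z plane.
    Energies in GeV, photon angle in radians from the beam axis."""
    p_beam = np.sqrt(e_beam**2 - ME**2)
    # Four-momenta written as (E, px, py, pz), natural units (c = 1)
    p_pos = np.array([e_beam, 0.0, 0.0, p_beam])   # incoming positron
    p_ele = np.array([ME, 0.0, 0.0, 0.0])          # target electron at rest
    p_gam = photon_e * np.array(
        [1.0, np.sin(photon_theta), 0.0, np.cos(photon_theta)]
    )                                              # detected photon (massless)
    p_x = p_pos + p_ele - p_gam                    # whatever escaped unseen
    m2 = p_x[0]**2 - np.sum(p_x[1:]**2)            # Minkowski norm squared
    return np.sqrt(max(m2, 0.0))
```

If the unseen particle is just an ordinary second photon, the reconstructed missing mass comes out at zero; a detected photon carrying less energy than two-photon kinematics allow signals a massive invisible partner, which is exactly the signature such an experiment looks for.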

The dark photon, if it exists, would have an imperceptible influence on what makes up the world we see. But knowing its mass, and the kinds of particles it can break down into, would provide the first glimpse of what makes up the bulk of the universe that is beyond our perception.

The Padme experiment will run until at least the end of the year, but there are tentative plans to move the instrument to Cornell University in 2021. There it would be hooked up to a more powerful particle accelerator than the one in Italy to broaden its search for dark photons.

Other laboratories around the world are also looking for dark photons. Bryan McKinnon, a research fellow at Glasgow University, is involved in the search for the particle at the Thomas Jefferson national accelerator facility in Virginia. “The dark photon, if it exists, is effectively a portal,” he said. “It lets us peer into the dark sector to see what is happening. It won’t open the floodgates, but it will allow us to have a little look.”

Physicists have little idea how complex the dark sector might be. There may be no new forces to discover. Dark matter itself may be shaped by gravity alone and made up of only one type of particle. But it may be a far richer realm, where new kinds of invisible particles and forces wait to be found.


07 Nov 2018

Digging deeper into AI: Why scratching the surface isn’t enough

There’s no question that artificial intelligence (AI) has become a game-changer for businesses today. For forward-looking leaders, AI is increasingly understood to be the key to transforming customer experiences, delivering new, digitally driven value propositions, and entering new markets. But most business leaders today aren’t capitalizing on AI’s full potential. That is largely because they’re limiting their focus to the technology itself — rather than focusing on the much bigger potential of AI as an engine for growth.

To reap the full benefits of AI, executives must dig deeper — they must evolve the technology from a hot new trend to a seamless enabler, woven into the fabric of the enterprise and truly working alongside and augmenting people. When done right, AI has the potential to allow companies to not only do different things, but also to do things differently. By more deeply understanding AI and the holistic value it can bring, companies can become more valuable to their ecosystem and maintain a competitive position in a world that will soon be powered by AI. But that also requires the ability to see beyond the hype and tackle the challenges and complexities to reach AI’s full potential.

A multi-pronged approach to AI

Firms across industries are starting to recognize the value of ingraining AI programs into all aspects of their business. While 57 percent of companies are still in the early investment or pilot stages of AI initiatives and have yet to deploy fully sustainable programs across their organization, some are starting the journey — and already reaping the benefits.

One large financial services firm, for example, is working to incorporate AI across all levels of its business by exploring areas such as intelligent automation that can automate analyst and operational workflows, as well as intelligent products that can deliver a new class of algorithm-driven funds for clients.

Data is the new currency

For the best learning and results, AI demands vast and diverse data. Accordingly, leadership teams are quickly realizing that data from their own company is highly valuable — and that data from a network of related companies is even more valuable.

AI will only be as good as the breadth of data used to “train it,” which is why it’s so important for organizations to leverage the data across an ecosystem rather than just within a company. According to Accenture Strategy research, 44 percent of executives across industries say the value of ecosystems lies in access to new customers, and with those customers comes data. (Note: The authors of this article both work for Accenture Strategy.)

The faster and more completely companies buy into ecosystems of data, the better able they will be to compete. AI-fueled insights will increasingly require vast data marketplaces to be truly targeted. It is these very insights that are key to business growth drivers — transforming experience, developing new digitally enabled products and services, and serving previously less attractive markets.


04 Nov 2018

Make sure you’re not investing in zombie AI

Ever notice how images of robots almost always accompany AI propaganda? It’s certainly an effective tactic. Robots conjure up images of highly intelligent solutions poised to create far more efficient and profitable businesses. Yet there are rarely details available about how these AI technologies work. And as a result, many AI solutions are a ‘black box’ to users.

What’s in the box?

Software developers and marketers often lead customers to believe there’s a robot in the box, when in fact it may be just an artificially intelligent zombie dressed up in a robot costume.

A zombie is a little like a robot. It’s self-sufficient. It doesn’t need a lot of guidance or direction to do the things it does. But it’s not truly intelligent.

And a zombie AI system operates independently, with no human interaction. On one hand, a lack of interactivity is positive as it can mean ease of use; on the other, there’s no way to train a zombie to do something else, or to do its job better. Users are not only unable to apply changes but are shielded from the decisions and logic used in creating the system. With no awareness or understanding, there can be no accountability, nor hope for progress.

While most organizations employ talented developers and technicians capable of ‘teaching’ AI systems to overcome weaknesses, the absence of transparency prevents them from doing so.

True AI

Among the throngs of zombie AI systems, though, exist a few quality AI systems. These systems are highly intelligent, and though they have some minor human dependencies, they produce incredibly reliable results. The developers of these systems want customers to have a good grasp of the ‘magic’ behind the intelligence – ‘magic’ that really amounts to specific settings, mechanics, controls, even known limitations.

True AI can be recognized by its interactivity and trainability. These systems combine intuitive interfaces with algorithms, instructions that tell the robotic brain what logic to use. And with a little coaching along the way, true AI gets smarter and learns to differentiate right from wrong.

Compared to zombie systems, true AI systems require more time investment initially but are typically more sustainable in the long run because the coaching continually improves them over time.

Robots will reign

Any investment in a zombie system is a waste of resources. Businesses that venture down that path will eventually want more control. They will want the ability to coach. When variables require adjusting or errors occur, they’ll want to be able to make fixes and modifications. At the very least, they’ll want a basic understanding of how the system works.

What they’ll really want is the robot they thought they were getting the first time.

Many business leaders who are disappointed in the outcome of their zombie are focused on the exorbitant amount of time they invested into making the system ‘fit.’ Often, they’ve gone so far as to change their business processes to accommodate the zombie brain. And sadly, they continue to sink money and resources into a solution that will never do what they expected.

Don’t make that mistake. Bury your zombie AI system before it’s too late. Look for true AI. Seek out the system in the ‘glass box,’ or at least one with an access panel into its robotic brain. It’s the AI frameworks that effectively balance transparency and trainability with performance and ease of use that will deliver the highest ROI – and separate the zombies from the robots.


03 Nov 2018
Manahel Thabet

How a small change will reduce distortion in measuring innovation

When your child is diabetic, a few minutes can make a big difference, and it pays to have real-time access to their blood sugar numbers. But what if no one sells a product that can do that? You build one yourself, as the open-source community behind the wireless blood sugar monitor Nightscout did.

The product serves a public good—keeping diabetics safe—and the community offers the plans to build it for free online. But until now, such clear examples of innovation haven’t been counted as such under the Organisation for Economic Co-operation and Development’s definition. That changed on Oct. 24: The organization’s new edition of the Oslo Manual, the guidebook for collecting and using data on industrial innovation used by most nations, now includes a revised definition of an innovation, removing the requirement that one must first be commercialized to be counted.

MIT Sloan economist Eric von Hippel, in his book Free Innovation, argues that household or “free” innovations—those developed by people on their own time and dollar—aren’t receiving enough support. That’s because they aren’t being recognized for what they are by public policymakers, even though they’re responsible for a significant proportion of new ideas and innovation.

Under the previous OECD definition, an innovation only qualified as an innovation if it had been introduced to the market. That “onto the market” requirement means that innovators in the household sector — representing tens of millions of people spending tens of billions of dollars on product development and modification per year — don’t get credit for their innovations, because 90 percent of them simply give those innovations away.

In mountain biking, for example, participants in the sport—who are now the consumers of the mountain bike industry—designed and built the first mountain bikes, but didn’t receive any credit in the statistics, von Hippel said.

“Basically, the end result is a distorted system where businesses get a lot of credit for a lot of innovations they didn’t do. This, in turn, biases public policy toward the needs of companies and their intellectual property rights,” von Hippel said. “Now, finally, with a better OECD definition and better data we’ll be in a position to allocate innovations to the people who actually develop them. That in turn will make household sector innovation visible to government policymakers, and induce people to make a more level playing field where both consumer innovators and producer innovators are acknowledged and supported.”

The new OECD definition reads:

An innovation is a new or improved product or process, or combination thereof, that differs significantly from the unit’s previous products or processes and that has been made available to potential users (product) or brought into use by the unit (process). (This general definition uses the generic term “unit” to describe the actor responsible for innovations. It refers to any institutional unit in any sector, including households and their individual members.)

Tinkering leads to tech breakthroughs

Research by von Hippel and colleagues into consumer innovation across 10 nations found that the phenomenon was both very general and very important—generating a research and development capital stock of about $250 billion in the U.S. alone. These consumer hacks addressed all product areas of interest to consumers from new medical devices to sports and software hacks.

“If there is nothing out there, consumers will build it for themselves,” he said. “Ninety percent of these people just give [innovations] away for free, and the other 10 percent is where a lot of entrepreneurship comes from. That means 90 percent of innovation occurring in the household sector hasn’t been counted.”

That means efforts to support household innovators, like building more maker spaces, aren’t being carried out, von Hippel said. “If the household sector is developing many generally valuable innovations, this increases social welfare just as producer innovation does, and society ought to level the playing field and support both household sector innovation and producer innovation,” he said.


01 Nov 2018
Manahel Thabet

AI under the spotlight

The ethical dilemmas inherent in artificial intelligence (AI) will be the focus of a seminar held at the State Library Victoria, in Melbourne, Australia on 13 November.

Professors Toby Walsh and Sharon Oviatt will sit down to discuss and answer questions about the future of this technology in a forum to be moderated by Kylie Ahern, co-founder of Cosmos magazine.

The event is billed as “The ethical dilemmas of AI – Are we sleepwalking into an AI future?” and is open to the public.

Sharon Oviatt is a professor at Monash University, known for her work in human-computer interaction. Toby Walsh is a professor at the University of New South Wales in Australia, with a focus on limiting AI “to ensure the public good”.

Ahern says the event is open to “anyone with an interest in AI”.

“This talk will help people think bigger about AI and gain a better understanding of what it is and how it might impact us,” she adds.

The panellists are expected to discuss Australia’s investment in the field, the development of commercialised technology and use of this technology to support human needs, activities and values.

Ahern says she plans to ask the panellists about the most impactful and innovative research projects in the field today, as well as questions related to the timeframe for widespread AI in our daily lives.

“Outside of academia we don’t have a great understanding of AI, the history of AI research or where it’s headed,” she says.

“What should we be scared of and excited about? What are the safety measures we need to implement? How will it change us and our behaviours?”

The event is part of the Monash University Dean’s Seminar Series.


31 Oct 2018
Manahel Thabet

AI powered device for Locked-In Syndrome patients available on NHS Supply Chain

EyeControl is an AI-powered, wearable eye-tracking device that enables immediate communication for both emergency and social purposes, with the first devices expected to be delivered to patients by the end of the year.

Or Retzkin, CEO of EyeControl said: “Since our launch in the UK in August we’ve received very positive feedback on our device. We’re thrilled to be officially working with the NHS to enable patients to once again communicate with their loved ones and carers in a simple, intuitive, and innovative way.”

Patients are said to be able to use the device within 20 minutes. It consists of a head-mounted infrared camera that tracks the eye movements of the wearer and translates them into audio communication via a speaker. A bone conduction element that sits within the earpiece provides audio feedback to the user, allowing them to hear the communication before it is sent to the output speaker. The wearer can use predefined sentences or teach the EyeControl their own personalised syntax, as well as choose from a range of output languages. The device features Bluetooth wireless technology and works without a screen.

Helen Paterson, speech therapist at The Royal Hospital of Neuro-disability recently tested the device with a number of her patients and said: “The brilliant thing about The EyeControl over alternative communication devices is that it’s quite light and easy to wear and patients can communicate but they don’t have to have a big screen in front of them and they only need to move their eyes up and down and side to side. This means they don’t have to rely on having their device in front of them all the time, which obviously makes communication much easier for locked-in patients.”


28 Oct 2018

The exciting impact of AI on everyday life

Austin Tanney, Head of AI at Kainos, discusses the impact of artificial intelligence on our everyday lives. It is exciting, he concludes!

From my Northern Irish vantage point, I coordinate and facilitate a collaborative network around AI. Our 30 members range from micro SMEs through to multinational organisations, such as Liberty IT and Allstate. As such, I am privy to an incredible range of AI-based applications and solutions that are coming down the line, and I am always surprised at the pace of change in the industry.

With every change, we need to take a few steps back and rethink how to frame the state of the art – and it’s worth keeping in mind that what is state of the art at any given time may well be seen as mundane in just a few short months.

For example, when the age of the AI personal assistant arrived with Google Assistant, Siri and Cortana, my framing focussed on trying to communicate that AI was no longer an abstract concept, but part of our everyday lives; albeit in a relatively limited manner. But it was only with the arrival of Amazon Alexa that many people were spending real money to own what is ultimately an AI product.

Today? Well… today things feel different again. Let me give you a few examples of applications of AI that we will all find hugely beneficial.


22 Oct 2018
Manahel Thabet

Artificial Intelligence: What’s The Difference Between Deep Learning And Reinforcement Learning?

The various cutting-edge technologies that are under the umbrella of artificial intelligence are getting a lot of attention lately. As the amount of data we generate continues to grow to mind-boggling levels, our AI maturity and the potential problems AI can help solve grows right along with it. This data and the amazing computing power that’s now available for a reasonable cost is what fuels the tremendous growth in AI technologies and makes deep learning and reinforcement learning possible. With the rapid changes in the AI industry, it can be challenging to keep up with the latest cutting-edge technologies. In this post I want to provide easy-to-understand definitions of deep learning and reinforcement learning so that you can understand the difference.

Both deep learning and reinforcement learning are machine learning functions, which in turn are part of a wider set of artificial intelligence tools. What makes deep learning and reinforcement learning functions interesting is that they enable a computer to develop rules on its own to solve problems. This ability to learn is nothing new for computers – but until recently we didn’t have the data or computing power to make it an everyday tool.

What is deep learning?

Deep learning is essentially an autonomous, self-teaching system in which you use existing data to train algorithms to find patterns, and then use those patterns to make predictions about new data. For example, you might train a deep learning algorithm to recognize cats in photographs. You would do that by feeding it millions of images that either do or do not contain cats. The program then establishes patterns by classifying and clustering the image data (e.g. edges, shapes, colours, distances between the shapes, etc.). Those patterns then inform a predictive model that can look at a new set of images and predict whether they contain cats, based on the model it has created using the training data.

Deep learning algorithms do this via various layers of artificial neural networks, which mimic the network of neurons in our brain. This allows the algorithm to perform multiple cycles, narrowing down the patterns and improving the predictions with each cycle.
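The train-then-predict loop described above can be sketched at toy scale. The example below is a minimal illustration under stated assumptions, not production deep learning: a tiny two-layer network, written with only NumPy, learns to separate 2-D points (a stand-in for the cat/no-cat labels) by repeatedly adjusting its weights over many training cycles, exactly the iterative refinement the article describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 2-D points; label 1 if x + y > 1 (a stand-in for cat / not-cat)
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A minimal network: one hidden layer of 4 units, one output unit
W1 = rng.normal(0, 1, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(2000):                 # each pass is one training "cycle"
    # forward pass: compute the network's current predictions
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: binary cross-entropy gradients, layer by layer
    d2 = (p - y) / len(X)
    dW2, db2_grad = h.T @ d2, d2.sum(axis=0)
    dh = (d2 @ W2.T) * h * (1 - h)
    dW1, db1_grad = X.T @ dh, dh.sum(axis=0)
    # nudge every weight slightly against its gradient
    W1 -= lr * dW1; b1 -= lr * db1_grad
    W2 -= lr * dW2; b2 -= lr * db2_grad

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5
accuracy = (pred == (y > 0.5)).mean()
```

Real deep learning frameworks such as TensorFlow or PyTorch automate exactly this forward-backward loop, only with many more layers and millions of images rather than 200 points.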

A great example of deep learning in practice is Apple’s Face ID. When setting up your phone, you train the algorithm by scanning your face. Each time you log on using Face ID, the TrueDepth camera captures thousands of data points to create a depth map of your face, and the phone’s built-in neural engine performs the analysis to predict whether it is you or not.


17 Oct 2018

You Have No Idea What Artificial Intelligence Really Does

WHEN SOPHIA THE ROBOT first switched on, the world couldn’t get enough. It had a cheery personality, it joked with late-night hosts, it had facial expressions that echoed our own. Here it was, finally — a robot plucked straight out of science fiction, the closest thing to true artificial intelligence that we had ever seen.

There’s no doubt that Sophia is an impressive piece of engineering. Parents-slash-collaborating-tech-companies Hanson Robotics and SingularityNET equipped Sophia with sophisticated neural networks that give Sophia the ability to learn from people and to detect and mirror emotional responses, which makes it seem like the robot has a personality. It didn’t take much to convince people of Sophia’s apparent humanity — many of Futurism’s own articles refer to the robot as “her.” Piers Morgan even decided to try his luck for a date and/or sexually harass the robot, depending on how you want to look at it.

“Oh yeah, she is basically alive,” Hanson Robotics CEO David Hanson said of Sophia during a 2017 appearance on Jimmy Fallon’s Tonight Show. And while Hanson Robotics never officially claimed that Sophia contained artificial general intelligence — the comprehensive, life-like AI that we see in science fiction — the adoring and uncritical press that followed all those public appearances only helped the company grow.

But as Sophia became more popular and people took a closer look, cracks emerged. It became harder to believe that Sophia was the all-encompassing artificial intelligence that we all wanted it to be. Over time, articles that might have once oohed and ahhed about Sophia’s conversational skills became more focused on the fact that they were partially scripted in advance.
