Category: Innovation

25 May 2019
Economist urges Government to manage disruptive technologies

If Government is to bring down the country’s soaring debt through growth, an American economist is strongly advising the Mia Mottley administration to better handle disruptive technologies.

But while many pundits advocate a complete embrace of this technology, University of Michigan economics professor Dr Linda Tesar is warning Government to expect significant short-term pain in exchange for potential long-run gains.

Disruptive technology has significantly altered the way businesses or entire industries have traditionally functioned.

In the most recent cases, much of it driven by e-commerce, businesses have been forced to change the way they approach their operations for fear of losing market share or becoming irrelevant.

Amazon has up-ended bricks-and-mortar retailers, Uber’s ride-sharing has changed the face of public transport, and Airbnb’s online hospitality marketplace has disrupted the hotel and tourism trade.

Tesar, who spoke as the Central Bank of Barbados’ 6th Distinguished Visiting Fellow, said: “The thing about disruptive technology such as Airbnb and Uber, it is great, but it is disruptive for a reason because it disrupts what is already there. This means that the only way to take advantage of disruptive technology is to be willing to upset existing businesses.

“If you bring in Uber, the taxi drivers aren’t going to be very happy. For example, when driverless truck technology comes on stream, it is going to mean layoffs for many drivers.

“What are these drivers going to do? Re-training and re-tooling are all easy things to say but not so easy in practice.”

Dr Tesar explained that because disruptive technologies are not contained by conventional rules, it is difficult to plan for them adequately in a growth strategy.

She said: “We don’t know what it is because if we knew, we would put a label on it. So, it is very hard to create the conditions for it to grow.

“I think it is tempting to say that growth is the way out, but I think it is dangerous to say that one is going to grow themselves out of debt. While it is probably good in the long run, in the short term it is very painful.”

The economist suggested that with a high debt-to-GDP ratio, Barbados has only three options if Government is to attain the goal of bringing debt down to 60 per cent of GDP by 2033: taxation, cuts in spending and growth.

She said that of the three, growth is the most desirable option, but noted that in the quest for quick growth, unmanaged disruptive technologies become a concern.

Dr Tesar said: “In bringing down the debt there is only three things to do, spend less, tax more or growth. All three of those things are going to contribute to primary surplus, so that you can have a sustainable level of debt.

“Out of those, if one had to pick one, growth would be the one that they choose but getting growth is never that simple.

“How do we grow our way out of debt? One way is to create a climate where businesses can say this is a place where we want to invest.  Another is increasing efficiency by getting more out of what you are currently doing.

“Finally, there is innovation, which sometimes shows up as a technology factor in the production function. It ends up being the residual that we can’t explain.”
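For readers who want the bookkeeping behind these remarks, the two standard textbook formulations Tesar is gesturing at can be written down compactly. The symbols below are the conventional ones (debt-to-GDP ratio b, real interest rate r, growth rate g, primary balance pb, and Cobb-Douglas technology A); this is an illustrative sketch, not material from the lecture.

```latex
% Debt dynamics: spending cuts and taxes raise the primary balance pb_t,
% while growth raises g and shrinks the debt ratio b_t over time.
\[
  b_{t+1} \;=\; \frac{1+r}{1+g}\, b_t \;-\; pb_t
\]

% Growth accounting: with a Cobb-Douglas production function
% Y = A K^{\alpha} L^{1-\alpha}, the "technology factor" A is backed out
% as the Solow residual, the part of output growth not explained by
% capital and labour:
\[
  \frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y}
  \;-\; \alpha\,\frac{\dot{K}}{K}
  \;-\; (1-\alpha)\,\frac{\dot{L}}{L}
\]
```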

Source:
https://barbadostoday.bb/2019/05/23/economist-urges-government-to-manage-disruptive-technologies/

23 May 2019
The future of AI is collaborative

AI is becoming increasingly widespread, affecting all facets of society — even Sonic drive-ins are planning to implement artificial intelligence to provide better customer service.

Of course, every time a new innovation appears in the realm of AI, fears arise regarding its potential to replace human jobs. While this is a reality of adapting to a more tech-driven society, these fears tend to ignore the collaborative and job-creating attributes that AI will have in the future.

The future’s most successful businesses will be those that learn to combine the best attributes of machines and human workers to achieve new levels of efficiency and innovation. In reality, the future of AI will be largely dependent on collaboration with living, breathing human beings.

AI augmenting human performance

In most business settings, AI does not have the ability to make crucial decisions. However, it does have the power to provide greater insights and support to ensure that you make the right decisions faster.

Simply put, there are many tasks that AI can perform faster and more efficiently than humans. It is estimated that we produce 2.5 quintillion bytes of data per day. While individual businesses only produce a tiny fraction of that total, there is no denying that trying to analyze data points drawn from diverse areas such as logistics, marketing and corporate software programs is becoming increasingly difficult.

This is where AI enters the picture. Machine learning allows AI to analyze data points at much greater speed than a person ever could, while also eliminating the risk of data entry errors that so often occur during manual work.

Such systems present data in comprehensive formats that make it far easier to identify trends, opportunities and risks to improve business practices. This trend is already having a significant impact in the business world. A 2016 survey revealed that “61 percent of those who have an innovation strategy say they are using AI to identify opportunities in data that would otherwise be missed.”

While AI may not be granted decision-making capabilities for crucial business tasks, its ability to provide reliable, error-free data is already leading to vital insights that completely transform business operations.

AI’s automation capabilities mean it is increasingly being used to streamline mundane tasks and give workers more time for high-level activities. This can make companies more efficient by lowering operating costs and improving productivity. In other words, as AI continues to advance, it will help us do our own jobs even better.

However, the biggest potential for AI comes from machine learning.

As AI learns from new data inputs, it becomes increasingly powerful and better able to assist with more complex tasks and algorithms, further expanding opportunities for collaboration and increased efficiency. Machine learning is helping AI applications better understand a wider range of instructions, and even the context in which a request is made.

This will lead to even faster and more efficient results and help to overcome common problems we see today, such as automated customer service systems being unable to resolve complaints or requests. Even as these systems grow more advanced, however, there will still be many instances where human interaction is needed to achieve the desired resolution.

People will help machines, too

The future doesn’t merely entail AI streamlining everyday tasks or helping us do our jobs better. AI is only possible thanks to human ingenuity, and that trend isn’t going away anytime soon. Future innovations and improvements will be largely dependent on what people are able to produce.

As Russell Glenister explains in an interview with Business News Daily, “Driverless cars are only a reality because of access to training data and fast GPUs, which are both key enablers. To train driverless cars, an enormous amount of accurate data is required, and speed is key to undertake the training. Five years ago, the processors were too slow, but the introduction of GPUs made it all possible.”

Improving GPUs isn’t the only way developers will continue to play a vital role in helping AI advance to new heights. Human guidance will also be necessary to help AI “learn” how to perform desired tasks — particularly for applications where real-time human interaction will be required.

This is especially apparent in virtual assistants such as Alexa or Siri. Alexa’s recent introduction of speech normalization AI has been found to reduce errors by 81 percent, but these results were only achieved after researchers provided training using a public data set containing 500,000 samples. Similar processes have also been used to give these virtual assistants their own distinct personalities.

As AI applications become more complex and more ingrained in day-to-day life, there will also be an increased need for individuals who can explain the findings and decisions generated by a machine.

Supervision of AI applications will also be necessary to ensure that unwanted outcomes — such as discrimination and even racism — are detected and eliminated to prevent harm. No matter how smart AI becomes, it will continue to require human guidance to find new solutions and better fulfill its intended function.

Though AI offers boundless opportunities for innovation and improvement, it won’t be able to achieve its full potential on its own. A collaborative future will see programmers, engineers and everyday consumers and workers more fully integrating AI into their daily lives.

When people and AI work together, the possibilities will be truly limitless.

Source:
https://thenextweb.com/podium/2019/05/21/the-future-of-ai-is-collaborative/

22 May 2019
In the 'post-digital' era, disruptive technologies are must-haves for survival

Call it the post-digital era. That’s the era that many organizations have now entered – the era in which advanced digital technologies are must-haves in order to stay competitive in their markets. At least, that is the way that Accenture sees it.

The research and consulting firm has recently published its annual Technology Vision report, looking at where organizations have been putting their technology investments, and which tools and trends they see as top priorities in the next year.

Information Management spoke with Michael Biltz, who heads the research for the annual study, about what this year’s study revealed and what lessons it holds for software professionals and data scientists.


Information Management: In your recent Technology Vision report, what are the most significant findings of interest to data scientists and data analysts?

Michael Biltz: The overarching takeaway from the 2019 Accenture Technology Vision report is that we’re entering a “post-digital” era in which digital technologies are no longer a competitive advantage – they’re the price of admission. This is supported by our research, which found that over 90 percent of companies invested in digital transformation in 2018, with collective spend reaching approximately $1.1 trillion. This indicates that we’re now at a point where practically every company is driving its business with digital capabilities – and at a faster rate than most people have anticipated.

This new environment necessitates new rules for business; what got your company to where it is today will not be enough to succeed in the new post-digital era.

For example, technology is creating a world of intensely customized and on-demand experiences, where every moment will become a potential market – what we call “momentary markets.” Already, 85 percent of those surveyed believe that customer demands are moving their organizations toward individualized and on-demand delivery models, and that the integration of these two capabilities represents the next big wave of competitive advantage.

In other words, success will be judged by a company’s ability to combine a deep understanding of its customers with individualized services delivered at just the right moment.


IM: How do those findings compare with the results of similar previous studies by Accenture?

Biltz: One of the great things about publishing this report annually is that we can observe how the trends evolve year-over-year, with the latest report drawing on insights from earlier editions. The 2019 report builds on last year’s theme of ‘Intelligent Enterprise Unleashed: Redefine Your Company Based on the Company You Keep,’ which focused on how rapid advancements in technologies—including artificial intelligence (AI), advanced analytics and the cloud—are accelerating the creation of intelligent enterprises and enabling companies to integrate themselves into people’s lives, changing the way people work and live.

Expanding on last year’s theme, the 2019 report discusses how the digital enterprise is at a turning point, with businesses progressing on their digital journeys. But digital is no longer a differentiating advantage—it’s now the price of admission. In this emerging “post-digital” world, in which new technologies are rapidly evolving people’s expectations and behavior, success will be based on an organization’s ability to deliver personalized “realities” for customers, employees and business partners. This will require understanding people at a holistic level and recognizing that their outlooks and needs change at a moment’s notice.

IM: Were you surprised by any of these findings, and why so or why not?

Biltz: We were surprised to see that on the one hand, companies are investing time and money into transforming their services and job functions, yet commitment to transforming and reskilling their workforces largely hasn’t kept pace. And when new roles and capabilities are created, many organizations still try to apply traditional (but increasingly outdated) tools, organization structures, training and incentives to support them. This is creating what we call a “Digital Divide” between companies and their employees.


IM: What are the data-related themes and technologies that will be of greatest interest to organizations over the next three years?

Biltz: While all of the themes described in my response to the first question are relevant to forward-looking organizations, the first trend, ‘DARQ Power: Understanding the DNA of DARQ,’ highlights four emerging technologies that companies should explore in order to remain competitive. These are distributed ledger technology (DLT), artificial intelligence (AI), extended reality (XR) and quantum computing (Q).

While these technologies are at various phases of maturity and application, each represents opportunities for businesses to stay ahead of the curve, differentiate themselves and vastly improve products and services.


IM: Did your study shed any light on how prepared organizations are to adopt these technologies and get expected value from them?

Biltz: Yes, our research into the development and application of DARQ technologies was quite revealing. Our research found that 89 percent of businesses are already experimenting with one or more of these technologies, expecting them to be key differentiators. However, the rate of adoption varies between the four technologies, as they’re currently at varying stages of maturity.

Here are a few specifics for each DARQ capability:

  • Distributed Ledger Technologies: 65 percent of executives surveyed reported that their companies are currently piloting or have adopted distributed ledger technologies into their business; 23 percent are planning to pilot this kind of technology in the future.
  • Artificial Intelligence: When asked to rank which of the DARQ technologies will have the greatest impact on their organization over the next three years, 41 percent listed AI as number one. Already, 67 percent are piloting AI solutions, or have already adopted AI across one or more business units.
  • Extended Reality: 62 percent are leveraging XR technologies, and this percentage is set to increase, with 24 percent evaluating how to use XR in the future.
  • Quantum Computing: Although quantum computing is the furthest of the DARQ technologies from full maturity, we’re seeing rapid advances in this area. Consider this: it took 19 years to get from a chip with two qubits (the quantum equivalent of a traditional computer’s bit) to one with 17; within two years of that, Google unveiled a 72-qubit chip. And the technology is becoming more readily available, with software companies releasing platforms that let organizations without their own quantum computers access quantum hardware via the cloud.


IM: What is your advice on how IT leaders can best educate the C-suite on which so-called disruptive technologies are worth investing in and which aren’t a good match?

Biltz: It’s important to first focus on your long-term vision and strategy for the company, asking questions such as, “Who do we as a company want to be in 5-10 years? What markets will we target? And what role do we want to play in the emerging digital ecosystems?”

Once you understand the answers to these questions, not only do the specific technologies fall into place, they also tend to drive a level of innovation that is usually expected only from the likes of the technology giants.


IM: What industries or types of organizations are the leaders with cutting-edge and disruptive technologies that other organizations can learn from?

Biltz: I think organizations can best learn from companies – regardless of industry – that are exploring leveraging more than one DARQ capability to unlock value. This is where true disruption lies: those exploring how to integrate these seemingly standalone technologies together will be better positioned to reimagine their organizations and set new standards for differentiation within their industries.

Volkswagen is an excellent example. The company is using quantum computing to test traffic flow optimization, as well as to simulate the chemical structure of batteries to accelerate development. To further bolster the results from quantum computing, the company is teaming up with Nvidia to add AI capabilities to future models.

Volkswagen is also testing distributed ledgers to protect cars from hackers, facilitate automatic payments at gas stations, create tamper-proof odometers, and more. And the company is using augmented reality to provide step-by-step instructions to help its employees service cars.

Read more:
https://www.dig-in.com/news/in-the-post-digital-era-disruptive-technologies-are-must-haves-for-survival

21 May 2019
Artificial intelligence better than humans at spotting lung cancer

Researchers have used a deep-learning algorithm to detect lung cancer accurately from computed tomography scans. The results of the study indicate that artificial intelligence can outperform human evaluation of these scans.

New research suggests that a computer algorithm may be better than radiologists at detecting lung cancer.

Lung cancer causes almost 160,000 deaths in the United States, according to the most recent estimates. The condition is the leading cause of cancer-related death in the U.S., and early detection is crucial for both stopping the spread of tumors and improving patient outcomes.

As an alternative to chest X-rays, healthcare professionals have recently been using computed tomography (CT) scans to screen for lung cancer.

In fact, some scientists argue that CT scans are superior to X-rays for lung cancer detection, and research has shown that low-dose CT (LDCT) in particular has reduced lung cancer deaths by 20%.

However, a high rate of false positives and false negatives still riddles the LDCT procedure. These errors typically delay the diagnosis of lung cancer until the disease has reached an advanced stage when it becomes too difficult to treat.

New research may safeguard against these errors. A group of scientists has used artificial intelligence (AI) techniques to detect lung tumors in LDCT scans.

Daniel Tse, from the Google Health Research group in Mountain View, CA, is the corresponding author of the study, the findings of which appear in the journal Nature Medicine.

‘Model outperformed all six radiologists’

Tse and colleagues applied a form of AI called deep learning to 42,290 LDCT scans, which they accessed from the Northwestern Electronic Data Warehouse and other data sources belonging to the Northwestern Medicine hospitals in Chicago, IL.

The deep-learning algorithm enables computers to learn by example. In this case, the researchers trained the system using a primary LDCT scan together with an earlier LDCT scan, if it was available.

Prior LDCT scans are useful because they can reveal an abnormal growth rate of lung nodules, thus indicating malignancy.
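The article does not reproduce the model’s architecture, but the training setup it describes (a primary LDCT volume plus an earlier scan when one exists) can be sketched in outline. The PyTorch toy below is an illustrative assumption from start to finish: the LungCancerNet name, the layer sizes and the random tensors are stand-ins, not the system built by Tse and colleagues.

```python
# Toy sketch of a two-input 3D CNN in the spirit of the setup described above:
# it scores a current low-dose CT volume together with an optional prior scan.
# Purely illustrative; layer sizes and names are assumptions, not the
# published architecture.
import torch
import torch.nn as nn


class LungCancerNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Two input channels: current scan and prior scan (zeros if no prior).
        self.features = nn.Sequential(
            nn.Conv3d(2, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, 2)  # logits: low risk vs. high risk

    def forward(self, current, prior=None):
        if prior is None:
            prior = torch.zeros_like(current)   # no earlier scan available
        x = torch.cat([current, prior], dim=1)  # (batch, 2, depth, height, width)
        x = self.features(x).flatten(1)
        return self.classifier(x)


if __name__ == "__main__":
    scan = torch.randn(1, 1, 64, 64, 64)   # fake 64^3 LDCT volume
    logits = LungCancerNet()(scan)         # forward pass without a prior
    print(logits.shape)                    # torch.Size([1, 2])
```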

Read more:
https://www.medicalnewstoday.com/articles/325223.php

20 May 2019

New Photonic Microchip Mimics Basic Brain Function

Although the bleeding edge of artificial intelligence has provided us with powerful tools that can outmatch us in specific tasks and best us in even our most challenging games, they all operate as isolated algorithms with none of the incredible associative power of the human brain. Our current computational architecture can’t match the efficiency of our own minds, but a composite team of researchers from the Universities of Münster, Oxford, and Exeter discovered a way to begin narrowing that gap by creating a small artificial neurosynaptic network powered by light.

Today’s computers store memory separately from the processor. The human brain, on the other hand, stores memory in the synapses that serve as the connective tissue between neurons. Rather than transferring memory for processing, the brain has concurrent access to that memory when connected neurons fire. While neuroscience generally accepts the theory of synaptic memory storage, we still lack a definitive understanding of how it works.

Even so, we understand that the brain handles multiple processes simultaneously. Human brains may not function with the numeric computational efficiency of a basic calculator by default, but it takes a supercomputer drawing enough electricity to power 10,000 homes to outpace one. The human brain sets a high benchmark for its artificial counterpart in processing power alone, and its significant architectural differences make its capabilities so much more dynamic that such comparisons are almost pointless.

The multinational research team behind the artificial neurosynaptic network built it from four neurons and 60 synapses, arranged with 15 synapses per neuron. The cerebrum alone contains billions of neurons and an even larger number of synapses (a ratio of roughly 10,000 to one); given that difference in scale, it is easy to see why this major accomplishment is only an initial step in a long journey.

But that doesn’t make the accomplishment any less impressive. The way the chip functions defies simple explanation, but in essence it uses a series of resonators to guide incoming laser-light pulses so they reach the artificial neurons as intended. Without proper wave guidance, the artificial neurons would fail to receive consistent input and would have no practical computational function. Made of phase-change material, the artificial neurons change their properties in response to a focused laser, and this process successfully imitates one tiny piece of the human brain.
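The optics resist a short code sample, but the network’s scale and wiring (four neurons, each driven by 15 weighted synapses, 60 synapses in total) can be mimicked with a plain numerical stand-in. The Python sketch below is an abstract illustration of that arrangement, not a simulation of the phase-change photonic hardware; the weights and threshold are invented for the example.

```python
# Abstract stand-in for the reported network: 4 neurons, each fed by 15
# weighted synapses (60 synapses total). In the real chip the inputs are
# light pulses; here they are just numbers. Illustration of scale only.
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_synapses = 4, 15
weights = rng.uniform(0.0, 1.0, size=(n_neurons, n_synapses))  # synaptic strengths
threshold = 4.0                                                # firing threshold

pulses = rng.integers(0, 2, size=n_synapses)   # incoming spike pattern (0/1)
drive = weights @ pulses                        # weighted input per neuron
fired = drive > threshold                       # which neurons fire

print("input drive:  ", np.round(drive, 2))
print("neurons fired:", fired)
```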

Read more:
https://www.extremetech.com/computing/291627-new-photonic-microchip-mimics-basic-brain-function

19 May 2019

Scientists: We Need to Protect the Solar System from Space Mining

Save The Solar System

A group of scientists want to declare much of the solar system to be official “space wilderness” in order to protect it from space mining. As The Guardian reports, the proposal calls for more than 85 percent of the solar system to be protected from human development.

“If we don’t think about this now, we will go ahead as we always have, and in a few hundred years we will face an extreme crisis, much worse than we have on Earth now,” Martin Elvis, senior astrophysicist at the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, and lead author, told The Guardian. “Once you’ve exploited the solar system, there’s nowhere left to go.”

Iron Horse

The research will be published in an upcoming issue of the journal Acta Astronautica.

It suggests that we could use up as much as an eighth of the solar system’s supply of iron — the researchers’ proposed “tripwire” threshold, beyond which we would risk exhausting space resources altogether — in just 400 years.
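The article does not spell out why one eighth is the proposed tripwire, but the underlying arithmetic is simple: under steady exponential growth, a resource stock that is one eighth consumed is only three doubling times away from being fully consumed. The doubling time used below is an illustrative assumption, not a figure from the paper.

```latex
% Under exponential growth with doubling time T_d, consumption goes
%   1/8 -> 1/4 -> 1/2 -> 1  in exactly three doublings,
% so crossing the 1/8 "tripwire" leaves roughly 3 * T_d of warning.
% With an illustrative doubling time of about 20 years, that is only
% about 60 years between the tripwire and exhaustion.
\[
  \tfrac{1}{8} \xrightarrow{T_d} \tfrac{1}{4}
  \xrightarrow{T_d} \tfrac{1}{2}
  \xrightarrow{T_d} 1,
  \qquad \text{warning time} \approx 3\,T_d
\]
```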

Asteroid Farmer

Numerous private companies have suggested that space mining could further human advancements in space — while turning a huge profit. For instance, U.S.-based mining company Planetary Resources is already planning to look for “critical water resources necessary for human expansion in space,” according to its website.

And the next generation of human explorers is bound to be swept up by the great promise of space resources as well. The Colorado School of Mines has started offering a PhD program that focuses on the “exploration, extraction, and use of [space] resources.”

Gold Rush

What’s less clear is whether humankind has learned its lesson here on Earth.

“If everything goes right, we could be sending our first mining missions into space within 10 years,” Elvis told The Guardian. “Once it starts and somebody makes an enormous profit, there will be the equivalent of a gold rush. We need to take it seriously.”

Source:
https://futurism.com/scientists-stop-space-mining-solar-system

18 May 2019

The First Steps To Digital Transformation? Get Your Data In Order

Recently, Gartner announced its top 10 strategic technology trends for 2019. It is a nice list, touching on digital transformation trends that range from empowered edge computing to artificial intelligence-driven autonomous things. But while Gartner’s trends sound great in annual reports and Forbes articles, operationally, most enterprises aren’t properly (or digitally) prepared to adopt these trends. The reason why? Today’s pace of business and the disorderly data that’s needed to make sense of it all.

In the past, IT environments were simpler and more accessible for humans. But with the advent of cloud, containers, multi-modal delivery and other new technologies resulting in inordinately massive and complex environments, IT is being forced to move at machine speed, rendering manual processes too slow and inefficient.

To keep up with the rapid pace and scale of today’s digital environments, enterprises are turning to AIOps, which is powered by machine learning (ML) and artificial intelligence (AI). Unfortunately, ML-based algorithms and AI-based automation, key elements of unlocking digital transformation, are easier said than done. The underlying reason is that ML-based algorithms, by themselves, aren’t sophisticated enough to deal with today’s ephemeral, containerized, cloud-based world. ML needs to evolve into AI, and to do that, it needs cleaner actionable data to automate processes.

But attaining high-quality data presents its own unique challenges, and enterprises that do not have the right strategy in place will encounter cascading problems when trying to implement digital transformation initiatives in the future.

How To Build A High-Quality Data Strategy — Two Types Of Data

Imagine cooking a meal from scratch only to realize you forgot to chop an onion. You might be able to add it in later, but it won’t add the same texture and flavor. Too often, enterprises embark on an AI/ML transformation only to realize mid-development that they are missing key performance indicator (KPI) data that they did not foresee needing. Such mid-process realizations can have deleterious effects on a digital transformation initiative, stalling or even crippling its progress. Simply put, AI/ML doesn’t function without the right data.

The first step to building a high-quality data strategy is realizing that you need two separate data strategies: one for historical data and the other for real-time data or continuous learning.

Historical data is crucial for AI/ML strategies and serves as the fundamental building block for any effective anomaly detection, predictor or pattern analysis implementation. However, getting the right historic training data is much more difficult and challenging than many might assume.

There are several key questions to consider:

• What do your end goals and use cases for automation look like?

• What data do those use cases demand?

• How much of that data do you need?

• At what fidelity do you need that data?

Next, realize that training AI/ML on historical data is not enough. It needs to ingest real-time data to respond to and automate processes. Real-time data is the fuel that allows the ML algorithms to learn and adapt to new situations and environments. Unfortunately, real-time data presents its own set of challenges, too. The volume, velocity, variety and veracity of data can be overwhelming and expensive to manage.

Finally, enterprises must ensure the ML algorithms don’t acquire bad habits as a consequence of using poor data. And like bad human habits, a bad habit is hard to get an AI to unlearn once formed. These habits could stem from outliers that are erroneously treated as normal, or from data gaps that skew newly learned behavior. Fundamentally, an AI/ML platform that learns from bad data can ultimately produce extraneous false alerts and have negative impacts on IT operations. There are multiple ways to avoid going down this path, but they all boil down to one important thing: data quality.
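As a concrete illustration of that last point, the sketch below shows one common way to keep obviously bad data out of a training set: flag extreme outliers with a robust z-score and fill only short gaps before the series reaches a model. The metric, thresholds and limits are assumptions made for the example, not a prescribed AIOps recipe.

```python
# Minimal sketch: screen a KPI time series for outliers and gaps before it is
# used as ML training data. The metric, thresholds and limits are illustrative.
import numpy as np
import pandas as pd

# Fake one-minute CPU metric with a gap and an erroneous spike baked in.
idx = pd.date_range("2019-05-01", periods=120, freq="min")
cpu = pd.Series(50 + np.random.default_rng(1).normal(0, 2, len(idx)), index=idx)
cpu.iloc[40:45] = np.nan    # a data gap
cpu.iloc[80] = 500          # an erroneous spike

# Robust z-score: distance from the median in units of the MAD.
mad = (cpu - cpu.median()).abs().median()
robust_z = (cpu - cpu.median()) / (1.4826 * mad)

clean = cpu.mask(robust_z.abs() > 5)   # drop extreme outliers
clean = clean.interpolate(limit=10)    # fill short gaps only

print("flagged outliers:", int((robust_z.abs() > 5).sum()))
print("remaining gaps:  ", int(clean.isna().sum()))
```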

The Two Most Important Ingredients For Data Quality

Historic and real-time training data are foundational to AI, ML and automation. However, data quality remains a major sore point for enterprises that underestimate the complexity of that challenge. Fortunately, data quality issues don’t have to be a terminal problem if approached strategically.

The most important step is to have full visibility both horizontally across operational silos and vertically, deep into infrastructure layers. You won’t know what KPIs are going to be important, so an ideal solution is one that allows you to ingest as much data as possible from as many places as possible right from the start.

It is also crucial that data be stored and normalized in a way that connects it to other data. Data that rests in silos will never be able to power automation; it has to have context. An ideal solution is one that can ingest data and contextualize it simultaneously. Spending time stitching data together, normalizing and correlating it after it is ingested is time-consuming and difficult.
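A tiny example of what ingesting data and contextualizing it simultaneously can look like in practice: tagging each raw event with the service and team it belongs to at ingest time, rather than stitching silos together afterwards. All table and column names below are invented for the illustration.

```python
# Minimal sketch: attach context to operational events as they are ingested,
# so downstream analysis sees one connected record instead of three silos.
# All table and column names are invented for the example.
import pandas as pd

events = pd.DataFrame({
    "host": ["web-01", "db-02", "web-01"],
    "metric": ["latency_ms", "latency_ms", "error_rate"],
    "value": [250.0, 900.0, 0.02],
})
topology = pd.DataFrame({     # which service each host belongs to
    "host": ["web-01", "db-02"],
    "service": ["checkout", "orders-db"],
})
ownership = pd.DataFrame({    # which team owns each service
    "service": ["checkout", "orders-db"],
    "team": ["payments", "data-platform"],
})

contextualized = (
    events.merge(topology, on="host", how="left")
          .merge(ownership, on="service", how="left")
)
print(contextualized)
```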

Read more:
https://www.forbes.com/sites/forbestechcouncil/2019/05/13/the-first-steps-to-digital-transformation-get-your-data-in-order/#432cb38b3c61

14 May 2019

Five Things To Know About AI

Artificial intelligence (AI) holds a lot of promise when it comes to almost every facet of how businesses are run. Global spending on AI is rising with no signs of slowing down — IDC estimates that organizations will invest $35.8 billion in AI systems this year. That’s an increase of 44% from 2018. With all the fanfare, it’s easy to get lost in the noise and excitement — and with all of the vendors out there touting their various AI-based solutions, it’s also easy to get confused about which is which and what does what.

So, how do you muddle through the noise and make sure you really understand AI? Here are five things I believe you should be aware of when it comes to providing an AI solution or evaluating one for your business.

1. AI-Washing

Because AI is a trending technology that many believe holds great potential, vendors will sometimes claim they have AI-enabled capabilities when they really don’t. There’s no ruling body that defines what “AI” means — vendors are free to use it however they want. The same thing happened when the cloud entered the market, which caused the term “cloud washing” to emerge for products and services that were hyped as cloud-based but weren’t actually in the cloud. The same goes for “greenwashing” where companies lead consumers to believe they follow environmental best practices but really don’t.

Today’s “AI washing” makes it harder to tell truth from fiction. A Gartner press release from 2017 warned that AI washing is creating confusion and obscuring the technology’s real benefits. Many vendors are focused on the goal of “simply marketing their products as AI-based rather than focusing first on identifying the needs, potential uses, and the business value to customers,” according to Gartner research vice president Jim Hare.

It’s important to be clear about what AI is and about how a vendor is using the term. For instance, AI isn’t the same thing as automation. Automation allows process scripts to take care of previously manual, repetitive tasks, but the system isn’t learning and evolving. It’s just doing what it’s told to do. AI’s goal is generally to mimic human behavior and learn as it goes to become better at the tasks assigned to it over time.
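To make that distinction concrete, the toy contrast below puts a fixed automation rule next to a model whose behaviour is learned from labelled examples and can change when it is retrained. It is an illustration of the difference described above, not any vendor’s product; the ticket-triage scenario and numbers are invented.

```python
# Toy contrast between automation (a fixed rule) and machine-learned "AI"
# (behaviour derived from data, so it can change when retrained).
from sklearn.linear_model import LogisticRegression  # illustrative model choice

# Automation: a hard-coded rule that never changes, however the data drifts.
def automated_triage(ticket_length: int) -> str:
    return "escalate" if ticket_length > 500 else "auto-reply"

# Learned behaviour: the decision boundary comes from labelled examples.
X = [[120], [480], [600], [900]]   # ticket lengths seen so far
y = [0, 0, 1, 1]                   # 0 = auto-reply, 1 = escalate
model = LogisticRegression().fit(X, y)

print(automated_triage(550))       # always the same answer
print(model.predict([[550]]))      # can shift once retrained on new labels
```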

2. Potential For Misuse

As with anything, AI can be used for nefarious purposes. A tool is only as “good” or “bad” as the hands that hold it. There are those who seek to use AI to control their citizenry via a nationwide network of facial recognition cameras or build autonomous weapons, which I would consider bad applications. Fortunately, many hands have already found beneficial uses for AI, including accurate medical diagnoses, new cancer treatment approaches and language translation.

Another positive sign is that governments are working toward regulation and accountability. France and Canada announced plans to start the International Panel on AI to explore “AI issues and best practices,” and the U.S. Pentagon asked the Defense Innovation Board to create an ethical framework for using AI in warfare.

Ultimately, I believe AI is the best hope for overcoming the potential misuse of AI. For instance, much has been made of the inherent bias that keeps showing up in AI systems. IBM, for example, recently announced its automated bias-detection solutions. Since humanity can’t put the AI genie back in the bottle, we can devise good AI systems to help countermand its potential negative applications.

3. The Idea That AI Will Take People’s Jobs

Yes, it will eliminate some jobs — typically low-level and repetitive work — but it will likely create jobs, too. Gartner forecasted that AI will create more jobs than it eliminates by 2020, with a net increase of over two million jobs in 2025.

I believe AI also will take on tasks which the human brain is simply incapable of handling. AI can be trained to analyze vast data sets to gain insights that could elude the human mind. This could be particularly helpful in the creation of new drugs, saving time, effort and millions of dollars on development and clinical trials. I also believe AI could be useful for finding unique biological markers that enable individual-specific treatment. That said, this doesn’t mean that human oversight and involvement isn’t required.

4. The Idea That AI Will Change The Way People Think

AI probably won’t cause humans to rely on machines to do their jobs and make their decisions. AI, however well-developed it gets, can never replace the complexities of the human brain. That makes it even less reliable than our brains — meaning that AI complements, rather than replaces, humans.

It’s unlikely that AI will yield flawless results. For instance, AI-powered speech transcriptions still serve up hilarious errors. Facial recognition programs still misidentify people. We can think of AI as an assistant to final human judgment, but a human must still be in the loop.

5. Lack Of Education

Here’s what I think is the biggest problem with AI in today’s world: We just don’t have enough people who are educated on how it works and how to leverage it. I think we’re staring right into the face of a looming skills gap.

For instance, an O’Reilly report on AI (via Information Age) found that over half of respondents felt their organizations needed machine learning experts and data scientists (although O’Reilly is an e-learning provider). And according to Deloitte, “Since nearly every major company is actively looking for data science talent, the demand has rapidly outpaced the supply of people with required skills.” In the United States alone, McKinsey projected (via Deloitte) that there will be a shortfall of 250,000 data scientists by 2024.

Students need to be learning about AI starting as early as middle school. Our children need to be equipped to handle the inevitable future that AI will bring. Otherwise, the shortage of workers who can actually leverage these technologies will expand. And that’s not good for anyone.

Act With Intelligence

Between the extremes of marketing hype and visions of Armageddon lies the truth of AI. Yes, there’s potential for misuse, but the majority of applications are and will be beneficial. You can’t ignore AI; organizations that find appropriate use cases for AI may get started sooner and find success sooner than their laggard competitors.

Source:
https://www.forbes.com/sites/forbesbusinessdevelopmentcouncil/2019/05/14/five-things-to-know-about-ai/#6106c7649b71

13 May 2019

A new AI acquired humanlike ‘number sense’ on its own

Artificial intelligence can share our natural ability to make numeric snap judgments.

Researchers observed this knack for numbers in a computer model composed of virtual brain cells, or neurons, called an artificial neural network. After being trained merely to identify objects in images — a common task for AI — the network developed virtual neurons that respond to specific quantities. These artificial neurons are reminiscent of the “number neurons” thought to give humans, birds, bees and other creatures the innate ability to estimate the number of items in a set (SN: 7/7/18, p. 7). This intuition is known as number sense.

In number-judging tasks, the AI demonstrated a number sense similar to humans and animals, researchers report online May 8 in Science Advances. This finding lends insight into what AI can learn without explicit instruction, and may prove interesting for scientists studying how number sensitivity arises in animals.

Neurobiologist Andreas Nieder of the University of Tübingen in Germany and colleagues used a library of about 1.2 million labeled images to teach an artificial neural network to recognize objects such as animals and vehicles in pictures. The researchers then presented the AI with dot patterns containing one to 30 dots and recorded how various virtual neurons responded.

Some neurons were more active when viewing patterns with specific numbers of dots. For instance, some neurons activated strongly when shown two dots but not 20, and vice versa. The degree to which these neurons preferred certain numbers was nearly identical to previous data from the neurons of monkeys.
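The probing procedure described here can be sketched in a few lines: render dot displays, push them through a convolutional network, and ask which hidden units respond most strongly to which numerosities. The small randomly initialised network below is only a stand-in for the trained object-recognition model used in the study; the image size, unit count and numerosity list are assumptions for illustration.

```python
# Sketch of the probing procedure: show dot patterns to a convolutional network
# and record which hidden units prefer which numerosities. The small random
# network is a stand-in for the trained object-recognition model in the study.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def dot_image(n_dots: int, size: int = 64) -> torch.Tensor:
    """Render n_dots single-pixel dots at random positions."""
    img = np.zeros((size, size), dtype=np.float32)
    img[rng.integers(0, size, n_dots), rng.integers(0, size, n_dots)] = 1.0
    return torch.from_numpy(img)[None, None]   # shape (1, 1, 64, 64)

net = nn.Sequential(                            # 32 "virtual neurons" at the top
    nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

numerosities = [1, 2, 4, 8, 16, 30]
with torch.no_grad():
    # Average each unit's response over 20 random displays per numerosity.
    responses = torch.stack([
        torch.stack([net(dot_image(n))[0] for _ in range(20)]).mean(0)
        for n in numerosities
    ])                                          # shape (6, 32)

preferred = responses.argmax(dim=0)             # preferred numerosity per unit
for unit in range(5):
    print(f"unit {unit}: prefers {numerosities[int(preferred[unit])]} dots")
```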

Dot detectors

(Figure caption) A new artificial intelligence program viewed images of dots previously shown to monkeys, including images with one dot and images with even numbers of dots from 2 to 30. Much like the number-sensitive neurons in monkey brains, number-sensitive virtual neurons in the AI preferentially activated when shown specific numbers of dots. As in monkey brains, the AI contained more neurons tuned to smaller numbers than larger numbers.

To test whether the AI’s number neurons equipped it with an animal-like number sense, Nieder’s team presented pairs of dot patterns and asked whether the patterns contained the same number of dots. The AI was correct 81 percent of the time, performing about as well as humans and monkeys do on similar matching tasks. Like humans and other animals, the AI struggled to differentiate between patterns that had very similar numbers of dots, and between patterns that had many dots (SN: 12/10/16, p. 22).

This finding is a “very nice demonstration” of how AI can pick up multiple skills while training for a specific task, says Elias Issa, a neuroscientist at Columbia University not involved in the work. But exactly how and why number sense arose within this artificial neural network is still unclear, he says.

Nieder and colleagues argue that the emergence of number sense in AI might help biologists understand how human babies and wild animals get a number sense without being taught to count. Perhaps basic number sensitivity “is wired into the architecture of our visual system,” Nieder says.

Read more:
https://www.sciencenews.org/article/new-ai-acquired-humanlike-number-sense-its-own

12 May 2019
How Digitalization – through automation and AI – is transforming demand planning

The process of demand planning is undergoing enormous transformation. While it has historically been a reactive process involving responding to changing market conditions, the advent of technology is allowing – and at the same time forcing – demand planning to become much more strategic. Digitalizing demand planning is becoming imperative for organizations that want to stay ahead of competitors, impress customers and drive company profits. Demand planning is no longer a case of simply reacting – instead, it requires continuous proactivity to successfully predict demand. In line with this, artificial intelligence (AI) is becoming an intrinsic part of the demand planning function, further boosting planning accuracy by sensing the market’s desires.

A recent Capgemini report found that, when it comes to supply chain digitalization, organizations work on too many projects simultaneously, with close to 30 projects at pre-deployment stages. This high volume inevitably leads to some initiatives failing to take off, and places the most critical projects at risk. The digitalization of demand planning – and subsequent implementation of AI – is one example of a critical initiative which businesses must prioritize, and that has tangible and quick benefits, including:

Strategic decision-making

AI drives automation of the more traditional and labour-intensive tasks within demand planning to the next level – most notably, analyzing and interpreting batches of data. Not only is AI able to do this more accurately and quickly, but – by automating these critical but complex tasks – the team’s time is freed up so that they can focus on more strategic business endeavours.

Additionally, demand planners no longer need to dedicate large amounts of time to creating short-term demand plans or triggering stock replenishment – AI can do this for them. The team can then concentrate on progressing higher-value business objectives that will have a greater impact on the organization. Demand planners will need to interpret their role more strategically, e.g. dedicate more time to investigate how to improve operational efficiency, identify new ways to increase profits and become more involved in the business as a whole.

Improved forecasting

With so much data readily available, it has become more difficult to detect customer purchasing patterns. Artificial intelligence can work to cut through this noise, processing the data to uncover subtle patterns that humans would have missed. By aggregating datasets from Enterprise Resource Planning (ERP), Customer Relationship Management (CRM) and Internet of Things (IoT) systems – and combining this with external variables and contextual data such as a calendar of events, seasonality and the weather – AI works to provide more accurate demand planning forecasts.

If this holistic approach is taken, AI forecasts can then be linked through supply and inventory planning to automate replenishment triggers, so that organizations consistently have the correct amount of products in stock. This results in increased sales by improving order fill rates and shelf availability.
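A minimal sketch of the pipeline described above: a forecasting model that combines historical sales with contextual features (day of week, a promotion flag, temperature), whose prediction then drives a simple replenishment trigger. The feature names, model choice, synthetic data and reorder rule are all assumptions for illustration, not a recommendation from the report.

```python
# Minimal sketch: demand forecast from historical sales plus contextual
# features, feeding a simple replenishment trigger. Everything here
# (features, model, data, reorder rule) is illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
days = pd.date_range("2019-01-01", periods=180, freq="D")

history = pd.DataFrame({
    "day_of_week": days.dayofweek,
    "promo": rng.integers(0, 2, len(days)),
    "temperature": 20 + 8 * np.sin(np.arange(len(days)) / 29),
})
# Synthetic demand: weekend lift, promotion lift, weather effect, noise.
history["units_sold"] = (
    100 + 15 * (history["day_of_week"] >= 5) + 30 * history["promo"]
    + 1.5 * history["temperature"] + rng.normal(0, 5, len(days))
)

features = ["day_of_week", "promo", "temperature"]
model = GradientBoostingRegressor().fit(history[features], history["units_sold"])

# Forecast tomorrow and trigger replenishment if stock will not cover it.
tomorrow = pd.DataFrame({"day_of_week": [5], "promo": [1], "temperature": [24]})
forecast = model.predict(tomorrow)[0]
on_hand, safety_stock = 120, 20
reorder_qty = max(0.0, forecast + safety_stock - on_hand)
print(f"forecast: {forecast:.0f} units, reorder: {reorder_qty:.0f} units")
```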

For example, a global organization for personal care products built a demand-driven supply chain using data analytics to increase visibility into real-time demand trends. This enabled the company to produce and store the exact amount of inventory required to replace what consumers actually purchased, instead of manufacturing based on forecasts from historical data. The company also utilized point-of-sales (POS) data from retailers such as Walmart to generate forecasts that triggered shipments to stores and informed internal deployment decisions and tactical planning.

This approach helped the company to effectively track stock keeping units and shipping locations. As a result, it saw up to a 35% reduction in forecast errors for a one-week planning horizon and 20% for a two-week horizon.

Greater responsiveness

Supply chain channels are undoubtedly vulnerable to a variety of external factors – for example, natural disasters or the availability of raw materials – that can impact demand forecasting. Rather than relying on historical data, AI and machine learning tools use real-time calculations to respond to and find resolutions for supply chain disruptions. As well as this, automation allows for rapid responses to changing consumer demand, improving sales and profits, and boosting consumer loyalty. This added responsiveness boosts the accuracy of demand planning and limits monetary losses.

An office products retailer, for example, had disparate systems working autonomously with different SKUs, forecasting and planning processes. Management recognized that, without a “synchronized view of demand” across its supply chain, the company could not respond rapidly enough to market changes. Capgemini and a software solutions provider were brought in to implement an innovative solution designed to empower the retailer with synchronized decision-making and, ultimately, a unique competitive advantage. The solution is allowing the company to proactively meet fluctuations by tightly integrating a range of core business processes, from merchandise planning through to the replenishment process. The company expects this to increase top-line revenue by delivering real strategic value and strong demand chain results.

As with any significant organizational change, an agile approach – involving small steps, small failures, and fast recovery – can deliver the quick results that clearly demonstrate the value of cutting-edge demand planning approaches, such as the implementation of AI.

With this in mind, a proof-of-concept (POC) approach is highly recommended. This allows enterprises to gain a better understanding of the costs and returns of automation, as well as the skills and alterations that will be needed to accommodate it. Ultimately, the sooner an organization begins to adjust the way it goes about demand planning, the sooner the benefits will become apparent.

Source:
https://www.supplychaindigital.com/technology/how-digitalization-through-automation-and-ai-transforming-demand-planning