
Category: Innovation

22 Jun 2020


A team of researchers claim to have achieved quantum teleportation using individual electrons.

Quantum teleportation relies on quantum entanglement, a phenomenon in which particles can affect each other even if they aren't physically connected. Albert Einstein famously questioned the effect, deriding it as "spooky action at a distance."

Despite the name, quantum teleportation has nothing to do with the teleportation chambers of sci-fi movies: it transports information, not matter.

Scientists have recently shown that pairs of photons — massless elementary particles — could form entangled qubits, the basic unit of quantum information. The discovery suggested that these qubits could transmit information via quantum teleportation.

Electron Qubits
The new research, however, marks the first time the same has been demonstrated using individual electrons to form qubits.

“We provide evidence for ‘entanglement swapping,’ in which we create entanglement between two electrons even though the particles never interact, and ‘quantum gate teleportation,’ a potentially useful technique for quantum computing using teleportation,” John Nichol, an assistant professor of physics at the University of Rochester, co-author of the new paper published in Nature Communications this week, said in a statement. “Our work shows that this can be done even without photons.”
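The experiment itself involves real electron spins in semiconductor quantum dots, but the statistical signature of entanglement is easy to illustrate in software. The toy numpy sketch below (purely illustrative, not the paper's method) prepares a two-qubit Bell state and samples joint measurements: each individual result is random, yet the two results always agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell state |Phi+> = (|00> + |11>) / sqrt(2), over the basis |00>, |01>, |10>, |11>
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Probability of each joint outcome is the squared amplitude.
probs = np.abs(phi_plus) ** 2

# Sample many joint measurements in the computational basis.
outcomes = rng.choice(4, size=10_000, p=probs)
qubit_a = outcomes // 2   # first qubit's measured bit
qubit_b = outcomes % 2    # second qubit's measured bit

# Each qubit alone looks like a fair coin, yet the pair is perfectly correlated.
print(np.all(qubit_a == qubit_b))  # True
```

This captures only the correlations; the "swapping" in the paper goes further, transferring that correlation to electrons that never interacted.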

Info Dump
Allowing electrons to use quantum-mechanical interactions over a distance without touching could revolutionize the development of quantum computers. After all, semiconductors inside conventional computers use electrons to transmit information.

“Individual electrons are promising qubits because they interact very easily with each other, and individual electron qubits in semiconductors are also scalable,” Nichol said.

But passing this information over longer distances remains a big hurdle. “Reliably creating long-distance interactions between electrons is essential for quantum computing,” Nichol added.

Source: https://futurism.com/the-byte/scientists-demonstrate-quantum-teleportation-using-electrons

20 Jun 2020

Why Disciplined Innovation Matters Most Now

VP of manufacturing, technology and innovation at Jabil, with over 20 years of experience helping global teams create cutting-edge manufacturing.

A double dose of disciplined innovation will go a long way in helping the world right now. Think about all the benefits that could come from innovations such as wearable patches embedded with biosensor technology to monitor a patient’s vital signs remotely, along with patches equipped with acoustic and audio sensors to interpret cough patterns and respiratory rates, as well as heart and lung sounds.

Expect to see an influx of sensor-based wearable devices, which will give overworked health care workers a much-needed break by enabling remote and low- or no-touch patient monitoring. Emerging advances in electrochemical biosensors for pathogen detection also promise big benefits in more rapidly detecting and combatting viral and bacterial pathogens.

Innovations are popping up everywhere, many leveraging advances in flexible hybrid electronics (FHE). For instance, integrating capacitive touch capabilities with FHE enables functionality to be embedded directly onto plastic or glass surfaces without the need for knobs, buttons or crevices. For hospitals, the opportunity to have device and appliance surfaces that are easy to clean and disinfect is hugely helpful.

In my last column, I explained how innovation goes into overdrive in times of crisis as a shared sense of urgency and purpose propel projects forward. Equally important is infusing each new product with proven principles and processes to maximize results.

A New Take On Innovative Thinking

I have long been a Clayton Christensen fan, ever since he coined the term “disruptive innovation” to describe how a product or service can take root in simple applications at the bottom of a market and then move upmarket to displace established competitors. I’ve also followed Professor Christensen’s fellow Harvard professor and strategy guru Michael Porter.

For many, being a disruptor means thinking “outside the box” to challenge conventional wisdom. As an engineer, I favor applying discipline and rigor to build a better box. It is a more practical spin on the theories espoused by the aforementioned professors because discipline helps you adapt more readily to new situations and constraints.

You may be faced with limited time, lack of materials, financial restrictions or constrained people resources. There is always something that necessitates a modification to the initial plan. The recent health crisis underscores this point, especially when you realize what innovators have managed to accomplish despite facing every limitation on the list.

In The Innovator’s Solution, Professor Christensen describes how disruptive innovators rely on discovery-driven planning to make decisions based on pattern recognition. Conversely, organizations seeking to sustain innovation plan deliberately and make project decisions based on numbers and rules. In the world of manufacturing, we have to balance both sides of the innovation equation, despite the obvious conflicts between them.

Unlimited Is Not In My Lexicon

In supporting any new product development, I look closely at the constraints, as the word “unlimited” is not in my lexicon. Industrialization dictates that we constantly review and leverage the best processes, technologies and people. We continually monitor outcomes to yield the best results at the “lowest landed cost,” which is the total price of a product once it has reached the buyer’s doorstep.

Each product brainstorm is framed by the core fundamentals of advanced manufacturing: design, materials, process, quality and test. We adhere to these tenets without exception and innovate within these foundational pillars.

This approach ensures the right investments are made at the right time without stifling innovation. Early entrants tend to address an urgent need, such as the emergence of infrared sensors in retail, which take shoppers’ temperatures as they enter the store. Interest in developing devices activated by voice control and gesture recognition also are on the rise, along with sanitizing and disinfecting robots and accelerated research and development for autonomous system innovations.

Through structured manufacturing and industrialization initiatives, companies can apply disciplined innovation to build upon these early product advancements, reallocate their own product development investments or become hyperfocused on solutions to help society.

Robust, Repeatable Rigor

Each new product comes with its own set of constraints. Clearly, autonomous systems for situation awareness applications carry significant operational risk, but they will get smarter and safer, thanks to additional sensors and actuators as well as cross-discipline expertise and arduous testing.

Today, an overriding sense of urgency is condensing industrialization phases without compromising the robust, repeatable processes and manufacturing rigor required to ensure the highest levels of quality possible. Disciplined innovation matters most when time is of the essence. That is why, right now, it is going to catalyze unbelievable product breakthroughs that will make us stronger, safer and better prepared for whatever the future holds.

Source: https://www.forbes.com/sites/forbestechcouncil/2020/06/19/why-disciplined-innovation-matters-most-now/#6de3f9801c20

26 May 2020

Reality Check: The Benefits of Artificial Intelligence

Gartner believes Artificial Intelligence (AI) security will be a top strategic technology trend in 2020, and that enterprises must gain awareness of AI’s impact on the security space. However, many enterprise IT leaders still lack a comprehensive understanding of the technology and what it can realistically achieve today. It is important for leaders to question exaggerated marketing claims and over-hyped promises associated with AI so that there is no confusion as to the technology’s defining capabilities.

IT leaders should take a step back and consider whether their company and team are at a high enough level of security maturity to adopt advanced technology such as AI successfully. The organization’s business goals and current focuses should align with the capabilities that AI can provide.

A study conducted by Widmeyer revealed that IT executives in the U.S. believe that AI will significantly change security over the next several years, enabling IT teams to evolve their capabilities as quickly as their adversaries.

Of course, AI can enhance cybersecurity and increase effectiveness, but it cannot solve every threat and cannot yet replace live security analysts. Today, security teams use modern Machine Learning (ML) in conjunction with automation to minimize false positives and increase productivity.

As adoption of AI in security continues to increase, it is critical that enterprise IT leaders face the current realities and misconceptions of AI, such as:

Artificial Intelligence as a Silver Bullet
AI is not a solution; it is an enhancement. Many IT decision leaders mistakenly consider AI a silver bullet that can solve all their current IT security challenges without fully understanding how to use the technology and what its limitations are. We have seen AI reduce the complexity of the security analyst’s job by enabling automation, triggering the delivery of cyber incident context, and prioritizing fixes. Yet, security vendors continue to tout further exaggerated AI-enabled capabilities of their solutions without being able to point to AI’s specific outcomes.

If Artificial Intelligence is identified as the key, standalone method for protecting an organization from cyberthreats, the overpromise of AI, coupled with the inability to clearly identify its accomplishments, can have a very negative impact on the strength of an organization’s security program and on the reputation of the security leader. In this situation, Chief Information Security Officers (CISOs) will, unfortunately, realize that AI has limitations and that the technology alone is unable to deliver the hoped-for results.

This is especially concerning given that 48 percent of enterprises say their budgets for AI in cybersecurity will increase by 29 percent this year, according to Capgemini.

Automation Versus Artificial Intelligence
We have seen progress surrounding AI in the security industry, such as the enhanced use of ML technology to recognize behaviors and find security anomalies. In most cases, security technology can now correlate the irregular behavior with threat intelligence and contextual data from other systems. It can also use automated investigative actions to provide an analyst with a clear picture of whether something is malicious, with minimal human intervention.
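Vendors' detection engines are proprietary, but the underlying idea, an ML model that learns what normal behavior looks like and flags deviations, can be sketched in a few lines. The example below is hypothetical: it uses scikit-learn's IsolationForest on made-up login features, and the feature choices and values are illustrative assumptions, not any vendor's implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic features per login event: [hour of day, MB transferred]
normal = np.column_stack([
    rng.normal(13, 2, 500),    # daytime logins
    rng.normal(50, 10, 500),   # typical transfer volume
])
suspicious = np.array([[3.0, 900.0]])  # a 3 a.m. login moving 900 MB

# Fit on historical "normal" activity only; the forest isolates outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for anomalies.
print(model.predict(suspicious))  # [-1]
```

In practice the flagged event would then be enriched with threat intelligence and context, as the paragraph above describes, before an analyst decides what to do.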

A security leader should consider the types of ML models in use, the biases of those models, the capabilities possible through automation, and if their solution is intelligent enough to build integrations or collect necessary data from non-AI assets.

AI can handle the bulk of a Security Analyst’s work, but not all of it. As a society, we still do not have enough trust in AI to take it to the next level, which would be fully trusting AI to take corrective actions on the anomalies it identifies. Those actions still require human intervention and judgment.

Biased Decisions and Human Error
It is important to consider that AI can make bad or wrong decisions. Because humans create and train the models behind AI, a system can make biased decisions based on the information it receives.

Models can produce a desired outcome for an attacker, and security teams should prepare for malicious insiders to try to exploit AI biases. Such destructive intent to influence AI’s bias can prove to be extremely damaging, especially in the legal sector.

By feeding AI false information, bad actors can trick it into implicating someone in a crime. As an example, just last year, a judge ordered Amazon to turn over Echo recordings in a double murder case. In instances such as these, a hacker has the potential to wrongfully influence ML models and manipulate AI to put an innocent person in prison. In making AI more human, the likelihood of mistakes will increase.

What’s more, IT decision-makers must take into consideration that attackers are utilizing AI and ML as an offensive capability. AI has become an important tool for attackers, and according to Forrester’s Using AI for Evil report, mainstream AI-powered hacking is just a matter of time.

AI can be leveraged for good and for evil, and it is important to understand the technology’s shortcomings and adversarial potential.

The Future of AI in Cybersecurity
Though it is critical to acknowledge AI’s realistic capabilities and its current limitations, it is also important to consider how far AI can take us. Applying AI throughout the threat lifecycle will eventually automate and enhance entire categories of Security Operations Center (SOC) activity. AI has the potential to provide clear visibility into user-based threats and enable increasingly effective detection of real threats.

There are many challenges IT decision-makers face when over-estimating what Artificial Intelligence alone can realistically achieve and how it impacts their security strategies right now. Security leaders must acknowledge these challenges and truths if organizations wish to reap the benefits of AI today and for years to come.

Source: https://www.aithority.com/guest-authors/reality-check-the-benefits-of-artificial-intelligence/

04 May 2020



When the next generation of observatories is deployed, Cornell University astronomers hope to use them to scan distant exoplanets orbiting dead stars for signs of life.

When a rocky, Earth-like exoplanet passes in front of the white dwarf star it orbits, astronomers plan to search its atmosphere for fingerprints of life, past or present. And to get a head start, the Cornell scientists published research in The Astrophysical Journal Letters on Thursday that offers a reference to help astronomers make sense of what they find.

Second Life

This particular brand of exoplanet would have survived the death of its host star: white dwarfs are the remnant cores of stars that exhausted all of their fuel and collapsed. It stands to reason that anything living on those worlds wouldn’t survive such a devastating event, but new life could theoretically have emerged afterward.

“If we would find signs of life on planets orbiting under the light of long-dead stars,” Cornell astronomer Lisa Kaltenegger said in a press release, “the next intriguing question would be whether life survived the star’s death or started all over again — a second genesis, if you will.”

Cosmic Fingerprint

That’s where the new research comes in. It essentially serves as a catalog of what astronomers might come across as they study those exoplanets.

“If we observe a transit of that kind of planet, scientists can find out what is in its atmosphere, refer back to this paper, match it to spectral fingerprints and look for signs of life,” Cornell astronomy grad student Thea Kozakis said in the release. “Publishing this kind of guide allows observers to know what to look for.”

Source: https://futurism.com/the-byte/scientists-hunt-life-long-dead-worlds

02 Apr 2020

This Startup’s Computer Chips Are Powered by Human Neurons

Biological “hybrid computer chips” could drastically lower the amount of power required to run AI systems.

Australian startup Cortical Labs is building computer chips that use biological neurons extracted from mice and humans, Fortune reports.

The goal is to dramatically lower the amount of power current artificial intelligence systems need to operate by mimicking the way the human brain works.

According to Cortical Labs’ announcement, the company is planning to “build technology that harnesses the power of synthetic biology and the full potential of the human brain” in order to create a “new class” of AI that could solve “society’s greatest challenges.”

The mouse neurons are extracted from embryos, according to Fortune, but the human ones are created by turning skin cells back into stem cells and then into neurons.

The idea of using biological neurons to power computers isn’t new. Cortical Labs’ announcement comes one week after a group of European researchers managed to turn on a working neural network that allows biological and silicon-based brain cells to communicate with each other over the internet.

In 2016, researchers at MIT also attempted to use bacteria, rather than neurons, to build a computing system.

As of right now, Cortical’s mini-brains have less processing power than a dragonfly brain. The company is looking to get its mouse-neuron-powered chips to be capable of playing a game of “Pong,” as CEO Hon Weng Chong told Fortune, following in the footsteps of AI company DeepMind, which used the game to test the power of its AI algorithms back in 2013.

“What we are trying to do is show we can shape the behavior of these neurons,” Chong told Fortune.

Source: https://futurism.com/startup-computer-chips-powered-human-neurons

18 Mar 2020


Brief Jolt
A team of engineers has figured out how to take a single drop of rain and use it to generate a powerful flash of electricity.

The City University of Hong Kong researchers behind the device, which they’re calling a droplet-based electricity generator (DEG), say that a single rain droplet can briefly generate 140 volts. That was enough to briefly power 100 small lightbulbs and, while it’s not yet practical enough for everyday use, it’s a promising step toward a new form of renewable electricity.

Forming Bridges
The DEG uses a “field-effect transistor-style structure,” Engadget reports, which can turn rainfall into short bursts of power.

The material the device is made from contains a quasi-permanent electrical charge, and the rain is merely what triggers the flow of energy, according to research published last week in the journal Nature.

Early Tests
The real trick will be finding a way to turn this technology into something that might be viable for people’s homes — for now, it’s not reliable enough to deliver a continuous supply of power, as it needs to charge up before it can let out another burst.

In the meantime, Engadget suggests, it could serve as a small, temporary power source on futuristic water bottles or umbrellas.

Source: https://futurism.com/the-byte/generate-electricity-rain

15 Mar 2020

Scientists Discover “Peculiar” Teardrop-Shaped Star

“I’ve been looking for a star like this for nearly 40 years and now we have finally found one.”

A team of astronomers has discovered a strange star that oscillates in a rhythmic pattern, but only on one side, causing gravitational forces to distort it into a teardrop shape.

“We’ve known theoretically that stars like this should exist since the 1980s,” said professor Don Kurtz from the University of Central Lancashire and co-author of the paper published in Nature Astronomy on Monday, in a statement. “I’ve been looking for a star like this for nearly 40 years and now we have finally found one.”

The star, known as HD74423, is about 1.7 times the mass of the Sun and was spotted around 1,500 light years from Earth — still within the confines of the Milky Way — using public data from NASA’s planet-hunting TESS satellite.

“What first caught my attention was the fact it was a chemically peculiar star,” said co-author Simon Murphy from the Sydney Institute for Astronomy at the University of Sydney in the statement. “Stars like this are usually fairly rich with metals – but this is metal poor, making it a rare type of hot star.”

Stars have been found to oscillate at different rhythms and to different degrees, including our own Sun. Astronomers suspect these pulsations are caused by convection and magnetic field forces inside the star.

While the exact causes of these pulsations vary, these oscillations have usually been observed on all sides of the star. HD74423, however, was found to pulsate on only one side because of its red dwarf companion with which it makes up a binary star system.

The two stars were found to do such a close dance, with an orbital period of just two days, that the larger star is being distorted into a teardrop shape.
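Kepler's third law gives a feel for just how close that dance is. The back-of-the-envelope sketch below assumes a red dwarf companion of roughly 0.3 solar masses (an illustrative guess; the article gives only the primary's 1.7 solar masses) and computes the orbital separation implied by a two-day period.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

# Assumed combined mass: the 1.7-solar-mass primary plus a red dwarf
# companion taken here as ~0.3 solar masses (illustrative assumption).
M_total = (1.7 + 0.3) * M_SUN
P = 2 * 86400        # orbital period: two days, in seconds

# Kepler's third law: a^3 = G * M_total * P^2 / (4 * pi^2)
a = (G * M_total * P**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"separation: {a / R_SUN:.1f} solar radii")  # about 8.4 solar radii
```

A separation of under ten solar radii for a star 1.7 times the Sun's mass makes the tidal distortion described above unsurprising.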

The astronomers suspect it won’t be the last of its kind to be discovered.

“We expect to find many more hidden in the TESS data,” said co-author Saul Rappaport, a professor at MIT.

Source: https://futurism.com/scientists-peculiar-teardrop-shaped-star

07 Mar 2020

How to Leverage AI to Upskill Employees

Artificial intelligence is the answer to polishing math skills and plugging our workforce pipeline.


One of the largest economic revolutions of our time is unfolding around us. Technology, innovation and automation are redrawing the career paths of millions of people. Most headlines focus on the negative, i.e., machines taking our jobs. But in reality, these developments are opening up a world of opportunity for people who can make the move to a STEM career or upskill in their current job. There’s also another part to this story: how AI can help boost the economy by improving how we learn.

In 2018, 2.4 million STEM jobs in the U.S. went unfilled. That’s almost equal to the entire population of Los Angeles or Chicago. It’s a gap causing problems for employers trying to recruit and retain workers, whether in startups, small businesses or major corporations. We just don’t have enough workers.

The Unspoken Barrier 

The barrier preventing new or existing employees from adding to their skill set and filling those unfilled jobs? Math. Calculus, to be specific. It has become a frustrating impediment for many people seeking a STEM career. For college students, the material is so difficult that one-third of them in the U.S. fail the related course or drop it out of frustration. For adults, learning calculus is not always compulsory for the day-to-day of every STEM job, but learning its principles can help sharpen logic and reasoning. Plus, simply understanding how calculus relates to real-world scenarios is helpful in many STEM jobs. Unfortunately, for many people, the thought of tackling any level of math is enough to scare them away from a new opportunity.

We need to stop looking at math as a way to filter people out of the STEM pipeline. We need to start looking at it as a way to help more people, including professionals looking to pivot careers.

How AI Can Change How Employees Learn

How do we clear this hurdle and plug the pipeline? Artificial intelligence. We often discuss how AI can be used to improve data efficiency and process automation, but AI can also assist in personal tutoring to get people over the barriers of difficult math. The recently released Aida Calculus app uses AI to create a highly personalized learning experience and is the first of its kind to use a very complex combination of AI algorithms that provide step-by-step feedback on equations and then serve up custom content showing how calculus works in the real world.

While the product is important, the vision behind it is much bigger. This is a really impactful application of AI for good. It also shows that math skills can be developed in everyone and technology like AI can change the way people learn difficult subjects. The goal is to engage anyone, be it a student or working adult, who is curious about how to apply math in their daily lives. By making calculus relevant and relatable, we can begin to instill the confidence people need to take on STEM careers, even if those jobs don’t directly use calculus.

Leveraging AI Through Human Development

When people boost their complex math skills or even their general understanding of basic math concepts, there’s a world of opportunity waiting. STEM jobs outearn non-STEM jobs by up to 30 percent in some cases. A 2017 study commissioned by Qualcomm suggested that 5G will create 22 million jobs globally by 2035. The U.S. Labor Department says that IT fields will add half a million new jobs in the next eight years and that jobs in information security will grow by 30 percent. Job growth in STEM is outpacing overall U.S. job growth. At the same time, Pearson’s own Global Learners Survey said that 61 percent of Americans are likely to change careers entirely. It’s a good time for that 61 percent to consider STEM.

To equip themselves for this new economy, people will have to learn how to learn. Whether it’s math or any other subject, they’ll likely need to study again, and that is hard. But we can use innovation and technology to make the tough subjects a little easier and make the whole learning experience more personalized, helping a whole generation of people take advantage of the opportunity to become the engineers, data analysts and scientists we need.

Source: https://www.entrepreneur.com/article/345502

29 Feb 2020

Human Intelligence and AI for Oncology Advancement

As the Vatican workshop on ethics in Artificial Intelligence ends, Dr. Alexandru Floares speaks on the possibilities of medical innovation through collaboration between AI and human intellect.

The increase in the number of cancer cases worldwide is a major cause for concern for the medical community.

Doctor Alexandru Floares, a speaker at a 3-day workshop organized by the Pontifical Academy for Life on Ethics and Artificial Intelligence (AI), spoke to Vatican Radio on the potential for larger strides in the field of oncology and medical research through the efficiency that AI provides.

Dr. Floares, a Neurologist, specialist in AI applications in Oncology, and President of Solutions of Artificial Intelligence Applications (SAIA), gave a presentation titled “AI in Oncology.”

In his interview with Vatican Radio, Dr. Floares spoke on issues bordering on access to data for medical research, solutions to the emerging issues surrounding the use of AI in healthcare, and the revolutionary role of AI in the field of medicine.

“The problems related to applying AI to medicine and oncology can be solved relatively easily,” he said. “This means that when a problem is clearly and pragmatically formulated, it can be solved in a matter of months or at most a year. The benefits of applying AI in medicine, when we put them in a balance, are very important.”

Poor man’s Approach

Speaking on steps towards eliminating bias in data collection, Dr. Floares noted that bias is predominantly the fault of human data input into the algorithm and not an inbuilt AI defect.

“We should not blame the AI for poor results if we do not put in the proper data to assist the AI’s predictive capabilities,” he said.

Giving the example of his experience while collecting data for his molecular diagnostic test for cancer diagnosis, he expressed his suspicion of already available data. He opted instead for what he refers to as the “poor man’s approach.”

“I found mine to be better because the data is less biased. It is better to have 1,000 patients from various studies and to integrate them instead of having one big study with 1,000 patients, because the data is less biased and so the predictive model behind the test is more robust, generalizing better to different kinds of populations that were not involved when the system was developed. So the poor man’s strategy became a good strategy for fighting against bias in data.”
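Dr. Floares’ “poor man’s approach” maps onto a standard validation pattern: pool patients from several studies, then hold out one entire study at a time, so the test set comes from a population the model never saw during training. The sketch below is a hypothetical illustration on synthetic data, with invented study-specific batch offsets standing in for cross-study bias, using scikit-learn’s LeaveOneGroupOut.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic data pooled from 5 small studies (200 patients each), each study
# with its own batch offset: a stand-in for study-specific bias.
X_parts, y_parts, group_parts = [], [], []
for study in range(5):
    n = 200
    y = rng.integers(0, 2, n)            # diagnosis label (0/1)
    X = rng.normal(0.0, 1.0, (n, 20))    # 20 synthetic markers
    X[:, 0] += 1.5 * y                   # one informative marker
    X += rng.normal(0.0, 0.5, 20)        # per-study batch offset
    X_parts.append(X)
    y_parts.append(y)
    group_parts.append(np.full(n, study))

X = np.vstack(X_parts)
y = np.concatenate(y_parts)
groups = np.concatenate(group_parts)

# Hold out one whole study at a time: accuracy here estimates how the model
# generalizes to populations it never saw, which is the point of pooling.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())
```

If the model only performs well on studies it trained on, the pooled-and-held-out accuracy drops, which is exactly the bias this strategy is designed to expose.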

Checking misuse of AI and data

On the issue of the possible misuse of AI, Dr. Floares is of the opinion that the different actors in the field of AI will help curb excesses. 

“Collecting data is a good idea. I am on the optimistic side. I am sure there will be opposition too and all these forces working in different directions will create equilibrium for humanity. Hopefully the best.”

He furthermore insisted on active involvement in reining in excesses.

“Test AI systems. That is the most pragmatic way to do things to see if it is good or not. Instead of debating and having few actions,” he said.

AI revolution at hand

“A revolution that started in 2012 and is just showing its first impressive results… I did not believe that AI will ever beat the human in dealing with images because our brain – the result of evolution is very well developed. I realize that this is possible and the strategy is for human intelligence to collaborate with artificial intelligence. This will be the greatest team we have ever seen,” he said.

Source: https://www.vaticannews.va/en/vatican-city/news/2020-02/human-intelligence-and-ai-for-oncology-advancement.html

02 Feb 2020

How you can get your business ready for AI

  • 90% of executives see promise in the use of artificial intelligence.
  • AI set to add $15.7 trillion to global economy.
  • Only 4% planning major deployment of technology in 2020.

They say you have to learn to walk before you can run. It turns out the same rule applies when it comes to the rollout of artificial intelligence.

A new report on AI suggests that companies need to get the basics of the technology right before scaling up its use. In a PwC survey, 90% of executives said AI offers more opportunities than risks, but only 4% plan to deploy it enterprise-wide in 2020, compared with 20% who said they intended to do so in 2019.

Slow and steady wins the race

By 2030, AI could add $15.7 trillion to the global economy. But its manageable implementation is a global challenge. The World Economic Forum is working with industry experts and business leaders to develop an AI toolkit that will help companies understand the power of AI to advance their business and to introduce the technology in a sustainable way.

Focusing on the fundamentals first will allow organizations to lay the groundwork for a future that brings them all the rewards of AI.

Here are five things PwC’s report suggests companies can do in 2020 to prepare.

1. Embrace the humdrum to get things done

One of the key benefits that company leaders expect from investment in AI is the streamlining of in-house processes. The automation of routine tasks, such as the extraction of information from tax forms and invoices, can help companies operate more efficiently and make significant savings.

AI can already be used to manage fraud and cybersecurity threats – something that 38% of executives see as a key capability of the technology. For example, AI can recognize unauthorized network entry and identify malicious behaviour in software.

2. Turn training into real-world opportunity

For companies to be ready for AI at scale, they need to do more than just offer training opportunities. Employees have to be able to use the new skills they have learned, in a way that continuously improves performance.

It’s also important to make teams ‘multilingual’, with both tech and non-tech skills integrated across the business, so that colleagues can not only collaborate on AI-related challenges, but also decide which problems AI can solve.

3. Tackle risks and act responsibly

Along with helping employees to see AI not as a threat to their jobs but as an opportunity to undertake higher-value work, companies must ensure they have the processes, tools and controls to maintain strong ethics and make AI easy to understand. In some cases, this might entail collaboration with customers, regulators, and industry peers.

As AI usage continues to grow, so do public fears about the technology in applications such as facial recognition. That means risk management is becoming more critical. Yet not all companies have centralized governance around AI, and that could increase cybersecurity threats, by making the technology harder to manage and secure.

4. AI, all the time

Developing AI models requires a ‘test and learn’ approach, in which the algorithms are continually learning and the data is being refined. That is very different from the way traditional software is developed, and a different set of tools is needed. Machines learn through the input of data, and more – and better quality – data is key to the rollout of AI.

Some of AI’s most valuable uses come when it works 24/7 as part of broader operational systems, such as marketing or finance. That’s why leaders in the field are employing it across multiple functions and business units, and fully integrating it with broader automation initiatives and data analytics.

5. A business model for the future

It’s worth remembering that despite AI’s growing importance, it is still just one weapon in the business armoury. Its benefit could come through its use as part of a broader automation or business strategy.

Weaving it successfully into a new business model includes a commitment to employee training and understanding return on investment. For now, that investment could be as simple as using robotic process automation to handle customer requests.

AI’s impact may be incremental at first, but its gradual integration into business operations means that game-changing disruption and innovation are not far away.

Source: https://www.weforum.org/agenda/2020/01/artificial-intelligence-predictions-2020-pwc/