Category: Disruptive Technology

04 Nov 2019
We Need AI That Is Explainable, Auditable, and Transparent

Every parent worries about the influences our children are exposed to. Who are their teachers? What movies are they watching? What video games are they playing? Are they hanging out with the right crowd? We scrutinize these influences because we know they can affect, for better or worse, the decisions our children make.

Just as we concern ourselves with who’s teaching our children, we also need to pay attention to who’s teaching our algorithms. Like humans, artificial intelligence systems learn from the environments they are exposed to and make decisions based on biases they develop. And like our children, we should expect our models to be able to explain their decisions as they develop.

As Cathy O’Neil explains in Weapons of Math Destruction, algorithms often determine what college we attend, if we get hired for a job, if we qualify for a loan to buy a house, and even who goes to prison and for how long. Unlike human decisions, these mathematical models are rarely questioned. They just show up on somebody’s computer screen and fates are determined.

In some cases, the errors of algorithms are obvious, such as when Dow Jones reported that Google was buying Apple for $9 billion and the bots fell for it or when Microsoft’s Tay chatbot went berserk on Twitter — but often they are not. What’s far more insidious and pervasive are the more subtle glitches that go unnoticed, but have very real effects on people’s lives.

Once you get on the wrong side of an algorithm, your life immediately becomes more difficult. Unable to get into a good school or to get a job, you earn less money and live in a worse neighborhood. Those facts get fed into new algorithms and your situation degrades even further. Each step of your descent is documented, measured, and evaluated.

Consider the case of Sarah Wysocki, a fifth grade teacher who — despite being lauded by parents, students, and administrators alike — was fired from the D.C. school district because an algorithm judged her performance to be sub-par. Why? It’s not exactly clear, because the system was too complex to be understood by those who fired her.

Make no mistake, as we increasingly outsource decisions to algorithms, the problem has the potential to become even more Kafkaesque. It is imperative that we begin to take the problem of AI bias seriously and take steps to mitigate its effects by making our systems more transparent, explainable, and auditable.

Sources of Bias

Bias in AI systems has two major sources: the data sets on which models are trained, and the design of the models themselves. Biases in training data can be subtle, as when smartphone apps are used to monitor potholes and alert authorities to contact maintenance crews. That may be efficient, but it's bound to undercount poorer areas, where fewer people have smartphones.

In other cases, data that is not collected can affect results. Analysts suspect that's what happened when Google Flu Trends predicted almost twice as many flu cases in 2013 as actually occurred. What appears to have happened is that increased media coverage led to more searches by people who weren't sick.

Yet another source of data bias arises when human biases carry over into AI systems. For example, biases in the judicial system affect who gets charged and sentenced for crimes. If that data is then used to predict who is likely to commit crimes, those biases will carry over. In other cases, humans are used to tag data and may introduce bias into the system.


This type of bias is pervasive and difficult to eliminate. In fact, Amazon was forced to scrap an AI-powered recruiting tool because it could not remove gender bias from the results. The tool unfairly favored men because its training data taught the system that most of the firm's previously hired employees who were viewed as successful were male. Even when Amazon eliminated any specific mention of gender, the system identified certain words, which appeared more often in male résumés than in female ones, as proxies for gender.
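The mechanics are easy to demonstrate. Below is a minimal, hypothetical sketch (not Amazon's system; the resumes, labels, and the proxy word "softball" are all invented) of how a model trained on biased historical outcomes assigns weight to a word that merely correlates with gender:

    # Illustrative sketch: even with no explicit gender field, a model
    # can learn proxy words that correlate with it.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy resumes with biased historical outcomes (1 = "hired").
    resumes = [
        "captain of softball team, python developer",
        "softball league organizer, data analyst",
        "rugby team captain, python developer",
        "rugby club member, data analyst",
    ]
    hired = [0, 0, 1, 1]

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # "softball" picks up a strongly negative weight even though gender
    # was never an input feature.
    weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
    print(sorted(weights.items(), key=lambda kv: kv[1])[:3])

Dropping the proxy word only pushes the problem onto the next-most-correlated feature, which is why Amazon's engineers could not simply delete their way to fairness.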

A second major source of bias results from how decision-making models are designed. For example, if a teacher's ability is evaluated based on test scores, then other aspects of performance, such as taking on children with learning differences or emotional problems, will fail to register, or may even unfairly penalize the teacher. In other cases, models are built from whatever data is easiest to acquire, or a model is overfit to a specific set of cases and then applied too broadly.

Overcoming Bias

With so many diverse sources of bias, we do not think it is realistic to believe we can eliminate it entirely, or even substantially reduce it. What we can do, however, is make our AI systems more explainable, auditable, and transparent. We suggest three practical steps leaders can take to mitigate the effects of bias.

First, AI systems must be subjected to vigorous human review. For example, one study cited by a White House report during the Obama administration found that while machines had a 7.5% error rate in reading radiology images, and humans had a 3.5% error rate, when humans combined their work with machines the error rate dropped to 0.5%.

Second, much like banks are required by law to “know their customer,” engineers who build systems need to know their algorithms. For example, Eric Haller, head of DataLabs at Experian, told us that unlike decades ago, when the models they used were fairly simple, in the AI era his data scientists need to be much more careful. “In the past, we just needed to keep accurate records so that, if a mistake was made, we could go back, find the problem and fix it,” he told us. “Now, when so many of our models are powered by artificial intelligence, it’s not so easy. We can’t just download open-source code and run it. We need to understand, on a very deep level, every line of code that goes into our algorithms and be able to explain it to external stakeholders.”

Third, AI systems, and the data sources used to train them, need to be transparent and available for audit. Legislative frameworks like GDPR in Europe have made some promising first steps, but clearly more work needs to be done. We wouldn’t find it acceptable for humans to be making decisions without any oversight, so there’s no reason why we should accept it when machines make decisions.

Perhaps most of all, we need to shift from a culture of automation to augmentation. Artificial intelligence works best not as some sort of magic box you use to replace humans and cut costs, but as a force multiplier that you use to create new value. By making AI more explainable, auditable and transparent, we can not only make our systems more fair, we can make them vastly more effective and more useful.

Source: https://hbr.org/2019/10/we-need-ai-that-is-explainable-auditable-and-transparent

03 Nov 2019
AI May Not Kill Your Job—Just Change It

Don’t fear the robots, according to a report from MIT and IBM. Worry about algorithms replacing any task that can be automated. 

Martin Fleming doesn’t think robots are coming to take your jobs. The chief economist at IBM, Fleming says those worries aren’t backed up by the data. “It’s really nonsense,” he says. A new paper from MIT and IBM’s Watson AI Lab shows that for most of us, the automation revolution probably won’t mean physical robots replacing human workers. Instead, it will come from algorithms. And while we won’t all lose our jobs, those jobs will change, thanks to artificial intelligence and machine learning.

Fleming and a team of researchers analyzed 170 million online US job listings, collected by the job analytics firm Burning Glass Technologies, that were posted between 2010 and 2017. They found that, on average, tasks that AI could perform, such as scheduling or credential validation, appeared less frequently in the more recent listings. The recent listings also included more “soft skills” requirements like creativity, common sense, and judgment. Fleming says this shows that work is being re-sorted: AI is taking over the more easily automated tasks, and workers are being asked to do things that machines can’t do.
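To make the shape of that analysis concrete, here is a toy sketch in the spirit of the study (the listings, keywords, and categories are invented; this is not Burning Glass's methodology): count how often automatable tasks versus soft skills appear in listings, by year.

    # Hypothetical miniature of the task-frequency analysis.
    from collections import defaultdict

    listings = [
        (2010, "schedule appointments and validate credentials"),
        (2010, "scheduling support for the sales team"),
        (2017, "use judgment and creativity to manage key accounts"),
        (2017, "common sense and strong client judgment required"),
    ]
    automatable = {"schedule", "scheduling", "validate"}
    soft = {"judgment", "creativity", "common"}

    counts = defaultdict(lambda: {"automatable": 0, "soft": 0})
    for year, text in listings:
        words = set(text.lower().split())
        counts[year]["automatable"] += len(words & automatable)
        counts[year]["soft"] += len(words & soft)

    for year in sorted(counts):
        # Automatable-task mentions fall; soft-skill mentions rise.
        print(year, counts[year])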

If you’re in sales, for example, you’ll spend less time figuring out the ideal price for your product, because an algorithm can determine the optimal price to maximize profits. Instead, you might spend more time managing customers or designing attractive marketing materials or websites.

In the study, researchers divided the listings into three groups based on the advertised pay, then examined how different tasks were being valued. What they found is that how we value tasks may be starting to change.

Design skills, for example, were in particularly high demand and increased the most across wage brackets. Within personal care and services occupations—which generally are low-wage—pay for jobs that included design tasks, such as presentation design or digital design, increased by an average of $12,000 over the study period, after inflation. The same can be said of higher wage earners in business and finance who have deep industry expertise that can’t yet be matched by AI. Their wages went up more than $6,000 annually.

Some low-wage occupations like home health care, hairstyling, or fitness training are insulated from the impact of AI because those skills are hard to automate. But middle-wage earners are starting to feel the squeeze. Their wages are still rising, but after adjusting for the shifts in tasks for those jobs, the report found, those wages weren’t growing as quickly as low-wage and high-wage jobs. In some industries, like manufacturing and production, wages actually decreased. There are also fewer middle-wage jobs. Some are getting simpler and being replaced by low-wage jobs. Others now require more skills and are becoming high wage.

Fleming is optimistic about what AI tools can do for work and for workers. Just as automation made factories more efficient, AI can help white-collar workers be more productive. The more productive they are, the more value they add to their companies. And the better those companies do, the higher wages get. “There will be some jobs lost,” he says. “But on balance, more jobs will be created both in the US and worldwide.” While some middle-wage jobs are disappearing, others are popping up in industries like logistics and health care, he says.

As AI starts to take over more tasks, and the middle-wage jobs start to change, the skills we associate with those middle-class jobs have to change too. “I think that it’s rational to be optimistic,” says Richard Reeves, director of the Future of the Middle Class Initiative at the Brookings Institution. “But I don’t think that we should be complacent. It won’t just automatically be OK.”

The report says these changes are happening relatively slowly, giving workers time to adjust. But Reeves points out that while these changes may seem incremental now, they are happening faster than they used to. AI has been an academic project since the 1950s. It remained a niche concept until 2012, when tests showed neural networks could make speech and image recognition more accurate. Now we use it to complete emails, analyze surveillance footage, and decide prison sentencing. The IBM and MIT researchers used it to help sort through all the data they analyzed for this paper.

That fast adoption means that workers are watching their jobs change. We need a way to help people adjust from the jobs they used to have to the jobs that are now available. “Our optimism actually is rather contingent on our actions, on actually making good on our promise to reskill,” says Reeves. “We are rewiring our economy but we haven’t rewired our training and education programs.”

Read more: https://www.wired.com/story/ai-not-kill-job-change-it/

31 Oct 2019
Employees Worldwide Welcome ‘AI Coworkers’ To The Office

Last year, many Americans worried that artificial intelligence (AI) might replace them at work. This year, employees around the world are wondering why their employers don’t provide them with the kind of AI-enabled technology they’re starting to use at home. 

That’s one way to think about the results of a second annual survey about AI in the workplace, conducted by Oracle and research firm Future Workplace. This year, 50% of survey respondents say they’re currently using some form of AI at work—a major leap compared to only 32% in last year’s survey.

Source: https://www.forbes.com/sites/oracle/2019/10/31/employees-worldwide-welcome-ai-coworkers-to-the-office/#47aa68266681

28 Oct 2019
SCIENTISTS SAY THEY FINALLY FIGURED OUT HOW TO SPOT WORMHOLES

Thinking With Portals

Scientists think they’ve come up with a way to detect traversable wormholes, assuming they exist.

There’s never been any sort of evidence that traversable wormholes — portals between two distant parts of the universe, or two universes within a theoretical multiverse — are real. But if they are, a team of scientists thinks it knows what that evidence might look like, breathing new life into a far-out idea that could one day enable faster-than-light travel.

Telltale Wobbles

If a wormhole were to exist, then the gravitational pull of objects on one side, like black holes or stars, would influence the objects on the other side.

If a star wobbled or had otherwise inexplicable perturbations in its orbit around a black hole, researchers could hypothetically argue that they’re being influenced by the gravity of something on the other end of a wormhole, according to research published this month in the journal Physical Review D.

“If you have two stars, one on each side of the wormhole, the star on our side should feel the gravitational influence of the star that’s on the other side,” said University at Buffalo physicist Dejan Stojkovic. “The gravitational flux will go through the wormhole.”
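As a purely schematic illustration of that quote (a naive Newtonian scaling, not the paper's actual derivation): if a perturbing mass M' sits at distance d' from the far mouth of the wormhole and the observed star is at distance d from the near mouth, the leaked influence would scale roughly as

    \delta a \sim \frac{G M'}{(d + d')^2}

showing up as a tiny extra perturbation superimposed on the star's ordinary orbit.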

Occam’s Razor

While the researchers hope to look for wobbles in the orbits of stars orbiting near Sagittarius A*, the supermassive black hole at the center of the Milky Way, Stojkovic concedes that spotting some wouldn’t guarantee that a wormhole exists there.

He added, “We cannot say that, ‘Yes, this is definitely a wormhole.’ There could be some other explanation, something else on our side perturbing the motion of this star.”
 
27 Oct 2019
Big Tech Is Making A Massive Bet On AI … Here’s How Investors Can, Too

Artificial intelligence is becoming the future of everything. Yet, only a few large companies have the talent and the technology to perfect it.

That’s the gist of a New York Times story published late last week. Rising costs for AI research are locking out university researchers and garage entrepreneurs, two of the traditional — and historically best — founts of innovation.

But it’s not all bad news for investors.

In the past, software engineers used code to build platforms and new business models. A prime example is Netflix.

Managers there transformed the mail-order DVD business into a digital media behemoth. They revolutionized how we view and interact with media. They also shook up traditional Hollywood studios by giving new and independent voices a huge platform.

The same dynamic is now playing out with AI. In the process, the companies with the best algorithms will start to solve the medical, economic and social problems that have vexed researchers and scientists for decades.

Investors need to understand that winners and losers are being determined right now as the cost of AI research becomes prohibitive.

Think of the research process as a set of increasingly complex math problems. Researchers throw enormous amounts of data at custom algorithms that learn through trial and error. As the number of simulations mount, so do costs.

Big problems like self-driving cars or finding the cause of disease at the cellular level require immense amounts of computing power.

An August research report from the Allen Institute for Artificial Intelligence determined that the number of calculations required to perform cutting-edge AI research soared 300,000x over the course of the past six years.
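The arithmetic implied by that figure is worth spelling out. A 300,000-fold increase over six years (72 months) corresponds to

    \log_2(300{,}000) \approx 18.2 \ \text{doublings}, \qquad 72\ \text{months} / 18.2 \approx 4\ \text{months per doubling},

a pace far faster than the roughly two-year doubling cadence of Moore’s law.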

Only a handful of companies have the resources to compete at that level.

Long ago, executives at Amazon.com, Microsoft, Alphabet and Facebook had the foresight to begin building massive cloud computing scale. Their data centers, many the size of football fields, are strewn all over the globe. Millions of servers, connected with undersea cables and fiber optic lines, have replaced the mainframes of old.

If you want to do great things in AI research, you’ll probably need to deal with at least one of these four big firms.

It’s a pinch being felt even by large technology companies …

Adobe and SAP joined an open data alliance with Microsoft in September 2018. A day later, salesforce.com hooked up with Amazon Web Services, Amazon’s cloud computing arm.

There has been some effort to break up the concentration of power. But critics are still mostly focused on the wrong things. In their view, data is the new oil, and it begs for regulation.

In the early 1900s, oil was the lifeblood of industry. It was central to the development of new game-changing chemicals. It powered the nascent automobile and steel complex.

The oil barons were the gatekeepers to innovation. In the process, they amassed fantastic wealth, as did many other industrialists. Income inequality soared.

Eventually, this led to calls for regulation, and trust-busters were brought in to break up (and control) the oil giants.

The parallels to today are convenient, and lazy.

Writers at The Economist in 2017 painted a dystopian picture of our future, one where the tech giants remain unregulated. The influential finance magazine concluded antitrust regulators must step in to control the flow of data, just as they did with oil companies in the early 1900s.

However, data is not oil. It’s not dear. It’s abundant.

Thanks to inexpensive sensors and lightweight software, there is a gusher of digital information everywhere. It comes from our wrists, cars and television sets. Soon it will shoot out of traffic lights, buses and trains; mining pits, farm fields and factories.

The limited resource is computing power. Enterprises, governments and researchers will need to pay up if they want to turn their data into something of value.

McKinsey, a global research and consulting firm, argues unlocking data should be a strategic priority at every enterprise. Its analysts predict data will change business models in every industry and every business going forward.

The most important takeaway is that all future key AI breakthroughs are likely to come out of the big four. They have the technological and financial resources to attract talent. They have the scale to push the envelope.

It’s not a surprise that Amazon is leading in advanced robotics and language processing, or that Alphabet started developing self-driving cars in 2009.

Microsoft is building the biggest connected car platform in the world: Its engineers in Redmond, Wash., imagine a world of vehicle synchronization and the end of traffic.

Across town, Facebook researchers are working on augmented reality and brain computer interfaces.

These are big ideas with huge potential payoffs.

Amazon, Microsoft, Alphabet and Facebook are as important today as Standard Oil, Royal Dutch Shell and British Petroleum were a century ago.

Their resource is not oil, or data for that matter. It’s computing power. They’re leveraging that position to dominate AI research, the most important technology of the future.

For their investors, this is a good thing.

Growth investors should consider buying the stocks into any significant weakness. The story of AI is only getting started.

Source: https://www.forbes.com/sites/jonmarkman/2019/10/26/big-tech-is-making-a-massive-bet-on-ai–heres-how-investors-can-too/#a3cfea856d73

26 Oct 2019
Expert: VR Headsets Should Have Brain Interfaces

Brain-computer interfaces could make VR gaming way more immersive.

Mind Control
Virtual reality headsets are already pretty good at fooling our eyes and ears into thinking we’re in another world. And soon, we might be able to navigate that world with our thoughts alone.

Speaking at this year’s Game Developers Conference in San Francisco, Mike Ambinder, in-house psychologist and researcher for game developer and distributor Valve, gave a talk on the exciting possibilities of adding brain-computer interfaces to VR headsets.

Personalized Gaming
The idea is to add non-invasive electroencephalogram (EEG) sensors to the insides of existing VR headsets. EEG readers detect the electrical signals firing in the brain and turn them into data points. And by analyzing that data, according to Abinder, game designers could make games that respond differently depending on whether you’re excited, happy, sad or bored.
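As a rough illustration of the signal processing involved (a toy sketch, not Valve's pipeline; the sampling rate, frequency bands, and "engagement" heuristic are all assumptions), EEG samples are conventionally reduced to frequency-band power before any inference about the player's state:

    # Toy EEG band-power feature extraction for one channel.
    import numpy as np

    fs = 256                      # assumed sampling rate, Hz
    t = np.arange(0, 2, 1 / fs)   # two seconds of signal
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

    freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2

    def band_power(lo, hi):
        return power[(freqs >= lo) & (freqs < hi)].sum()

    alpha = band_power(8, 13)   # often associated with relaxed states
    beta = band_power(13, 30)   # often associated with alert states

    # A naive score a game loop could poll each frame; real systems
    # also need artifact rejection and per-player calibration.
    engagement = beta / (alpha + beta)
    print(f"engagement ~ {engagement:.2f}")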


“So think about adaptive enemies. What kinds of enemies do you like playing against in gaming?” Ambinder said, as quoted by VentureBeat. “If we knew the answers to these questions, you could have the game give you more of a challenging time and less of the boring time.”

Game design could become almost perfectly tailored to the person wearing the VR headset — or even recreate a perfect representation of you inside a virtual world. Your avatar could perfectly mimic your current state of mind or mood.

“All of a sudden, we start becoming able to assess how you’re responding to various elements in game,” Ambinder continued. “We can make small changes to make big changes.”

Brain Extensions
There are a handful of companies already trying to harness brain signals for enhancing gaming experiences. A startup called Neurable is already testing out BCIs built into off-the-shelf VR headsets “to create a natural extension of our brains, creating new possibilities for human empowerment,” according to its website.

Of course, Ambinder’s vision of the future of gaming is mostly a fun thought experiment at this stage. Even hospital-grade EEGs have to deal with a huge amount of noise — and that’s especially true of consumer-grade, non-invasive scanners that merely rest on the head rather than being attached to the scalp or surgically implanted.

Source: https://futurism.com/brain-computer-interface-vr-headsets

22 Oct 2019
SCIENTISTS WORRIED THAT HUMAN BRAINS GROWN IN LAB MAY BE SENTIENT

It’s Alive!

Some neuroscientists working with lab-grown human “mini brains” worry the organoids could be experiencing an endless horror: a conscious existence with no body.

At least, that’s the warning that a group of Green Neuroscience Lab researchers plan to deliver during a national meeting for the Society for Neuroscience on Monday, according to The Guardian. While it’s never been demonstrated that a mini brain has become conscious or sentient, the researchers believe that the risk is too great to continue using them.

Toeing The Line

Mini brains give scientists the opportunity to study neurological development and conditions in something more human-like than an animal model. While they don’t approach the complexity of a human brain, scientists have been able to make increasingly complex mini brains for their work.

“We’re already seeing activity in organoids that is reminiscent of biological activity in developing animals,” Elan Ohayon, the director of the Green Neuroscience Laboratory, told The Guardian.

Hold Off

It’s that progress that has the neuroscientists concerned.

“If there’s even a possibility of the organoid being sentient, we could be crossing that line,” said Ohayon. “We don’t want people doing research where there is potential for something to suffer.”

Source: https://futurism.com/the-byte/scientists-worried-lab-grown-brains-sentient


 
 
21 Oct 2019
AI Can Help You—And Your Boss—Maximize Your Potential. Will You Trust It?

Would you trust an Artificial Intelligence (AI) to tell you how to become more effective and successful at your job? How would you feel if you knew your HR department uses AI to determine whether you are leadership material? Or that an AI just suggested to your boss that she should treat you better or else you might soon quit and join a competitor—well before the thought of jumping ship entered your mind?

Meet Yva, introduced by her creator David Yang in this fascinating podcast discussion.

David Yang is an impressive serial entrepreneur: he has launched twelve companies, beginning when he was in fourth grade. David trained as a physicist, following in his parents’ footsteps. He won math and physics Olympiads; then his first entrepreneurial detour “distracted” him from his studies for a while and sparked his passion for computer science and AI. It’s worth hearing the story in David’s own voice, especially his fear of disappointing his parents even as he was launching a hugely successful entrepreneurial and scientific career.

Yva, David’s latest creation, is an AI-powered people analytics platform—a remarkable example of the powerful role that AI is starting to play in the workplace, with the ethical implications that quickly come to the fore.

Yva’s neural network can mine and analyze workers’ activities across a range of work applications: email, Slack, G-Suite, GitHub. With these data, the AI can pick up a treasure trove of nuanced insights about employee behaviors: how quickly an employee responds to certain types of emails; or the tree structure of her communications: how many to subordinates, how many to peers or superiors, how many outside the company; and much more.
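Yva's internals are not public, but the kinds of features described above are straightforward to sketch. In this hypothetical example (invented message log, role map, and field names), two of the signals mentioned, reply latency and the share of messages going to subordinates, peers, or superiors, are computed from raw metadata:

    # Hypothetical communication-metadata features.
    from datetime import datetime
    from statistics import mean

    # (sender, recipient, sent-at, replied-to-at)
    messages = [
        ("ana", "boss",   datetime(2019, 10, 1, 9, 0),  datetime(2019, 10, 1, 9, 12)),
        ("ana", "peer",   datetime(2019, 10, 1, 11, 0), datetime(2019, 10, 1, 13, 30)),
        ("ana", "report", datetime(2019, 10, 2, 9, 5),  datetime(2019, 10, 2, 9, 40)),
    ]
    roles = {"boss": "superior", "peer": "peer", "report": "subordinate"}

    # Feature 1: average reply latency, in minutes.
    latency = mean((reply - sent).total_seconds() / 60
                   for _, _, sent, reply in messages)

    # Feature 2: the "tree structure" of communications.
    by_role = {}
    for _, rcpt, _, _ in messages:
        by_role[roles[rcpt]] = by_role.get(roles[rcpt], 0) + 1

    print(f"avg reply latency: {latency:.0f} min", by_role)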

These insights can provide value to an organization in two main ways:

First, in identifying which employees have high potential to be great performers or strong leaders. The company tells Yva which individuals it currently considers its best performers; Yva’s neural network identifies which behaviors characterize those top performers, and then finds other employees who exhibit some, if not all, of the same traits. It can tell you who has the potential to become a top salesperson or an extremely effective leader, and it can tell you which characteristics they already possess and which ones they need to develop.
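A minimal sketch of that workflow, with a generic classifier standing in for Yva's (non-public) neural network and invented behavioral features:

    # Hypothetical "find people who resemble top performers" sketch.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Rows: employees. Columns: e.g. [avg reply latency (min),
    # share of messages to peers, share of external messages].
    X = np.array([[12, 0.50, 0.20], [240, 0.10, 0.05],
                  [20, 0.45, 0.25], [300, 0.15, 0.10]])
    y = np.array([1, 0, 1, 0])  # 1 = labeled a top performer

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X, y)

    # High probabilities flag "high potential" employees.
    candidates = np.array([[30, 0.40, 0.30], [280, 0.20, 0.05]])
    print(model.predict_proba(candidates)[:, 1])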

Second, Yva helps minimize “regrettable attrition” by identifying employees who are a high resignation risk. A decision to resign never comes out of the blue. First the employee feels increasingly frustrated or burnt out; then she becomes more open to considering other opportunities; then she actively seeks another job. Each stage carries subtle changes in behavior: maybe how early we send our first email in the morning, or how quickly we respond, or something in the tone of our messages. We can’t detect these changes, but Yva can.
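A toy version of that kind of drift detection (illustrative only, not Yva's method) compares an employee's recent behavior against her own baseline and flags large shifts:

    # Hypothetical behavioral-drift check: minutes past 8:00 of each
    # day's first email.
    from statistics import mean, stdev

    baseline = [5, 10, 7, 12, 8, 6, 9, 11, 7, 10]   # recent weeks
    recent = [35, 42, 50, 38, 45]                    # this week

    mu, sigma = mean(baseline), stdev(baseline)
    z = (mean(recent) - mu) / sigma

    # A large shift relative to the employee's own baseline is one
    # signal that could feed a resignation-risk score.
    if z > 3:
        print(f"behavioral shift detected (z = {z:.1f})")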

For large companies, reducing regrettable attrition is Yva’s top contribution: losing and having to replace valuable employees represents a substantial cost. This, notes David Yang, makes the Return On Investment from deploying Yva very easy to identify. For smaller companies, especially in their growth stage, attrition is less of a concern and the greater value comes from the way Yva helps them build talent and leadership from within their ranks.

Given the ubiquitous concerns that technology will eliminate jobs, it’s refreshing and reassuring to hear that Yva instead proves its value by boosting employee retention.

Yva can also help the individual worker; it can create your personal dashboard with insights and suggestions on how you can change your behavior to become more effective and successful.

There is a trade-off. By default, Yva will respect your privacy, working on anonymized data. But the more individual data you are willing to share, the more Yva can help. The choice is yours.

David Yang notes some interesting geographic differences in the share of employees who opt in; he also notes that across the board, close to one employee in five remains adamantly opposed to disclosing her individual data.

Privacy concerns are fully understandable when faced with an AI that can drive important HR decisions. But is it smart to trust humans more than AI? David Yang notes that AI can help eliminate the human biases that often influence hiring and promotion decisions. Provided—he stresses—that the AI gets trained in the right way, only on final outcomes, on objective performance criteria, without feeding into it intermediate variables such as race, gender or age, which could create a built-in bias in the AI itself.

David Yang, unsurprisingly, is very bullish on the role that AI can play in people analytics and in our lives. Bullish, but very realistic and thoughtful, and willing to put himself on the line—at the end of the podcast discussion he talks of the role that Morpheus, another AI, plays in his personal life.

David thinks that in the future smaller companies (500 employees or less) will rely completely on AI-powered people analytics platforms; he believes that AI will play a major role in leveraging the creativity and efficiency of individuals, while HR (human) professionals will focus on business-specific HR-partner roles. He has a horse in the race—Yva. But there seems to be little doubt that whatever role AI takes in HR and people analytics, it will be one of the most powerful influences on our professional—and personal—lives.

Source: https://www.forbes.com/sites/marcoannunziata/2019/10/20/ai-can-help-youand-your-bossmaximize-your-potential-will-you-trust-it/#1b696bef6b7b

19 Oct 2019
6 pillars of AI

The application of artificial intelligence (AI) methods, technology, and solutions represents a fundamental shift in how people interact with information, and a huge opportunity for government agencies to improve outcomes.

However, a common misconception is that AI is “plug-and-play.” Perhaps because of this, according to McKinsey research, only 8% of companies use practices that enable effective adoption of AI.

For this reason, here is a test for AI readiness that we call the “6 pillars of AI.”

These pillars make certain that a finished AI product provides:

  • The right solutions for its users, and
  • Enduring value to the organization as a whole.

AI is most effective when an organization has centered all functions around using it. Project-based AI has its place, but the more that an organization shifts from seeing AI as a tool to seeing it as a broad methodology, the more the promises of AI will be realized.

Therefore, in every AI project we recommend the use of these 6 Pillars of AI to ensure that the solutions developed and the transformations made achieve broad organizational goals and bring lasting value to the organization.

6 pillars of successful AI adoption and projects

1. AI is only of value if it improves something

Is AI the best approach to your problem, given the desired outcome and required investment?

Currently AI is a hot topic in government IT. It’s exciting, seen as forward-thinking, and generally looks bright and shiny. This gets organizations into trouble, because no deep analysis is done of how AI will bring broad, long-lasting value.

The first question organizations should ask is:

  1. What do you want to accomplish, and how do you imagine AI can help you reach that goal?

The second question organizations should ask is:

  2. Based on that goal, are the costs of implementing AI acceptable? Is the business impact worth the cost to the business?

The cost of AI is so much more than its sticker price; truly adopting AI in order to realize its full potential requires transforming an organization’s culture, vision and strategy. Such broad transformation is not easy nor cheap, and thus needs to be taken into consideration when developing an AI strategy or planning an AI acquisition.

2. AI is only of value if it augments human function

Is the process of using AI easier than the old way of doing things?

Though AI is supposed to make jobs easier and more efficient, an organization should carefully evaluate exactly how the proposed solution will do this. A valuable AI solution should decrease human dependencies.

In other words, if it takes your people longer to work with the AI solution, rather than to do the process “manually,” then the AI is not actually decreasing human dependency and is therefore not doing what it should fundamentally do.

If your AI solution is failing here, before concluding that the AI solution itself is the problem, your organization should evaluate the solution, including asking the following questions:

  • How accurate is our data?
  • How efficient is our data infrastructure?
  • Are we using the right AI model?
  • Is our team adopting and implementing the solution as intended?

This evaluation might uncover that your solution is valuable and will decrease human dependencies, if only your data were more accurate, your infrastructure more efficient, and your team fully committed to adopting the solution as intended.

3. AI is a human multiplier, not a replacement

Does the AI solution remove repetitive subsets of jobs, enabling your people to be more efficient, productive, and able to focus on higher value tasks?

While AI that has the intelligence to operate independently of human input is theoretically possible, the vast majority of companies that could effectively use AI today will use it in a manner that still depends on people to guide its use and make decisions.

Don’t imagine that an AI solution will be able to replace a person or a team in your organization; instead, an effective solution should turn your people into “super humans” — enabling them to process, for example, twice as much input as before.

4. Data is the foundation of all AI operations

Has your leadership developed an infrastructure strategy that will enable AI success?

It hardly needs to be said that for a machine to learn, it must have data from which to learn — the more, the better. An organization’s AI solution is only as good as the quantity and quality of the data it’s built upon.

To put that data to work, organizations must have large computational power, access to data science expertise, and data sets on which to train models.

5. Data strategy is critical to AI benefits

What is the quality of your data?

Data infrastructure alone is not enough for AI to be effective, much less to produce desired benefits for an organization.

Data strategy overall involves creating processes to collect records (particularly results and outcomes), which in turn produce data. Data is the required input for machine learning, and the desired outputs of AI (predictive and explanatory models) depend on effective ML.

Once data has been gathered, it’s vital for the data strategy to outline how that data can be verified, cleaned, and structured so that the data is accurate and usable.

The principle “garbage in, garbage out” succinctly describes the importance of a well-formed data strategy and its place as one of our pillars of AI.
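A minimal sketch of the verify-clean-structure step such a strategy should specify (the records, fields, and rules here are hypothetical):

    # Toy verification pass: reject incomplete, invalid, or duplicate rows.
    records = [
        {"case_id": "A1", "outcome": "resolved", "days_open": 14},
        {"case_id": "A2", "outcome": "", "days_open": -3},          # invalid
        {"case_id": "A1", "outcome": "resolved", "days_open": 14},  # duplicate
    ]

    seen, clean, rejected = set(), [], []
    for r in records:
        valid = r["outcome"] != "" and r["days_open"] >= 0
        if valid and r["case_id"] not in seen:
            seen.add(r["case_id"])
            clean.append(r)
        else:
            rejected.append(r)

    # Only verified rows should ever reach model training.
    print(len(clean), "clean /", len(rejected), "rejected")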

6. AI must produce a usable output

Did the AI model produce a usable output, and if so, is that output valuable for the organization?

Finally, the proposed (or developed!) AI solution must lead to a result that brings value — to the specific project in which it is implemented, and ultimately to the whole organization.

If, from the start, the effectiveness and scope of a proposed AI solution will be siloed or limited by an organization’s structure or culture, the value of that solution should be questioned. This doesn’t mean there isn’t value in small projects with quick turnaround and limited results — only that a successful AI output or even model is not the same as AI supporting a business process.

Together, these 6 pillars of AI help agencies and contractors make certain that:

  • AI is the right methodology for the problems being faced
  • The AI solutions developed are an effective approach to those problems
  • The output from AI will ultimately lead to long-term value for the organization

Source: https://federalnewsnetwork.com/commentary/2019/10/6-pillars-of-ai/

16 Oct 2019
Why Most Companies Are Failing at Artificial Intelligence: Eye on A.I.

Most companies that say they’re using artificial intelligence have yet to gain any value from their A.I. investments. 

A survey from MIT Sloan Management Review and Boston Consulting Group released Tuesday found that companies that view A.I. as merely a “technology thing,” akin to a product rather than a business overhaul, fail to gain financial results. The survey’s authors defined the “value” of an A.I. project as lifting sales, reducing costs, or creating a new product.

The survey, based on responses from nearly 2,500 executives, found that seven out of ten companies report little to no impact from their A.I. projects so far. Overall, 40% of the surveyed companies that have made “significant investments” in A.I. have yet to report any business gains.

There is a clear difference in the A.I. strategies between the “winners” and “losers,” according to Boston Consulting Group managing director Shervin Khodabandeh. For instance, companies that are getting some value from their investments view A.I. as a way to upend and change current business practices like sales, rather than simply buying an A.I. tool from a vendor, he said.

Also, at the most successful companies, business leaders oversee A.I. initiatives. These executives, who control budgeting and resources, then build a group of data scientists and key personnel from departments like sales or marketing to oversee the A.I. project to completion.

This process is markedly different from the traditional technology approach at most businesses, in which CIOs decide which data-crunching projects to pursue. The downside to this CIO-driven tactic, Khodabandeh said, is that the A.I. projects become isolated and neglected by the overall executive team.

The report confirms the findings of other recent surveys about A.I. and business that show companies struggle with their data-crunching initiatives. A KPMG survey earlier this year found that most executives believe it will take many years before their A.I. projects create a “significant return on investment.” 

Beyond the latest survey, Khodabandeh said companies that are successful in using A.I. often create their own mini-IT departments, built specifically for A.I. projects. Doing so allows the companies to brainstorm a specific business process they want to improve, like forecasting which products to sell, and then letting their data scientists pick and choose the A.I. technologies to do the job.

“He or she starts with something like, ‘I want my marketer to do their business differently,’” Khodabandeh said of how business-side executives should approach A.I. “They don’t say, ‘I need reinforcement learning.’”

Source: https://fortune.com/2019/10/15/why-most-companies-are-failing-at-artificial-intelligence-eye-on-a-i/