Category: Innovation

15 Jan 2019

Congress Looking to AI and Education for Future Economy

Artificial intelligence is key to the future of the American economy, and investments need to be made now to ensure that the workforce is prepared, said members of Congress during a panel at a Washington Post Live event on Thursday.

When asked if artificial intelligence would put people out of a job, Rep. Pete Olson, R-Texas, cofounder of the Congressional AI Caucus, responded that AI will lead to different, but better-paying jobs. He pointed to examples at companies like IBM and Amazon of retraining and reskilling existing employees to fill AI-related roles.

Speaking about retraining local workers in his district, Olson noted that “AI is a big part of that, because it makes that worker better, more viable, more efficient. It drives down costs, drives up productivity, which is just great.”

Rep. Robin Kelly, D-Ill., shared Olson’s sentiments on needing more technology talent. She noted that in the rural, suburban, and urban areas of her district, the demand for skilled workers is a common theme.

Megan Smith, a former federal CTO who is now CEO of digital experience agency Shift7, noted that AI could be used for more than economic benefit.

“Why would AI and data science not be for poverty and justice?” she asked. “I believe that the opportunity is about collective genius, and the surface area of participation, that…the more of us who can be included in the conversation, the more likely we are to succeed.”

However, all acknowledged the changes and potential hardships from the rise of AI.

Olson acknowledged that people are likely to experience job transitions and face changes, but he urged them not to be afraid of the change and to embrace it.

Source: https://www.meritalk.com/articles/congress-looking-to-ai-and-education-for-future-economy/

12 Jan 2019
Ovarian cancer AI can tell how aggressive a woman’s tumour is

Artificial intelligence is helping researchers spot aggressive forms of ovarian cancer.

Yinyin Yuan and colleagues at the Institute of Cancer Research in London built an AI to look for differences in tumour cell shape. It analysed tissue sample images from 514 women with ovarian cancer and found that misshapen nuclei correspond to a more aggressive form of the disease with a survival rate of 15 per cent over five years. That compares with 53 per cent for the standard form.

Human researchers are very good at looking at cells, but it is hard to quantify differences and the process takes a lot of time – hence the use of AI, says Yuan.

However, the test so far is of limited use, says Kevin Elias at the Dana-Farber Cancer Institute in Boston. “It is one thing to tell me a patient is likely to have a poor outcome, but if you are unable to suggest an alternative treatment, it is not that useful,” he says.

AI is increasingly used in cancer research to sift data for patterns that can help us in various ways, like tracking tumour evolution and improving diagnosis.

Yuan and her team will next use AI to look at cancer that resists chemotherapy, to try to develop more targeted treatments.

Source: https://www.newscientist.com/article/2190456-ovarian-cancer-ai-can-tell-how-aggressive-a-womans-tumour-is/

10 Jan 2019
Can AI improve mental health at work? Start-up introduces artificial intelligence tech to support employees

A new year means a new start as millions of us make resolutions for the coming 12 months – putting self-care at the top of our agenda.

And with the workplace being where we spend most of our time, it is perhaps important to consider how we are looking after ourselves while sitting in front of our desks.

But while we make little changes to our lives to try and live happier, how can technology help us improve our mental health at work?

Startup Humu thinks it has the answer. Run by three ex-Google employees in California, the company uses AI to ‘nudge’ people into being happier at work.

According to the New York Times, Humu’s software monitors data from employee surveys to determine how each worker is doing.

It then communicates with workers via emails and text messages to remind them to take small actions designed to improve their happiness.

Laszlo Bock, chief executive of Humu, told NYT: “We want to be the person we hope we can be. But we need to be reminded.

“A nudge can have a powerful impact if correctly deployed on how people behave and on human performance.”

Concerns have been raised about the nature of the nudges and who can give them.

Professor Todd Haugh of Indiana University said the nudges could push employees into behaving in ways that benefited their employers, rather than having their best interests at heart.

However, Jessie Wisdom of Humu dismissed this claim and insisted: “Anybody can do whatever they want.”

She added: “We’re never trying to get people to do things that they don’t actually want to do.”

Source: https://www.standard.co.uk/futurelondon/health/startup-humu-uses-ai-software-to-support-mental-health-of-workers-a4034481.html

09 Jan 2019
The meaning of the blockchain

THE BLOCKCHAIN, the technology that underlies bitcoin, has yet to live up to the hype surrounding it. Promising blockchain-based projects, such as a land registry in Honduras, have fallen short of expectations. Ersatz securities listings, called “initial coin offerings”, have attracted unfavourable attention from regulators.

Kevin Werbach is a legal scholar at the University of Pennsylvania’s Wharton School of Business and an expert on digital technologies. In the 1990s he was one of the leading thinkers, from his perch at America’s Federal Communications Commission, on how the internet would reshape regulatory policy. In his latest book, “The Blockchain and the New Architecture of Trust” (MIT Press, 2018), Mr Werbach explains how, far from being a radical technology that makes government obsolete, the blockchain relies on the social cohesion, political stability and rule of law that governments often provide.

Kevin Werbach: Blockchains are trust machines, as The Economist recognised in a cover story over three years ago. They’re useful when trusted institutions and intermediaries are problematic, or to overcome a trust gap between transacting organizations. The issue isn’t whether a centralised database could be employed in theory; it’s whether one would be in practice. In contexts like supply-chain management, provenance and trade finance, companies lack a unified view of information because they don’t fully trust their business partners. Blockchain enables what I call “translucent collaboration”: sharing data without giving up control. Whether it’s an improvement over the status quo, however, is highly context-specific.
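A minimal sketch can make “translucent collaboration” concrete. The code below is an illustration only and assumes no particular blockchain platform: each partner keeps its records private and anchors only a cryptographic fingerprint to a shared ledger, so others can later verify a record without the owner giving up control of it. All names and data are invented for the example.

```python
# Hypothetical sketch of "translucent collaboration": partners publish only a
# fingerprint (hash) of a shipment record to a shared ledger, so counterparties
# can verify the record later without the data itself ever leaving the owner's
# systems. Names and structure are illustrative, not any specific platform's API.
import hashlib
import json

shared_ledger = []  # stand-in for a blockchain visible to all partners

def fingerprint(record: dict) -> str:
    """Deterministic hash of a record (sorted keys so equal data hashes equally)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def publish(record: dict) -> None:
    """The data owner keeps the record private and anchors only its hash."""
    shared_ledger.append(fingerprint(record))

def verify(record: dict) -> bool:
    """A partner who is later shown the record can check it was not altered."""
    return fingerprint(record) in shared_ledger

shipment = {"po": "PO-1042", "sku": "WIDGET-7", "qty": 500, "shipped": "2019-01-07"}
publish(shipment)
print(verify(shipment))                  # True: record matches the anchored hash
print(verify({**shipment, "qty": 450}))  # False: a tampered copy is detected
```

The ledger itself stays meaningless to outsiders, yet any later tampering with the shared record is detectable, which is the trust gap Werbach describes.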

Read more: https://www.economist.com/open-future/2019/01/08/the-meaning-of-the-blockchain

08 Jan 2019
Blockchain’s Role in the Enterprise in 2019

Blockchain was invented by Satoshi Nakamoto in 2008 to serve as the public transaction ledger of the cryptocurrency bitcoin. Blockchain has slowly gained traction in the enterprise since its emergence 10 years ago. In fact, late last year we saw digital workplaces using blockchain to share data and collaborate securely.

Blockchain in the Mainstream

Some suggest that blockchain will become mainstream in 2019. Elizabeth White, CEO of White Company, a blockchain-based financial services technology firm that operates an exchange/wallet service, a crypto merchant processor and a crypto-loadable debit card, agrees that while 2019 will be the year of mass adoption of blockchain, it will only be for a few key, impactful use cases.

The reality, she said, is that the majority of applications being considered for blockchain simply do not need the distributed, trustless ledger that it offers and can be run faster and better on traditional databases.

White cites the example of supply management, or provenance. A blockchain is not necessary for this use case because there is a narrow group of users needing access to the information, and the most important aspect of that information is not the transfer (which is what blockchain is good for) but rather the input (which blockchain does not solve).

“There are certainly applications of blockchain that touch on supply management as it relates to trade finance,” she said. “The prime use case of blockchain is trustless and automated payments, and leveraging that technology we are building systems for B2B payments that can serve as escrow or conditional payment protocols.”
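The “escrow or conditional payment protocols” White mentions can be sketched as a small state machine. The snippet below is purely illustrative: it is not the White Company’s system or any real smart-contract API, just the shape of the logic such a protocol needs, with invented party names and amounts.

```python
# A minimal, hypothetical sketch of a conditional (escrow-style) payment flow.
# Not a real product or smart-contract API; it only illustrates the state machine.
from enum import Enum

class State(Enum):
    FUNDED = "funded"
    RELEASED = "released"
    REFUNDED = "refunded"

class ConditionalPayment:
    def __init__(self, buyer: str, seller: str, amount: float, condition):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.condition = condition     # callable: True when e.g. goods are delivered
        self.state = State.FUNDED      # buyer's funds are locked on creation

    def release(self) -> str:
        """Pay the seller, but only if the agreed condition has been met."""
        if self.state is State.FUNDED and self.condition():
            self.state = State.RELEASED
            return f"{self.amount} paid to {self.seller}"
        return "condition not met; funds remain locked"

    def refund(self) -> str:
        """Return locked funds to the buyer if the deal falls through."""
        if self.state is State.FUNDED:
            self.state = State.REFUNDED
            return f"{self.amount} refunded to {self.buyer}"
        return "nothing to refund"

delivered = {"status": False}
payment = ConditionalPayment("Acme", "Widgets Ltd", 10_000.0, lambda: delivered["status"])
print(payment.release())      # condition not met; funds remain locked
delivered["status"] = True
print(payment.release())      # funds released to the seller
```

On an actual blockchain the same logic would run as a smart contract, so neither party has to trust the other to honour the condition.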

Source: https://www.cmswire.com/digital-workplace/blockchains-role-in-the-enterprise-in-2019/

07 Jan 2019
Artificial Intelligence Basics For Senior Executives

Artificial intelligence has evolved to become one of the most overused and misunderstood terms in business while also offering the potential to be the driving force in business decision-making, automation, and scalability.

Now is the time to develop strategies that set your business up for success for years to come. This article attempts to shed light on the origins, definition, types, and business applications of AI, and on how senior executives can approach introducing it to their business.

Defining AI

Artificial intelligence (AI) is a broad term describing technology’s ability to perform intellectual tasks typically performed only by humans. Technically speaking, a spreadsheet that helps calculate insurance rates based on a range of inputs can be classified as AI.

Applied in a business context, AI can describe what happened based on historical data, anticipate what is likely to happen in the future, and provide recommendations on what to do to achieve goals.

The process that made AI the powerful technology it is today is machine learning (ML). It describes the ability of a system to analyse data, identify patterns, and make recommendations by processing data and experiences without explicit programming instructions. ML models adapt and become more accurate over time.
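To make “identifying patterns without explicit programming instructions” concrete, here is a minimal sketch using scikit-learn. The feature names and data are invented purely for illustration and do not come from the article.

```python
# Minimal illustration: the model is given no explicit rules, only examples,
# and it infers a pattern it can apply to new cases. Data is invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [current demand level, minutes the customer has been waiting]
X = [[1, 2], [2, 1], [8, 10], [9, 7], [3, 3], [7, 9]]
y = [0, 0, 1, 1, 0, 1]       # 1 = customer accepted a higher price, 0 = declined

model = DecisionTreeClassifier().fit(X, y)   # "learning" from past behaviour
print(model.predict([[8, 8]]))                # prediction for an unseen case
print(model.predict_proba([[8, 8]]))          # the model's estimated confidence
```

Feeding the model more historical examples generally sharpens the pattern it learns, which is the sense in which ML models adapt and become more accurate over time.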

Examples of ML include:

Talent management – organisations identifying which employee traits are correlated to high performance based on CV information and performance review data.

Pricing – ride share services adjusting pricing based on estimated customer propensity to pay a higher price.

Navigation – courier services planning delivery routes based on weather, traffic, and fuel costs.

The latest advancement of ML is deep learning, a technology that requires even less human guidance and is more accurate than most ML methods. Deep learning has helped advance challenging tasks such as image recognition, sound processing, and natural language processing. Google Assistant, for example, is a product of deep learning advancements.

From Zero To Skynet In 200 Years

AI has evolved into the most disruptive technology since the introduction of the internet because of the evolution of three major trends.

Big data – The digitisation of our economies and the associated data volumes have been crucial in creating data sets required to effectively train machine learning algorithms. According to Globalwebindex, there are now over four billion people online globally generating vast amounts of data every minute of the day.

Algorithms – Researchers have paved the way for AI by gradually improving algorithms. Theoretical work in the 1800s was brought to life when American scientist Frank Rosenblatt developed the very first machine learning model in 1958.

Computing power and storage – Since Amazon brought cloud computing and storage capabilities to the mainstream, costs and ease of access have improved significantly.

These three trends have led us to AI today, a technology so powerful that it already outperforms humans at certain tasks.

How It Works

The machine learning process works roughly the same way in each case:

  1. Business objective: The business objective is defined and AI might be identified as the way to achieve the objective.
  2. Data preparation: Training data is processed (cleaned and standardised) to make it suitable for the model.
  3. Model draft: A first iteration of the machine learning model is created.
  4. Model training and optimisation: Based on a training data set, the model is fine-tuned to generate better outputs.
  5. Business rules: Business rules are defined to do something with the output of the ML model.
  6. Model deployment: Once the accuracy of the model is satisfactory and the business rules are defined, the model is deployed, which means that “real-world” data inputs (i.e. not training data) can be used to return results.

The rule of thumb here is: more data → better model → higher accuracy.
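As a rough illustration of steps 2 to 6 above, the hedged sketch below uses scikit-learn and synthetic data; the feature meanings, the accuracy check, and the “business rule” threshold are all invented for the example.

```python
# Illustrative sketch of steps 2-6 using scikit-learn and synthetic data.
# Feature meanings, thresholds and the business rule are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 2. Data preparation: assemble a cleaned, numeric training set.
rng = np.random.default_rng(0)
X = rng.random((500, 3))                       # e.g. recency, frequency, basket size
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)      # e.g. "responded to an offer"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 3. Model draft: a first, untuned model.
model = RandomForestClassifier(n_estimators=10, random_state=0)

# 4. Training and optimisation: fit on training data, check held-out accuracy,
#    then adjust (here, simply more trees) until the accuracy is acceptable.
model.fit(X_train, y_train)
print("draft accuracy:", model.score(X_test, y_test))
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("tuned accuracy:", model.score(X_test, y_test))

# 5. Business rule: act only when the model is confident enough.
def decide(customer_features):
    prob = model.predict_proba([customer_features])[0, 1]
    return "send offer" if prob > 0.7 else "do nothing"

# 6. Deployment: the same function is now called with real-world inputs.
print(decide([0.9, 0.2, 0.8]))
```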

A simplified example of a company going through this process could be a retailer wanting to increase the lifetime value of their online customers.

  1. Business objective: Increase the value of products purchased online per transaction by 25 per cent.
  2. Data preparation: All online and offline purchase data captured via the loyalty program is standardised and transferred into a central database. This data includes customer gender, age, product, product category, and date of purchase.
  3. Model draft: The initial model is created to identify customer segments and the products they are likely to buy together each season.
  4. Model training and optimisation: The initial outputs are compared with the latest real-world data, and variables are tweaked to make the model’s predictions more accurate.
  5. Business rules: When checking out, each customer should be presented with a last-minute product recommendation that increases the transaction value by at least 25 per cent.
  6. Model deployment: The check-out product recommendations are now visible to each website customer and the system will optimise recommendations dynamically based on the latest customer purchase behaviour.
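A hypothetical sketch of what step 6 might look like for this retailer follows. To keep the example short it replaces the trained model with simple co-purchase counts, and the products, prices and loyalty data are invented.

```python
# Hypothetical sketch of the checkout recommendation in step 6. It uses simple
# co-purchase counts from invented loyalty-programme data and applies the
# business rule from step 5: only suggest items that lift the basket value by
# at least 25 per cent.
from collections import Counter
from itertools import combinations

past_baskets = [                                  # illustrative historical transactions
    ["tent", "sleeping bag"], ["tent", "camp stove"],
    ["tent", "sleeping bag", "head torch"], ["sleeping bag", "head torch"],
]
prices = {"tent": 200.0, "sleeping bag": 80.0, "camp stove": 60.0, "head torch": 25.0}

co_counts = Counter()
for basket in past_baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_counts[(a, b)] += 1                    # how often two items are bought together

def recommend(current_basket):
    basket_value = sum(prices[i] for i in current_basket)
    candidates = Counter()
    for item in current_basket:
        for (a, b), n in co_counts.items():
            other = b if a == item else a if b == item else None
            if other and other not in current_basket:
                candidates[other] += n
    for item, _ in candidates.most_common():      # best co-purchase first
        if prices[item] >= 0.25 * basket_value:   # business rule: +25% basket value
            return item
    return None

print(recommend(["sleeping bag"]))                # e.g. suggests "tent"
```

In production the co-purchase counts would be replaced by the trained model, and the recommendation logic would keep updating as new purchase data arrives.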

The People Needed To Make It Happen

Larger companies that are experienced in ML typically involve a wide range of personnel in projects. Here are some examples. Please note that job titles and responsibilities might vary greatly across organisations.

Business Analyst – Understands the business needs and determines the outcomes to be achieved.

Data analyst – Defines and sources the data required to solve the business problem.

Data engineer – Establishes the connection between the data sources and the database. She also defines the database structure to ensure efficient access.

Data designer – Defines database structure to ensure efficient access.

Database administrator – Manages the storage facility including performance and security backups.

Data architect – Oversees the big picture of data flows and defines the data architecture in collaboration with the data designer and the data engineer.

Data scientist – Uses statistical analysis and data visualisation tools to explore data and creates machine learning models based on findings.

ML Engineer – Deploys the ML model and ensures that IT resources such as processing power and storage are appropriately allocated.

If you have not started introducing AI into your business and you don’t want to hire a whole team from scratch you might want to consider sourcing a vendor with AI capabilities. There is an increasing number of vendors out there and if offshore vendors are an option you’ll be able to find highly qualified talent at a fraction of the cost of western markets.

Making AI Part Of Your Business DNA

As with any other new technology, ensuring widespread adoption within your organisation and its culture is a challenge. Considering AI is the most powerful technology known to mankind, it has never been more important to create an effective adoption plan.

The following blueprint for AI adoption can be applied to most businesses.

Stage 1 – Discovery

This is the early stage of AI adoption most businesses will find themselves in today.

Here it is important to make the most out of your existing resources. Start thinking about the problems you might be able to solve and what data may be required.

Engage some of your existing engineers and ask them to learn about AI and set up an AWS environment to experiment with model templates. Once they feel confident creating basic machine learning models, work with them on designing an MVP. Engage your most loyal customers to test the MVP and capture feedback.

Now you’ll be able to communicate the value AI can bring to the business using the findings of your MVP experiment and align your key stakeholders.

Stage 2 – Engagement 

Engage your key stakeholders to map out how AI can help achieve department objectives. Map existing processes to understand where AI can add value, which employee roles will change and how customer experiences can be improved. The existing prototype may be able to offer improvements already. If new AI capabilities are needed, create a data strategy that prepares the business for future advancements. Data partnerships will help to realise your data strategy faster.

Developing an understanding of how AI relates to other technologies is crucial to ensure future relevance. It may be the most powerful technology of them all but you don’t want to miss out on synergies with others such as IoT, VR, big data, and blockchain.

Change management will be required to take your staff on the AI journey. There will be anxiety around job losses. Sure, some jobs may not be required anymore but AI and general company growth will create new ones. Offering transparency around the use of AI within your organisation and training to the roles likely to be affected by changing job requirements will ensure a smooth transition to the next stage.

Read more: https://which-50.com/artificial-intelligence-basics-for-senior-executives/

05 Jan 2019
Artificial Intelligence And The End Of Government

Even as artificial intelligence (AI) is forecast to exceed human capabilities across a range of industries, it is also predicted to augment human labor. In finance, AI is already helping financial advisors augment financial planning while enhancing investment strategy. And in medicine, AI diagnostic systems have proven to be far more accurate than doctors in diagnosing heart disease and cancerous growths. In fact, McKinsey lists some 400 use cases, representing $6 trillion in value across 19 industries, in which AI will augment human work.

But what about government? What will the impact of AI be on the nature of government?

Waking Government to AI

Not surprisingly, much of the public sector has already begun experimenting with AI-driven technologies. At the federal level, many agencies are beginning to deploy AI-powered interfaces for customer service, alongside an expanding use of software to update legacy systems and automate simple tasks. Growing investments in infrastructure planning, legal adjudication, fraud detection, and citizen response systems represent the first phase in the ongoing digitization of government.

Notwithstanding these investments, however, government remains far behind the private sector in deploying and integrating AI. As Silicon Valley’s Tim O’Reilly has suggested, augmenting government through AI is critical to modernizing the public sector. AI-based applications could potentially reduce backlogs and free workers from mundane tasks while cutting costs. According to Deloitte, documenting and recording information alone consumes a half-billion staff hours each year, at a cost of more than $16 billion in wages. Add to this an additional $15 billion for procuring and processing information, and the value of AI in transforming government bureaucracy becomes clear.

Read more: https://www.forbes.com/sites/danielaraya/2019/01/04/artificial-intelligence-and-the-end-of-government/#21666a50719b

03 Jan 2019

Retrofitting AI – key adoption issues in the enterprise 2019-2020

AI technology has moved beyond the hype phase, but short-term adoption of AI in organizations will primarily come through third-party software and relatively straightforward application of Machine Learning, even though many organizations are not yet ready for the latter.

The 2018 AI hype machine was as close to jumping the shark as anything I’ve seen over more than 30 years understanding this field of technology innovation.

Machine Learning holds the greatest promise, yet much needs to happen before firms see a genuine business value stream. Even so, there are excellent opportunities for organizations retrofitting AI functions into their own applications to boost speed, accuracy, and productivity.

Caution: AI cuts to the core of human contribution and will need vigilant leadership to prevent disorganization, distortion and dysfunction. It is just as likely that human experts in select fields such as finance, underwriting, claims processing and credit will be co-opted by AI adoption as those performing manual processes.

Artificial intelligence (AI) is old technology with new implementations. However, the advent of increasingly parallel programming models and unprecedentedly scalable hardware, coupled with the opportunity to pursue significant new business value, has made AI tech’s glittering fashion statement of 2019. As executives consider adding AI to their business system portfolio over the next 24 months, they must understand the following:

  • Not everything called AI is real. Psychologists and neuroscientists are still trying to understand what human intelligence is, so “intelligence” in the contexts of “artificial” and “human” is the same word describing two different things. Think Paris, France and Paris, Texas. Distinguishing between core AI disciplines and technologies on the one hand, and AI applications built from those technologies on the other, is important for keeping track of AI investments and expected business outcomes.
  • In 2019, AI can stand for “additive intelligence.” Organizations will find that their existing applications can be enhanced with AI “wrappers,” particularly replacing manual data ingestion, human expert forecasting, and data discovery. It is becoming easier for in-house developers to use AI technology, especially since Amazon AWS, IBM Watson and Microsoft Azure, among others, provide useful APIs for AI algorithms (see the sketch after this list). However, enterprise software providers have far more resources to implement AI capabilities, and most AI will be added to business systems through software packages.
  • AI can lead to organizational distortion and dysfunction. AI implementation has a direct effect on the nature of work in organizations. Adjusting to this is never simple. Employees see AI coming and they will push back, either purposely or not.
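As an illustration of the “additive intelligence” wrapper idea and the cloud APIs mentioned above, the sketch below sends text that would otherwise be read manually to a hosted AI service (AWS Comprehend, called via boto3). It assumes AWS credentials are already configured, and the routing rule is invented; it is meant only to show the pattern, not a recommended architecture.

```python
# Retrofitting an "AI wrapper" onto an existing workflow: send text that would
# otherwise be read manually to a hosted AI API. Uses AWS Comprehend via boto3;
# assumes AWS credentials are configured. Illustrative only.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

def triage_ticket(ticket_text: str) -> str:
    """Route an incoming support ticket based on detected sentiment."""
    result = comprehend.detect_sentiment(Text=ticket_text, LanguageCode="en")
    sentiment = result["Sentiment"]   # e.g. POSITIVE, NEGATIVE, NEUTRAL, MIXED
    return "escalate to a human agent" if sentiment == "NEGATIVE" else "standard queue"

print(triage_ticket("The invoice was wrong again and nobody has called me back."))
```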

Source: https://diginomica.com/2019/01/03/the-retrofitting-outlook-for-ai-in-the-enterprise-2019-2020/

02 Jan 2019

AI Will Create Millions More Jobs Than It Will Destroy. Here’s How

In the past few years, artificial intelligence has advanced so quickly that it now seems hardly a month goes by without a newsworthy AI breakthrough. In areas as wide-ranging as speech translation, medical diagnosis, and gameplay, we have seen computers outperform humans in startling ways.

This has sparked a discussion about how AI will impact employment. Some fear that as AI improves, it will supplant workers, creating an ever-growing pool of unemployable humans who cannot compete economically with machines.

This concern, while understandable, is unfounded. In fact, AI will be the greatest job engine the world has ever seen.

New Technology Isn’t a New Phenomenon

On the one hand, those who predict massive job loss from AI can be excused. It is easier to see existing jobs disrupted by new technology than to envision what new jobs the technology will enable.

But on the other hand, radical technological advances aren’t a new phenomenon. Technology has progressed nonstop for 250 years, and in the US unemployment has stayed between 5 and 10 percent for almost all that time, even when radical new technologies like steam power and electricity came on the scene.

But you don’t have to look back to steam, or even electricity. Just look at the internet. Go back 25 years, well within the memory of today’s pessimistic prognosticators, to 1993. The web browser Mosaic had just been released, and the phrase “surfing the web,” that most mixed of metaphors, was just a few months old.

If someone had asked you what would be the result of connecting a couple billion computers into a giant network with common protocols, you might have predicted that email would cause us to mail fewer letters, and the web might cause us to read fewer newspapers and perhaps even do our shopping online. If you were particularly farsighted, you might have speculated that travel agents and stockbrokers would be adversely affected by this technology. And based on those surmises, you might have thought the internet would destroy jobs.

But now we know what really happened. The obvious changes did occur. But a slew of unexpected changes happened as well. We got thousands of new companies worth trillions of dollars. We bettered the lot of virtually everyone on the planet touched by the technology. Dozens of new careers emerged, from web designer to data scientist to online marketer. The cost of starting a business with worldwide reach plummeted, and the cost of communicating with customers and leads went to nearly zero. Vast storehouses of information were made freely available and used by entrepreneurs around the globe to build new kinds of businesses.

The Rise of Artificial Intelligence

Then along came a new, even bigger technology: artificial intelligence. You hear the same refrain: “It will destroy jobs.”

Consider the ATM. If you had to point to a technology that looked as though it would replace people, the ATM might look like a good bet; it is, after all, an automated teller machine. And yet, there are more tellers now than when ATMs were widely released. How can this be? Simple: ATMs lowered the cost of opening bank branches, and banks responded by opening more, which required hiring more tellers.

In this manner, AI will create millions of jobs that are far beyond our ability to imagine. For instance, AI is becoming adept at language translation—and according to the US Bureau of Labor Statistics, demand for human translators is skyrocketing. Why? If the cost of basic translation drops to nearly zero, the cost of doing business with those who speak other languages falls. Thus, it emboldens companies to do more business overseas, creating more work for human translators. AI may do the simple translations, but humans are needed for the nuanced kind.

In fact, the BLS forecasts faster-than-average job growth in many occupations that AI is expected to impact: accountants, forensic scientists, geological technicians, technical writers, MRI operators, dietitians, financial specialists, web developers, loan officers, medical secretaries, and customer service representatives, to name a very few. These fields will not experience job growth in spite of AI, but through it.

But just as with the internet, the real gains in jobs will come from places where our imaginations cannot yet take us.

Parsing Pessimism

You may recall waking up one morning to the news that “47 percent of jobs will be lost to technology.”

That report by Carl Frey and Michael Osborne is a fine piece of work, but readers and the media distorted their 47 percent number. What the authors actually said is that some functions within 47 percent of jobs will be automated, not that 47 percent of jobs will disappear.

Frey and Osborne go on to rank occupations by “probability of computerization” and give the following jobs a 65 percent or higher probability: social science research assistants, atmospheric and space scientists, and pharmacy aides. So what does this mean? Social science professors will no longer have research assistants? Of course they will. They will just do different things because much of what they do today will be automated.

The intergovernmental Organization for Economic Co-operation and Development released a report of its own in 2016. This report, titled “The Risk of Automation for Jobs in OECD Countries,” applies a different “whole occupations” methodology and puts the share of jobs potentially lost to computerization at nine percent. That is normal churn for the economy.

But what of the skills gap? Will AI eliminate low-skilled workers and create high-skilled job opportunities? The relevant question is whether most people can do a job that’s just a little more complicated than the one they currently have. This is exactly what happened with the industrial revolution; farmers became factory workers, factory workers became factory managers, and so on.

Embracing AI in the Workplace

A January 2018 Accenture report titled “Reworking the Revolution” estimates that new applications of AI combined with human collaboration could boost employment worldwide by as much as 10 percent by 2020.

Electricity changed the world, as did mechanical power, as did the assembly line. No one can reasonably claim that we would be better off without those technologies. Each of them bettered our lives, created jobs, and raised wages. AI will be bigger than electricity, bigger than mechanization, bigger than anything that has come before it.

This is how free economies work, and why we have never run out of jobs due to automation. There are not a fixed number of jobs that automation steals one by one, resulting in progressively more unemployment. There are as many jobs in the world as there are buyers and sellers of labor.

Source: https://singularityhub.com/2019/01/01/ai-will-create-millions-more-jobs-than-it-will-destroy-heres-how/#sm.00001l947i936qdzfzqjt46zte0pt

30 Dec 2018

2018 is the year AI got its eyes

Computer scientists have spent more than two decades teaching, training and developing machines to see the world around them. Only recently have the artificial eyes begun to match (and occasionally exceed) their biological predecessors. 2018 has seen marked improvement in two areas of AI image processing: facial-recognition technology in both commerce and security, and image generation in — of all fields — art.

In September of this year, a team of researchers from Google’s DeepMind division published a paper outlining the operation of their newest Generative Adversarial Network. Dubbed BigGAN, this image-generation engine leverages Google’s massive cloud computing power to create extremely realistic images. But, even better, the system can be leveraged to generate dreamlike, almost nightmarish, visual mashups of objects, symbols and virtually anything else you train the system with. Google has already released the source code into the wilds of the internet and is allowing creators from anywhere in the world to borrow its processing capabilities to use the system as they wish.

“I’ve been really excited by all of the interactive web demos that people have started to turn these algorithms into,” Janelle Shane, who is a research scientist in optics by day and a neural-network programmer by night, told Engadget. She points out that in the past, researchers would typically publish their findings and call it a day. You’d be lucky to find even a YouTube video on the subject.

“But now,” she continued, “they will publish their model, they’ll publish their code and what’s even greater for the general creative world is that they will publish a kind of web application where you can try out their model for yourself.”

This is exactly what Joel Simon, developer of GANbreeder, has done. This web app enables users to generate and remix BigGAN images over multiple generations to create truly unique results. “With Simon’s web interface, you can look at what happens when you’re not generating pictures of just symbols, for example,” Shane points out. “But you’re generating something that’s a cross between a symbol and a comic book and a shark, for example.”
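The “cross between a symbol and a comic book and a shark” effect comes from blending a GAN’s two inputs: the latent noise vector and the class vector. The sketch below shows only that blending arithmetic; the generator is a placeholder stub rather than the real BigGAN model, and the dimensions are illustrative.

```python
# The "mashup" effect comes from blending a GAN's two inputs: the latent noise
# vector z and the class vector y. The generator here is a stand-in stub (real
# BigGAN weights are not loaded); the blending arithmetic is the point.
import numpy as np

NUM_CLASSES, Z_DIM = 1000, 128                   # illustrative dimensions
rng = np.random.default_rng(0)

def one_hot(class_id: int) -> np.ndarray:
    y = np.zeros(NUM_CLASSES)
    y[class_id] = 1.0
    return y

def generator(z: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Placeholder for a trained generator such as BigGAN; returns a fake 'image'."""
    return rng.random((256, 256, 3))

# Two "parent" images: different noise vectors and different classes.
z_a, z_b = rng.normal(size=Z_DIM), rng.normal(size=Z_DIM)
y_a, y_b = one_hot(2), one_hot(107)              # arbitrary class ids for the example

# A GANbreeder-style "child": interpolate both inputs and generate from the blend.
alpha = 0.5
child = generator(alpha * z_a + (1 - alpha) * z_b,
                  alpha * y_a + (1 - alpha) * y_b)
print(child.shape)                               # (256, 256, 3)
```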

Read more: https://www.engadget.com/2018/12/29/2018-is-the-year-ai-got-its-eyes/