Category: Innovation

23 Sep 2019
The AI arms race spawns new hardware architectures

As society turns to artificial intelligence to solve problems across ever more domains, we’re seeing an arms race to create specialized hardware that can run deep learning models at higher speeds and lower power consumption.

Some recent breakthroughs in this race include new chip architectures that perform computations in ways that are fundamentally different from what we’ve seen before. Looking at their capabilities gives us an idea of the kinds of AI applications we could see emerging over the next couple of years.

Neuromorphic chips

Neural networks are key to deep learning. They are composed of thousands or millions of small units that each perform simple calculations and together accomplish complicated tasks such as detecting objects in images or converting speech to text.

But traditional computers are not optimized for neural network operations; instead, they are built around one or several powerful central processing units (CPUs). Neuromorphic computers use an alternative chip architecture to physically represent neural networks: neuromorphic chips are composed of many physical artificial neurons that directly correspond to their software counterparts. This makes them especially fast at training and running neural networks.

The concept behind neuromorphic computing has existed since the 1980s, but it did not get much attention because neural networks were mostly dismissed as too inefficient. With renewed interest in deep learning and neural networks in the past few years, research on neuromorphic chips has also received new attention.


22 Sep 2019
Artificial Intelligence (AI) creates new possibilities for personalisation this year

Technology brands expand beyond their core products and turn themselves into a lifestyle

New Delhi: Artificial Intelligence (AI) and cross-industry collaborations are creating new avenues for data collection and offering personalised services to users this year, according to a report.

Among other technology trends that are picking up this year are the convergence of the smart home and healthcare, autonomous vehicles coming for last-mile delivery and data becoming a hot-button geopolitical issue, according to the report titled “14 Trends Shaping Tech” from CB Insights.

“As a more tech-savvy generation ages up, we’ll see the smart home begin acting as a kind of in-home health aide, monitoring senior citizens’ health and well being. We’ll see logistics players experiment with finally moving beyond a human driver,” said the report.

“And we’ll see cross-industry collaborations, whether via ancestry-informed Spotify playlists or limited edition Fortnite game skins,” it added.

In September 2018, Spotify partnered with Ancestry to utilise DNA data to create unique playlists for individuals.

Playlists reflect music linked to different ethnicities and regions. A person with ancestral roots in Bengaluru, for example, might see Carnatic violinists and Kannada film songs on their playlists.

DNA data is also informing how we eat. GenoPalate, for example, collects DNA info through saliva samples and analyses physiological components like an individual’s ability to absorb certain vitamins or how fast they can metabolize nutrients.

From there, it matches this information to nutrition analyses that it has conducted on a wide range of food and suggests a personalised diet. It also sells its own meal kits that use this information to map out menus.

“We’ll also see technology brands expand beyond their core products and turn themselves into a lifestyle,” said the report.

For example, as electric vehicle users need to wait for their batteries to charge for anywhere from 30 minutes to two hours, the makers of these vehicles are trying to turn this idle time into an asset.

China’s NioHouse couples charging stations with a host of activities. At the NioHouse, a user can visit the library, drop children off at daycare, co-work, and even visit a nap pod to rest while charging.

Nio has also partnered with fashion designer Hussein Chalayan to launch and sell a fashion line, Nio Extreme.

Tech companies today are also attempting to bridge the gap between academia and the career market.

Companies like the Lambda School and Flatiron School offer courses to train students on exactly the skills they will need to get a job, said the report.

These apprenticeships mostly focus on tech skills like computer science and coding. Training comes with the explicit goal of employment, and students only need to pay their tuition once they have landed a job that pays above a certain threshold.

Investors are also betting on the rise of digital goods. While these goods cannot be owned in the physical world, they come with clout, and offer personalisation and in-game experiences to otherwise one-size-fits-all characters, the research showed.


19 Sep 2019
Artificial Intelligence (AI) Stats News: AI Is Actively Watching You In 75 Countries

Recent surveys, studies, forecasts and other quantitative assessments of the impact and progress of AI highlighted the strong state of AI surveillance worldwide, the lack of adherence to common privacy principles in companies’ data privacy statements, the growing adoption of AI by global businesses, and the perception of AI as a major risk by institutional investors.

AI surveillance and the state of data privacy

At least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes, including smart city/safe city platforms (56 countries), facial recognition systems (64 countries), and smart policing (52 countries); technology linked to Chinese companies—particularly Huawei, Hikvision, Dahua, and ZTE—supplies AI surveillance technology in 63 countries, and U.S. firms’ technology—from IBM, Palantir, and Cisco—is present in 32 countries; 51% of advanced democracies deploy AI surveillance systems [Carnegie Endowment for International Peace AI Global Surveillance (AIGS) Index]

An analysis of 29 variables in 1,200 privacy statements against common themes in three major privacy regulations (the EU’s GDPR, California’s CCPA, and Canada’s PIPEDA) found that many organizations’ privacy statements fail to meet common privacy principles; less than 1% of organizations had language stating which types of third parties could access user data; only 2% of organizations had explicit language about data retention; only 32% of organizations had “readable” statements based on OTA standards [Internet Society’s Online Trust Alliance]


AI and the future of work

57% of technology companies do not expect technological advances will displace any of their workers in the next five years; 29% of respondents expect job displacement and 68% plan to retain workers by offering reskilling programs; software development (63%), data analytics (54%), engineering (52%), and AI/machine learning (48%) are the tech skills in highest demand [Consumer Technology Association survey of 252 tech business leaders]

Business adoption of AI

17% of 30 Global 500 companies have reported the use of AI/machine learning at scale and 30% reported selective use in specific business functions; in 3 years, 50% expect to be using AI/machine learning at scale; 26% have deployed RPA at scale across the enterprise or major functions; 65% say their use of RPA today is selective and siloed by individual groups or functions; in 3 years, 83% expect to have RPA deployed at scale; companies investing in AI report achieving on average 15% productivity improvements for the projects they are undertaking; most companies reported that their investments in AI-related talent and supporting infrastructure will increase approximately 50% to 100% in the next three years [KPMG 2019 Enterprise AI Adoption Study based on in-depth interviews with senior leaders at 30 of the world’s largest companies and other sources]

85% of organizations surveyed have a data strategy and 77% have implemented some AI-related technologies in the workplace, with 31% already seeing major business value from their AI efforts; top business functions for gaining most value from AI are sales (35%) and marketing (32%) and top technologies are machine learning (34%), chatbots (34%), and robotics (28%) [Mindtree survey of 650 IT leaders in the US and UK]

Expected business impact of AI

Top AI priorities for the next 3 to 5 years: customer and market insights that will refine personalization, driving sales and retention; back office and shared services automation to remove repetitive human tasks; finance and accounting streamlined to improve efficiency and compliance; analysis of unstructured voice and text data for specific functional use cases [KPMG 2019 Enterprise AI Adoption Study based on in-depth interviews with senior leaders at 30 of the world’s largest companies and other sources]

85% of institutional investors view AI as an investment risk that could potentially provoke societal backlash as well as geopolitical tension; 52% of the investors surveyed who stated AI was a risk also regarded it as an opportunity, whereas 33% saw it only as a risk and 7% only as an opportunity [BNY Mellon Investment Management and CREATE-Research in-depth, structured interviews with 45 CIOs, investment strategists and portfolio managers among pension plans, asset managers and pension consultants in 16 countries and a literature survey of about 400 widely respected research studies]

AI research successes

A deep learning algorithm, trained on non-imaging and sequential medical records, predicted the development of non-melanoma skin cancer in an Asian population with 89% accuracy [JAMA Dermatology]

Researchers at MIT developed a machine learning model that can estimate a patient’s risk of cardiovascular death. Using just the first fifteen minutes of a patient’s raw electrocardiogram (ECG) signal, the tool produces a score that places patients into different risk categories. Patients in the top quartile were nearly seven times more likely to die of cardiovascular death when compared to the low-risk group in the bottom quartile. By comparison, patients identified as high risk by the most common existing risk metrics were only three times more likely to suffer an adverse event compared to their low-risk counterparts [MIT CSAIL]


17 Sep 2019
Meet Five Synthetic Biology Companies Using AI To Engineer Biology

TVs and radios blare that “artificial intelligence is coming,” and it will take your job and beat you at chess.

But AI is already here, and it can beat you — and the world’s best — at chess. In 2012, it was also used by Google to identify cats in YouTube videos. Today, it’s the reason Teslas have Autopilot and Netflix and Spotify seem to “read your mind.” Now, AI is changing the field of synthetic biology and how we engineer biology. It’s helping engineers find new ways to design genetic circuits, and with the huge investment it has been receiving ($12.3b in the last 10 years) and the markets it is disrupting, it could have a remarkable impact on the future of humanity.

The idea of artificial intelligence is relatively straightforward — it is the programming of machines with reasoning, learning, and decision-making behaviors. Some AI algorithms (which are just a set of rules that a computer follows) are so good at these tasks that they can easily outperform human experts.

Most of what we hear about artificial intelligence refers to machine learning, a subclass of AI algorithms that extrapolate patterns from data and then use that analysis to make predictions. The more data these algorithms collect, the more accurate their predictions become. Deep learning is a more powerful subcategory of machine learning, where a high number of computational layers called neural networks (inspired by the structure of the brain) operate in tandem to increase processing depth, facilitating technologies like advanced facial recognition (including FaceID on your iPhone).

Biology, in particular, is one of the most promising beneficiaries of artificial intelligence. From investigating genetic mutations that contribute to obesity to examining pathology samples for cancerous cells, biology produces an inordinate amount of complex, convoluted data. But the information contained within these datasets often offers valuable insights that could be used to improve our health.

In the field of synthetic biology, where engineers seek to “rewire” living organisms and program them with new functions, many scientists are harnessing AI to design more effective experiments, analyze their data, and use it to create groundbreaking therapeutics. Here are five companies that are integrating machine learning with synthetic biology to pave the way for better science and better engineering.

16 Sep 2019
AI Can Now Pass School Tests but Still Falls Short on the Turing Test

From winning at Go to passing eighth grade level multiple choice tests, AI is making rapid advances. But its creativity still leaves much to be desired.

On September 4, 2019, Peter Clark, along with several other researchers, published “From ‘F’ to ‘A’ on the N.Y. Regents Science Exams: An Overview of the Aristo Project.” The Aristo project named in the title is hailed for the rapid improvement it has demonstrated when tested the way eighth-grade human students in New York State are tested on their knowledge of science.

The researchers concluded that this is an important milestone for AI: “Although Aristo only answers multiple choice questions without diagrams, and operates only in the domain of science, it nevertheless represents an important milestone towards systems that can read and understand. The momentum on this task has been remarkable, with accuracy moving from roughly 60% to over 90% in just three years.”

The Aristo project is powered by the financial resources and vision of Paul G. Allen, the founder of the Allen Institute for Artificial Intelligence (AI2). As the site explains, there are several parts to making AI capable of passing a multiple-choice test.

Aristo’s most recent solvers include:

The Information Retrieval, PMI, and ACME solvers that look for answers in a large corpus using statistical word correlations. These solvers are effective for “lookup” questions where an answer is explicit in text.
The Tuple Inference, Multee, and Qualitative Reasoning solvers that attempt to answer questions by reasoning, where two or more pieces of evidence need to be combined to derive an answer.
The AristoBERT and AristoRoBERTa solvers that apply the recent BERT-based language-models to science questions. These systems are trained to apply relevant background knowledge to the question, and use a small training curriculum to improve their performance. Their high performance reflects the rapid progress made by the NLP field as a whole.
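
The “statistical word correlations” behind the PMI solver can be sketched with pointwise mutual information computed over document co-occurrences. This is a generic illustration of the statistic, not Aristo’s actual solver; the toy corpus, sentences, and function names below are invented:

```python
import math
from collections import Counter
from itertools import combinations

# Toy corpus: each "document" is a sentence from a hypothetical science text.
docs = [
    "plants use sunlight to make food",
    "photosynthesis needs sunlight and water",
    "animals eat plants for food",
    "water evaporates in sunlight",
]

tokenized = [d.split() for d in docs]
# Document frequencies for words and (unordered) word pairs.
word_counts = Counter(w for doc in tokenized for w in set(doc))
pair_counts = Counter(
    tuple(sorted(p)) for doc in tokenized for p in combinations(set(doc), 2)
)
n_docs = len(tokenized)

def pmi(w1, w2):
    """PMI = log P(w1, w2) / (P(w1) * P(w2)), using document co-occurrence."""
    p_xy = pair_counts[tuple(sorted((w1, w2)))] / n_docs
    p_x = word_counts[w1] / n_docs
    p_y = word_counts[w2] / n_docs
    return math.log(p_xy / (p_x * p_y)) if p_xy > 0 else float("-inf")

# Words that co-occur more than chance would predict get positive PMI.
print(pmi("sunlight", "photosynthesis") > pmi("sunlight", "animals"))
```

Words that appear together more often than their individual frequencies would predict score high, which is the kind of signal a lookup-style solver can use to prefer one answer option over another.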
While Aristo’s progress is indeed impressive (no doubt there are some eighth graders who wish they could carry the AI along with them to the test), it is still far from capable of passing a Turing test. In fact, the Allen Institute for Artificial Intelligence admitted that it was deliberately testing its AI in a different way when it set out to develop it in 2016.

The explanation was given in an article entitled “Moving Beyond the Turing Test with the Allen AI Science Challenge.” Admitting that the test would not be “a full test of machine intelligence,” the institute still considered it worthwhile for showing “several capabilities strongly associated with intelligence – capabilities that our machines need if they are to reliably perform the smart activities we desire of them in the future – including language understanding, reasoning, and use of commonsense knowledge.”

There’s also the practical consideration that makes testing with ready-made tests so appealing: “In addition, from a practical point of view, exams are accessible, measurable, understandable, and compelling.” Come to think of it, that’s why some educators love having standardized tests, while others decry them for the very fact that they give the false impression they are measuring intelligence when all they can measure is performance of a very specific nature.

When it comes to more creative intelligence in which the answer is not simply out there to be found or even intuited, AI still has quite a way to go. We can see that in its attempts to create a script.

Making movies with AI
Benjamin (formerly known as Jetson) is the self-chosen name of “the world’s first automated screenwriter.” Benjamin is “a self-improving LSTM RNN [long short-term memory recurrent neural network] machine intelligence trained on human screenplays.”

Benjamin has his/its own Facebook page. Benjamin also used to have a site under that name, but now he/it shares the credit on a more generally named site, which offers links to all three of the films based on AI-generated scripts that were made within just two days to qualify for Sci-Fi London’s 48hr Film Challenge.

Benjamin’s first foray into film was the script for “Sunspring.” However, even that required a bit of prompting from Ross Goodwin, “creative technologist, artist, hacker, data scientist,” as well as the work of the filmmaker Oscar Sharp, and three human actors.

The film was posted to YouTube, and you can see it in its entirety in just 9 minutes. See if you share the assessment of the writer Neil Gaiman, whose tweet appears on the Benjamin site: “Watch a short SF film gloriously fail the Turing Test.”

14 Sep 2019
How AI can save the retail industry

Brick and mortar stores are closing left and right, but artificial intelligence may be able to keep them alive.

The future of retail continues to look grim as more brick and mortar stores close their doors. US retailers have announced 8,558 store closures so far this year, with total US store closures predicted to hit 12,000 by the end of 2019, Coresight Research reported on Friday.

While the internet and automation are typically blamed for these closures, the same technology could actually be the solution for physical store locations, said Paul Winsor, general manager of retail at DataRobot.

“If retailers want to stay open in the existing stores that they are operating in, my recommendation to them is to ask: Are they understanding the changing habits of those customers, and how they’re shopping with them, in those locations?” Winsor said.

“To survive in the tough, tough retail market, you have to start to turn your business, and make predictions, based on learning from your historical data,” he added. “It’s all about learning from your historical data.”

After being in the retail industry for more than 30 years, Winsor said that artificial intelligence (AI) and machine learning are tools retailers must use to get ahead—and to stay open.

How businesses stayed open in the past
“Data driven retail is not new. Technology has been around to help companies understand their business from a data perspective before,” Winsor said. “The data just hasn’t been as individual and accurate, as the way that machine learning can help you do that.”

To make predictions in the past, retailers would simply look at daily and weekly transactional data and draw conclusions from that, Winsor said.

As technology evolved and convenience took priority, online stores became the primary way to shop. Since technology took over the shopping experience, it also took over the way retailers draw conclusions and predictions about their services. If retailers refuse to advance and adapt to an evolving retail infrastructure, they will inevitably be left behind.

The three ways AI helps retailers
“With AI, we’re dealing with machines that can simulate intelligent behavior or imitate intelligent human behavior, i.e. sense, reason, act and adapt,” said Brian Solis, principal analyst at Altimeter. “One of the most popular ways leading brands are using AI today is through machine learning.”

“The difference is that with machine learning, systems can recognize patterns from clean data sets, and with proper management, learn from that data to assess and even predict outcomes and improve performance over time,” Solis added. “This helps retailers learn how to personalize engagement, offers and next best action, as well as guide product and service development.”

1. Understanding the customer

Machine learning helps retailers understand their customers and predict future behaviors, Winsor said.

“We want more convenience in the way that we shop, and we want to shop across multiple channels,” Winsor said. “We know, as consumers ourselves, that we are constantly changing our habits, and that is what machine learning and AI are addressing in this space.”

2. Forecasting

“The really impactful part is around forecasting,” Winsor noted. “We are now seeing retailers using AI and automated machine learning to operate their demand forecasting to understand the actual quantity needed today based on the demand from the customers.”

Not only will this increase accuracy, but it increases operational efficiency, saving both time and money for the organization.

“It’s going to really increase your accuracy because you’re taking in, you’re learning from the past and you’re predicting what that quantity needs to be in the future,” Winsor said. “Operational efficiency is absolutely key, because we’re talking about an industry that is operating its business on very low operating margins.”
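Winsor’s point about learning quantities from historical demand can be reduced to a deliberately tiny sketch. The daily sales figures below are invented, and the plain least-squares trend line stands in for the automated machine learning he describes:

```python
# Minimal sketch of demand forecasting from historical data (hypothetical
# daily unit sales). A real retailer would use richer features and a proper
# ML library, but the core idea of predicting tomorrow's quantity from the
# past is the same.
def fit_trend(sales):
    """Ordinary least-squares fit of sales = a + b * day."""
    n = len(sales)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(sales) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sales)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

sales = [100, 104, 110, 113, 119, 125]  # six days of hypothetical history
a, b = fit_trend(sales)
forecast_day_7 = a + b * 6  # predicted quantity needed tomorrow
print(round(forecast_day_7))
```

The forecast extrapolates the learned trend one day forward, which is exactly the “learn from the past, predict the future quantity” loop described above, just without the automation and feature richness of a production system.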

3. Streamlining product supply and development

Machine learning and AI can play a significant role in determining a retailer’s supply and development plans.

Some questions machine learning can answer, according to Winsor, include: Are retailers selling the right products today, based on customers’ demands and expectations? Are they priced at the right level? And are they the right products, in the right assortment, in the right stores and locations?

Future of AI in retail
The future of retail is more automated and more individualized, Solis said. “Consumer choice will become less chaotic and stressful.”

“The more promising and realistic future scenarios include screens, connected dressing rooms, and virtual racks that are tailored to me based on my personal, data-defined persona,” Solis said. “It only shares things I would consider based on previous history, and also coming trends, aligned with individual preferences. You could play that scenario out in a multitude of retail sectors, i.e. automotive, appliances, etc.”


11 Sep 2019
AI, Are You A Watcher Or A Skeptic?

We have two main questions for AI watchers (of whom there are a few) and AI skeptics (of whom there are certainly many more): Are AI’s recommendations being acted on immediately, without examining the bigger picture? And are we okay with AI taking over human roles?

Dr. Safavi, a watcher of AI for Accenture Global Health, emphasizes that we might actually be taking a narrow view of Artificial Intelligence’s potential and power.

“To maximize the potential of AI, healthcare organizations must re-imagine and reinvent their processes from scratch—and create self-adapting, self-optimizing “living processes” using machine learning algorithms and real-time data to improve. Machines themselves will become agents of process change, unlocking new roles and new ways for humans and machines to work together”.

Many developers and watchers of AI still don’t see an existing or future trend in which humans are completely cut out of the loop. Instead, they foresee a future in which artificial intelligence aims to complement human interactions and actions, whether because the technology is not yet capable of taking over human roles completely, or because humans can offer a more autonomous and holistic response to any circumstance, keeping humans rather than AI as master and commander.

The Usage of AI Algorithms – Is It Taking Over The Human Roles?

The main area of contention lies in the covert introduction of AI algorithms. We will explain this with an example of AI algorithm applications in the finance industry. Virginia Eubanks, Associate Professor of Political Science at the State University of New York, views AI algorithms as an approach to staying unbiased while distributing welfare funding ethically and correctly. Some watchers react positively to the use of AI algorithms, believing it an appropriate way to make the process neutral and that a smart machine can bring impartiality.

On the other hand, some skeptics think that in such cases AI may appear accurate, but in fact humans supply the data, devise the algorithms, and control the inputs. Skeptics and watchers alike urge caution: AI may be viewed as neutral or unbiased, but the interpretation of the data and the human involvement behind it should also be considered.

Typically, there is a perception that most AI technologies require a large amount of human input, whether data or programming, which calls their autonomy and automation into question.

Who Will Watch The (AI) Watchers?

What mainly drives the existing trend to implement AI solutions is the fact that the negative externalities of AI use aren’t borne by the organizations developing it. Normally, such situations are addressed with government regulation; the restriction on industrial pollution, for instance, exists because of its future cost to society. Regulations exist to protect people in circumstances where they can come to harm.

These negative consequences exist in the current use of AI. Today, except for a few cases, there is essentially no regulation involving the use of artificial intelligence. The basic assertions of algorithmic accountability aren’t guaranteed for either automated decision-making systems or users of the technology.

Recently, it came to light that Amazon abandoned an in-house technology it had been using to pick the best resumes from thousands of applications, after learning that the system had developed an inclination towards male candidates, penalizing women applicants. Amazon at least caught the problem; other organizations may not be as vigilant, but they can at least take that first step.

Can Artificial Intelligence Be Biased?

With machine learning, you never know what biased features your system might develop. Unless organizations are bound to be transparent about their use of opaque technology in areas where impartiality matters, such accountability won’t happen. Transparency and accountability are crucial to safely implementing AI solutions in the real world.

At this point, let’s face the fact that our current use of artificial intelligence is getting ahead of its capabilities, and using it securely requires more consideration than it is currently getting.

10 Sep 2019
Artificial Intelligence (AI) Stats News: 120 Million Workers Need To Be Retrained Because Of AI

Recent surveys, studies, forecasts and other quantitative assessments of the impact and progress of AI highlighted the need to retrain many workers, the improvement of AI’s score from ‘F’ to ‘A’ on an 8th-grade science exam, and the forecast that the AI market will reach $97.9 billion in 2023.

Expected business impact

In the next three years, as many as 120 million workers in the world’s 12 largest economies may need to be retrained or reskilled as a result of AI and intelligent automation; only 41% of CEOs surveyed say that they have the people, skills and resources required to execute their business strategies; the time it takes to close a skills gap through training has increased from 3 days on average in 2014 to 36 days in 2018 [IBM]

Top drivers for investing in robotics and automation: Reduced cost (80%), improved quality (55%), increased productivity (54%), improved capabilities of robots (54%). Top challenges: cost of robots (16%), lack of experience with automation (15%), lack of homogenous programming platforms/interfaces (13%), lack of integrators working across OEMs/geographies/industries (12%) [McKinsey online survey of 85 OEMs and users worldwide, 2018]

67% of organizations will look to AI to intelligently automate IT processes to some extent within their IT environments [ESG]

Quantified business impact

L’Oréal’s recruiters believe they saved 200 hours in hiring 80 interns out of a pool of 12,000 candidates by using a chatbot, which handles questions from candidates and saves significant time in the early stages of the recruiting process, together with Seedlink, AI software that assesses candidates’ responses to open-ended interview questions.


08 Sep 2019
AI (Artificial Intelligence) Words You Need To Know

In 1956, John McCarthy set up a ten-week research project at Dartmouth College that was focused on a new concept he called “artificial intelligence.” The event included many of the researchers who would become giants in the emerging field, like Marvin Minsky, Nathaniel Rochester, Allen Newell, O.G. Selfridge, Raymond Solomonoff, and Claude Shannon.

Yet the reaction to the phrase artificial intelligence was mixed. Did it really explain the technology? Was there a better way to word it?

Well, no one could come up with something better–and so AI stuck.

Since then, we’ve seen the coining of plenty of words in the category, which often define complex technologies and systems. The result is that it can be tough to understand what is being talked about.


So to help clarify things, let’s take a look at the AI words you need to know:


Algorithm

From Kurt Muehmel, who is a VP Sales Engineer at Dataiku:

A series of computations, from the simplest (long division using pencil and paper) to the most complex. For example, machine learning uses an algorithm to process data and discover rules hidden in the data, which are then encoded in a “model” that can be used to make predictions on new data.
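
As a toy version of that definition (the rule, data, and function names below are invented for illustration), an algorithm can process (input, output) pairs, discover the hidden rule, and encode it in a model used for predictions on new data:

```python
# The hidden rule in this invented dataset is y = 3x + 2.
data = [(1, 5), (2, 8), (3, 11), (4, 14)]  # (input, output) pairs

def learn_linear_rule(pairs):
    """Recover slope and intercept from two points (exact for noiseless data)."""
    (x1, y1), (x2, y2) = pairs[0], pairs[1]
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return {"slope": slope, "intercept": intercept}  # the learned "model"

model = learn_linear_rule(data)

def predict(model, x):
    """Apply the discovered rule to new data."""
    return model["slope"] * x + model["intercept"]

print(predict(model, 10))  # → 32.0
```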

Machine Learning

From Dr. Hossein Rahnama, who is the co-founder and CEO of Flybits:

Traditional programming involves specifying a sequence of instructions that dictate to the computer exactly what to do. Machine learning, on the other hand, is a different programming paradigm wherein the engineer provides examples comprising what the expected output of the program should be for a given input. The machine learning system then explores the set of all possible computer programs in order to find the program that most closely generates the expected output for the corresponding input data. Thus, with this programming paradigm, the engineer does not need to figure out how to instruct the computer to accomplish a task, provided they have a sufficient number of examples for the system to identify the correct program in the search space.
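
Taken literally, the paradigm Rahnama describes can be sketched as a search over a (very small) space of candidate programs for the one that reproduces the engineer’s examples; the candidate set and examples below are invented:

```python
# A toy "search space of all possible computer programs": four candidates.
candidate_programs = {
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
    "add_ten": lambda x: x + 10,
}

# The engineer supplies (input, expected output) examples instead of code.
examples = [(2, 4), (3, 9), (5, 25)]

def search(programs, examples):
    """Return the first program that matches every example, if any."""
    for name, fn in programs.items():
        if all(fn(x) == y for x, y in examples):
            return name
    return None

print(search(candidate_programs, examples))  # → "square"
```

Note that (2, 4) alone is ambiguous (both "double" and "square" fit), which is why the definition stresses having a sufficient number of examples to identify the correct program.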

Neural Networks

From Dan Grimm, who is the VP and General Manager of Computer Vision at RealNetworks:

Neural networks are mathematical constructs that mimic the structure of the human brain to summarize complex information into simple, tangible results. Much like we train the human brain to, for example, learn to control our bodies in order to walk, these networks also need to be trained with significant amounts of data. Over the last five years, there have been tremendous advancements in the layering of these networks and the compute power available to train them.
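
As a minimal, hedged illustration of “training with data”: a single artificial neuron trained with the classic perceptron rule, orders of magnitude simpler than the layered networks described above, learns the logical AND function from labelled examples rather than from explicit rules:

```python
def step(z):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if z > 0 else 0

# Labelled training examples for logical AND: ((x1, x2), target).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, adjusted by training rather than hand-coded
bias = 0.0
lr = 0.1        # learning rate

for _ in range(20):  # a few passes over the training data
    for (x1, x2), target in samples:
        pred = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - pred
        # Nudge weights toward reducing the error on this example.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

print([step(w[0] * x1 + w[1] * x2 + bias) for (x1, x2), _ in samples])
```

Real networks stack many such units into layers and train on far larger datasets, but the principle, adjusting weights from examples until behavior is proficient, is the same.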

Deep Learning

From Sheldon Fernandez, who is the CEO of DarwinAI:

Deep Learning is a specialized form of Machine Learning, based on neural networks that emulate the cognitive capabilities of the human mind. Deep Learning is to Machine Learning what Machine Learning is to AI: not the only manifestation of its parent, but generally the most powerful and eye-catching version. In practice, deep learning networks capable of performing sophisticated tasks are 1) many layers deep, with millions, sometimes billions, of inputs (hence the “deep”); and 2) trained using real-world examples until they become proficient at the prevailing task (hence the “learning”).


Explainability

From Michael Beckley, who is the CTO and founder of Appian:

Explainability is knowing why AI rejects your credit card charge as fraud, denies your insurance claim, or confuses the side of a truck with a cloudy sky. Explainability is necessary to build trust and transparency into AI-powered software. The power and complexity of AI deep learning can make predictions and decisions difficult to explain to both customers and regulators. As our understanding of potential bias in the data sets used to train AI algorithms grows, so does our need for greater explainability in our AI systems. To meet this challenge, enterprises can use tools like low-code platforms to put a human in the loop and govern how AI is used in important decisions.

Supervised, Unsupervised and Reinforcement Learning

From Justin Silver, who is the manager of science & research at PROS:

There are three broad categories of machine learning: supervised, unsupervised, and reinforcement learning. In supervised learning, the machine observes a set of cases (think of “cases” as scenarios like “The weather is cold and rainy”) and their outcomes (for example, “John will go to the beach”) and learns rules with the goal of being able to predict the outcomes of unobserved cases (if, in the past, John usually has gone to the beach when it was cold and rainy, in the future the machine will predict that John will very likely go to the beach whenever the weather is cold and rainy). In unsupervised learning, the machine observes a set of cases, without observing any outcomes for these cases, and learns patterns that enable it to classify the cases into groups with similar characteristics (without any knowledge of whether John has gone to the beach, the machine learns that “The weather is cold and rainy” is similar to “It’s snowing” but not to “It’s hot outside”). In reinforcement learning, the machine takes actions towards achieving an objective, receives feedback on those actions, and learns through trial and error to take actions that lead to better fulfillment of that objective (if the machine is trying to help John avoid those cold and rainy beach days, it could give John suggestions over a period of time on whether to go to the beach, learn from John’s positive and negative feedback, and continue to update its suggestions).
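The supervised setup in Silver’s beach example can be sketched in a few lines (our illustration, not from the article): the machine observes past cases and their outcomes, then predicts the outcome of a new case by majority vote over matching past cases.

```python
from collections import Counter

# Past cases (the weather) and their observed outcomes (what John did).
history = [
    ({"weather": "cold and rainy"}, "beach"),
    ({"weather": "cold and rainy"}, "beach"),
    ({"weather": "cold and rainy"}, "stay home"),
    ({"weather": "hot"}, "stay home"),
]

def predict(case, history):
    """Predict the outcome of a new case from matching past cases."""
    outcomes = [outcome for past_case, outcome in history if past_case == case]
    return Counter(outcomes).most_common(1)[0][0] if outcomes else None

print(predict({"weather": "cold and rainy"}, history))  # prints "beach"
```

Unsupervised learning would receive only the cases, with no outcomes, and group similar ones together; reinforcement learning would instead act, observe feedback, and adjust over time.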


Bias

From Mehul Patel, who is the CEO of Hired:

While you may think of machines as objective, fair and consistent, they often adopt the same unconscious biases as the humans who built them. That’s why it’s vital that companies recognize the importance of normalizing data—meaning adjusting values measured on different scales to a common scale—to ensure that human biases aren’t unintentionally introduced into the algorithm. Take hiring as an example: If you give a computer a data set with 100 female candidates and 300 male candidates and ask it to predict the best person for the job, it is going to surface more male candidates because the volume of men is three times the size of women in the data set. Building technology that is fair and equitable may be challenging but will ensure that the algorithms informing our decisions and insights are not perpetuating the very biases we are trying to undo as a society.
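The normalization step Patel mentions, adjusting values measured on different scales to a common scale, can be sketched with min-max scaling (our illustration, not from the article): each feature is rescaled to the 0-1 range so no feature dominates purely because of its units.

```python
def min_max_normalize(values):
    """Rescale a list of numbers to the common range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Two hiring-related features measured on very different scales.
years_experience = [1, 5, 10]          # scale: single digits
salaries = [40_000, 80_000, 120_000]   # scale: tens of thousands

print(min_max_normalize(years_experience))  # [0.0, 0.444..., 1.0]
print(min_max_normalize(salaries))          # [0.0, 0.5, 1.0]
```

Note that normalization addresses scale imbalance, not representation imbalance; the 100-vs-300 candidate skew in the example above would additionally require rebalancing or reweighting the data set itself.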


Backpropagation

From Victoria Jones, who is the Zoho AI Evangelist:

Backpropagation algorithms allow a neural network to learn from its mistakes. The technique traces an outcome backwards to the prediction and analyzes the margin of error at different stages to adjust how the network will make its next prediction. Around 70% of the features of our AI assistant (called Zia) use backpropagation, including Zoho Writer’s grammar-check engine and Zoho Notebook’s OCR technology, which lets Zia identify objects in images and make those images searchable. This technology also allows Zia’s chatbot to respond more accurately and naturally. The more a business uses Zia, the more Zia understands how that business is run. This means that Zia’s anomaly detection and forecasting capabilities become more accurate and personalized to any specific business.
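The core idea, tracing the error backwards and adjusting to improve the next prediction, can be shown for the simplest possible model, a single weight (our illustration, not Zoho’s implementation):

```python
def train(examples, learning_rate=0.1, steps=100):
    """Learn w in 'prediction = w * x' by repeatedly correcting mistakes."""
    w = 0.0
    for _ in range(steps):
        for x, target in examples:
            prediction = w * x
            error = prediction - target    # margin of error at the output
            gradient = error * x           # propagate the error back to the weight
            w -= learning_rate * gradient  # adjust the next prediction
    return w

# The underlying rule is y = 2x; training should recover w close to 2.
w = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 3))  # approximately 2.0
```

In a real deep network the same error signal is propagated backwards through every layer via the chain rule, nudging millions of weights at once.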


07 Sep 2019
AI for Executives: How Machine Learning Is Impacting the Next Generation Workforce

AI for Executives: How Machine Learning Is Impacting the Next Generation Workforce

The term “artificial” doesn’t really do justice to the next generation and its attitude of “how will we get things done.”

“Artificial” refers to a machine doing the work rather than a human, so “Augmented Intelligence” might be more appropriate. Many agree that repetitive tasks and to-dos should be handled by something other than humans.

Take a robotic vacuum, for example. As I write this, I am vacuuming, or should I say Ivan is vacuuming. It keeps track of where it has been and which areas of my home need the most attention. It doesn’t slack, cut corners or decide it is too tired to get the job done. Simply put, it is more efficient than I am, and it gives me far more visibility into what is going on with the vacuuming.

Now for a definition of Artificial Intelligence (AI); this one gives the best understanding within the context of executive management:

Artificial Intelligence is the simulation of human intelligence processes by machines. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services and Google AI services.

Now let’s look at examples of how AI is being applied, starting with Human Resources (HR) and workforce management. It is interesting that the field is still called “Human” resources; I wonder if that will change as new AI functionality is brought to the table.

AI tools, much like databases, are only as smart as the data that is fed into them. When it comes to HR practices, the potential for bias is inherent; hence the “Human” part. Remember that people decide which data points are used to train an AI program or process, and people hold biases, some of them unconscious.

Read more: