Category: Disruptive Technology

23 Sep 2019
The AI arms race spawns new hardware architectures


As society turns to artificial intelligence to solve problems across ever more domains, we’re seeing an arms race to create specialized hardware that can run deep learning models at higher speeds and lower power consumption.

Some recent breakthroughs in this race include new chip architectures that perform computations in ways that are fundamentally different from what we’ve seen before. Looking at their capabilities gives us an idea of the kinds of AI applications we could see emerging over the next couple of years.

Neuromorphic chips

Neural networks, composed of thousands or millions of small programs that each perform a simple calculation, are key to deep learning; together these programs accomplish complicated tasks such as detecting objects in images or converting speech to text.

But traditional computers are not optimized for neural network operations; instead they are built around one or a few powerful central processing units (CPUs). Neuromorphic computers use an alternative chip architecture to physically represent neural networks: neuromorphic chips are composed of many physical artificial neurons that directly correspond to their software counterparts. This makes them especially fast at training and running neural networks.

The concept behind neuromorphic computing has existed since the 1980s, but it did not get much attention because neural networks were mostly dismissed as too inefficient. With renewed interest in deep learning and neural networks in the past few years, research on neuromorphic chips has also received new attention.

Read more: https://venturebeat.com/2019/09/21/the-ai-arms-race-spawns-new-hardware-architectures/


22 Sep 2019
Artificial Intelligence (AI) creates new possibilities for personalisation this year


Technology brands expand beyond their core products and turn themselves into a lifestyle

New Delhi: Artificial Intelligence (AI) and cross-industry collaborations are creating new avenues for data collection and offering personalised services to users this year, according to a report.

Among other technology trends that are picking up this year are the convergence of the smart home and healthcare, autonomous vehicles coming for last-mile delivery and data becoming a hot-button geopolitical issue, according to the report titled “14 Trends Shaping Tech” from CB Insights.

“As a more tech-savvy generation ages up, we’ll see the smart home begin acting as a kind of in-home health aide, monitoring senior citizens’ health and well being. We’ll see logistics players experiment with finally moving beyond a human driver,” said the report.

“And we’ll see cross-industry collaborations, whether via ancestry-informed Spotify playlists or limited edition Fortnite game skins,” it added.

In September 2018, Spotify partnered with Ancestry.com to utilise DNA data to create unique playlists for individuals.

Playlists reflect music linked to different ethnicities and regions. A person with ancestral roots in Bengaluru, for example, might see Carnatic violinists and Kannada film songs on their playlists.

DNA data is also informing how we eat. GenoPalate, for example, collects DNA info through saliva samples and analyses physiological components like an individual’s ability to absorb certain vitamins or how fast they can metabolize nutrients.

From there, it matches this information to nutrition analyses that it has conducted on a wide range of food and suggests a personalised diet. It also sells its own meal kits that use this information to map out menus.

“We’ll also see technology brands expand beyond their core products and turn themselves into a lifestyle,” said the report.

For example, as electric vehicle users need to wait anywhere from 30 minutes to two hours for their batteries to charge, the makers of these vehicles are trying to turn this idle time into an asset.

China’s NioHouse couples charging stations with a host of activities. At the NioHouse, a user can visit the library, drop children off at daycare, co-work, and even visit a nap pod to rest while charging.

Nio has also partnered with fashion designer Hussein Chalayan to launch and sell a fashion line, Nio Extreme.

Tech companies today are also attempting to bridge the gap between academia and the career market.

Companies like the Lambda School and Flatiron School offer courses to train students on exactly the skills they will need to get a job, said the report.

These apprenticeships mostly focus on tech skills like computer science and coding. Training comes with the explicit goal of employment and students only need to pay their tuition once they have landed a job that pays them above a certain range.

Investors are also betting on the rise of digital goods. While these goods cannot be owned in the physical world, they come with clout, and offer personalisation and in-game experiences to otherwise one-size-fits-all characters, the research showed.

Source: https://gulfnews.com/technology/artificial-intelligence-ai-creates-new-possibilities-for-personalisation-this-year-1.1569149228735

21 Sep 2019
To survive, asset managers need to embrace disruptive technologies


FROM THE OUTSIDE, asset management looks like an exciting industry that is immune to technological disruption — but that’s not true.

Changes in regulations, rising customer expectations, and the growing pressure from new-age competitors are forcing the asset management industry to explore disruptive technologies in order to stay in business.

According to a new study by the Investment Company Institute (ICI), the asset management industry is at a critical juncture in its history, investing in innovation and reinvigorating its products and processes.

In fact, while front-office transformations remain slow, operations executives are aggressively transforming their operating models to achieve greater agility and cost-effectiveness, as they take on the challenges of supporting more complex products and services.

The study revealed that 64 percent of firms surveyed for the report have completed a major operating model change in the past three years to improve operational efficiencies.

Asset managers, many of whom are uncertain about the ability of their operations and technology to support the firm’s objectives, believe they need to alter their strategies at the front-end to focus on driving distribution and creating differentiated products.

“Doing this effectively requires embracing technology and innovation, including investment platform technology and artificial intelligence, for better investment decision-making,” said Accenture Senior MD Michael Spellacy — whose firm collaborated with ICI to create the report.

The study shows that 55 percent of asset management firms reported having a formal initiative in place to evaluate the business and operational potential of new technologies such as the cloud and APIs.

Asset managers join the fintech ecosystem

Given the rapid pace of development in the world of technology, asset managers are also evaluating partnerships with the fintech ecosystem and exploring collaborations with start-ups, accelerators, and incubators.

The study also reveals that middle office functions—including collateral management, data management, derivatives processing, and transaction management—will be the biggest beneficiaries of fintech partnerships.

In the back office, the report found that respondents expect that fintech firms will be able to quickly and successfully help transform expense management, fund accounting, and financial reporting.

Approximately one-third of the firms agreed that at-scale middle office fintech partnerships were common across the industry, suggesting that these partnerships are already delivering results.

According to analysts, however, in order to ensure the success of a fintech partnership, asset managers need to adopt a laser-like focus on delivering bottom-line impact.

“It is vital to avoid the initial focus on ‘shiny objects’ that can result in proofs of concept that lack a clear vision of eventual production and outcomes,” the report pointed out.

Transformation key to long-term success

Technology adoption in the asset management industry isn’t exactly driven by consumer demand for better experiences — but stakeholders do expect more transparency, accountability, and control over their monies.

Further, with margin pressures making profitability difficult, using disruptive technologies might provide new revenue opportunities to asset managers through smarter and more intelligent product and service portfolios.

The survey conducted by ICI suggests that decision-makers in the industry are aware of the changing landscape and are taking action.

“Asset managers are disrupting their legacy operating models and skillsets to reequip their firms to win in a disrupted future. Executing this transformation successfully is imperative for long-term success,” concluded the report.

Many asset managers have shifted their strategic focus to the front office and clients, and are accelerating their adoption of technology to boost operational capabilities and to acquire and develop top talent in support of the front office’s evolving needs.

In the near future, partnerships with the fintech ecosystem and customer-driven technology projects are expected to drive the asset management industry into a new space — where digital is weaved into the very fabric of the organization.

Source: https://techwireasia.com/2019/09/to-survive-asset-managers-need-to-embrace-disruptive-technologies/

19 Sep 2019
Artificial Intelligence (AI) Stats News: AI Is Actively Watching You In 75 Countries


Recent surveys, studies, forecasts and other quantitative assessments of the impact and progress of AI highlighted the strong state of AI surveillance worldwide, the lack of adherence to common privacy principles in companies’ data privacy statement, the growing adoption of AI by global businesses, and the perception of AI as a major risk by institutional investors.

AI surveillance and the state of data privacy

At least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes, including smart city/safe city platforms (56 countries), facial recognition systems (64 countries), and smart policing (52 countries); companies linked to China—particularly Huawei, Hikvision, Dahua, and ZTE—supply AI surveillance technology in 63 countries, while technology from U.S. firms—IBM, Palantir, and Cisco—is present in 32 countries; 51% of advanced democracies deploy AI surveillance systems [Carnegie Endowment for International Peace AI Global Surveillance (AIGS) Index]

An analysis of 29 variables in 1,200 privacy statements against common themes in three major privacy regulations (the EU’s GDPR, California’s CCPA, and Canada’s PIPEDA) found that many organizations’ privacy statements fail to meet common privacy principles; less than 1% of organizations had language stating which types of third parties could access user data; only 2% of organizations had explicit language about data retention; only 32% of organizations had “readable” statements based on OTA standards [Internet Society’s Online Trust Alliance]


AI and the future of work

57% of technology companies do not expect that technological advances will displace any of their workers in the next five years; 29% of respondents do expect job displacement, and 68% plan to retain workers by offering reskilling programs; software development (63%), data analytics (54%), engineering (52%), and AI/machine learning (48%) are the tech skills in highest demand [Consumer Technology Association survey of 252 tech business leaders]

Business adoption of AI

17% of 30 Global 500 companies have reported the use of AI/machine learning at scale and 30% reported selective use in specific business functions; in 3 years, 50% expect to be using AI/machine learning at scale; 26% have deployed RPA at scale across the enterprise or major functions; 65% say their use of RPA today is selective and siloed by individual groups or functions; in 3 years, 83% expect to have RPA deployed at scale; companies investing in AI report achieving on average 15% productivity improvements for the projects they are undertaking; most companies reported that their investments in AI-related talent and supporting infrastructure will increase approximately 50% to 100% in the next three years [KPMG 2019 Enterprise AI Adoption Study based on in-depth interviews with senior leaders at 30 of the world’s largest companies and other sources]

85% of organizations surveyed have a data strategy and 77% have implemented some AI-related technologies in the workplace, with 31% already seeing major business value from their AI efforts; top business functions for gaining most value from AI are sales (35%) and marketing (32%) and top technologies are machine learning (34%), chatbots (34%), and robotics (28%) [Mindtree survey of 650 IT leaders in the US and UK]

Expected business impact of AI

Top AI priorities for the next 3 to 5 years: customer and market insights that will refine personalization, driving sales and retention; back office and shared services automation to remove repetitive human tasks; finance and accounting streamlined to improve efficiency and compliance; analysis of unstructured voice and text data for specific functional use cases [KPMG 2019 Enterprise AI Adoption Study based on in-depth interviews with senior leaders at 30 of the world’s largest companies and other sources]

85% of institutional investors view AI as an investment risk that could potentially provoke societal backlash as well as geopolitical tension; 52% of the investors who stated AI was a risk also regarded it as an opportunity, whereas 33% saw it only as a risk and 7% only as an opportunity [BNY Mellon Investment Management and CREATE-Research in-depth, structured interviews with 45 CIOs, investment strategists and portfolio managers among pension plans, asset managers and pension consultants in 16 countries and a literature survey of about 400 widely respected research studies]

AI research successes

A deep learning algorithm, trained on non-imaging and sequential medical records, predicted the development of non-melanoma skin cancer in an Asian population with 89% accuracy [JAMA Dermatology]

Researchers at MIT developed a machine learning model that can estimate a patient’s risk of cardiovascular death. Using just the first fifteen minutes of a patient’s raw electrocardiogram (ECG) signal, the tool produces a score that places patients into different risk categories. Patients in the top quartile were nearly seven times more likely to die of cardiovascular death when compared to the low-risk group in the bottom quartile. By comparison, patients identified as high risk by the most common existing risk metrics were only three times more likely to suffer an adverse event compared to their low-risk counterparts [MIT CSAIL]

Source: https://www.forbes.com/sites/gilpress/2019/09/18/artificial-intelligence-ai-stats-news-ai-is-actively-watching-you-in-75-countries/#a6335dc58092

18 Sep 2019
Disruptive Technology Design Considerations


A disruptive technology, according to Harvard Business School professor Clayton M. Christensen, who coined the term, is one that displaces an established technology and shakes up the industry, or a ground-breaking technology that creates a new industry.

In this context, disruptive technology covers a variety of innovations. For instance, 5G, RFID, and AI can be used for personalisation in a retail or hotel setting, streaming demographic-relevant content to each individual. It could also mean augmented reality, virtual reality helmets, immersive higher-pixel-density displays, or transparent displays.

Other examples are drones or invisible technologies, such as high fibre connectivity for higher quality transportation of content.

There are a number of considerations that need to be addressed with disruptive technology, particularly in critical environments. This is true whether it is a theme park that attracts thousands of visitors or an oil and gas control room which requires uptime 24/7.

It is advisable in these circumstances to balance new, leading-edge innovations with established, tried and tested technology. Adopting disruptive technologies early on comes with an element of risk: teething problems and bugs are typically discovered and ironed out only later in a technology’s life, so early adopters could experience reliability issues.

Creating immersive experiences

It’s not just reliability a designer needs to think about when installing new tech. VR has the potential to create impressive immersive experiences, however, the helmets can be isolating. There can also be issues around health and safety, and hygiene if hundreds and hundreds of people are going to use them continuously.

It must be stressed that the suitability of the technology needs to be assessed on a case-by-case basis. Its compatibility with the existing tech in the environment also needs to be taken into account.

Where there is a tight time schedule, it may make better sense to go with tried and tested technology. Installing a piece of cutting-edge tech could require lengthy design, testing and implementation to ensure it meets its purpose.

When making a decision about the technology to use in a project, we assess it according to a number of factors. These are its readiness, its suitability and its fitness for purpose. We also look at where it fits in the client’s AV technology road map.

It is of key importance to look at the practical implications of new technology, and its ability to scale for use in large attractions. For example, in museums and theme parks, where large numbers of people will be using it constantly.

Another important consideration is the need for high-quality content to complement and accompany the new technology.

At the heart of this process is technology master planning, something evidenced in our recent projects incorporating disruptive technologies.

MGM Cotai – The Spectacle

At the MGM Cotai Hotel, the Electrosonic team met and overcame the profound technical challenges of the world’s largest free-span glazed roof. The team created an impactful digital art experience a year in advance of anything else on the market in terms of innovation and technology.

The project leverages the latest 4K displays, sufficiently bright to counteract background light in public environments. It shows the team’s capacity to optimize presentation for crisp videowalls. These can display cinematic portraits, big scenic shots of skylines, and multiple vignettes of attractions.

Electrosonic’s innovative multisensory experience takes place around the atrium. It highlights a global array of digital art. It also utilises true 4K LED processing of the media walls, creating ‘digital wallpaper’.

Source: https://blooloop.com/disruptive-technology-electrosonic/

17 Sep 2019
Meet Five Synthetic Biology Companies Using AI To Engineer Biology


TVs and radios blare that “artificial intelligence is coming,” and it will take your job and beat you at chess.

But AI is already here, and it can beat you — and the world’s best — at chess. In 2012, it was also used by Google to identify cats in YouTube videos. Today, it’s the reason Teslas have Autopilot and Netflix and Spotify seem to “read your mind.” Now, AI is changing the field of synthetic biology and how we engineer biology. It’s helping engineers find new ways to design genetic circuits — and it could leave a remarkable impact on the future of humanity through the huge investment it has been receiving ($12.3b in the last 10 years) and the markets it is disrupting.

The idea of artificial intelligence is relatively straightforward — it is the programming of machines with reasoning, learning, and decision-making behaviors. Some AI algorithms (which are just a set of rules that a computer follows) are so good at these tasks that they can easily outperform human experts.

Most of what we hear about artificial intelligence refers to machine learning, a subclass of AI algorithms that extrapolate patterns from data and then use that analysis to make predictions. The more data these algorithms collect, the more accurate their predictions become. Deep learning is a more powerful subcategory of machine learning, where a high number of computational layers called neural networks (inspired by the structure of the brain) operate in tandem to increase processing depth, facilitating technologies like advanced facial recognition (including FaceID on your iPhone).

Biology, in particular, is one of the most promising beneficiaries of artificial intelligence. From investigating genetic mutations that contribute to obesity to examining pathology samples for cancerous cells, biology produces an inordinate amount of complex, convoluted data. But the information contained within these datasets often offers valuable insights that could be used to improve our health.

In the field of synthetic biology, where engineers seek to “rewire” living organisms and program them with new functions, many scientists are harnessing AI to design more effective experiments, analyze their data, and use it to create groundbreaking therapeutics. Here are five companies that are integrating machine learning with synthetic biology to pave the way for better science and better engineering.

Read more: https://www.forbes.com/sites/johncumbers/2019/09/16/meet-5-synthetic-biology-companies-using-ai-to-engineer-biology/

16 Sep 2019
AI Can Now Pass School Tests but Still Falls Short on the Turing Test


From winning at Go to passing eighth grade level multiple choice tests, AI is making rapid advances. But its creativity still leaves much to be desired.

On September 4, 2019, Peter Clark, along with several other researchers, published “From ‘F’ to ‘A’ on the N.Y. Regents Science Exams: An Overview of the Aristo Project.” The Aristo project named in the title is hailed for the rapid improvement it has demonstrated when tested the way eighth-grade students in New York State are tested on their knowledge of science.

The researchers concluded that this is an important milestone for AI: “Although Aristo only answers multiple choice questions without diagrams, and operates only in the domain of science, it nevertheless represents an important milestone towards systems that can read and understand. The momentum on this task has been remarkable, with accuracy moving from roughly 60% to over 90% in just three years.”

The Aristo project is powered by the financial resources and vision of Paul G. Allen, the founder of the Allen Institute for Artificial Intelligence (AI2). As the site explains, there are several parts to making AI capable of passing a multiple-choice test.

Aristo’s most recent solvers include:

The Information Retrieval, PMI, and ACME solvers that look for answers in a large corpus using statistical word correlations. These solvers are effective for “lookup” questions where an answer is explicit in text.
The Tuple Inference, Multee, and Qualitative Reasoning solvers that attempt to answer questions by reasoning, where two or more pieces of evidence need to be combined to derive an answer.
The AristoBERT and AristoRoBERTa solvers that apply the recent BERT-based language-models to science questions. These systems are trained to apply relevant background knowledge to the question, and use a small training curriculum to improve their performance. Their high performance reflects the rapid progress made by the NLP field as a whole.

While Aristo’s progress is indeed impressive, and no doubt some eighth graders wish they could carry the AI along with them to the test, it is still far from capable of passing a Turing test. In fact, the Allen Institute for Artificial Intelligence admitted that it was deliberately testing its AI in a different way when it set out to develop it in 2016.

The explanation was given in an article entitled “Moving Beyond the Turing Test with the Allen AI Science Challenge.” While admitting that the test would not be “a full test of machine intelligence,” the institute still considered it worthwhile for showing “several capabilities strongly associated with intelligence – capabilities that our machines need if they are to reliably perform the smart activities we desire of them in the future – including language understanding, reasoning, and use of commonsense knowledge.”

There’s also the practical consideration that makes testing with ready-made tests so appealing: “In addition, from a practical point of view, exams are accessible, measurable, understandable, and compelling.” Come to think of it, that’s why some educators love having standardized tests, while others decry them for the very fact that they give the false impression they are measuring intelligence when all they can measure is performance of a very specific nature.

When it comes to more creative intelligence in which the answer is not simply out there to be found or even intuited, AI still has quite a way to go. We can see that in its attempts to create a script.

Making movies with AI
Benjamin (formerly known as Jetson) is the self-chosen name of “the world’s first automated screenwriter.” The screenwriter known as Benjamin is “a self-improving LSTM RNN [long short-term memory recurrent neural network] machine intelligence trained on human screenplays.”

Benjamin has his/its own Facebook page, facebook.com/benjaminthescreenwriter. Benjamin also used to have a site under that name, but now he/it shares the credit on a more generally named one, www.thereforefilms.com/films-by-benjamin-the-ai, which offers links to all three of the films based on scripts generated by AI that were made within just two days to qualify for the Sci-Fi London’s 48hr Film Challenge.

Benjamin’s first foray into film was the script for “Sunspring.” However, even that required a bit of prompting from Ross Goodwin, “creative technologist, artist, hacker, data scientist,” as well as the work of the filmmaker Oscar Sharp, and three human actors.

The film was posted to YouTube, and you can see it in its entirety by sitting through the entire 9 minutes. See if you share the assessment expressed by the writer Neil Gaiman, whose tweet appears on the Benjamin site: “Watch a short SF film gloriously fail the Turing Test.”

Read more: https://interestingengineering.com/ai-can-now-pass-school-tests-but-still-falls-short-on-the-turing-test

08 Sep 2019
AI (Artificial Intelligence) Words You Need To Know


In 1956, John McCarthy set up a ten-week research project at Dartmouth College that was focused on a new concept he called “artificial intelligence.” The event included many of the researchers who would become giants in the emerging field, like Marvin Minsky, Nathaniel Rochester, Allen Newell, O.G. Selfridge, Raymond Solomonoff, and Claude Shannon.

Yet the reaction to the phrase artificial intelligence was mixed. Did it really explain the technology? Was there a better way to word it?

Well, no one could come up with something better–and so AI stuck.

Since then, we’ve seen the coining of plenty of words in the category, which often define complex technologies and systems. The result is that it can be tough to understand what is being talked about.


So to help clarify things, let’s take a look at the AI words you need to know:

Algorithm

From Kurt Muehmel, who is a VP Sales Engineer at Dataiku:

A series of computations, from the most simple (long division using pencil and paper) to the most complex. For example, machine learning uses an algorithm to process data and discover rules hidden in the data, which are then encoded in a “model” that can be used to make predictions on new data.
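The definition above can be made concrete with a small sketch: a least-squares line-fitting algorithm that processes data, discovers the rule hidden in it, and encodes that rule in a two-parameter “model” used for predictions. The function names and data are illustrative, not from the article.

```python
# A minimal illustration of the algorithm -> model -> prediction pipeline,
# using simple least-squares line fitting on hypothetical data.

def fit_line(xs, ys):
    """Algorithm: process the data and discover the hidden rule y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b  # the "model": two encoded parameters

# Data secretly generated by the rule y = 2x + 1
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
a, b = fit_line(xs, ys)

def predict(x):
    return a * x + b  # apply the model to new data

print(predict(10))  # → 21.0 (the hidden rule 2x + 1 was recovered)
```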

Machine Learning

From Dr. Hossein Rahnama, who is the co-founder and CEO of Flybits:

Traditional programming involves specifying a sequence of instructions that dictate to the computer exactly what to do. Machine learning, on the other hand, is a different programming paradigm wherein the engineer provides examples comprising what the expected output of the program should be for a given input. The machine learning system then explores the set of all possible computer programs in order to find the program that most closely generates the expected output for the corresponding input data. Thus, with this programming paradigm, the engineer does not need to figure out how to instruct the computer to accomplish a task, provided they have a sufficient number of examples for the system to identify the correct program in the search space.
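A toy illustration of this paradigm, under the simplifying assumption that the “set of all possible computer programs” is just three hand-written candidates: the engineer supplies input/output examples, and the system searches for the program that reproduces them.

```python
# The engineer supplies examples of expected behaviour, not instructions;
# the "learner" searches a (tiny, hypothetical) space of candidate programs.

candidates = {
    "double":    lambda x: 2 * x,
    "square":    lambda x: x * x,
    "add_three": lambda x: x + 3,
}

examples = [(1, 1), (2, 4), (3, 9)]  # (input, expected output) pairs

def search(programs, cases):
    """Return the name of the program matching every example, if any."""
    for name, program in programs.items():
        if all(program(x) == y for x, y in cases):
            return name
    return None

print(search(candidates, examples))  # → square
```

With real machine learning the search space is vastly larger and the match is approximate rather than exact, but the shape of the idea is the same.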

Neural Networks

From Dan Grimm, who is the VP and General Manager of Computer Vision at RealNetworks:

Neural networks are mathematical constructs that mimic the structure of the human brain to summarize complex information into simple, tangible results. Much as the human brain must be trained, for example to learn to control our bodies in order to walk, these networks also need to be trained with significant amounts of data. Over the last five years, there have been tremendous advancements in the layering of these networks and the compute power available to train them.
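A minimal sketch of such a construct: two layers of simple weighted-sum “neurons” squashing two inputs into a single tangible score. The weights here are made up for illustration; in practice they would be learned from training data.

```python
import math

# Each neuron: weighted sum of inputs, passed through a nonlinearity.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def tiny_network(x):
    # One hidden layer of two neurons, then one output neuron.
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -1.1], 0.2)

score = tiny_network([1.0, 2.0])
print(round(score, 3))  # a single result between 0 and 1
```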

Deep Learning

From Sheldon Fernandez, who is the CEO of DarwinAI:

Deep Learning is a specialized form of Machine Learning, based on neural networks that emulate the cognitive capabilities of the human mind. Deep Learning is to Machine Learning what Machine Learning is to AI–not the only manifestation of its parent, but generally the most powerful and eye-catching version. In practice, deep learning networks capable of performing sophisticated tasks are 1) many layers deep, with millions, sometimes billions, of inputs (hence the ‘deep’); and 2) trained using real-world examples until they become proficient at the prevailing task (hence the ‘learning’).

Explainability

From Michael Beckley, who is the CTO and founder of Appian:

Explainability is knowing why AI rejects your credit card charge as fraud, denies your insurance claim, or confuses the side of a truck with a cloudy sky. Explainability is necessary to build trust and transparency into AI-powered software. The power and complexity of AI deep learning can make predictions and decisions difficult to explain to both customers and regulators. As our understanding of potential bias in data sets used to train AI algorithms grows, so does our need for greater explainability in our AI systems. To meet this challenge, enterprises can use tools like Low Code Platforms to put a human in the loop and govern how AI is used in important decisions.

Supervised, Unsupervised and Reinforcement Learning

From Justin Silver, who is the manager of science & research at PROS:

There are three broad categories of machine learning: supervised, unsupervised, and reinforcement learning. In supervised learning, the machine observes a set of cases (think of “cases” as scenarios like “The weather is cold and rainy”) and their outcomes (for example, “John will go to the beach”) and learns rules with the goal of being able to predict the outcomes of unobserved cases (if, in the past, John usually has gone to the beach when it was cold and rainy, in the future the machine will predict that John will very likely go to the beach whenever the weather is cold and rainy). In unsupervised learning, the machine observes a set of cases, without observing any outcomes for these cases, and learns patterns that enable it to classify the cases into groups with similar characteristics (without any knowledge of whether John has gone to the beach, the machine learns that “The weather is cold and rainy” is similar to “It’s snowing” but not to “It’s hot outside”). In reinforcement learning, the machine takes actions towards achieving an objective, receives feedback on those actions, and learns through trial and error to take actions that lead to better fulfillment of that objective (if the machine is trying to help John avoid those cold and rainy beach days, it could give John suggestions over a period of time on whether to go to the beach, learn from John’s positive and negative feedback, and continue to update its suggestions).
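The supervised case above can be sketched in a few lines: the machine observes (weather, outcome) cases and learns a rule predicting the majority outcome seen for each condition. The history data is invented to mirror the John-goes-to-the-beach example.

```python
from collections import Counter, defaultdict

def train(cases):
    """Learn, per weather condition, the most common observed outcome."""
    by_weather = defaultdict(Counter)
    for weather, went_to_beach in cases:
        by_weather[weather][went_to_beach] += 1
    return {w: c.most_common(1)[0][0] for w, c in by_weather.items()}

# Observed cases and outcomes (made-up history for illustration)
history = [
    ("cold and rainy", True),
    ("cold and rainy", True),
    ("cold and rainy", False),
    ("hot outside", False),
]
model = train(history)

# Predict an unobserved case: John usually went when it was cold and rainy.
print(model["cold and rainy"])  # → True
```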

Bias

From Mehul Patel, who is the CEO of Hired:

While you may think of machines as objective, fair and consistent, they often adopt the same unconscious biases as the humans who built them. That’s why it’s vital that companies recognize the importance of normalizing data—meaning adjusting values measured on different scales to a common scale—to ensure that human biases aren’t unintentionally introduced into the algorithm. Take hiring as an example: If you give a computer a data set with 100 female candidates and 300 male candidates and ask it to predict the best person for the job, it is going to surface more male candidates because men outnumber women three to one in the data set. Building technology that is fair and equitable may be challenging but will ensure that the algorithms informing our decisions and insights are not perpetuating the very biases we are trying to undo as a society.
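
One common mitigation for the imbalance Patel describes (a sketch of the general technique, not Hired’s actual method) is inverse-frequency reweighting: each group is weighted by the inverse of its share of the data so that both groups contribute equally overall.

```python
# The hiring data set from the example above: 100 female, 300 male candidates
candidates = {"female": 100, "male": 300}

total = sum(candidates.values())   # 400 candidates in total
groups = len(candidates)           # 2 groups

# Inverse-frequency weights: weight = total / (groups * group_size)
weights = {g: total / (groups * n) for g, n in candidates.items()}
# weights["female"] == 2.0, weights["male"] ~= 0.667

# After weighting, each group carries equal total volume (200 each),
# so the model no longer sees three times as many male examples
weighted = {g: weights[g] * n for g, n in candidates.items()}
print(weighted)
```

Libraries such as scikit-learn expose the same idea through class-weight options, but the arithmetic is exactly this simple.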

Backpropagation

From Victoria Jones, who is the Zoho AI Evangelist:

Backpropagation algorithms allow a neural network to learn from its mistakes. The technology tracks an event backwards from the outcome to the prediction and analyzes the margin of error at different stages to adjust how it will make its next prediction. Around 70% of the features of our AI assistant (called Zia) use backpropagation, including Zoho Writer’s grammar-check engine and Zoho Notebook’s OCR technology, which lets Zia identify objects in images and make those images searchable. This technology also allows Zia’s chatbot to respond more accurately and naturally. The more a business uses Zia, the more Zia understands how that business is run. This means that Zia’s anomaly detection and forecasting capabilities become more accurate and personalized to any specific business.
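
The core idea of tracking the error backwards to adjust the next prediction can be shown with a one-weight model. This is a minimal sketch, not Zia’s implementation: the “network” is a single weight `w`, the loss is squared error, and the gradient is computed by the chain rule and applied over repeated steps.

```python
# Learn w such that w * x approximates the target (here w should approach 3.0)
w = 0.0
x, target = 2.0, 6.0
lr = 0.1  # learning rate

for _ in range(50):
    pred = w * x                 # forward pass: make a prediction
    error = pred - target        # margin of error at the output
    grad = 2 * error * x         # backward pass: d(error**2)/dw via chain rule
    w -= lr * grad               # adjust the weight against the gradient

print(round(w, 4))  # ~3.0
```

A real network repeats this weight update across thousands or millions of weights, propagating the error layer by layer from the output back to the input.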

Source: https://www.forbes.com/sites/tomtaulli/2019/09/07/ai-artificial-intelligence-words-you-need-to-know/#2933047406a4

07 Sep 2019
AI for Executives: How Machine Learning Is Impacting the Next Generation Workforce

AI for Executives: How Machine Learning Is Impacting the Next Generation Workforce

The term “artificial” doesn’t really do justice to the next generation’s attitude of “how we will get things done.”

“Artificial” refers to a machine doing the work rather than a human, so “Augmented Intelligence” might be more appropriate. Many agree that repetitive tasks and to-dos should be done by something other than humans.

Take a robotic vacuum, for example. As I write this, I am vacuuming, or should I say Ivan is vacuuming. It knows where it has covered and which areas of my home need the most attention. It doesn’t slack, cut corners or decide it is too tired to get the job done. Simply put, it is more efficient than I am, and it gives me far more visibility into what is going on.

Now for a definition of artificial intelligence (AI), this one from whatis.com gives the best understanding within the context of executive management:

Artificial Intelligence is the simulation of human intelligence processes by machines. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.
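
The “reasoning” component of that definition, using rules to reach conclusions, is easiest to see in a toy expert system. The rules below are entirely hypothetical and purely illustrative:

```python
# A minimal rule-based expert system: each rule pairs a condition on the
# observed findings with a conclusion; the first matching rule fires.
rules = [
    (lambda f: "fever" in f and "cough" in f, "possible flu"),
    (lambda f: "sneezing" in f, "possible cold"),
]

def diagnose(findings):
    """Apply the rules in order and return the first conclusion reached."""
    for condition, conclusion in rules:
        if condition(findings):
            return conclusion
    return "no conclusion"

print(diagnose({"fever", "cough"}))  # "possible flu"
```

Classic expert systems scale this pattern to thousands of hand-written rules; modern machine learning instead learns the rules from data, but the reasoning step itself is the same shape.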

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson AssistantMicrosoft Cognitive Services and Google AI services.

Now let’s look at examples of how AI is being applied. Let’s start with Human Resources (HR) and workforce management. It is interesting that the function is still referred to as “Human”; one wonders whether that will evolve as new AI functionality is brought to the table.

AI tools, much like databases, are only as good as the data input into them. When it comes to HR practices, the potential for bias is inherent, hence the “Human” part. You have to remember that people determine which data points should be used in the training of an AI program or process, and people hold biases, some of them unconscious.

Read more: https://www.forbes.com/sites/cognitiveworld/2019/09/06/ai-for-executives/#5400b62651c4

04 Sep 2019
Feeding the big data and artificial intelligence ‘information-appetite’

Feeding the big data and artificial intelligence ‘information-appetite’

The promise of big data and artificial intelligence is everywhere, writes Jon Wells, VP customer solutions, Networked Energy Services. One almost gets the impression that there is no problem that cannot be solved with these new technologies. The answer to everything is ‘big data and artificial intelligence’.

Big data and artificial intelligence is the answer

Open a web browser and you see advertising tuned to your latest online shopping. Turn on the TV and you see advertisements about how our leading IT providers are using big data and artificial intelligence to address social, economic and environmental challenges. These are two extremes of the direct application of big data and artificial intelligence.

The tools used to derive timely, actionable insight into both the biggest and the most mundane challenges have certainly hit the mainstream. Using these tools has direct application to the smart grid. They can be used to increase reliability, improve operational efficiency, reduce energy loss, increase fair energy supply by reducing fraud and theft, identify illegal use of energy, enable other green energy initiatives such as distributed generation, energy storage, and electric vehicles, and focus restoration by sociological and business priorities.

The piece often left out of all the buzz is where all this data comes from and how it gets to the big data and artificial intelligence platforms. We know it ends up in data lakes and data marts, but where is this data created, how does it get to the systems that can create value from it, and how do we know that it is secure as it makes this journey? And, then, how is this managed in a smart grid?

The smart grid is the answer

Initiatives like the Clean Energy Package in Europe and the proposed Green New Deal in the US are driving the Energy Transition and putting the focus onto the smart grid to achieve the improvements above. Similarly to big data and artificial intelligence, whenever the question concerns energy efficiency, the answer seems to be ‘the Smart Grid’.

A smart grid is generally split into three segments: high-voltage, medium-voltage and low-voltage. The high- and medium-voltage pieces are highly visible – they are major engineering projects and come with sophisticated communications, security and management capabilities built in.

Getting information to feed the big data and artificial intelligence platforms is no great challenge here because the infrastructure is already there.

The low-voltage grid is more challenging – the equipment is highly distributed, often antiquated, unmonitored and unmanaged, and mostly ‘passive’ from an IT perspective.

It has little or no mechanism to share information back to these big data and artificial intelligence platforms that are waiting for it. As such, this represents a suboptimal use of major investments by distribution system operators (DSOs).

This is unfortunate because it is in the low-voltage grid that the energy transition, driven by the Clean Energy Package and other green energy and conservation initiatives, is going to have the largest impact over the next decades:

• Increased distributed generation and storage – using residential-scale equipment to generate solar, wind and hydro energy, store it locally and feed it back into the local low-voltage grid.

• Community energy and micro-grids – balancing the supply of energy within a community to minimise the demand on external centrally generated energy.

Both of these require a low-voltage grid that is highly optimised, and which can be dynamically switched through modes of operation to maintain that optimisation as demand and generation changes.

Thus the problem becomes how to create information about the performance of the low-voltage grid, and then communicate that, securely, to the ever-hungry maws of the big data and artificial intelligence platforms.

Internet of Things is the answer

Connection of everything in the low-voltage grid to ‘the Internet of Things’ could be the answer.

Of course, ‘everything’ is really limited to those things with enough IT capability to connect and share information, where the coverage provides the service and where it is technically and economically viable to use the service at the volumes required. That is fine in the high- and medium- voltage grids but still has challenges in the low-voltage grid, where many millions of consumers and their equipment need to be connected and managed.

Energy suppliers need to consider the costs of deploying IT-enabled equipment deep into the low-voltage grid, the costs of physically installing SIMs and associated SIM management, and the costs of monthly subscription for connecting to millions of end-points to collect many gigabytes (or even terabytes) of data each day.

Energy suppliers also need to consider the technology capabilities – there are several applicable network technologies, which can be used (NB-IoT and LTE-M being the most common).

These are wireless technologies, but it is also possible to connect through powerline communications to back-end systems which are Internet-enabled. This approach does not involve a subscription fee, but is dependent on distances, quality and noise-levels of the power cable, and, so, like wireless communications, needs to be considered carefully.

Smart meters are the answer

So, the ability to connect to all low-voltage devices is a potential challenge – let’s look at the devices themselves and see if they are the answer.

The all-pervasive IT-enabled equipment in the low-voltage grid are smart meters. These come in various shapes and sizes, ranging from the barely-smart through to the truly smart, and are generally deployed at the edges of the low-voltage grid. Barely-smart meters are typically able to communicate low-volumes of ‘basic’ consumption information relatively infrequently, and simply exist to provide automated billing.

At the other extreme, the truly-smart can be configured dynamically to report back on a wide range of voltage and power quality metrics, on a regular basis.

Of course, the truly-smart meters tend to attract a premium price tag that needs to be considered, when the DSO is also assessing their medium- and long-term investment strategy and business case. The reality is that, all too often, the DSO is under pressure to follow a policy of cost reduction, and this drives some to the barely-smart version of the smart meter. Unfortunately, these cannot actively participate in feeding the demands of big data and artificial intelligence, and so represent a lost opportunity to leverage the investments made in these platforms.

In any case, the smart meters generally only provide information about the customer service points and (sometimes) the substation transformer. This still leaves a big gap of coverage – effectively, the power cabling and associated distribution devices.

However, some of the truly-smart meters are addressing this space to provide an end-to-end view of low-voltage grid performance.

Don’t look for the silver-bullets – practical solutions are needed

Putting aside the buzz around big data and artificial intelligence, Smart Grid and Smart Meters, there are practical solutions to presenting the volumes and types of information that are required to form timely insight for energy and operational efficiency and sociologically balanced green and fair energy programmes.

Where will information come from?

The low-voltage grid data needs to be created somewhere. Dedicated monitoring systems can be deployed, but they are often too expensive to be deployed as a ‘blanket’ – rather they are deployed in specific known problem areas.

The most prevalent source of information across the low-voltage grid remains smart meters. The truly-smart meters allow large volumes of voltage and power information to be reported back to the DSO with enough frequency that they can spot trends, detect outages and short-term inefficiencies, gain insight and take action.

DSOs should look at their smart meter procurement policy and be confident that smart meters will justify their big data and artificial intelligence investments and generate timely and actionable insight.

Where barely-smart meters are being deployed, DSOs will find themselves without detailed information of the low-voltage grid, be unable to feed big data and artificial intelligence platforms and be unable to adapt to the changing demands in the low-voltage grid.

Communication of volumes of data

The volume of data quickly scales up when one considers the millions of end-points that will have a smart meter; potentially to many gigabytes and even terabytes of data per day.
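
The scale is easy to check with back-of-envelope arithmetic. All the figures below are assumptions chosen for illustration, not numbers from the article: a mid-sized deployment of two million meters, one reading every 15 minutes, and roughly 200 bytes per reading including protocol overhead.

```python
# Back-of-envelope daily data volume for smart-meter reporting
meters = 2_000_000        # end-points in a mid-sized deployment (assumption)
readings_per_day = 96     # one reading every 15 minutes
bytes_per_reading = 200   # payload incl. protocol overhead (assumption)

daily_bytes = meters * readings_per_day * bytes_per_reading
daily_gb = daily_bytes / 1e9
print(daily_gb)  # 38.4 GB per day
```

Push the reporting interval to one minute, or enrich each reading with power-quality metrics, and the same fleet moves into terabyte-per-day territory, which is exactly the scale that strains per-endpoint wireless subscriptions.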

The volumes and the subscription cost will challenge the standard wireless ‘Internet of Things’ connectivity model. Communicating at least some of this payload over PLC will significantly reduce both the cost and the data volumes carried over wireless, allowing the DSO to leverage the best of both technologies.

A hybrid model of PLC and wireless will ensure both volumes and subscription cost remain manageable, and the data can be carried to the ever-hungry maws of the big data and artificial intelligence platforms.

PLC has received bad press over the last few years, creating an impression that it is old technology. In fact, there are truly smart meters based on PLC that employ the highest-quality protocols to achieve high information rates, even in the most challenging network environments.

Robust

Some truly-smart meters extend these options by providing connectivity to physical networks, which terminate at the home, the multi-dwelling unit or in the street. In these cases, the standard communications provided by the smart meter is augmented, and either used to carry more information more frequently, or to provide a back-up in the event of one of the communications mechanisms failing.

The latter resolves the problems of having ‘holes’ in the big data. Some DSOs can even leverage fibre infrastructure provided by government programmes or their own investments and diversifications.

Secure

A lot of privileged information about the consumers and about the DSO is transported.

The data lakes and data marts are highly secure, but the source of the data in the low-voltage grid and the communication through the low-voltage grid also need to be as secure.

The built-in security features of the smart meters and of the wireless and PLC communications need to be carefully assessed so that the information shared with the big data and artificial intelligence platforms isn’t accidentally shared with the cyber-criminal fraternity. Again, this is typically where the barely-smart meters are lacking, which justifies an extra-careful assessment before selection.

Not just electrical energy

Truly-smart meters tend to have additional communications capabilities in-built to allow connection within the consumer’s residence.

This can be used to either connect to other WAN communications, such as the local ISP or community fibre infrastructure, in-home devices or to gather information from other utility meters such as gas and water. All three utilities – electricity, gas and water – are scarce resources, and can be exposed into big data and artificial intelligence platforms via the truly smart meters.

Not just end-points

Finally, the flow of energy within the low-voltage grid is as important to understand as the energy provided by and delivered to its end-points. The latest truly-smart metering solutions use their own on-board analytics to derive more information about how energy flows within the low-voltage grid, allowing far more fine-grained business insight to be generated and taking the guesswork out of what is happening between the substation and the consumer.


Source: https://www.smart-energy.com/industry-sectors/data_analytics/feeding-the-big-data-and-artificial-intelligence-information-appetite/