Month: June 2019

30 Jun 2019
Sensors and metrology as the driving force for digitalization

Sensors and metrology as the driving force for digitalization

Many digitalized processes depend on data collected by increasingly powerful sensors and other test and measurement technology. When this data is processed, it provides precise and reliable information about the operating environment. Nine Fraunhofer Institutes will be presenting the results of their research into sensor technology and its applications in the field of testing and measurement at Sensor+Test 2019 in Nürnberg from June 25 to 27 (Booth 248 in Hall 5).

A great many innovations in today’s digital era rely on the ability to transfer information from the real world to the digital universe—examples include advances in gesture recognition, non-contact materials testing and artificial respiration. In applications like these, sensors and other test and measurement systems function as enabling technologies, since many new developments are built on them. At this year’s edition of Sensor+Test, the leading forum in this field worldwide, Fraunhofer will once again be presenting examples of its research in the many areas that make up its wide-ranging technology portfolio.

Wider-spectrum contact-free materials testing

Terahertz imaging is one of the new technologies increasingly being used to monitor and test new materials. This non-contact method can be used to measure coating thickness, analyze the structure of polymer composites, or detect defects in non-conductive materials. The Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, HHI, will be presenting the next generation of fiber-coupled terahertz transceivers. The integrated sensor probe permits reflection measurements orthogonal to the surface of the test sample and can be used without modification in combination with commercially available terahertz measuring systems.
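HHI does not disclose the internals of its transceivers here, but the coating-thickness measurement mentioned above typically rests on a simple time-of-flight relation: the delay between the pulse echoes off the coating’s front and back faces, scaled by the material’s refractive index. A minimal sketch, with invented example values:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def coating_thickness(echo_delay_s: float, refractive_index: float) -> float:
    """Layer thickness from the round-trip delay between the front- and
    back-surface echoes of a terahertz pulse: d = c * dt / (2 * n)."""
    return C * echo_delay_s / (2.0 * refractive_index)

# Hypothetical example: 2 ps echo separation in a polymer coating with n = 1.5
print(f"{coating_thickness(2e-12, 1.5) * 1e6:.0f} um")  # -> 200 um
```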

Reducing machine downtime, manufacturing defects and reject rates

The Fraunhofer Institute for Digital Media Technology IDMT will demonstrate how the quality of workpieces and components can be assured using a non-contact, non-destructive test method based on audio sensing of product and process parameters combined with machine learning. Visitors can learn more about this method, which can be used both to monitor production processes and to perform end-of-line product testing, in a series of interactive exhibits.
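IDMT’s exhibit details are not spelled out above, but the general pattern of acoustic quality testing can be sketched: turn each recording into a compact spectral fingerprint and train a classifier to separate good parts from defective ones. A toy sketch on synthetic signals (every frequency, label and threshold below is invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def band_energies(signal, n_bands=16):
    # Log-energy in equal-width frequency bands: a compact spectral fingerprint.
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.log1p([band.sum() for band in np.array_split(power, n_bands)])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 48_000)
# Stand-in recordings: "good" parts hum at 1 kHz; "defective" parts add a rattle tone.
good = [np.sin(2 * np.pi * 1000 * t) + 0.1 * rng.normal(size=t.size) for _ in range(40)]
bad = [np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 3100 * t)
       + 0.1 * rng.normal(size=t.size) for _ in range(40)]

X = np.array([band_energies(s) for s in good + bad])
y = np.array([0] * 40 + [1] * 40)  # 0 = pass, 1 = defect
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```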

Supplying sensors with energy created by tiny vibrations

One of the challenges in the Internet of Things (IoT) is how to supply power to wireless sensors—a question that the Fraunhofer Institute for Integrated Circuits IIS is tackling by developing energy harvesting solutions. Even the slightest vibrations, with an acceleration of just 100 mg (thousandths of standard gravity) at a frequency of 60 hertz, are sufficient for a vibration transformer to produce the electrical energy needed to operate several sensors and transmit data once per second. The Maximum Power Point Tracker provides an effective means of controlling the charge converter so as to guarantee the maximum power yield. The energy harvesting solution recharges the battery while the device is in operation and enables the design of IoT sensors with an unlimited service life, without power cables or battery swaps.
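The article does not say which tracking algorithm Fraunhofer IIS uses; the classic perturb-and-observe loop is one common way a maximum power point tracker can steer a charge converter toward peak yield. A minimal sketch with a toy power curve:

```python
def mppt_step(voltage, power, prev_voltage, prev_power, step=0.01):
    """One perturb-and-observe iteration: keep nudging the operating voltage
    in whichever direction last increased the harvested power."""
    if power > prev_power:
        direction = 1.0 if voltage > prev_voltage else -1.0
    else:
        direction = -1.0 if voltage > prev_voltage else 1.0
    return voltage + direction * step

# Toy demo: a concave power curve whose maximum sits at 0.5 V.
def harvested_power(v):
    return max(0.0, 1.0 - (v - 0.5) ** 2)

v_prev, v = 0.10, 0.11
p_prev = harvested_power(v_prev)
for _ in range(200):
    p = harvested_power(v)
    v, v_prev, p_prev = mppt_step(v, p, v_prev, p_prev), v, p
print(f"tracker settled near {v_prev:.2f} V")  # oscillates around 0.5 V
```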

Read more: https://phys.org/news/2019-06-sensors-metrology-digitalization.html

29 Jun 2019
How AI and human-machine collaboration is driving transformation across sectors

How AI and human-machine collaboration is driving transformation across sectors

Even a decade ago, the mention of Artificial Intelligence (AI) might have evoked the fear that it would take away human jobs and render workers expendable. Cut to the present, and that fear has been replaced with a more rational approach in which AI is seen as a way to extend human capabilities in an increasingly digital era.

The potential of AI to transform the way business is done and benefit individuals, communities and society as a whole is amply evident across a number of spheres – from recommendation engines that note customer preferences and suggest relevant items, to chatbots that enhance the customer experience, to making healthcare and diagnostics more accurate and affordable, and even ensuring public safety.

When humans and machines collaborate, far more can be achieved than through singular effort, and the following are just a few instances of how AI is helping redefine every aspect of our lives.

Making everyday life easier and more efficient

When shopping online, you cannot miss the purchase suggestions that pop up on your screen, seemingly with uncanny insight into your mind. Today, recommendation engines driven by AI and ML are a significant part of e-commerce, and play a huge role in the customer experience journey.
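Production recommendation engines are far more elaborate, but the core mechanism can be illustrated with a toy item-based collaborative filter: score the items a customer has not tried by their similarity to items the customer already rated highly. All ratings below are invented:

```python
import numpy as np

# Rows = customers, columns = items; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0)
item_similarity = (ratings.T @ ratings) / np.outer(norms, norms)

def recommend(user_ratings, top_k=1):
    """Rank unrated items by a similarity-weighted sum of the user's ratings."""
    scores = item_similarity @ user_ratings
    scores[user_ratings > 0] = -np.inf   # never re-suggest items already rated
    return np.argsort(scores)[::-1][:top_k]

print(recommend(ratings[0]))  # item 2 is the only one customer 0 hasn't rated
```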

In the financial sector, AI is used across a variety of functions, from detecting and handling fraud, to assessing risks, to advisory services, all of which work to ensure that your money is safe.

When we hail a cab or use the map feature on our smartphone to navigate, it is AI which is getting us to our destination. It’s the same when we ask our smart device to stream our favourite song.

In short, AI today plays a big role in every aspect of everyday human life: how we shop, bank, commute and unwind.

Exploring a new frontier in healthcare

AI is completely changing how we look at and deal with health-related issues and patient outcomes. It brings more meaningful insights and more intelligent processes into play, with a focus on reducing manual work, providing more accurate services and impactful interventions for patients, and delivering long-term savings for everyone involved.

From robotic surgeries for accurate and precise operations, to electronic health records easily accessible by all stakeholders, to virtual health assistants that stay ahead in managing patients’ health, and accurate diagnostics, use cases for AI in healthcare are on the rise.

Enhancing customer service through bots

Today’s customers expect an “always-on, always-me” experience. Here is where conversational bots, i.e. AI-powered messaging solutions, are saving the day. Users can interact with such bots, using voice or text, to access information, complete tasks or execute transactions.

In a survey by Accenture, 56% of CIOs and CTOs said that conversational bots are driving disruption in their industry, while 57% agreed that conversational bots can deliver large ROI for minimal effort.

These bots are capable of performing complex tasks by combining one or more interfaces. As the technology advances, bots will increasingly be able to take relevant actions without human intervention.

Despite scepticism about whether bots can appropriately incorporate history and context to create the personalised experiences customers desire, and adequately understand what each customer requires, businesses are embracing them. Today, these virtual agents help enhance human agents’ productivity, deliver timely, conversational and contextual customer interactions, and help resolve issues speedily and satisfactorily.

Explainable AI to serve the ‘missing middle’ space in human-machine interaction

The rapid adoption of AI and related technologies notwithstanding, there will always be some jobs that are done exclusively by humans. There are others that can be fully automated and taken care of by intelligent automation. But most roles will see a combination of humans and machines working together, a space Accenture terms “the missing middle”.

There are situations where an AI-driven decision on its own is not enough and we also need to know the reasons and rationale behind it. These roles will require people to apply their human skills and intelligence. Explainable AI complements and supports humans, enabling them to make better, more accurate and faster decisions.

As collaboration between humans and machines increases, this space will see more action. For example, large enterprises have to manage a huge number of projects, which means interacting with multiple vendors, clients and partners. The risks involved in each of these interactions are different, and companies often go wrong because of the complex nature of these interactions. Accenture Labs applied Explainable AI and developed a five-stage process to assign projects and contracts to risk tiers, along with valid reasons for these predictions, making it easier for decision makers to take more informed decisions.
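Accenture’s five-stage process is proprietary and not described here; the general pattern (predict a tier and surface human-readable reasons) can be sketched with any interpretable model. Below, a small decision tree over invented contract features, whose printed rules double as the rationale behind each prediction:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented features per contract: [value ($M), vendor count, past disputes]
X = np.array([[1, 2, 0], [3, 5, 1], [12, 9, 4], [8, 7, 3],
              [2, 3, 0], [15, 11, 5], [4, 4, 1], [10, 8, 2]])
y = ["low", "low", "high", "high", "low", "high", "low", "high"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# The printed decision rules double as the "reasons" behind each risk tier.
print(export_text(tree, feature_names=["value_musd", "vendors", "disputes"]))
print(tree.predict([[9, 6, 2]]))  # a new contract -> ['high']
```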

Accenture Ventures’ Applied Intelligence Challenge

We have seen how AI’s footprint extends across industries, sectors and use cases, and helps make things better, faster and more efficient. Now let’s talk about something even cooler: Applied Intelligence. Accenture’s unique approach to combining AI with data, analytics and automation helps transform businesses — not in silos, but more comprehensively across all functions and processes, helping them maximise existing investments, extend new technologies and scale opportunities as they arise.

Read more: https://yourstory.com/2019/06/ai-human-machine-collaboration-driving-transformation

27 Jun 2019
The Only Thing We Have To Fear Is The Fear Of AI

The Only Thing We Have To Fear Is The Fear Of AI

It is now the era of artificial intelligence; the age of the synthetic mind. A growing number of the world’s leaders acknowledge this increasingly evident truth. The question now is not whether AI is coming (it is here), or how large an impact it will have (it could automate 35% of our jobs by the 2030s and is at the center of the burgeoning “Tech War” between China and the United States). It is instead how society will choose to engage with this new revolution. Will we configure our public education, social contracts, business models and international agreements to capitalize on the incredible new potential AI brings, or will we shrink back in fear?

Certainly, there are developments that should be concerning. The open and transparent development of AI is at risk as looming export controls threaten to pull it into an ever expanding “Tech War” between China and the United States. Autonomous weapons are probably being developed. AI language analysis techniques are already being used to gather information on citizens the world over. Intelligence agencies have attempted to alter electoral outcomes by creating automated psychographic profiles of millions of individuals, and then targeting them in influence campaigns.

Daunting as they may seem, these threats emerge not from some sentient, superhuman genocidal AI hatching a Machiavellian plot to spell humanity’s doom, but from the application of artificial narrow intelligence (ANI) under human control. Today’s real threats are illustrated by more quotidian news: bias creeping into AI systems used for loan approvals, abusive bots that spew racial hatred and failed facial recognition algorithms that confuse humans with animals.

Furthermore, there are the nearly insurmountable competitive moats the largest data-driven technology companies have been able to create, virtually stifling competition. Now, emerging startups know their salvation lies in being acquired, not in attempting to compete with the incumbents. Leaders like Cisco Chairman Emeritus John Chambers have even cited the resultant reduction in IPOs as a threat to the strength of the U.S. startup ecosystem.


For business, what makes AI formidable is its incredible ability to concentrate power and business advantage in the hands of early adopters. The insights gained from data enrich products, enabling further use and more data in a rapid positive spiral, unleashed at scale and machine speed, that quickly locks out the competition. A recent McKinsey study even suggests that early AI adopters will build unassailable leads in their respective categories. Those left behind may never overcome the competitive advantages that accrue to first movers.

Whether for companies or countries, the greatest risk around AI today is one of exclusion. This exclusion manifests itself in the business landscape as increased potential for monopolies, and it manifests itself in the social landscape as growing inequality. In truth, the malevolent sentience immortalized in Hollywood’s “Terminator” films isn’t a near-term threat to anyone. But failing to integrate artificial intelligence into businesses and national defense may very well be. Those who fail to understand and embrace AI leave themselves vulnerable to an insurmountable disruption by those who come to grips with it first.

Indeed, disruption is a constant in the modern world, but the pace at which disruption unfolds becomes faster every year. The average number of years a company stays on the S&P 500 shrank from 60 years in 1959 to 20 years today, and it is projected to be a mere 13 years by the mid-2020s. Prior waves of digital transformation have reshaped entire industries, but it is the coming cognitive transformation that will be the most profound.

What makes the coming transformation unique is that it will allow for infinite scalability of cognitive tasks, essentially reducing to zero the incremental cost of embedding “intelligence” in a product or service. Artificial intelligence isn’t just about doing business better; it’s about inventing entirely new ways of doing business. This can already be seen in the way that technology companies such as Google, Apple and Amazon are morphing into “everything companies.” The skill sets these companies bring to bear are not their unique mastery of retailing or logistics, but their ability to quickly find key insights from data and automate the thousands of individual processes needed to scale a business. It doesn’t matter to them whether the insights they seek relate to oil and gas, commercial real estate or the sales of Beanie Babies. Transforming data, insight and predictions into a product or service is where they excel.

It shouldn’t be surprising that these “everything companies” are launching aviation businesses, selling home appliances and developing cars. Leveraging AI, they will continue to expand their reach across industries, disrupting businesses that never considered them competition. Who would have predicted ten years ago that Amazon would one day own Whole Foods? Everything companies will use data and AI to compete with virtually anyone.

Of course, AI-powered disruption can be a double-edged sword. If they slow down or miss a market, even tech giants can land themselves in trouble. Much of the truly fearsome competition is likely to come from China. Even now, Chinese e-commerce firm JD.com is pushing into western markets, aided by AI technology that outstrips Amazon’s.

JD.com is arguably the world’s leading company in delivery via autonomous systems—technology Amazon is still testing or only now rolling out in a few locations. It possesses the largest drone delivery system on the planet, as well as robot-run warehouses, drone “airports,” and driverless delivery trucks. And it’s not the only competition. Chinese ridesharing giant Didi boasts three times Uber’s global ride volume. With the ability to generate larger data streams, leverage a larger domestic user base and make use of massive government investments in AI, Chinese companies such as Alibaba, Tencent, Baidu, Weibo, Face++ and Didi may end up disrupting the disruptors.

As the ancient Greek adage proclaims, “change is the only constant.” In the age of AI, change is a super-exponential function. Today, it is not technology that should scare us. The only thing truly worthy of fear is our own inaction.

Source: https://www.forbes.com/sites/amirhusain/2019/06/26/the-only-thing-we-have-to-fear-is-the-fear-of-ai/#79a32c702593

26 Jun 2019
Here’s how AI can help fight climate change according to the field’s top thinkers

Here’s how AI can help fight climate change according to the field’s top thinkers

The AI renaissance of recent years has led many to ask how this technology can help with one of the greatest threats facing humanity: climate change. A new research paper authored by some of the field’s best-known thinkers aims to answer this question, giving a number of examples of how machine learning could help prevent human destruction.

The suggested use-cases are varied, ranging from using AI and satellite imagery to better monitor deforestation, to developing new materials that can replace steel and cement (the production of which accounts for nine percent of global greenhouse gas emissions).

But despite this variety, the paper (which we spotted via MIT Technology Review) returns time and time again to a few broad areas of deployment. Prominent among these are using machine vision to monitor the environment; using data analysis to find inefficiencies in emission-heavy industries; and using AI to model complex systems, like Earth’s own climate, so we can better prepare for future changes.

The authors of the paper — which include DeepMind CEO Demis Hassabis, Turing award winner Yoshua Bengio, and Google Brain co-founder Andrew Ng — say that AI could be “invaluable” in mitigating and preventing the worst effects of climate change, but note that it is not a “silver bullet” and that political action is desperately needed, too.

“Technology alone is not enough,” write the paper’s authors, who were led by David Rolnick, a postdoctoral fellow at the University of Pennsylvania. “[T]echnologies that would reduce climate change have been available for years, but have largely not been adopted at scale by society. While we hope that ML will be useful in reducing the costs associated with climate action, humanity also must decide to act.”

In total, the paper suggests 13 fields where machine learning could be deployed (from which we’ve selected eight examples), which are categorized by the time-frame of their potential impact, and whether or not the technology involved is developed enough to reap certain rewards. You can read the full paper for yourself here, or browse our list below.

  • Build better electricity systems. Electricity systems are “awash with data” but too little is being done to take advantage of this information. Machine learning could help by forecasting electricity generation and demand, allowing suppliers to better integrate renewable resources into national grids and reduce waste. Google’s UK lab DeepMind has demonstrated this sort of work already, using AI to predict the energy output of wind farms (a minimal forecasting sketch follows this list).
  • Monitor agricultural emissions and deforestation. Greenhouse gases aren’t just emitted by engines and power plants — a great deal comes from the destruction of trees, peatland, and other plant life which has captured carbon through the process of photosynthesis over millions of years. Deforestation and unsustainable agriculture lead to this carbon being released back into the atmosphere, but using satellite imagery and AI, we can pinpoint where this is happening and protect these natural carbon sinks.
  • Create new low-carbon materials. The paper’s authors note that nine percent of all global emissions of greenhouse gases come from the production of concrete and steel. Machine learning could help reduce this figure by helping to develop low-carbon alternatives to these materials. AI helps scientists discover new materials by allowing them to model the properties and interactions of never-before-seen chemical compounds.
  • Predict extreme weather events. Many of the biggest effects of climate change in the coming decades will be driven by hugely complex systems, like changes in cloud cover and ice sheet dynamics. These are exactly the sort of problems AI is great at digging into. Modeling these changes will help scientists predict extreme weather events, like droughts and hurricanes, which in turn will help governments protect against their worst effects.

 

[Image: Rising temperatures and drought conditions intensify water shortage for Navajo Nation. Better climate models would help governments mitigate the worst effects of droughts and other extreme weather events. Photo by Spencer Platt/Getty Images]
  • Make transportation more efficient. The transportation sector accounts for a quarter of global energy-related CO2 emissions, with two-thirds of this generated by road users. As with electricity systems, machine learning could make this sector more efficient, reducing the number of wasted journeys, increasing vehicle efficiency, and shifting freight to low-carbon options like rail. AI could also reduce car usage through the deployment of shared, autonomous vehicles, but the authors note that this technology is still not proven.
  • Reduce wasted energy from buildings. Energy consumed in buildings accounts for another quarter of global energy-related CO2 emissions, and presents some of “the lowest-hanging fruit” for climate action. Buildings are long-lasting and are rarely retrofitted with new technology. Adding just a few smart sensors to monitor air temperature, water temperature, and energy use can reduce energy usage by 20 percent in a single building, and large-scale projects monitoring whole cities could have an even greater impact.
  • Geoengineer a more reflective Earth. This use-case is probably the most extreme and speculative of all those mentioned, but it’s one some scientists are hopeful about. If we can find ways to make clouds more reflective or create artificial clouds using aerosols, we could reflect more of the Sun’s heat back into space. That’s a big if though, and modeling the potential side-effects of any schemes is hugely important. AI could help with this, but the paper’s authors note there would still be significant “governance challenges” ahead.
  • Give individuals tools to reduce their carbon footprint. According to the paper’s authors, it’s a “common misconception that individuals cannot take meaningful action on climate change.” But people do need to know how they can help. Machine learning could help by calculating an individual’s carbon footprint and flagging small changes they could make to reduce it — like using public transport more; buying meat less often; or reducing electricity use in their house. Adding up individual actions can create a big cumulative effect.
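DeepMind’s wind-farm models are far richer, but the forecasting task named in the first bullet can be sketched in miniature: predict the next hour’s demand from the previous 24 hours with a linear autoregression. The data below is synthetic and every number is invented:

```python
import numpy as np

# Synthetic hourly load (MW) with a daily cycle plus noise, standing in for grid data.
rng = np.random.default_rng(1)
hours = np.arange(24 * 60)  # 60 days of hourly readings
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Build lagged features: each row holds the 24 readings before the target hour.
window = 24
X = np.array([load[i:i + window] for i in range(load.size - window)])
y = load[window:]

# Ordinary least squares with an intercept term.
A = np.c_[np.ones(len(X)), X]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
print(f"mean absolute error: {np.abs(pred - y).mean():.2f} MW")
```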

Source: https://www.theverge.com/2019/6/25/18744034/ai-artificial-intelligence-ml-climate-change-fight-tackle

25 Jun 2019
Intelligence is not ‘artificial’: humans are in charge

Intelligence is not ‘artificial’: humans are in charge

Google CEO Sundar Pichai gave a surprising interview recently. Asked by CNN’s Poppy Harlow about a Brookings report predicting that 80 million American jobs would be lost by 2030 because of artificial intelligence, Pichai said, “It’s a bit tough to predict how all of this will play out.” Pichai seemed to say the future is uncertain, so there’s no sense in solving problems that may not occur. He added that Google could deal with any disruption caused by its technology development by “slowing down the pace,” as if Google could manage disruption merely by pacing itself — and that no disruption was imminent.

The term “artificial intelligence” often prompts this kind of hand waving. It has built-in deniability, with no definite meaning, and an uncertain impact always seeming to lie in the future. It also implies the intelligence belongs to machines, as if humans have no control. It distances Google and others from responsibility.

Artificial intelligence is here now — it is the software and hardware that surrounds everyone on earth. Humans are the architects, refining solutions of all kinds to make them perform intelligently. Designing systems that serve useful purposes is the intelligence; the rest is rote. It will be a long time before machines can identify new purposes and adapt solutions to them. For now, and for some time, machines will have considerable human help.

The term “advancing intelligence” might keep technology CEOs more accountable. Replacing artificial with advancing signifies the intelligence is human, not machine, and is guided by people working at technology companies. It widens the scope to the thousands of technologies — collectively intelligent — upon which people are already dependent, and signals that the future is a function of technology companies’ roadmaps by which their employees are (intelligently) building products to serve people in a multitude of ways. It also emphasizes advancement — social utility and its impact, not only the apparent aptitude of the machine.


Google’s CEO, then, could have acknowledged that Google is already a world leader in AI (advancing intelligence). AI is not only robots and autonomous vehicles, but information services that extend human intelligence — search, voice-recognition, mapping, news, and video services (YouTube), sharable documents, cloud storage, mobile access (Android), to name a few. With a mission statement “to organize the world’s information and make it universally accessible and useful,” Google has made a large fraction of the world’s information available to people with search, handling 6 billion requests every day.

Read more: https://www.bostonglobe.com/opinion/2019/06/24/intelligence-not-artificial-humans-are-charge/sQcUBzzpu8KtrXqHpKuOkM/story.html

24 Jun 2019
AI shows no signs of slowing down

AI shows no signs of slowing down

Despite artificial intelligence (AI) being woven tightly into the fabric of our daily lives – think Facebook algorithms and suggested email responses in Gmail – the world has barely begun to scratch the surface of its capabilities.

AI and other emerging technologies are transforming various industries, and many new opportunities can arise from this. That makes it a promising field for students to branch into, not least to address the shortage of professionals in the field.

Countries such as China, Japan, Sweden and Australia, among many others, have recognised its importance and are pushing for AI to be taught in universities to build a talent pipeline that’s able to tackle future demands and challenges with AI.

The Japan News reported that Japan plans to create a nationwide curriculum covering the basics of AI, which will be suitable for all of the country’s universities. The curriculum is expected to be introduced sometime next spring. The education ministry hopes some 500,000 students will study AI annually.

“Given the global trend of utilising AI for data analysis, Japan has lagged behind in fostering human resources related to artificial intelligence,” said the report.

“According to an Economy, Trade and Industry Ministry estimate, Japan had a shortage of 34,000 AI specialists as of 2018, and the shortage will further rise to a maximum of 124,000 people in 2030. There are also said to be very few top-level AI professionals in Japan.”

China’s government has also announced plans to launch a massive number of university majors in AI and big data, cementing its position as a world leader in the field.

Meanwhile, Australia’s government is injecting over AU$29 million to help grow and support the development of AI. Speaking to ABC News, Toby Walsh, an AI professor at the University of New South Wales, said other nations such as the UK, France, Germany and China were pumping billions of dollars into AI and that Australia could not afford to be left behind.

As countries show their seriousness in embracing AI, reports also suggest that more and more companies are adopting or planning to adopt AI in their businesses.

A report by Webroot, an American cybersecurity company, found that approximately 74 percent of businesses across the US and Japan already use some form of AI or machine learning (ML) to protect their organisations. The same 2018 survey found that 73 percent of respondents planned to use even more AI/ML tools in 2019.

The World Economic Forum (WEF) notes that AI and robots could create as many jobs as they displace, estimating that some 58 million new jobs could be created by 2022.

As companies invest in AI, demand for the right talent will only increase. A Deloitte report notes that a “lack of AI/cognitive skills” was among the top three concerns for 31 percent of respondents who are early adopters of AI.

Some skills are needed more than others, with respondents reporting “the highest level of need for AI researchers to invent new kinds of AI algorithms and systems”.

It’s clear that the demand for AI professionals is already booming, and an increasing number of universities are already offering AI courses for students. All that’s left is for them to seize the opportunity.

Source: https://www.studyinternational.com/news/ai-shows-no-signs-of-slowing-down-heres-why-students-need-to-ride-the-wave/

23 Jun 2019
Enabling or Disruptive Technologies: Lessons, Risks and Issues

Enabling or Disruptive Technologies: Lessons, Risks and Issues

Our view of IT depends upon our perspective. Should we celebrate the democratisation of access to devices and various apps, and the wider opportunities to participate as barriers to inclusion and competition are reduced? Or should we be concerned about the loss of control of personal data, what organisations know about us, and the concentration of the rewards of entrepreneurship?

In themselves, information technologies are neutral. Whether they help us or harm us depends upon how we use them, who uses them, and for what purpose. We can use them to improve existing activities or to enable new business models.

Paul Strassman, a serial CIO, suggested the overall impact of early IT was also neutral [1; 2]. Well-run businesses tended to be helped and to become more competitive. Badly run and struggling businesses tended to be harmed and to become even worse. In some organisations, people seem to spend all day sending each other email rather than sorting out issues.

The potential for either helpful or harmful impacts is reflected in debates, for example, about whether the wider adoption of AI will increase or reduce employment. In reality it is likely to do both, creating opportunity for some and posing a challenge for others.

Within Xerox we had working AI environments in the 1980s. I recall visiting Cambridge University to find out how they were using our gift of AI workstations and software. I was told one benefit was leaving the machines on overnight, which kept damp off the walls.

If certain environments and applications developed at Xerox PARC could have been rolled out, they would have transformed how we work and learn. The potential of AI seems to be forever ahead of our willingness and ability to sensibly employ it. AI also appears to be forever accompanied by debates concerning the ability to control applications of it that can independently learn and evolve.

Features or consequences of IT are often double-edged. There is the paradox of people being more connected via social media while also being physically and psychologically more isolated. The convenience of further connectivity is accompanied by additional risks.

The Internet of Things opens new doors of vulnerability. People being too busy to update their software with new patches keeps these doors open for longer. The global criminal community is a major beneficiary of IT. According to Michael McGuire of the University of Surrey in a report for Bromium, the global cybercrime economy is now worth $1.5 trillion [3].

Cyber criminals win hands down when it comes to operating flexibly and quickly changing to take advantage of new opportunities and technologies. Corporate procurement processes with their requirements for board approval, invitations to tender, consultant selection, project planning, budgeting and roll out implementation ensure their targets are usually way behind.

Boards, governance arrangements and collective decision making are all struggling to cope. Governments and regulators are also playing catch up.

Building higher defences may be good business for IT suppliers, but at some point will a critical mass of people and organisations collaborate to counter-attack? Might there be greater use of AI and other technologies to assess, predict, identify and respond to threats? Will those who are wary of surveillance, sharing information and Government monitoring support the steps needed to succeed? In the privacy versus security debate, can one have both?

Missed opportunities are legion. Visions of the transformational use of IT can be quickly killed by a few words of caution to an insecure decision maker. Where Government is involved, a budget can be cut when more has to be spent on other and current activities.

I recall the prospect of a network to enable new approaches to learning across London’s Docklands quickly disappearing. Just before a meeting to give in-principle approval, the Development Corporation was told it would need to pick up further infrastructure costs.

Innovation is often agonisingly slow and more talked about than practiced. Many boards are risk averse and influenced by vested interests. They protect existing activities rather than enable new business models.

Until relatively recently, some boards of retail chains were still approving long-term rental agreements with landlords. Retail stores have succumbed in droves to online rivals, yet in the 1980s certain natives in remote jungles sold their craft wares over the internet.

Innovations that do occur are often slow to spread. This can be particularly evident in areas of the public sector where many years can pass before a seemingly obvious innovation is adopted. Individuals and entrepreneurs often move much more quickly than organisations.

When chairing awards for innovation in electronic commerce and e-business, I found one winning team from the NHS largely imitating a previous winner they had not heard of. The earlier innovators had moved on and their successors had reverted to previous practice.

In the 1990s I led a project for the European Commission to develop a European approach to re-engineering. We encountered a lazy and self-interested preference for improvement rather than transformation, particularly in monopolies and public bodies.

There was also a noticeable penchant for mega projects. Although again these were good business for suppliers of IT, some projects seemed doomed to fail from the beginning. Even if they were eventually delivered, by then the requirement would probably have moved on.

While serving as process vision holder for complex transformation projects as energy markets opened up, I encountered a willingness to spend millions on new suites of processes and systems that were largely the same as those used by most competitors, but a reluctance to spend relatively small sums on practical performance support tools. Such tools would quickly transform how people undertook difficult jobs, differentiate the business and deliver multiple other benefits for both the people and the organisations concerned, while providing huge returns on investment [4, 5, 6].

Too often, IT suppliers push their own particular systems and overly large, complex, time consuming and disruptive projects. They exaggerate their advantages and play down their limitations. They encourage dependence and secure multi-year income streams.

The least understanding of IT is sometimes found in boardrooms. Insecure directors do not know to whom to turn for independent and objective advice. They play it safe by opting for widely used offerings from established suppliers. They overlook more imaginative, cheaper and flexible options that would differentiate them without ‘locking them in’.

One Middle Eastern bank listened to my advice. They reduced the number of proposed projects. By just focusing on visible and easily updated steps to transform the experience of clients they quickly obtained a high market share of high-value customers.

Departmental corporate structures often prevent people from seeing the inter-relatedness of issues and events. Individual IT projects and other issues are considered in isolation.

I once ran a session for 40 senior people from Kodak at Chewton Glen. While aware of digital photography, many of the participants were supremely confident in Kodak’s position as the market leader for film. Some 8,800 quality improvement projects were underway across the corporation.

I gave them an exercise to identify and prioritise the issues or developments in the business environment that would determine whether the company lived or died. We got to issue eleven before one of the functional heads present could claim a related project group. Needless to say, this then-corporate giant quickly became a bit player as chemical film gave way to digital.

Many inter-related challenges and opportunities are not addressed because CEOs and boards do not have a single department or an objective and trusted adviser to refer them to, and/or a collective or collaborative response is needed. IT governance and decision making needs to improve. We must think longer-term and be more flexible, responsible and practical.

Responsible leaders and providers focus on the interests of an organisation and its customers and other stakeholders. They also take account of life-time, end-of-life, crawl-out and transition costs when taking decisions or advocating solutions.

Given the challenges faced by mankind there are many opportunities that can be pursued without exploiting insecurity and/or ignorance. We need lifestyle changes and innovative and sustainable IT applications that address environmental and climate change challenges.

Many people are increasingly dependent upon IT. Generation gaps in understanding and its use have emerged and are evolving. Young people around the world are worried about the implications of our use of finite resources [7].

Will information technologies as we know them survive? Will there be enough rare minerals to enable future mobile and other devices to be built?

Unless innovation occurs, will our children and grand-children have to scavenge for rare minerals in thrown away devices in dumps of our rubbish during extreme weather events? Will we exercise restraint today to transform their tomorrows?

Source: https://indiacsr.in/enabling-or-disruptive-technologies-lessons-risks-and-issues/

22 Jun 2019
4 steps to developing responsible AI

4 steps to developing responsible AI

Artificial intelligence (AI) is arguably the most disruptive technology of the information age. It stands to fundamentally transform society and the economy, changing the way people work and live. The rise of AI could have a more profound impact on humans than electricity.

But what will the new relationship between humans and intelligent machines look like? And how can we mitigate the potential negative consequences of AI? How should companies forge a new corporate social contract amid the changing relationship with customers, employees, government and the public?

In May, China announced its Beijing AI Principles, outlining considerations for AI research and development, use and governance.

In China, the zeitgeist around AI has been more intense than around other emerging technologies, as the country is positioned to harness the tremendous potential of AI as a means to enhance its competitiveness in technology and business.

According to Accenture research, AI has the potential to add as much as 1.6 percentage points to China’s economic growth rate by 2035, boosting productivity by as much as 27%.

In 2017, the central government launched a national policy on AI with significant funding. The country already tops the AI patent table and has attracted 60% of the world’s AI-related venture capital, according to Tsinghua University’s report.

We’re already seeing the impact of AI across many industries. For example, Ping An, a Chinese insurance company, evaluates borrowers’ risk through an AI app. On the other hand, AI has generated a plethora of fears about a dystopian future that have captured the popular imagination.

Indeed, the unintended consequences of disruptive technologies – whether from biased or misused data, the manipulation of news feeds and information, job displacement, a lack of transparency and accountability, or other issues – are a very real consideration and have eroded public trust in how these technologies are built and deployed.

However, we believe, and history has repeatedly shown, that new technologies provide incredible opportunities to solve the world’s most pressing challenges. As business leaders, it is our obligation to navigate responsibly and to mitigate risks for customers, employees, partners and society.

Although AI can be deployed to automate certain functions, the technology’s greatest power is in complementing and augmenting human capabilities. This creates a new approach to work and a new partnership between human and machine, as my colleague Paul Daugherty, Accenture’s Chief Technology and Innovation Officer, argues in his book, Human + Machine: Reimagining Work in the Age of AI.

Are business leaders around the world prepared to apply ethical and responsible governance to AI? In a 2018 global executive survey on Responsible AI by Accenture, in association with SAS, Intel and Forbes, 45% of executives agreed that not enough is understood about the unintended consequences of AI.

Of the surveyed organizations, 72% already use AI in one or more business domains. Some 70% of these organizations offer ethics training to their technology specialists; the remaining 30% either do not offer this kind of training, are unsure if they do, or are only just considering it.

As AI capabilities race ahead, government leaders, business leaders, academics and many others are more interested than ever in the ethics of AI as a practical matter, underlining the importance of having a strong ethical framework surrounding its use. But few really have the answer to developing ethical and responsible AI.

Responsible AI is the practice of designing, building and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence.

It is imperative for business leaders to understand AI and make a top-down commitment to the responsible use of AI. Central to this is taking a human-centric approach to AI thinking and development. It is not enough to have the correct data, or an algorithm that performs accurately. It is critical to incorporate systems of governance, design and training that provide a framework for successfully implementing AI in an organization.

A strong Responsible AI framework entails mitigating the risks of AI with imperatives that address four key areas:

  1. Governance

Establishing strong governance with clear ethical standards and accountability frameworks will allow your AI to flourish. Good governance on AI is based on fairness, accountability, transparency and explainability.

  2. Design

Create and implement solutions that comply with ethical AI design standards and make the process transparent; apply a framework of explainable AI; design a user interface that is collaborative; and enable trust in your AI from the outset by accounting for privacy, transparency and security from the earliest stage.

  3. Monitoring

Audit the performance of your AI against a set of key metrics. Make sure algorithmic accountability, bias and security metrics are included (a minimal example of one such bias metric follows this list).

  4. Reskilling

Democratize the understanding of AI across your organization to break down barriers for individuals impacted by the technology; revisit organizational structures with an AI mindset; recruit and retain the talent for long-term AI impact.
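The framework above does not prescribe specific metrics; as one illustration, the demographic parity gap is among the simplest bias measures a monitoring audit could track: the difference in positive-outcome rates between groups. A minimal sketch with invented audit data:

```python
import numpy as np

def demographic_parity_gap(predictions, group):
    """Absolute difference in positive-outcome rates between two groups --
    one of the simplest bias metrics an AI audit can monitor."""
    rate_a = predictions[group == "a"].mean()
    rate_b = predictions[group == "b"].mean()
    return abs(rate_a - rate_b)

# Invented audit data: binary model approvals for two demographic groups.
preds = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 0])
grp = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
print(f"demographic parity gap: {demographic_parity_gap(preds, grp):.2f}")  # 0.40
```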

The benefits and consequences of AI are still unfolding. China has a great opportunity to capitalize on AI in its development and shares a huge responsibility with other countries to help it deliver positive societal benefits on a global scale. We must work to ensure a sound global public policy environment that works to enable and encourage investment in the development and deployment of responsible AI.

Source: https://www.weforum.org/agenda/2019/06/4-steps-to-developing-responsible-ai/

20 Jun 2019
Artificial Intelligence needs to become less and less artificial

Artificial Intelligence needs to become less and less artificial

For AI to win people over, its applications must incorporate interventions aimed at allaying various fears generated by it.

AI (Artificial Intelligence) is everywhere and it’s here to stay. It now powers so many real-world applications, ranging from facial recognition to language translators and assistants like Siri and Alexa. Along with these consumer applications, companies across sectors are increasingly harnessing AI’s power for productivity growth and innovation.

There are many who believe that AI has the potential to become more significant than even the internet. The availability of enormous amounts of data, combined with a huge leap in computational power and big improvements in engineering skills, should help AI, backed by deep learning, make a major impact across various facets of human life.

Amid all the hype, genuine and inflated, around the world of AI, it is pertinent to ask an important question. Do humans really love AI? Are humans really happy that many of their daily tasks will now be taken care of by a machine? The adoption, and so the future, of AI depends on the answers to these questions.

In 2016, AlphaGo, an AI-based algorithm, trounced South Korean grandmaster Lee Sedol four games to one in Seoul. This was the first time a computer program had beaten a top player in a full contest, and it was hailed as a landmark for AI. Today there are more efficient AI algorithms that can beat AlphaGo squarely. But the moot question is whether human Go players would want to play more games with these machines. The superior computing power of AI has taken away even the remotest chance of a human winning a game against AlphaGo. It is unlikely that a human will want to play a game that he is sure of losing, every time. This holds a valuable lesson about the future of AI. An AI product that makes humans look like losers will not have high adoption rates.

The last time there was a serious discussion about machines making humans redundant was at the beginning of the industrial revolution. Newly invented machines and the industrial engineering principles put forward by F.W. Taylor treated humans as replaceable parts of an assembly line. No one cared for the men who lost their jobs to machines, nor for the men who worked on those machines. Workers in the world’s early factories faced long hours of work under extremely unhygienic conditions, and mostly lived in slums. This soon resulted in significant resistance to the introduction of machines and several labour riots.

Governments soon intervened to provide basic rights and protections for workers. Statutory regulations forced factory owners to set up formal mechanisms to look into workers’ wages and welfare. Several new studies, like Elton Mayo’s Hawthorne Studies, debunked Taylor’s Scientific Management approach to raising productivity and established that the major drivers of productivity and motivation were non-monetary factors. A host of new theories and management practices emerged that started treating workers as a resource, an asset. This human-centric approach played a significant role in making the industrial revolution a success.

Source:
https://www.livemint.com/opinion/columns/opinion-artificial-intelligence-needs-to-become-less-and-less-artificial-1560965545468.html

19 Jun 2019
Is AI superficial when it comes to using it for cybersecurity purposes?

Is AI superficial when it comes to using it for cybersecurity purposes?

Businesses use AI to improve their security while hackers are using it to launch even more sophisticated attacks

Data is central to today’s digital economy and it has intrinsic value to businesses and consumers. However, organisations worldwide are facing severe challenges when it comes to protecting data from cybercrime and data leakage.

Technology can of course provide many solutions to help protect data from leakage. Yet, some technologies such as artificial intelligence (AI) can also arm cybercriminals with new ways to expand upon malicious attacks.

To better understand whether AI is a help or a hindrance to cybersecurity, TechRadar Pro sat down with Ensighten’s Chief Revenue Officer, Ian Woolley.

What security risks are out there for businesses?
Businesses must take notice and realise that cybercrime is becoming increasingly common, especially given that more consumers are using online channels to transact and share data.

As a result, there have been plenty of cybersecurity incidents that have leaked businesses’ data – both their own and their customers’. Microsoft Office 365 is just one example, with some of its users’ accounts compromised by hackers – exposing personal content from emails as part of a data breach. Although the number of users affected has not been disclosed, Microsoft confirmed that around 6% of those involved had their emails hacked. It’s clear cybercriminals find monetary value in any type of data. For example, Facebook logins are sold for just $5.20 (£4.09) each on the dark web.

It’s clear that data breaches need to be prevented for various reasons – not least to avoid financial or reputational damage. IBM found the global average cost of a data breach is £3 million, and estimated that a breach of 50 million records or more can cost a company as much as £273 million.

Yet, despite many businesses’ best efforts towards security, it’s a tough playing field. Attackers are trading knowledge in underground marketplaces – allowing them to specialise in particular aspects of cybercrime, such as stealing security credentials or hacking into accounts. Cybercriminals are highly knowledgeable about how to bypass security protocols.

What’s even more dangerous is the fact that they are mirroring the way businesses work. Cybercriminals are identifying where to spend their time and effort based on the return on investment. For instance, if businesses focus security on only one channel, such as phones, criminals will turn their attention to internet platforms, such as company websites. We’re seeing this behaviour in the banking and finance sector, where some attackers are shifting from internet and telephone banking towards mobile banking fraud. According to RSA, 60 percent of digital banking fraud now originates from the mobile channel.

Source:
https://www.techradar.com/news/is-ai-superficial-when-it-comes-to-using-it-for-cybersecurity-purposes