Category: Disruptive Technology

15 Jul 2019

Artificial intelligence (AI) in Construction Market to Hit Value of USD 3,161 Million By 2024

According to the report, the global AI-in-construction market was valued at USD 312 million in 2017 and is expected to reach USD 3,161 million by 2024, growing at a CAGR of 38.14% between 2018 and 2024.
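As a quick sanity check on those figures, the implied compound annual growth rate can be recomputed from the endpoints. This is a rough sketch; the small gap versus the reported 38.14% plausibly comes from rounding and the report's choice of base year.

```python
# CAGR check for the reported figures: USD 312 million (2017) growing to
# USD 3,161 million (2024). CAGR = (end / start) ** (1 / years) - 1.
start, end = 312.0, 3161.0
years = 2024 - 2017  # seven years of growth from the 2017 base
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")  # roughly 39%, in line with the reported 38.14%
```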

New York, NY, July 14, 2019 (GLOBE NEWSWIRE) — Zion Market Research has published a new report titled “AI-In-Construction Market by Technology (Natural Language Processing and Machine Learning and Deep Learning), by Component (Solutions and Services), by Deployment (On-Premises and Cloud), and by Application (Project Management, Risk Management, Field Management, Supply Chain Management, and Schedule Management): Global Industry Perspective, Comprehensive Analysis, and Forecast, 2017—2024”.


Artificial intelligence allows computer systems to make decisions that would otherwise require human judgment. AI has been beneficial in the development of applications that use machine vision for easy analysis and surveying of buildings and structures. Additionally, building information modeling (BIM) software gives information on a construction project, warranty details regarding the materials used, and commissioning data. This has resulted in increased AI adoption by construction start-ups globally for various applications.

Browse through 56 Tables & 29 Figures spread over 145 Pages and in-depth TOC on “Global AI-In-Construction Market: By Technology, Size, Share, Types, Trends, Industry Analysis and Forecast 2017—2024”.

Artificial intelligence has the ability to perform tasks similar to those performed by human intelligence, such as planning, recognition, and decision making. The construction sector is adopting AI to obtain precise data and insights that increase productivity and operational efficiency and ensure safety at work. AI-driven image recognition can scan site imagery against defined search criteria; for instance, it can look for hard hats and safety vests to identify construction workers who are not wearing proper safety gear. The primary applications for the AI-In-Construction market include planning, safety, monitoring and maintenance, and autonomous equipment.
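As an illustration of the kind of rule such a safety system applies once an image-recognition model has produced detections (all function names, boxes, and coordinates here are invented for the sketch, not taken from any real product), workers whose bounding box has no overlapping hard-hat detection can be flagged:

```python
# Hypothetical post-processing step for a safety-gear detector: given
# bounding boxes for detected workers and detected hard hats, flag any
# worker box that does not intersect a hat box.
def overlaps(a, b):
    # Boxes are (x1, y1, x2, y2); true if the rectangles intersect.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def workers_without_hats(worker_boxes, hat_boxes):
    return [i for i, w in enumerate(worker_boxes)
            if not any(overlaps(w, h) for h in hat_boxes)]

workers = [(0, 0, 50, 120), (100, 0, 150, 120)]  # two detected workers
hats = [(10, 0, 40, 20)]                         # one detected hard hat
print(workers_without_hats(workers, hats))       # → [1]
```

In a real deployment the boxes would come from an object-detection model and the overlap test would likely use an intersection-over-union threshold rather than a bare intersection check.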

AI’s capability to reduce production costs in construction services and solutions is the major factor expected to drive the global AI-In-Construction market. In addition, the need for safety measures on construction sites is also projected to drive this market’s growth. Furthermore, huge investments made by construction companies in emerging economies to adopt advanced AI technology for construction applications are also likely to contribute to the global growth of the AI-In-Construction market. However, low investment in R&D for developing new technologies might hamper this market. Nonetheless, the increasing demand for integrated AI in construction activities is estimated to create new market opportunities.

By technology, the AI-In-Construction market is divided into natural language processing and machine learning and deep learning. By component, this market includes solutions and services. By deployment type, the market is bifurcated into on-premises and cloud. A cloud deployment type is estimated to grow at a higher CAGR during the projected period, owing to its cost-effectiveness. Project management, field management, risk management, supply chain management, and schedule management comprise the application segment of the AI-In-Construction market.

North America dominated the global AI-In-Construction market in 2017, as a lack of skilled workforce has driven key construction enterprises to invest in robotics-based solutions. Real estate organizations are developing solutions that can detect risks and perform repetitive labor tasks, enabling less-experienced staff to complete complex tasks. In addition, high demand for AI in applications such as field management, project management, and risk management is likely to contribute to this regional market’s growth.


14 Jul 2019
Disruptive Technology, Part 2: Where’s the Risk?


The financial services industry is embracing disruptive technology, but executives are also aware of the potential risks it can bring.

This article is the second in a three-part series exploring executives’ perspectives on disruptive technologies in the financial services industry. The first installment looked at how the industry is using these technologies.

The financial services industry is embracing disruptive technology, but executives are also aware of the potential risks it can bring, according to a study conducted by ALM’s Corporate Counsel on behalf of Winston & Strawn.
The survey found that the level of perceived legal risk depends on the specific technology in question. AI is the area of greatest concern, with 51% of companies seeing it as a significant source of risk. Slightly fewer (50%) cited social banking and P2P lending, which bring the potential for shifting business models, while 42% cited blockchain, where there is still a lot to learn. “With distributed ledger and blockchain, there is a lot of promise and investment, but few successful use cases,” says Michael Loesch, co-chair of Winston’s Disruptive Technology Task Force. On the other end of the spectrum, only 34% see significant risk in facial recognition and biosecurity technologies, and 18% see no risk at all in that area.

Concerns about AI risk may stem from the fact that AI is a high-profile technology but one that still holds a number of unknowns—which may be troubling for companies that see it having an increasingly significant role throughout the business. In particular, bias in AI-enabled loan underwriting was cited by 45% of respondents as a risk—presumably because financial services companies are aware of studies that have shown that bias can creep into AI loan underwriting, highlighting the potential for unintended consequences with a powerful technology. They also know that it can be difficult to explain how AI systems—which are often opaque, “black box” technologies—produce their recommendations, which is likely to gain the attention of regulators.
Meanwhile, it is not always clear how evolving regulatory frameworks will affect the use of AI. For example, with AI-enabled personal assistants for customers, “banks are still asking what the restrictions are going to be on what the personal assistant can do,” says Winston litigation partner Danielle Williams. “Are they just able to read and relay account information? Will they execute transactions? There could be regulatory issues with every aspect of that.” AI used in internal processes, such as robotic process automation, could also increase legal risk if the way it is programmed introduces errors into back-office work.

“The ways in which AI is being deployed in the financial space, however, is of lower risk than in other industries where we are trusting AI to make potentially life-altering or threatening decisions,” remarked Kathi Vidal, Winston’s Silicon Valley managing partner and a former AI developer. “In the financial space, AI not only has the potential to replicate human decision making but also to improve it. In fraud detection, credit analysis, and other applications, we can train neural networks and other AI systems to better, and more blindly, analyze big data to render more accurate but also more equitable predictions,” concluded Vidal.
In time, more familiarity with disruptive technologies may actually increase awareness of potential risks. “Some institutions just aren’t using disruptive technologies to their fullest capacities yet,” says Basil Godellas, head of Winston’s Financial Services Regulatory Practice. “For example, respondents had fairly low concern about facial recognition, biometrics, and biosecurity solutions. But most institutions are just dipping their toes in the water with these technologies—and I think their concerns about risk may grow as they have more exposure to them.” Evolving legal frameworks may also increase those concerns. The 2008 Illinois Biometric Information Privacy Act, for example, regulates how companies collect biometric information, such as fingerprints and retinal scans. More recently, a number of other states have passed or proposed similar biometric data privacy laws.

Beyond any specific technology, financial services companies clearly see risk in the cybersecurity and data privacy realm. (As mentioned above, respondents cited this as a top barrier to implementing disruptive technology.) The industry has long experience with the challenges of keeping sensitive data safe, as well as with the legal and regulatory costs of failing to do so. But disruptive technologies raise the security bar significantly because they rely on large amounts of quality data—often sensitive data about customers—to operate, provide services, and create insights. And that data has to be managed, protected, and shared safely with a growing number of applications.

At the same time, regulations around data privacy and cybersecurity are evolving. Take, for example, the EU’s General Data Protection Regulation, which went into effect last year and significantly strengthened privacy regulations—and penalties for noncompliance. Or consider the California Consumer Privacy Act of 2018, slated to go into effect in 2020, which has some of the most stringent privacy mandates in the United States. Such regulations, coupled with the advent of disruptive technologies, only make compliance more complex.

Regulators: Looking at Disruptive Technology
Executives worry about fintech and disruptive technologies bringing unwanted attention from a variety of legal and regulatory sources—indeed, 41% of respondents point to investigations and civil enforcement actions by federal agencies as the largest technology-related legal/regulatory threat. Many worry about actions by industry groups (32%), state regulators (28%), DOJ and state attorneys general (26%), and private civil plaintiffs (24%).

Meanwhile, nearly half say they are concerned with technology-related antitrust issues and the possibility of antitrust enforcement—driven in part by a sense that the rules are lagging behind advancing technology. “In financial services, there is an overarching concern that the antitrust laws as they are drafted today might not be sophisticated enough or flexible enough to deal with new disruptive technologies,” says Susannah Torpey, a partner at Winston & Strawn.

There are other antitrust implications to consider as well. In a technology-driven world, collaboration and working in partner ecosystems are both easier and more important. When setting up blockchain consortia or working with others in the industry to set technology standards, “you have to be very careful that this increase in coordination is well managed, so that you don’t find yourself exchanging information with your competitors that leads to anti-competitive effects in the market place,” commented Torpey.

Disruptive technologies essentially increase the importance of data in business, and AI specifically makes it easier to gather and use data from a wide variety of internal and external sources. Thus, these new technologies may increase the risk that regulators will consider data a source of market power when assessing mergers and acquisitions. And because disruptive technologies can drive innovations that quickly alter competitive dynamics, companies that succeed with new approaches may find themselves under increased scrutiny. “A startup financial services company that does something new might quickly gain dominant share of a niche market, which could put them at a much higher risk of an antitrust violation,” says Torpey.

In general, fintech and disruptive technology are moving quickly, and regulators sometimes struggle to keep up—which itself presents challenges to financial services companies trying to balance innovation and compliance. But regulators are paying close attention to these technologies. The U.S. Commodity Futures Trading Commission, for example, has established a LabTechCFTC program designed to keep the commission in close touch with technology innovators—and other agencies have set up similar programs. Perhaps with an eye toward those efforts, nearly seven out of 10 financial services companies believe that regulators are keeping up with the use of disruptive technologies in the industry. But in reality, that can be a challenge. “Regulators are doing positive things like LabTech to keep up with technology, but they are often still playing catch-up,” says Loesch. “They have to use the tools and laws that they have, and sometimes those are not fit for purposes in the disruptive technology space. There are tensions because the normal regulatory framework doesn’t always fit precisely with how the new technology operates. And that can lead to costly investigations and even enforcement actions.”


13 Jul 2019
Artificial Intelligence and Machine Learning Empowering Business Growth for Entrepreneurs Today


In 2018 alone, AI-related startups and ventures secured $9.3 billion in VC funding, according to a report from PwC and CB Insights. The Indian unicorns Paytm, Swiggy, and Oyo have been investing resources to gain AI capabilities and have acquired at least one AI company.

In 2018, VCs funded Indian AI startups with USD 478.38 million across 111 funding rounds. One reason AI and machine learning-based technologies are on the rise is the competitive advantage they can create, especially in customer experience and cost optimization. In e-commerce, for example, understanding consumer behaviour and product demand and making the right offer at the right time can be the difference between winning and losing against the competition.

Change in Working

Early AI was mostly based on rule-based systems, whose ability to deliver value is limited by how well the rules are defined, which requires human expertise. In machine learning (ML), a subset of AI, a model instead learns patterns from training data; once trained, the model is put to work on real-time data through inferencing and can be refined further as new data arrives.
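The train-then-infer split can be illustrated with a deliberately tiny model: a nearest-centroid classifier in plain Python. The data and labels below are invented for the sketch; production ML systems use far richer models, but the two phases are the same.

```python
# Training computes one centroid (average point) per class label;
# inference assigns a new point to the class with the nearest centroid.
def train(points, labels):
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids

def infer(centroids, point):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], point))

# "Training" phase: learn from labeled historical data.
model = train([(1, 1), (2, 1), (8, 9), (9, 8)], ["low", "low", "high", "high"])
# "Inferencing" phase: apply the trained model to new, unseen data.
print(infer(model, (8.5, 8.5)))  # → high
```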

The ability of ML models to self-learn with minimal to no human intervention has been key to gaining interest from entrepreneurs and innovators. Whether it’s shopping on Flipkart or watching movies on Netflix, customers’ experiences are shaped by machine learning and its subset, deep learning. Curating vast amounts of content and making purchase recommendations has been one of the commercially best-recognized use cases, drawing huge interest from tech giants as well as well-established startups. UBS estimates that AI as a standalone industry has the potential to reach a market cap of USD 120-180 billion by 2020.

Health Care and AI

Healthcare is another field where AI and machine learning systems are expected to make a big impact. For example, one Bangalore-based startup delivers precision medicine using AI and machine learning, while another analyzes medical data and generates reports using deep learning systems, which can make a huge difference in delivering timely patient care.

Other Sectors

Machine learning is a turnkey technology that impacts many industry sectors and tasks: robotics, retail, banking, finance, self-driving vehicles, fraud detection, weather forecasting, finding medicines for HIV, examining extraterrestrial objects, and so on. In short, AI and ML provide massive opportunities for entrepreneurs to explore, experiment, and build new businesses.

AI and machine learning applications need the massively parallel processing power of GPUs. However, owning GPU hardware is hard to justify over a three-year amortization period, especially for entrepreneurs just starting out in the AI and ML space, and there aren’t many cost-effective GPU solutions available in the Indian market. Getting started with machine learning has nevertheless become less difficult with the development and production-grade availability of open-source frameworks.

In Conclusion

The AI and ML space is still young, and entrepreneurs have a massive opportunity to innovate and disrupt. There have been concerns that AI might replace jobs on a massive scale; however, according to a study by UBS, in most areas AI is poised to replace tasks, not jobs.


11 Jul 2019
Artificial Intelligence And The Challenge Of Global Governance


Digitalization is evolving from an economic challenge into a governance and political problem. Some studies suggest that by 2030, artificial intelligence (AI) might contribute up to EUR 13.33 trillion to the global economy (more than the current output of China and India combined). The essence of the political conflict, and what raises the issue of global governance, is which type of actor (a state or a digital corporation) will lead this process, creating global asymmetries in trade, information flows, social structures, and political power. This means challenging the international system as we know it.

AI is generating new large-scale systems based on (1) services (such as traffic management and smart vehicles, international banking systems, and new healthcare ecosystems); (2) global value chains, the Internet of things (IoT) and robotics (Industry 4.0); and (3) electronics with a new generation of microprocessors and highly specialized chips. The “food” for AI is the Internet – a major source of data, computing power and telecommunications infrastructure.

Not all countries will benefit in the same way, since AI-driven wealth will be dependent on each country’s readiness to be “connected” to the Internet. This is the essence of the political problem between a non-territorial space based on large-scale computer networks and nation-states.

Both democratic and non-democratic governments struggle to assert authority over different dimensions. First, they need to regulate de facto global private monopolies (such as Google, Facebook, Apple and Amazon) that are setting new rules of competition, creating new technology markets and blurring boundaries between industries. Second, many countries perceive Internet governance as too US-centric. A good example is the governance of IP addresses, which created the precedent of having a “universal resource” managed by a private, US-based institution like the ICANN. Third, a vast majority of digital innovations and AI technologies and applications come from a unique public-private ecosystem in the United States: Silicon Valley. Finally, although the Internet is global, investment in infrastructure (such as 5G) requires huge investments driven by local (ex-public) telecom operators whose business models are less sustainable.

While the European Union struggles to regulate Silicon Valley’s global platforms, China has started to block them with a “digital wall,” using protectionism to compete in the global tech game. Examples are tech giants such as Tencent, Baidu, and Alibaba, and the impressive Digital Silk Road, which aims to connect the European Union and China with various types of infrastructure, including satellites, 5G and submarine cables. This project could be the infrastructure that enables China, by 2030, to become “the world’s primary artificial intelligence innovation center, transforming the country into a leading innovation-style nation and the greatest economic power,” as China’s national AI plan states.


10 Jul 2019
Scientists Create an AI From a Sheet of Glass


AI Glass
It turns out that you don’t need a computer to create an artificial intelligence. In fact, you don’t even need electricity.

In an extraordinary bit of left-field research, scientists from the University of Wisconsin–Madison have found a way to create artificially intelligent glass that can recognize images without any need for sensors, circuits, or even a power source — and it could one day save your phone’s battery life.

“We’re always thinking about how we provide vision for machines in the future, and imagining application specific, mission-driven technologies,” researcher Zongfu Yu said in a press release. “This changes almost everything about how we design machine vision.”

Numbers Game
In a proof-of-concept study published on Monday in the journal Photonics Research, the researchers describe how they made a sheet of “smart” glass that could identify handwritten digits.

To accomplish that feat, they started by placing different sizes and shapes of air bubbles at specific spots within the glass. Then they added bits of strategically placed light-absorbing materials, including graphene.

When the team then wrote down a number, the light reflecting off the digit would enter one side of the glass. The bubbles and impurities would scatter the lightwaves in certain ways depending on the number until they reached one of 10 designated spots — each corresponding to a different digit — on the opposite side of the glass.

The glass could essentially tell the researcher what number it saw — at the speed of light and without the need for any traditional computing power source.

“We’re accustomed to digital computing, but this has broadened our view,” Yu said. “The wave dynamics of light propagation provide a new way to perform analog artificial neural computing.”

Face Time
Teaching machines to accurately “see” will be key to achieving our goals for artificial intelligence — machine vision plays a role in everything from autonomous cars to delivery robots.

This “smart” glass might not be able to complete calculations complex enough for those uses, but the team does have one possible application for it in mind: smartphone security.

Currently, when you attempt to unlock a phone using face ID, an AI within the device has to run a computation, draining battery power in the process. Affix a trained sheet of this smart glass to the front of the device, and it’ll be able to take over the task without pulling any power from the phone’s battery.

“We could potentially use the glass as a biometric lock, tuned to recognize only one person’s face,” Yu said. “Once built, it would last forever without needing power or internet, meaning it could keep something safe for you even after thousands of years.”


09 Jul 2019
Additional €100m in funding announced for disruptive technologies


Irish start-ups and more established tech companies will be able to tap an additional €100 million from a Government fund to work on projects focused on cutting edge technology.

The funding is being made available under the State’s Disruptive Technologies Innovation fund, a €500 million programme run by the Department of Business, Enterprise and Innovation with support from Enterprise Ireland.

The latest tranche is for three-year projects that involve partnerships between industry and researchers, with a minimum ask of €1.5 million per application.

Projects have to be focused on select areas, such as robotics, artificial intelligence, augmented and virtual reality, smart manufacturing, and sustainable food production and processing.

Disruptive technologies are technologies that are seen to have the potential to significantly alter markets and/or the way that businesses operate. While they may involve new products or processes, such technologies can also involve the emergence of alternative business models.

Some €75 million in funding has already been allocated to 27 projects under the programme, covering everything from household electricity generation and sepsis treatments to coastal flooding supports and medical 3D printing.

Applications for the second round of funding must be submitted to the department by Wednesday, September 18th.


08 Jul 2019
Man Vs. Machine: The 6 Greatest AI Challenges To Showcase The Power Of Artificial Intelligence


As artificial intelligence (AI) research and development continues to strengthen, there have been some incredibly intriguing projects where machines battled man in tasks that were once thought the realm of humans. While not all were 100% successful, AI researchers and technology companies learned a lot about how to continue forward momentum as well as what a future might look like when machines and humans work alongside one another. Here are some of the highlights from when artificial intelligence battled humans.

World Champion chess player Garry Kasparov competed against artificial intelligence twice. In the first chess match-up between machine (IBM Deep Blue) and man (Kasparov), in 1996, Kasparov won. The next year, Deep Blue was victorious. When Deep Blue won, many saw it as a sign that artificial intelligence was catching up to human intelligence, and it inspired a documentary film called The Man vs. The Machine. Shortly after losing, Kasparov went on record to say he thought the IBM team had cheated; however, in a 2016 interview, Kasparov said he had re-analyzed the match and retracted his earlier cheating accusation.

In 2011, IBM Watson took on Ken Jennings and Brad Rutter, two of the most successful contestants of the game show Jeopardy, who had collectively won $5 million during their reigns as Jeopardy champions. Watson won. To prepare for the competition, Watson played 100 games against past winners. The computer was the size of a room, was named after IBM’s founder Thomas J. Watson, and required a powerful and noisy cooling system to keep its servers from overheating. Deep Blue and Watson were products of IBM’s Grand Challenge initiatives that pit man against machine. Since Jeopardy has a unique format in which contestants provide the answers to the “clues” they are given, Watson first had to untangle the language to determine what was being asked before it could figure out how to respond — a significant feat of natural language processing that led IBM to develop DeepQA, a software architecture built to do just that.

Could artificial intelligence play Atari games better than humans? DeepMind Technologies took on this challenge, and in 2013 it applied its deep learning model to seven Atari 2600 games. The endeavor had to overcome the challenge of using reinforcement learning to control agents directly from high-dimensional sensory inputs such as vision and speech. Breakthroughs in computer vision and speech recognition allowed the innovators at DeepMind to develop a convolutional neural network for reinforcement learning, enabling a machine to master several Atari games using only raw pixels as input and, in a few games, to achieve better results than humans.
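DeepMind’s actual system was a deep Q-network, but the core reinforcement-learning idea it builds on can be sketched with tabular Q-learning on a toy task. Everything below is a simplified illustration (a five-state corridor instead of an Atari game), not DeepMind’s code: the agent learns, from reward alone, that moving right reaches the goal.

```python
import random

# Tabular Q-learning on a tiny 1-D corridor: states 0..4, goal at state 4.
# Actions: 0 = left, 1 = right. Reward 1.0 on reaching the goal, else 0.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < EPSILON:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge Q(s, a) toward
            # reward + gamma * max_a' Q(s', a').
            q[state][action] += ALPHA * (
                reward + GAMMA * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES)]
print(policy[:4])  # the learned policy moves right from every non-goal state
```

A deep Q-network replaces the table `q` with a convolutional neural network so the same update rule can operate on raw pixels instead of enumerated states.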

Next up in our review of man versus machine are the achievements of AlphaGo, a machine able to acquire knowledge for itself. The system learned the equivalent of 3,000 years of human Go knowledge in a mere 40 days, prompting some to call it “one of the greatest advances ever in artificial intelligence.” It had already learned how to beat the world champion of Go, an ancient board game once thought impossible for a machine to decipher. The film about the experience is now available on Netflix. AlphaGo’s success when not constrained by human knowledge raises the possibility of the system being used to solve some of the world’s most challenging problems, such as those in healthcare, energy, or the environment.

In another test of artificial intelligence capabilities, DeepMind sought out a more complex game for artificial intelligence to battle that required the use of different features of intelligence that are necessary to solve scientific and real-world problems. They found the next challenge in StarCraft II, a real-time strategy game created by Blizzard Entertainment that features multi-layered gameplay. AlphaStar was the first artificial intelligence to defeat professional players of the game by using its deep neural network that was trained from raw game data by reinforcement and supervised learning.

Project Debater, a project from IBM, tackles another area of expertise for artificial intelligence — debating humans on complex topics. This skill involves dissecting an opponent’s arguments and finding ways to appeal to their emotions (or the audience’s emotions) — something that would seem a uniquely human ability. Even though Project Debater lost when it faced off against one of the world’s leading debate champions, it was still an impressive display of artificial intelligence capabilities. To succeed at a debate, an AI needs to rely on facts and logic, make sense of an opponent’s line of reasoning, and fully navigate human language, which has been one of the most challenging feats of all for AI to master. While not 100% successful, Project Debater gave a good glimpse of a future in which machines can augment human intelligence in powerful ways.


06 Jul 2019
AI in government: What should it look like and how do we get there?


With today’s widespread focus on artificial intelligence, it is hard to imagine that this is not the first time AI has held a prominent place in the zeitgeist. Back in the mid-1980s, during the first computer revolution, AI was gaining ground as a significant field of research that could revolutionize the world. Then it stalled and the so-called “AI winter” began. The ideas were right, but computing technology was just not there.

The 1980s might not seem that long ago to those of us who were in high school or college then (wasn’t that just yesterday?), but in technological terms, it was an epoch ago. Fast forward to today. AI has come out of hibernation with a vengeance, propelled by enormous advances in computing capacity, and with many promises to society.

In few places is AI poised to make as profound an impact as in the missions of federal agencies. Yet while AI is gaining traction in the minds of federal managers, agencies still face several challenges — both technical and in mindset — that must be overcome first.

Federal momentum

If 2018 was the year AI entered the federal collective consciousness, 2019 is the year government starts seriously considering where and how it can help. In February 2019, President Donald Trump signed an executive order calling on the government to invest in, support and accelerate the use of AI initiatives in federal applications.

A month later, the fiscal 2020 budget proposal released by the White House showed the federal government was preparing to allocate some $4.9 billion to unclassified AI and machine learning research. A week after that, the administration launched AI.gov to be “the hub of all the AI projects being done across the agencies.” The federal AI/ML agenda is beginning to coalesce, but it is still without defined form.


Defining artificial intelligence
One of the challenges that came with AI’s rapid re-entry into the federal mind is a muddying of terms. “AI” is now used to describe solutions along a long continuum of automation capabilities. It is easy to see why: the Defense Advanced Research Projects Agency’s (DARPA) use and development of AI should be very different from another agency’s use of AI on the machine learning continuum, such as that of the Agriculture Department’s National Institute of Food and Agriculture. Nevertheless, the first step agencies should take is to clearly identify how they define AI and what types of AI are appropriate for their various mission objectives.

For many government missions, the most useful AI isn't what researchers call artificial general intelligence — that is, AI designed to think and reason for itself. Instead, the government should look for solutions that use AI to automate human decisions in a single high-volume but low-involvement function. In other words, the mundane tasks that, if automated, would free up resources for other more challenging or creative jobs.

A textbook example of this is in IT operations and cybersecurity, where AI is able to look at machine data from the world around them — log files from IT systems, Internet of Things (IoT) data, user behavior activity, etc. — and then use that to derive insights and automate decisions that are going to provide the most value.
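As a minimal sketch of the kind of machine-data analysis described above — the log format, user names, and threshold here are all invented for illustration — a pipeline might count failed-login events per user and flag outliers for automated follow-up:

```python
from collections import Counter

# Hypothetical auth log lines of the kind an IT-operations pipeline might ingest.
LOG_LINES = [
    "2019-07-01T08:00:01 auth FAIL user=alice",
    "2019-07-01T08:00:03 auth FAIL user=alice",
    "2019-07-01T08:00:05 auth FAIL user=alice",
    "2019-07-01T08:00:07 auth FAIL user=alice",
    "2019-07-01T08:01:00 auth OK   user=bob",
    "2019-07-01T08:02:00 auth FAIL user=carol",
]

def flag_suspicious_users(lines, threshold=3):
    """Return users whose failed-login count meets or exceeds the threshold."""
    failures = Counter(
        line.split("user=")[1] for line in lines if " FAIL " in line
    )
    return {user for user, count in failures.items() if count >= threshold}

print(flag_suspicious_users(LOG_LINES))  # flags only 'alice'
```

Real deployments would of course sit on a proper log platform rather than raw string parsing, but the principle — derive a signal from machine data, then trigger an automated decision — is the same.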

Leveraged against the growing trend of improving citizen experience and engagement, such AI might be deployed to automate certain approvals, validate claims, accelerate permit processing and more. In the near future, I'd expect to see AI participating in everything from education loans to tax returns, drastically increasing the government's ability to react to citizen demands, reduce inefficiencies and more. The applications of AI are far-reaching and wide open — but first comes a little homework.

Find your dark data
As tempting as it is to deploy a solution that includes AI in its feature set and call it a day, deriving real value from AI is going to rely on quality data to train and support it. However, that is one area still challenging agencies — and the private sector as well.

A recent survey showed that public sector technologists and leaders estimated 56% of their data was still "dark" or "grey," meaning it was unknown or unusable within the organization. In other words, agencies are missing more than half the picture of their operations. Magnify that by the force multiplier of data insights and AI, and it is hard to put a number on the value agencies are missing.


Getting a better hold on this dark data is a crucial first step agencies must take before adopting AI. Which brings me to my next point…

Don’t over-design to specific use cases
It is tempting, particularly in government, to follow this approach: Understand what data you have, think through the use cases and how it can be deployed, then solicit requests for proposals (RFPs) for a broad solution to achieve that mission. This approach has served government well for decades, but is simply not agile enough for the anticipated developments in machine learning and applications of AI, especially with the associated complexity and dynamism of data.

Data is growing too fast and becoming too complex to build and rely on a rigid data architecture designed to ingest, analyze, process and support specific data for a mission. This approach limits agility, flexibility and creativity as it forces data to live in formats contrary to its nature.

A more robust approach would be to understand what data you have access to, collect as much of this data as is feasible and then consider what uses there might be for it. Chances are collecting data you didn’t know you had or needed will reveal new uses for that data, new insights into the mission and ultimately better mission success. Indeed, numerous organizations that have taken this approach have proven that the effort will not be wasted.

For example, the National Ignition Facility at Lawrence Livermore National Laboratory applies this approach to enable a smooth user experience for world-renowned scientists and engineers conducting experiments that ensure the country's continued competitive advantage in scientific research. While collecting vast amounts of data from a wide range of sensors, including cameras, thermometers and motors, NIF applies algorithms in real time to identify anomalies before they become problems. As a result, NIF engineers can detect when these sensors begin to decay and perform predictive maintenance, avoiding unscheduled downtime for groundbreaking scientific experiments. A machine learning toolkit comes in handy when you don't know the value of your data yet!
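One simple way to illustrate real-time anomaly detection of this kind (the sensor values and thresholds below are invented, not NIF's actual method) is a rolling z-score: compare each new reading against the mean and spread of a window of recent readings, and flag sharp deviations:

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, z_threshold=3.0):
    """Flag indices where a reading deviates sharply from the recent window."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # Skip flat windows (sigma == 0); otherwise test the z-score.
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# A hypothetical temperature trace: steady around 20.0, then a spike at index 8.
temps = [20.0, 20.1, 19.9, 20.2, 20.0, 20.1, 19.8, 20.0, 25.0, 20.1]
print(detect_anomalies(temps))  # [8]
```

A drifting sensor shows up the same way a spike does: readings start falling consistently outside the band implied by their own recent history, which is the cue for predictive maintenance before an outright failure.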

Just as federal agencies were driven to the cloud a decade ago while unsure of how to proceed and how to leverage it, so too are agencies now being driven toward artificial intelligence, uncertain of how to proceed and how to leverage it.

But as with cloud adoption, by all accounts a mainstay of government technology policy today, the best way to begin your data-driven journey and path toward AI is to dig into the data and start using and experimenting with it. Collect more than you need, and do not worry up front about whether it will be useful. The uses will make themselves clear and the value will come.


04 Jul 2019
Digitalization: Key to the Future of Electronics Manufacturing?

Electronics manufacturers have long relied on automation to streamline their production processes. The concept of the smart factory takes this one step further with artificial intelligence (AI), robotics, analytics, big data, and the Internet of Things (IoT) promising an even greater level of autonomy, agility, and connectivity.

IoT sensors and devices can be used to connect our machines and provide detailed information about their condition. The integration of robots on the factory floor can facilitate unmanned production and increase productivity. Sophisticated analytics—together with AI and machine learning—are taking on many of a factory’s routine tasks and providing invaluable data to make real-time decisions.

The potential of the smart factory also extends far beyond the factory floor to encompass the manufacturer's supply chain, customer service, after-sales, office administration, and HR. As The Manufacturer's Annual Manufacturing Report 2019 highlights, smart factories offer an exciting range of transformational digital technologies that promise to improve efficiency, cut costs, and give companies a competitive edge. According to the report, the global worth of IoT is estimated to reach £4.7 trillion by 2025.

But while many organisations say they are keen to capitalise on the newest digital technologies, surprisingly few are actively taking them up. How successfully they are being implemented is still largely reliant on an organisation having robust leadership and a clear strategy. So, what are some of the key themes electronics manufacturers can take away from the 2019 Manufacturer’s Report?

Digital Technology Has the Power to Drive Efficiency

Driving business efficiency is an ongoing challenge for U.K. manufacturing, and the report reveals resounding agreement on the value of adopting digital technologies to increase business prosperity.

Seventy-four percent of manufacturers believe that new smart technologies will be pivotal in helping them transform their operations, whether through improving design and production processes (77%), streamlining their internal company processes (74%), helping them communicate more effectively with their supply chain (42%), or helping them deliver a better customer purchasing experience (40%).

Predictive maintenance is also considered to be one of the key technologies that can be harnessed to proactively identify problems, increase machine life, reduce machine downtime, and boost electronics manufacturers’ operating profitability.

Investment Is Still a Hurdle for Many

But while the vast majority of manufacturers recognise the value of new digital technologies, it’s clear that when it comes to adopting them, there are a fair few obstacles to overcome.

Just over one-quarter of respondents (26%) stated that while the adoption of digitalisation was on their wish list, they were unsure as to what they need to do to implement it. Surprisingly, too, one in four manufacturers said that they currently have no digital plans in place. Among the main reasons for their reluctance to act were not having a coherent digital strategy and difficulty in understanding the practical applications that the new technology would offer within their organisation.

For many businesses too, there is still a reticence to break the traditional boundaries between different functions, to seek out the new skilled workforce that they need, or to buy into the concept of flexible and continuous improvement.

There’s Good News for Humans

While the fear of robots stealing our jobs may still be a pervasive one in popular culture, the 2019 Manufacturing Report suggests that within the manufacturing industry, there's no need to be so worried.

In fact, 91% of respondents say they believe that their workforce is more engaged when working alongside machines rather than operating them. Further, 90% say that the data they will be able to gather from connected machines will be hugely influential in reducing costs and informing their decision making. There is also strong agreement that while digitalisation may lead to some reduction in the workforce, it is just as likely to help companies achieve more with the workforce that they currently have.


03 Jul 2019

LGIM unveils thematic ETFs focused on disruptive tech

Three new Ucits funds will focus on opportunities created by innovations in AI, healthcare and clean water

Legal & General Investment Management (LGIM) has expanded its range of thematic exchange traded funds (ETFs) with the launch of three new funds focused on investment opportunities created by innovations in artificial intelligence, healthcare breakthroughs and clean water.

The new funds join LGIM's existing thematic and core ETF range, which includes the €800m L&G Cyber Security Ucits ETF and the €813m L&G Robo Global Robotics and Automation Ucits ETF.

The ETFs are designed to invest in markets that are experiencing the potential for rapid growth due to significant changes driven by technological advancements, the group said.

“To provide investors with exposure to these themes, LGIM has developed unique indices with industry experts at Robo Global and Global Water Intelligence,” it said.

The bottom-up and concentrated approach helps investors to minimise overlap and over-exposure across their existing portfolios, the group said. Across LGIM’s five pre-existing thematic ETFs, the average overlap in holdings with the MSCI World index is just 1.92% and the average weighting to small and mid-caps is 56%.

Artificial Intelligence technologies

Artificial Intelligence (AI) technologies could increase global GDP by €13.8trn by 2030 and the L&G Artificial Intelligence Ucits ETF will seek returns from the global market for AI solutions and applications, in which revenues are projected to grow from €14.3bn in 2018 to €27.6bn in 2025.

In terms of index exposure, the fund will include companies building AI engine and platform solutions, as well as those applying AI capabilities for the purpose of digital transformation.

Taking advantage of the opportunity in healthcare, which accounted for 18% of US GDP in 2017 and is anticipated to be an $8.7 trillion global industry by 2020, the L&G Healthcare Breakthrough Ucits ETF will be formed of companies involved in the digitalisation of the healthcare supply chain, advances in healthcare robotics and next-generation diagnostic tools.

By 2025, 1.8bn people will be living in countries or regions with absolute water scarcity; two-thirds of the world's population could be living under water-stressed conditions. Investors in the L&G Clean Water Ucits ETF will seek returns from an index which is focused on companies integral to the world's management of water, including those engaged in water production, processing and the provision of other related services.

Howie Li, head of ETFs at LGIM, said: “All sectors and businesses are being transformed by disruptive technology and we’re seeing increasing demand from investors looking to access these themes in a cost-efficient way. These new funds offer investors the opportunity to hold companies directly benefiting from these themes, with bespoke indices constructed around active selection and research and implemented in a systematic and rules-based way.”